Smart Sensing Technologies for Personalised e-Coaching
People living in both developed and developing countries face serious health challenges
related to sedentary lifestyles. It is therefore essential to find new ways to improve health so that
people can live longer and age well. With an ever-growing number of smart sensing systems
developed and deployed across the globe, experts are primed to help coach people to have healthier
behaviors. The increasing accountability associated with app- and device-based behavior tracking
not only provides timely and personalized information and support, but also gives us an incentive
to set goals and do more. This paper outlines some of the recent efforts made towards automatic
and autonomous identification and coaching of troublesome behaviors to procure lasting, beneficial
behavioral changes. The authors acknowledge funding received from the European Union’s Horizon 2020 research and innovation programme under Grant Agreement #76955.
Smart Sensing Technologies for Personalised Coaching
People living in both developed and developing countries face serious health challenges related to sedentary lifestyles. It is therefore essential to find new ways to improve health so that people can live longer and can age well. With an ever-growing number of smart sensing systems developed and deployed across the globe, experts are primed to help coach people toward healthier behaviors. The increasing accountability associated with app- and device-based behavior tracking not only provides timely and personalized information and support but also gives us an incentive to set goals and to do more. This book presents some of the recent efforts made towards automatic and autonomous identification and coaching of troublesome behaviors to procure lasting, beneficial behavioral changes.
Physiological and behavior monitoring systems for smart healthcare environments: a review
Healthcare optimization has become increasingly important in the current era, where numerous challenges are posed by the population ageing phenomenon and the demand for higher-quality healthcare services. The implementation of the Internet of Things (IoT) in the healthcare ecosystem has been one of the best solutions to address these challenges and thereby to prevent and diagnose possible health impairments in people. The remote monitoring of environmental parameters and how they can cause or mediate disease, along with the monitoring of human daily activities and physiological parameters, are among the vast applications of IoT in healthcare, which have attracted extensive attention from academia and industry. Assisted and smart tailored environments are possible with the implementation of such technologies, bringing personal healthcare to individuals while they live in their preferred environments. In this paper we address several requirements for the development of such environments, namely the deployment of physiological-signs monitoring systems, daily activity recognition techniques, and indoor air quality monitoring solutions. The machine learning methods most used in the literature for activity recognition and body motion analysis are also reviewed. Furthermore, the importance of physical and cognitive training of the elderly population through the implementation of exergames and immersive environments is also addressed.
Developing an Autonomous Mobile Robotic Device for Monitoring and Assisting Older People
A progressive increase of the elderly population in the world has required technological solutions capable of improving the life prospects of people suffering from senile dementias such as Alzheimer's. Socially Assistive Robotics (SAR) in the research field of elderly care is a solution that can ensure, through observation and monitoring of behaviors, their safety and improve their physical and cognitive health. A social robot can autonomously and tirelessly monitor a person daily by providing assistive tasks such as remembering to take medication and suggesting activities to keep the assisted person active both physically and cognitively. However, many projects in this area have not considered the preferences, needs, personality, and cognitive profiles of older people. Moreover, other projects have developed specific robotic applications, making it difficult to reuse and adapt them on other hardware devices and in other functional contexts. This thesis presents the development of a scalable, modular, multi-tenant robotic application and its testing in real-world environments. This work is part of the UPA4SAR project “User-centered Profiling and Adaptation for Socially Assistive Robotics”. The UPA4SAR project aimed to develop a low-cost robotic application for faster deployment among the elderly population. The architecture of the proposed robotic system is modular, robust, and scalable due to the development of functionality in microservices with event-based communication. To improve robot acceptance, the functionalities, delivered through microservices, adapt the robot's behaviors based on the preferences and personality of the assisted person. A key part of the assistance is the monitoring of activities, which are recognized through deep neural network models proposed in this work. The final experimentation of the project, carried out in the homes of elderly volunteers, was performed with complete autonomy of the robotic system.
Daily care plans customized to the person's needs and preferences were executed. These included notification tasks to remember when to take medication, tasks to check whether basic nutrition activities were accomplished, and entertainment and companionship tasks with games, videos, and music for cognitive and physical stimulation of the patient.
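The event-based microservice communication described above can be illustrated with a minimal in-process publish/subscribe sketch. This is only an illustration of the pattern, not the UPA4SAR implementation; the topic names, payloads, and service behaviors are invented for the example.

```python
# Minimal in-process event bus: services subscribe to topics and react to
# events, so adding or removing a service does not require changing the
# others. All names below are illustrative, not from the UPA4SAR project.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        # Deliver the event to every handler registered for the topic.
        for handler in self._subscribers[topic]:
            handler(payload)

bus = EventBus()
log = []

# A "medication reminder" service reacting to a scheduler event.
bus.subscribe("schedule.medication_due",
              lambda p: log.append(f"Reminding: take {p['drug']}"))
# A "monitoring" service reacting to an activity-recognition event.
bus.subscribe("har.activity_detected",
              lambda p: log.append(f"Observed activity: {p['activity']}"))

bus.publish("schedule.medication_due", {"drug": "aspirin"})
bus.publish("har.activity_detected", {"activity": "eating"})
print(log)
```

Because services only share topic names, a new functionality (e.g. an entertainment task) can subscribe to existing events without any change to the publisher, which is the scalability property the abstract attributes to the microservice design.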
A knowledge-based approach towards human activity recognition in smart environments
It has long been known that the population of older persons is on the rise. A recent report estimates that globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This has been one of the main sources of motivation for active research in the domain of human
activity recognition (HAR) in smart-homes. The ability to perform activities of daily living (ADL) without
assistance from other people can be considered a reference for estimating the independent-living
level of an older person. Conventionally, this has been assessed by health-care domain
experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can
vary based on the person being monitored and the caregiver’s experience. A significant
amount of research work is implicitly or explicitly aimed at augmenting the health-care
domain expert’s qualitative evaluation with quantitative data or knowledge obtained from
HAR. From a medical perspective, there is a lack of evidence about the technology readiness
level of smart home architectures supporting older persons by recognizing ADL [2]. We
hypothesize that this may be due to a lack of effective collaboration between smart-home
researchers/developers and health-care domain experts, especially when considering HAR.
We foresee an increase in HAR systems being developed in close collaboration with caregivers
and geriatricians to support their qualitative evaluation of ADL with explainable quantitative
outcomes of the HAR systems. This has been a motivation for the work in this thesis. The
recognition of human activities – in particular ADL – need not be limited to supporting
the health and well-being of older people. It can be relevant to home users in general. For
instance, HAR could support digital assistants or companion robots to provide contextually
relevant and proactive support to the home users, whether young adults or old. This has also
been a motivation for the work in this thesis.
Given our motivations, namely (i) facilitating iterative development and ease of collaboration between HAR system researchers/developers and health-care domain experts in ADL,
and (ii) robust HAR that can support digital assistants or companion robots, there is a need
for a HAR framework that is, at its core, modular and flexible, so as to facilitate
an iterative development process [3], which is an integral part of collaborative work involving develop-test-improve phases. At the same time, the framework should be intelligible
for the sake of enriched collaboration with health-care domain experts. Furthermore, it
should be scalable, online, and accurate for having robust HAR, which can enable many
smart-home applications. The goal of this thesis is to design and evaluate such a framework.
This thesis contributes to the domain of HAR in smart-homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework to develop
networks of ontologies – for knowledge representation and reasoning – that enables smart
homes to perform human activity recognition online. The second contribution is OWLOOP,
an API that supports the development of HAR system architectures based on Arianna+. It
enables the usage of the Web Ontology Language (OWL) by means of Object-Oriented
Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+
using the OWLOOP API. The exploitation of Arianna+ using the OWLOOP API has resulted in four
HAR system implementations. The evaluations and results of these HAR systems emphasize
the novelty of Arianna+.
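The core idea of knowledge-based HAR, activities described declaratively and recognized by checking observations against those descriptions, can be sketched with plain Python sets standing in for ontology axioms. This is a toy illustration of the approach, not Arianna+ or OWLOOP, which use OWL ontologies and a reasoner; all sensor-event and activity names are invented.

```python
# Toy knowledge-based activity recognition: each activity is declared as a
# set of required sensor events, and an observation window is classified by
# checking which activity descriptions it satisfies. Plain sets stand in
# for the OWL class expressions a real ontology-based system would use.

# Declarative "knowledge base": activity -> required sensor events.
ACTIVITY_MODELS = {
    "making_coffee": {"kitchen_presence", "coffee_machine_on"},
    "sleeping": {"bedroom_presence", "bed_pressure"},
    "watching_tv": {"livingroom_presence", "tv_on"},
}

def recognise(observed_events):
    """Return all activities whose required events are all observed."""
    return sorted(
        activity
        for activity, required in ACTIVITY_MODELS.items()
        if required <= observed_events  # subset check: all requirements met
    )

print(recognise({"kitchen_presence", "coffee_machine_on", "light_on"}))
```

Because the activity models are data rather than code, a domain expert can inspect and amend them directly, which mirrors the intelligibility argument made above for collaborating with health-care experts.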
SHELDON: Smart habitat for the elderly.
An insightful document concerning active and assisted living from different perspectives: furniture and habitat, ICT solutions, and healthcare.
A survey on wearable sensor modality centred human activity recognition in health care
Increased life expectancy coupled with declining birth rates is leading to an aging population structure. Aging-caused changes, such as physical or cognitive decline, can affect people's quality of life, resulting in injuries, mental health issues or a lack of physical activity. Sensor-based human activity recognition (HAR) is one of the most promising assistive technologies to support older people's daily life and has shown enormous potential in human-centred applications. Recent surveys in HAR either focus only on deep learning approaches or on one specific sensor modality. This survey aims to provide a more comprehensive introduction to HAR for newcomers and researchers. We first introduce the state-of-the-art sensor modalities in HAR. We then look into the techniques involved in each step of wearable-sensor-centred HAR in terms of sensors, activities, data pre-processing, feature learning and classification, covering both conventional approaches and deep learning methods. In the feature learning section, we focus on both hand-crafted features and features learned automatically using deep networks. We also present ambient-sensor-based HAR, including camera-based systems, and systems that combine wearable and ambient sensors. Finally, we identify the corresponding challenges in HAR to pose research problems for further improvement in HAR.
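The wearable-sensor pipeline summarized above (segmentation, hand-crafted feature extraction, classification) can be sketched in a few lines. The signal, window parameters, and class centroids below are invented for illustration; real systems use multi-axis signals, richer features or learned ones, and stronger classifiers.

```python
# Minimal wearable-HAR pipeline sketch: segment a 1-D accelerometer stream
# into fixed-size windows, extract hand-crafted features (mean, standard
# deviation), and classify each window with a nearest-centroid rule.
import math

def windows(signal, size, step):
    # Sliding-window segmentation of the raw sensor stream.
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(window):
    # Two classic hand-crafted features per window.
    mean = sum(window) / len(window)
    var = sum((x - mean) ** 2 for x in window) / len(window)
    return (mean, math.sqrt(var))

def nearest_centroid(feat, centroids):
    # Assign the label whose centroid is closest in feature space.
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))

# Illustrative centroids, as if learned from labelled training windows.
centroids = {"still": (0.0, 0.05), "walking": (0.1, 0.8)}

stream = [0.0, 0.02, -0.01, 0.01] * 8 + [0.9, -0.7, 1.1, -0.6] * 8
labels = [nearest_centroid(features(w), centroids) for w in windows(stream, 16, 16)]
print(labels)
```

Swapping the hand-crafted `features` function for a learned representation, or the centroid rule for a deep network, changes only one stage of the pipeline, which is the modular structure the survey's step-by-step treatment reflects.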
Multi-modal on-body sensing of human activities
Increased usage and integration of state-of-the-art information technology in our everyday work life aims at increasing working efficiency. Owing to unwieldy human-computer interaction methods, this progress does not always result in increased efficiency, for mobile workers in particular. Activity-recognition-based contextual computing attempts to balance this interaction deficiency. This work investigates wearable, on-body sensing techniques and their applicability in the field of human activity recognition. More precisely, we are interested in the spotting and recognition of so-called manipulative hand gestures. In particular, the thesis focuses on the question of whether the widely used motion-sensing-based approach can be enhanced through additional information sources. The set of gestures a person usually performs in a specific place is limited, in the contemplated production and maintenance scenarios in particular. As a consequence, this thesis investigates whether knowledge about the user's hand location provides essential hints for the activity recognition process. In addition, manipulative hand gestures, due to their object-manipulating character, typically start the moment the user's hand reaches a specific place, e.g. a specific part of a machine, and most likely stop the moment the hand leaves the position again. Hence this thesis investigates whether hand location can help solve the spotting problem. Moreover, as user independence is still a major challenge in activity recognition, this thesis investigates location context as a possible key component in a user-independent recognition system. We test a Kalman-filter-based method to blend absolute position readings with orientation readings based on inertial measurements. A filter structure is suggested which allows up-sampling of slow absolute position readings, and thus introduces higher dynamics to the position estimations.
In this way the position measurement series is made aware of wrist motions in addition to the wrist position. We suggest location-based gesture spotting and recognition approaches. Various methods to model the location classes used in the spotting and recognition stages, as well as different location distance measures, are suggested and evaluated. In addition, a rather novel sensing approach in the field of human activity recognition is studied, which aims at compensating for the drawbacks of the purely motion-sensing-based approach. To this end we develop a wearable hardware architecture for lower-arm muscular activity measurements. The sensing hardware, based on force-sensing resistors, is designed to have a high dynamic range. In contrast to preliminary attempts, the proposed new design makes hardware calibration unnecessary. Finally, we suggest a modular and multi-modal recognition system: modular with respect to sensors, algorithms, and gesture classes. This means that adding or removing a sensor modality or an additional algorithm has little impact on the rest of the recognition system. Sensors and algorithms used for spotting and recognition can be selected and fine-tuned separately for each single activity. New activities can be added without impact on the recognition rates of the other activities.
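The blending of fast inertial predictions with slow absolute position fixes can be sketched with a scalar Kalman filter: the predict step runs every sample, while the update step fires only when a position reading arrives, effectively up-sampling the slow position source. This is a one-dimensional illustration with invented noise parameters and a deliberately biased velocity, not the filter structure proposed in the thesis.

```python
# Scalar Kalman filter sketch: predict every sample from an inertial
# velocity estimate; correct only when a slow absolute position fix (z)
# arrives. A biased velocity (1.1 instead of the true 1.0) makes the
# estimate drift between fixes, and each fix pulls it back toward truth.
def kalman_step(x, p, velocity, dt, q, z=None, r=0.04):
    # Predict: propagate the state with the inertial velocity; grow variance.
    x = x + velocity * dt
    p = p + q
    if z is not None:          # Update: fuse an absolute position reading.
        k = p / (p + r)        # Kalman gain
        x = x + k * (z - x)
        p = (1 - k) * p
    return x, p

x, p = 0.0, 1.0
track = []
for t in range(10):
    # True position at sample t+1 is t+1; a fix arrives every 5th sample.
    z = float(t + 1) if (t + 1) % 5 == 0 else None
    x, p = kalman_step(x, p, velocity=1.1, dt=1.0, q=0.01, z=z)
    track.append(round(x, 2))
print(track)
```

Between fixes the estimate accumulates the 0.1-per-sample velocity bias; at samples 5 and 10 the update step snaps it back near the true position, which is the "higher dynamics from slow absolute readings" behavior described above.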
Artificial Intelligence and Ambient Intelligence
This book includes a series of scientific papers published in the Special Issue on Artificial Intelligence and Ambient Intelligence at the journal Electronics MDPI. The book starts with an opinion paper on “Relations between Electronics, Artificial Intelligence and Information Society through Information Society Rules”, presenting relations between information society, electronics and artificial intelligence mainly through twenty-four IS laws. After that, the book continues with a series of technical papers that present applications of Artificial Intelligence and Ambient Intelligence in a variety of fields including affective computing, privacy and security in smart environments, and robotics. More specifically, the first part presents usage of Artificial Intelligence (AI) methods in combination with wearable devices (e.g., smartphones and wristbands) for recognizing human psychological states (e.g., emotions and cognitive load). The second part presents usage of AI methods in combination with laser sensors or Wi-Fi signals for improving security in smart buildings by identifying and counting the number of visitors. The last part presents usage of AI methods in robotics for improving robots’ ability for object gripping manipulation and perception. The language of the book is rather technical, thus the intended audience are scientists and researchers who have at least some basic knowledge in computer science
Location-enhanced activity recognition in indoor environments using off the shelf smart watch technology and BLE beacons
Activity recognition in indoor spaces benefits context awareness and improves the efficiency of applications related to personalised health monitoring, building energy management, security and safety. The majority of activity recognition frameworks, however, employ a network of specialised building sensors or a network of body-worn sensors. As this approach suffers with respect to practicality, we propose the use of commercial off-the-shelf devices. In this work, we design and evaluate an activity recognition system composed of a smart watch, which is enhanced with location information coming from Bluetooth Low Energy (BLE) beacons. We evaluate the performance of this approach for a variety of activities performed in an indoor laboratory environment, using four supervised machine learning algorithms. Our experimental results indicate that our location-enhanced activity recognition system is able to reach a classification accuracy ranging from 92% to 100%, whereas without location information, classification accuracy can drop to as low as 50% in some cases, depending on the window size chosen for data segmentation.
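The benefit of location information can be illustrated with a toy classifier that disambiguates two motion-similar activities by the room of the strongest-RSSI BLE beacon. Beacon names, RSSI values, activities, and thresholds are invented for the sketch; the paper's system instead trains four supervised learning algorithms on smart-watch features.

```python
# Toy illustration of location-enhanced activity recognition: two
# activities with similar wrist-motion energy become separable once the
# nearest BLE beacon (strongest, i.e. least negative, RSSI in dBm) is
# added as a feature. All names and numbers are illustrative.
def nearest_beacon(rssi_readings):
    """Pick the room of the beacon with the strongest RSSI."""
    return max(rssi_readings, key=rssi_readings.get)

def classify(motion_energy, rssi_readings):
    room = nearest_beacon(rssi_readings)
    if motion_energy < 0.2:
        return "idle"
    # Similar motion signatures, disambiguated by room context.
    return "washing_dishes" if room == "kitchen" else "brushing_teeth"

sample = {"kitchen": -55, "bathroom": -80, "livingroom": -90}
print(classify(0.7, sample))  # -> washing_dishes
```

With the location feature removed, the two high-motion activities collapse into one class, which mirrors the accuracy drop (down to around 50%) the paper reports for the motion-only configuration.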