
    Multi-scale Entropy and Multiclass Fisher’s Linear Discriminant for Emotion Recognition Based on Multimodal Signal

    Emotion recognition using physiological signals has been a topic frequently discussed by researchers and practitioners in the past decade. However, the use of SpO2 and pulse rate signals for emotion recognition is very limited, and the reported results still show low accuracy, owing to the low complexity of the characteristics of SpO2 and pulse rate signals. This study therefore proposes Multiscale Entropy and Multiclass Fisher’s Linear Discriminant Analysis for feature extraction and dimensionality reduction of these physiological signals, with the aim of improving emotion recognition accuracy in elderly people. The dimensionality reduction process was grouped into three experimental schemes: dimensionality reduction using only SpO2 signals, only pulse rate signals, and multimodal signals (a combination of the SpO2 and pulse rate feature vectors). Each scheme was then classified into three emotion classes (happy, sad, and angry) using Support Vector Machine and Linear Discriminant Analysis methods. The Support Vector Machine with the third scheme achieved the best performance, with an accuracy of 95.24%, an increase of more than 22% over previous work.
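
    To make the pipeline described above concrete, the following is a minimal, hypothetical Python sketch: multiscale sample entropy features are extracted from synthetic SpO2 and pulse-rate traces, the two feature vectors are concatenated (the "third scheme"), projected with a Fisher-style linear discriminant, and classified with an SVM. The signals, scales, and hyperparameters are illustrative assumptions, not the paper's actual settings.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def sample_entropy(x, m=2, r=None):
    """Sample entropy of a 1-D signal (simple reference implementation)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if r is None:
        r = 0.2 * x.std()
    def count_pairs(length):
        t = np.array([x[i:i + length] for i in range(n - length)])
        d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)
        return (np.sum(d <= r) - len(t)) / 2      # exclude self-matches
    b, a = count_pairs(m), count_pairs(m + 1)
    if a == 0 or b == 0:
        return np.log(n - m)                      # finite fallback when no matches
    return -np.log(a / b)

def multiscale_entropy(x, scales=range(1, 5)):
    """Coarse-grain the signal at each scale, then take sample entropy."""
    feats = []
    for s in scales:
        coarse = x[: (len(x) // s) * s].reshape(-1, s).mean(axis=1)
        feats.append(sample_entropy(coarse))
    return np.array(feats)

# Toy dataset: one SpO2 trace and one pulse-rate trace per trial (synthetic values).
rng = np.random.default_rng(0)
X, y = [], []
for label in (0, 1, 2):                           # happy, sad, angry
    for _ in range(20):
        spo2 = 97 + rng.normal(0, 0.5 + 0.3 * label, 600)
        pulse = 70 + rng.normal(0, 2.0 + label, 600)
        # "third scheme": concatenate the SpO2 and pulse-rate feature vectors
        X.append(np.concatenate([multiscale_entropy(spo2),
                                 multiscale_entropy(pulse)]))
        y.append(label)
X, y = np.array(X), np.array(y)

# Fisher-style discriminant projection followed by an SVM classifier.
Z = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, y)
print("cross-validated accuracy:", cross_val_score(SVC(), Z, y, cv=5).mean())
```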

    Exploring the Landscape of Ubiquitous In-home Health Monitoring: A Comprehensive Survey

    Ubiquitous in-home health monitoring systems have become popular in recent years due to the rise of digital health technologies and the growing demand for remote health monitoring. These systems enable individuals to increase their independence by allowing them to monitor their health from home and giving them more control over their well-being. In this study, we perform a comprehensive survey on this topic by reviewing a large body of literature in the area. We investigate these systems from various aspects, namely sensing technologies, communication technologies, intelligent and computing systems, and application areas. Specifically, we provide an overview of in-home health monitoring systems and identify their main components. We then present each component and discuss its role within in-home health monitoring systems. In addition, we provide an overview of the practical use of ubiquitous technologies in the home for health monitoring. Finally, we identify the main challenges and limitations based on the existing literature and provide eight recommendations for potential future research directions toward the development of in-home health monitoring systems. We conclude that despite extensive research on the various components needed for effective in-home health monitoring systems, their development still requires further investigation.

    Improving Quality of Life: Home Care for Chronically Ill and Elderly People

    In this chapter, we propose a system created especially for elderly or chronically ill people who have special needs and little familiarity with technology. The system combines home monitoring of physiological and emotional states through a set of wearable sensors, user-controlled (automated) home devices, and a central control for integration of the data, in order to provide a safe and friendly environment suited to the limited capabilities of the users. The main objective is to achieve easy, low-cost automation of a room or house that provides a friendly environment and enhances the psychological condition of immobilized users. In addition, the complete interaction of the components provides an overview of the physical and emotional state of the user, building a behavior pattern that can be supervised by the caregiving staff. This approach allows the integration of physiological signals with the patient’s environmental and social context to obtain a complete framework of the emotional states.
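
    As a rough illustration of the central-control idea described above, the sketch below maps a wearable-derived physiological and emotional state to simple home-device adjustments and keeps a behavior log for the caregiving staff. The state fields, thresholds and device actions are hypothetical assumptions, not the chapter's actual design.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class UserState:
    heart_rate_bpm: float
    skin_temp_c: float
    emotion: str                # e.g. "calm", "stressed", "sad"

behaviour_log = []              # kept for supervision by the caregiving staff

def central_control(state: UserState) -> dict:
    """Map the current physiological/emotional state to simple device actions."""
    actions = {}
    if state.emotion == "stressed" or state.heart_rate_bpm > 100:
        actions["lights"] = "warm_dim"
        actions["music"] = "relaxing_playlist"
    elif state.emotion == "sad":
        actions["blinds"] = "open"              # let natural light in
    if state.skin_temp_c < 32.0:
        actions["heating"] = "increase"
    behaviour_log.append((datetime.now(timezone.utc).isoformat(), state, actions))
    return actions

print(central_control(UserState(heart_rate_bpm=105, skin_temp_c=33.0,
                                emotion="stressed")))
```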

    MsWH: A multi-sensory hardware platform for capturing and analyzing physiological emotional signals

    This paper presents a new multi-sensory platform for physiological signal acquisition for emotion detection: the Multi-sensor Wearable Headband (MsWH). The system is capable of recording and analyzing five different physiological signals: skin temperature, blood oxygen saturation, heart rate (and its variation), movement/position of the user (more specifically of his/her head), and electrodermal activity/bioimpedance. The measurement system is complemented by a porthole camera positioned in such a way that the viewing area remains constant. Thus, the user’s face remains centered regardless of its position and movement, increasing the accuracy of facial expression recognition algorithms. This work specifies the technical characteristics of the developed device, paying special attention to both the hardware used (sensors, conditioning, microprocessors, connections) and the software, which is optimized for accurate and massive data acquisition. Although the information can be partially processed inside the device itself, the system is capable of sending information via Wi-Fi, with a very high data transfer rate, in case external processing is required. The most important features of the developed platform have been compared with those of a proven wearable device, namely the Empatica E4 wristband, for those measurements in which this is possible.
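
    As a hedged sketch of how a headband like MsWH might batch its five physiological channels into timestamped records and stream them over Wi-Fi, the Python snippet below defines an illustrative record format and a newline-delimited JSON sender. The field names, sampling rate and receiving endpoint are assumptions, not the platform's actual protocol.

```python
import json
import socket
import time
from dataclasses import dataclass, asdict

@dataclass
class HeadbandSample:
    timestamp: float            # seconds since the epoch
    skin_temp_c: float          # skin temperature
    spo2_pct: float             # blood oxygen saturation
    heart_rate_bpm: float       # heart rate
    head_accel: tuple           # (x, y, z) head movement/position
    eda_microsiemens: float     # electrodermal activity / bioimpedance proxy

def stream_samples(samples, host="192.168.1.50", port=5000):
    """Send newline-delimited JSON records to a hypothetical Wi-Fi collector."""
    with socket.create_connection((host, port), timeout=5) as sock:
        for s in samples:
            sock.sendall((json.dumps(asdict(s)) + "\n").encode("utf-8"))

# One second of data at a nominal 4 Hz (values are purely illustrative).
batch = [HeadbandSample(time.time() + i / 4, 33.1, 97.5, 72.0,
                        (0.01, -0.02, 0.98), 4.2)
         for i in range(4)]
# stream_samples(batch)   # requires a listening collector on the local network
```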

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature.
A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Emotions detection on an ambient intelligent system using wearable devices

    This paper presents the Emotional Smart Wristband and its integration with the iGenda. The aim is to detect the emotional states of a group of entities through the wristband and send the social emotion value to the iGenda, so that it may change the home environment and notify the caregivers. This project is advantageous to communities of elderly people, such as retirement homes, where a harmonious environment is imperative and where the number of inhabitants keeps increasing. The iGenda provides the visual interface and the information center, receiving the information from the Emotional Smart Wristband and trying to achieve a specific emotion (such as calm or excitement). Thus, the goal is to provide an affective system that directly interacts with humans by discreetly improving their lifestyle. The paper describes the wristband and the data models in depth, and provides an evaluation of them performed by real individuals, together with the validation of this evaluation. This work is supported by COMPETE, Portugal: POCI-01-0145-FEDER-007043 and by FCT (Fundação para a Ciência e a Tecnologia), Portugal, within the project UID/CEC/00319/2013 and the Post-Doc scholarship SFRH/BPD/102696/2014 (Angelo Costa). This work is partially supported by MINECO/FEDER, Spain, TIN2015-65515-C4-1-R and AP2013-01276, awarded to Jaime-Andres Rincon.
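
    The data flow described above can be pictured with the hedged Python sketch below: individual emotional readings from several wristbands are fused into a single "social emotion" value and handed to the iGenda platform, which would then adapt the environment. The valence/arousal representation, the averaging, and the iGenda endpoint are illustrative assumptions, not the authors' actual interface.

```python
import json
from statistics import mean
from urllib import request

def social_emotion(readings):
    """Average the group's valence/arousal readings (one dict per wristband)."""
    return {
        "valence": mean(r["valence"] for r in readings),
        "arousal": mean(r["arousal"] for r in readings),
        "n_people": len(readings),
    }

def send_to_igenda(value, url="http://igenda.local/api/social-emotion"):
    """POST the fused value to a hypothetical iGenda endpoint."""
    req = request.Request(url, data=json.dumps(value).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req, timeout=5)

group = [{"valence": 0.2, "arousal": 0.7},   # agitated resident
         {"valence": 0.6, "arousal": 0.3},   # calm resident
         {"valence": 0.4, "arousal": 0.5}]
print(social_emotion(group))
# send_to_igenda(social_emotion(group))   # requires a reachable iGenda instance
```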

    Physiological and behavior monitoring systems for smart healthcare environments: a review

    Healthcare optimization has become increasingly important in the current era, where numerous challenges are posed by population ageing and the demand for higher-quality healthcare services. The implementation of the Internet of Things (IoT) in the healthcare ecosystem has been one of the best solutions to address these challenges and therefore to prevent and diagnose possible health impairments in people. The remote monitoring of environmental parameters and of how they can cause or mediate disease, together with the monitoring of human daily activities and physiological parameters, is among the many applications of IoT in healthcare, and it has attracted extensive attention from academia and industry. Assisted and smart tailored environments become possible with the implementation of such technologies, bringing personal healthcare to individuals while they live in their preferred environments. In this paper we address several requirements for the development of such environments, namely the deployment of systems for monitoring physiological signs, daily activity recognition techniques, and indoor air quality monitoring solutions. The machine learning methods most used in the literature for activity recognition and body motion analysis are also reviewed. Furthermore, the importance of physical and cognitive training of the elderly population through exergames and immersive environments is also addressed.
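
    To illustrate the kind of activity recognition pipeline commonly reviewed in this literature, the Python sketch below windows a wearable accelerometer trace, computes simple per-axis statistical features, and trains an off-the-shelf classifier. The window length, features and synthetic data are assumptions for illustration, not drawn from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(acc, win=128):
    """Split an (n, 3) accelerometer trace into windows of mean/std/range features."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        feats.append(np.concatenate([w.mean(axis=0), w.std(axis=0),
                                     w.max(axis=0) - w.min(axis=0)]))
    return np.array(feats)

# Synthetic traces with different motion intensities standing in for activities.
rng = np.random.default_rng(1)
X, y = [], []
for label, scale in enumerate([0.05, 0.5, 1.5]):   # e.g. lying, walking, running
    trace = rng.normal(0, scale, size=(128 * 40, 3))
    f = window_features(trace)
    X.append(f)
    y.append(np.full(len(f), label))
X, y = np.vstack(X), np.concatenate(y)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```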

    Home healthcare using ubiquitous computing and robot technologies

    The rapid increase of the senior population worldwide is challenging the existing healthcare and support systems. Recently, smart home environments have been utilized for ubiquitous health monitoring, allowing patients to stay in the comfort of their homes. In this dissertation, a Cloud-based Smart Home Environment (CoSHE) for home healthcare is presented, which consists of ambient intelligence, wearable computing, and robot technologies. The system includes a smart home embedded with distributed environmental sensors to support human localization. Wearable units are developed to collect physiological, motion and audio signals through non-invasive wearable sensors and to provide contextual information on the resident's daily activity and location in the home. This enables healthcare professionals to study daily activities and behavioral changes and to monitor rehabilitation and recovery processes. The sensor data are processed in a smart home gateway and sent to a private cloud, which provides real-time data access for remote caregivers. Our case studies show that contextual information provided by ubiquitous computing can help better understand the patient's health status. With a robot assistant in the loop, we demonstrated that CoSHE can facilitate healthcare delivery via interaction between humans and robots.
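
    The gateway role described above can be pictured with the minimal Python sketch below: readings from wearable and environmental sensors are merged, annotated with the resident's estimated room, and forwarded to a private-cloud endpoint that caregivers could query in real time. The message format, room-inference heuristic and endpoint URL are assumptions for illustration, not CoSHE's actual implementation.

```python
import json
from urllib import request

def infer_room(environment_events):
    """Pick the room whose motion sensor fired most recently (toy heuristic)."""
    if not environment_events:
        return "unknown"
    return max(environment_events, key=lambda e: e["timestamp"])["room"]

def gateway_message(wearable_reading, environment_events):
    """Merge vitals with contextual location, as a smart home gateway might."""
    return {
        "vitals": wearable_reading,                   # e.g. heart rate, SpO2
        "location": infer_room(environment_events),   # contextual information
        "timestamp": wearable_reading["timestamp"],
    }

def push_to_cloud(msg, url="https://coshe-cloud.example/api/readings"):
    """POST the merged record to a hypothetical private-cloud endpoint."""
    req = request.Request(url, data=json.dumps(msg).encode("utf-8"),
                          headers={"Content-Type": "application/json"})
    return request.urlopen(req, timeout=5)

msg = gateway_message(
    {"timestamp": 1700000000.0, "heart_rate_bpm": 68, "spo2_pct": 98},
    [{"room": "kitchen", "timestamp": 1699999990.0},
     {"room": "living_room", "timestamp": 1699999998.0}],
)
print(msg)
# push_to_cloud(msg)   # requires a reachable private-cloud endpoint
```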