130 research outputs found

    Signal Processing of Multimodal Mobile Lifelogging Data towards Detecting Stress in Real-World Driving

    Stress is a negative emotion that is part of everyday life, but frequent episodes or prolonged periods of stress can be detrimental to long-term health. Developing self-awareness is therefore an important aspect of fostering effective ways to self-regulate these experiences. Mobile lifelogging systems provide an ideal platform to support self-regulation of stress by raising awareness of negative emotional states via continuous recording of psychophysiological and behavioural data. However, obtaining meaningful information from large volumes of raw data represents a significant challenge, because these data must be accurately quantified and processed before stress can be detected. This work describes a set of algorithms designed to process multiple streams of lifelogging data for stress detection in the context of real-world driving. Two data collection exercises were performed in which multimodal data, including raw cardiovascular activity and driving information, were collected from twenty-one people during daily commuter journeys. Our approach enabled us to 1) pre-process raw physiological data to calculate valid measures of heart rate variability, a significant marker of stress, 2) identify and correct artefacts in the raw physiological data, and 3) compare several classifiers for detecting stress. Results were positive, and ensemble classification models provided a maximum accuracy of 86.9% for binary detection of stress in the real world.
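    As a concrete illustration of steps 1) and 2), the sketch below computes standard time-domain heart rate variability measures from a series of RR intervals after a simple median-based artefact correction. This is a minimal example of the kind of pipeline the abstract describes, not the authors' implementation; the 30% deviation threshold and the function names are illustrative assumptions.

```python
import numpy as np

def correct_artefacts(rr_ms, tol=0.3):
    """Replace RR intervals deviating more than `tol` (here 30%) from the
    series median by linear interpolation. A simple, common heuristic for
    missed/ectopic beats -- not necessarily the paper's exact method."""
    rr = np.asarray(rr_ms, dtype=float)
    bad = np.abs(rr - np.median(rr)) > tol * np.median(rr)
    good_idx = np.flatnonzero(~bad)
    rr[bad] = np.interp(np.flatnonzero(bad), good_idx, rr[good_idx])
    return rr

def hrv_features(rr_ms):
    """Standard time-domain HRV measures used as stress markers."""
    rr = correct_artefacts(rr_ms)
    diffs = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),             # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                      # overall variability
        "rmssd_ms": np.sqrt(np.mean(diffs ** 2)),       # short-term variability
        "pnn50": float(np.mean(np.abs(diffs) > 50.0)),  # share of large beat-to-beat changes
    }

# 1600 ms interval simulates a missed-beat artefact in an ~800 ms rhythm
print(hrv_features([812, 790, 805, 1600, 798, 820, 815]))
```

    Feature vectors like this would then be passed to the compared classifiers; an ensemble model such as scikit-learn's RandomForestClassifier is a typical choice for the binary stress/no-stress decision.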

    Exploiting linked data to create rich human digital memories

    Memories are an important aspect of a person's life and experiences. The area of human digital memories focuses on encapsulating this phenomenon, in a digital format, over a lifetime. Through the proliferation of ubiquitous devices, both people and the surrounding environment are generating a phenomenal amount of data. With all of this disjointed information available, successfully searching it and bringing it together to form a human digital memory is a challenge, especially when a lifetime of data is being examined. Linked Data provides an ideal, and novel, solution for overcoming this challenge, where a variety of data sources can be drawn upon to capture detailed information surrounding a given event. Memories created in this way contain vivid structures and varied data sources, which emerge through the semantic clustering of content and other memories. This paper presents DigMem, a platform for creating human digital memories based on device-specific services and the user's current environment. In this way, information is semantically structured to create temporal "memory boxes" for human experiences. A working prototype has been successfully developed, which demonstrates the approach. In order to evaluate the applicability of the system, a number of experiments have been undertaken. These have been successful in creating human digital memories and illustrating how a user can be monitored in both indoor and outdoor environments. Furthermore, the user's heartbeat information is analysed to determine his or her heart rate. This has been achieved through the development of a QRS complex detection algorithm and a heart rate calculation method, which process collected electrocardiography (ECG) information to discern the heart rate of the user. This information is essential in illustrating how certain situations can make the user feel.
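    The QRS detection and heart-rate calculation mentioned in the abstract can be sketched along the lines of the classic Pan-Tompkins algorithm. The version below is a textbook-style approximation assuming a single-lead ECG sampled at 250 Hz; it is not necessarily DigMem's exact method.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_qrs(ecg, fs=250):
    """Pan-Tompkins-style QRS detection: band-pass, differentiate,
    square, integrate, then peak-pick (illustrative sketch)."""
    # 1. Band-pass 5-15 Hz to emphasise the QRS complex
    b, a = butter(2, [5 / (fs / 2), 15 / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, ecg)
    # 2. Differentiate and square to accentuate steep slopes
    energy = np.diff(filtered) ** 2
    # 3. Moving-window integration (~150 ms) smooths each QRS into one lobe
    window = int(0.150 * fs)
    integrated = np.convolve(energy, np.ones(window) / window, mode="same")
    # 4. Peaks above an adaptive threshold, at least 200 ms apart (refractory period)
    peaks, _ = find_peaks(integrated,
                          height=0.5 * integrated.max(),
                          distance=int(0.200 * fs))
    return peaks

def heart_rate_bpm(peaks, fs=250):
    """Mean heart rate from the detected R-peak sample indices."""
    rr_s = np.diff(peaks) / fs   # RR intervals in seconds
    return 60.0 / rr_s.mean()
```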

    Multimodal Wearable Intelligence for Dementia Care in Healthcare 4.0: A Survey

    As a new revolution in Ubiquitous Computing and the Internet of Things, multimodal wearable intelligence is rapidly becoming a new research topic in both academia and industry. Owing to the rapid spread of wearable and mobile devices, this technique is evolving healthcare from traditional hub-based systems to more personalised healthcare systems. This trend is well aligned with the recent Healthcare 4.0 vision, a continuous process of transforming the entire healthcare value chain to be preventive, precise, predictive and personalised, with significant benefits for elder care. However, empowering multimodal wearable intelligence for elder care, such as supporting people with dementia, is significantly challenging given issues such as the shortage of cost-effective wearable sensors, the heterogeneity of connected wearable devices, and the high demand for interoperability. Focusing on these challenges, this paper gives a systematic review of advanced multimodal wearable intelligence technologies for dementia care in Healthcare 4.0. A framework is proposed for reviewing the current research on wearable intelligence, together with its key enabling technologies, major applications, and successful case studies in dementia care; the paper concludes by pointing out future research trends and challenges in Healthcare 4.0.
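    The heterogeneity and interoperability challenges named above are often tackled with a device-agnostic data model plus per-vendor adapters. The sketch below is a hypothetical illustration of that pattern; the Reading fields and the WearableAdapter interface are assumptions for illustration, not a standard from the survey.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Protocol

@dataclass
class Reading:
    """Device-agnostic observation record (field names are illustrative)."""
    timestamp: datetime
    subject_id: str
    modality: str        # e.g. "heart_rate", "accelerometer", "gps"
    value: float
    unit: str
    source_device: str

class WearableAdapter(Protocol):
    """Each vendor-specific device implements the same interface, so
    downstream dementia-care analytics stay device-independent."""
    def poll(self) -> list[Reading]: ...

class MockHeartRateBand:
    """Stand-in for one vendor's device driver."""
    def poll(self) -> list[Reading]:
        return [Reading(datetime.now(timezone.utc), "patient-01",
                        "heart_rate", 72.0, "bpm", "mock-band")]

print(MockHeartRateBand().poll())
```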

    State of the art of audio- and video-based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications.

    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalises on the growing pervasiveness and effectiveness of sensing and computing facilities to supply persons in need with smart assistance, responding to their needs for autonomy, independence, comfort, security and safety.

    The application scenarios addressed by AAL are complex, owing to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics: they are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner, and they are conceived to be intelligent, able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them into their daily environments and lives.

    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed there, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, movements, and overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location, and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings).

    As the other side of the coin, however, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals, owing to the richness of the information they convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, and to meet high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.

    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethically aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report presents the state of play in terms of scientific advances, available products and research projects, and highlights the open challenges.

    The report ends with an overview of the challenges, hindrances and opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, it illustrates the current procedural and technological approaches to achieving acceptability, usability and trust in AAL technology, surveying strategies and approaches for co-design, privacy preservation in video and audio data, transparency and explainability in data processing, and data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential arising from the silver economy is reviewed.
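    To make the video-based vital-sign monitoring mentioned above concrete, the sketch below estimates heart rate from a face video by remote photoplethysmography (rPPG): the mean green-channel intensity of the skin varies subtly with blood volume, so the dominant frequency in the pulse band gives the heart rate. This is a minimal illustration assuming pre-cropped RGB face frames, not a production method from the report; real systems add face tracking, detrending and robust spectral analysis.

```python
import numpy as np

def heart_rate_from_video(frames, fps=30.0):
    """Minimal rPPG sketch. `frames` is an iterable of HxWx3 RGB arrays
    already cropped to the face region."""
    signal = np.array([f[..., 1].mean() for f in frames])  # green channel
    signal = signal - signal.mean()                        # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)   # plausible pulse: 42-240 bpm
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak_hz                    # Hz -> beats per minute
```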

    Survey on virtual coaching for older adults

    Virtual coaching has emerged as a promising solution for extending independent living for older adults. A virtual coach is an always-attentive personalized system that continuously monitors the user's activity and surroundings and delivers interventions - that is, intentional messages - at the appropriate moment. This article presents a survey of different approaches to virtual coaching for older adults, from the less technically supported tools to the latest developments and future avenues for research. It focuses on technical aspects, especially software architectures, user interaction and coaching personalization, though some aspects from personality and social psychology are also presented in the context of coaching strategies. Coaching is considered holistically, including matters such as physical and cognitive training, nutrition, social interaction and mood. The author(s) disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: this project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 769830.
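    The "intervention at the appropriate moment" idea can be illustrated with a toy rule-based scheduler; the virtual coaches described in the survey use far richer context models and personalization. All fields, thresholds and messages below are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UserState:
    """Snapshot of monitored signals (illustrative fields)."""
    steps_today: int
    hours_since_meal: float
    hours_since_social_contact: float

def next_intervention(state: UserState) -> str | None:
    """Pick at most one intentional message per check, in priority order."""
    if state.hours_since_meal > 6:
        return "It has been a while since your last meal - time for a snack?"
    if state.steps_today < 2000:
        return "A short walk now would help you reach your activity goal."
    if state.hours_since_social_contact > 24:
        return "How about calling a friend or family member today?"
    return None  # appropriate moment not reached; stay silent

print(next_intervention(UserState(500, 7.0, 30.0)))
```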

    Social relation recognition in egocentric photostreams

    This paper proposes an approach to automatically categorize the social interactions of a user wearing a photo-camera (2 fpm), relying solely on what the camera is seeing. The problem is challenging due to the overwhelming complexity of social life and the extreme intra-class variability of social interactions captured under unconstrained conditions. We adopt the formalization proposed in Bugental's social theory, which groups human relations into five social domains with related categories. Our method is a new deep learning architecture that exploits the hierarchical structure of the label space and relies on a set of social attributes estimated at frame level to provide a semantic representation of social interactions. Experimental results on the new EgoSocialRelation dataset demonstrate the effectiveness of our proposal.
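    A hierarchical label space like Bugental's (five social domains, each with finer categories) is commonly exploited with a two-level classification head in which domain predictions condition the category predictions. The PyTorch sketch below shows one such head; the layer sizes, the softmax-concatenation conditioning and the class counts are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalRelationHead(nn.Module):
    """Two-level head: frame features -> domain logits, then the domain
    beliefs are concatenated back in to predict the finer category."""
    def __init__(self, feat_dim=512, n_domains=5, n_categories=16):
        super().__init__()
        self.domain_fc = nn.Linear(feat_dim, n_domains)
        # category head sees both the features and the domain beliefs
        self.category_fc = nn.Linear(feat_dim + n_domains, n_categories)

    def forward(self, features):
        domain_logits = self.domain_fc(features)
        domain_probs = torch.softmax(domain_logits, dim=-1)
        category_logits = self.category_fc(
            torch.cat([features, domain_probs], dim=-1))
        return domain_logits, category_logits

head = HierarchicalRelationHead()
dom, cat = head(torch.randn(8, 512))   # batch of 8 frame descriptors
```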
