82 research outputs found

    A Mobile Lifelogging Platform to Measure Anxiety and Anger During Real-Life Driving

    The experience of negative emotions in everyday life, such as anger and anxiety, can have adverse effects on long-term cardiovascular health. However, objective measurements from mobile technology can offer insight into this psychobiological process and promote self-awareness and adaptive coping. It is postulated that a mobile lifelogging platform can support this approach by continuously recording personal data via mobile/wearable devices and processing this information to measure physiological correlates of negative emotions. This paper describes the development of a mobile lifelogging system that measures anxiety and anger during real-life driving. A number of data streams have been incorporated into the platform, including cardiovascular data, vehicle speed and first-person photographs of the environment. In addition, thirteen participants completed five days of data collection during daily commuter journeys to test the system. The design of the system hardware and associated data streams is described in the current paper, along with the results of preliminary data analysis.

    Signal Processing of Multimodal Mobile Lifelogging Data towards Detecting Stress in Real-World Driving

    Stress is a negative emotion that is part of everyday life, but frequent episodes or prolonged periods of stress can be detrimental to long-term health. Developing self-awareness is therefore an important aspect of fostering effective ways to self-regulate these experiences. Mobile lifelogging systems provide an ideal platform to support self-regulation of stress by raising awareness of negative emotional states via continuous recording of psychophysiological and behavioural data. However, obtaining meaningful information from large volumes of raw data represents a significant challenge, because these data must be accurately quantified and processed before stress can be detected. This work describes a set of algorithms designed to process multiple streams of lifelogging data for stress detection in the context of real-world driving. Two data collection exercises were performed in which multimodal data, including raw cardiovascular activity and driving information, were collected from twenty-one people during daily commuter journeys. Our approach enabled us to 1) pre-process raw physiological data to calculate valid measures of heart rate variability, a significant marker of stress, 2) identify and correct artefacts in the raw physiological data and 3) compare several classifiers for detecting stress. Results were positive, and ensemble classification models provided a maximum accuracy of 86.9% for binary detection of stress in the real world.
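The pre-processing stage described above can be illustrated with a minimal sketch. The artefact-correction rule (a median-of-three filter with a 30% tolerance) and the choice of RMSSD as the heart rate variability metric are illustrative assumptions, not the authors' exact algorithm:

```python
from statistics import mean

def correct_artefacts(rr_ms, tol=0.3):
    """Replace any RR interval deviating more than `tol` (as a fraction)
    from the median of its 3-sample neighbourhood with that median --
    a simple artefact-correction heuristic (assumed, for illustration)."""
    cleaned = list(rr_ms)
    for i in range(1, len(cleaned) - 1):
        local_median = sorted(cleaned[i - 1:i + 2])[1]
        if abs(cleaned[i] - local_median) > tol * local_median:
            cleaned[i] = local_median
    return cleaned

def rmssd(rr_ms):
    """Root mean square of successive differences: a common time-domain
    heart rate variability measure used as a stress marker."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return mean(d * d for d in diffs) ** 0.5
```

Cleaned intervals from `correct_artefacts` would then feed `rmssd` (and other HRV features) before classification.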

    Personal informatics and negative emotions during commuter driving: Effects of data visualization on cardiovascular reactivity & mood

    Mobile technology and wearable sensors can provide objective measures of psychological stress in everyday life. Data from sensors can be visualized and viewed by the user to increase self-awareness and promote adaptive coping strategies. A capacity to effectively self-regulate negative emotion can mitigate the biological process of inflammation, which has implications for long-term health. Two studies were undertaken utilizing a mobile lifelogging platform to collect cardiovascular data over a week of real-life commuter driving. The first was designed to establish a link between cardiovascular markers of inflammation and the experience of anger during commuter driving in the real world. Results indicated that an ensemble classification model provided an accuracy rate of 73.12% for the binary classification of episodes of high vs. low anger based upon a combination of features derived from driving (e.g. vehicle speed) and cardiovascular psychophysiology (heart rate, heart rate variability, pulse transit time). During the second study, participants interacted with an interactive, geolocated visualisation of vehicle parameters, photographs and cardiovascular psychophysiology collected over two days of commuter driving (pre-test). Data were subsequently collected over two days of driving following their interaction with the dynamic, data visualization (post-test). A comparison of pre- and post-test data revealed that heart rate significantly reduced during episodes of journey impedance after interaction with the data visualization. There was also evidence that heart rate variability increased during the post-test phase, suggesting greater vagal activation and adaptive coping. Subjective mood data were collected before and after each journey, but no statistically significant differences were observed between pre- and post-test periods. 
The implications of both studies for ambulatory monitoring, user interaction and the capacity of personal informatics to enhance long-term health are discussed.

    Detecting Negative Emotions During Real-Life Driving via Dynamically Labelled Physiological Data

    Driving is an activity that can induce significant levels of negative emotion, such as stress and anger. These negative emotions occur naturally in everyday life, but frequent episodes can be detrimental to cardiovascular health in the long term. The development of monitoring systems to detect negative emotions often relies on labels derived from subjective self-report. However, this approach is burdensome, intrusive, low fidelity (i.e. scales are administered infrequently) and places huge reliance on the veracity of subjective self-report. This paper explores an alternative approach that provides greater fidelity by using psychophysiological data (e.g. heart rate) to dynamically label data derived from the driving task (e.g. speed, road type). Two techniques for generating labels for machine learning were compared: 1) deriving labels from subjective self-report and 2) labelling data via psychophysiological activity (e.g. heart rate (HR), pulse transit time (PTT), etc.) to create dynamic labels of high vs. low anxiety for each participant. The classification accuracy associated with both labelling techniques was evaluated using Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Results indicated that classification of driving data using subjectively labelled data (1) achieved a maximum AUC of 73%, whilst labels derived from psychophysiological data (2) achieved equivalent performance of 74%. Whilst classification performance was similar, labelling driving data via psychophysiology offers a number of advantages over self-reports: it is implicit, dynamic, objective and high fidelity.
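A per-participant dynamic labelling step of the kind described can be sketched as follows. The median split on heart rate is an assumption made for illustration; the paper's actual labelling rule draws on several physiological channels (HR, PTT, etc.):

```python
def dynamic_labels(heart_rate, threshold=None):
    """Label each sample as high (1) or low (0) anxiety relative to the
    participant's own distribution, rather than relying on infrequent
    subjective self-report scales.  The median split is a hypothetical
    choice of threshold."""
    if threshold is None:
        s = sorted(heart_rate)
        n = len(s)
        # per-participant median as the split point
        threshold = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    return [1 if hr > threshold else 0 for hr in heart_rate]
```

The resulting labels could then be paired with driving features (speed, road type) for training an LDA or SVM classifier.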

    State of the art of audio- and video-based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications
    Europe is facing increasingly crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairment. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives. In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. 
Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large range of sensing, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can derive from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate setting where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL ensuring ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach. This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. 
It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake in real-world settings of AAL technologies. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potentials coming from the silver economy are overviewed.

    Remember you will [not] die: Mortality versus immortality in a world of patterns and randomness

    This project seeks to explore our perception of death, mortality, mourning processes and death-related rituals within the online world. I specifically look at the social network site Facebook as a case study. I argue that new media - such as digital technology like the Internet - is transforming the way we see ourselves and the world, affecting our perception of death and death-related aspects, yet not replacing offline practices. My exploration takes Katherine Hayles’ notion of pattern/randomness as the key theoretical axis and lens of my research. I strengthen my argument by creating a fictional future scenario situated ten years from now in a Western developed cultural context (i.e. Toronto), where trends of online practices are exaggerated. My studio work in this scenario consists of mourning pieces. It takes the form of wearable technology products that convey Facebook data of a deceased user and are meant to be used by the bereaved. However, I have created these products from a critical design perspective, wherein products work as commentary, provocation to the public and a way to open up discussion on the topic. This thesis project is interdisciplinary in content and form, looking at and exploring fields such as critical theory, philosophy, social science, critical design, various design practices and advertising.

    Personal Healthcare Agents for Monitoring and Predicting Stress and Hypertension from Biosignals

    We live in exciting times. The fast-paced growth in mobile computers has put powerful computational devices in the palm of our hands. Blazing-fast connectivity has made human-human, human-machine, and machine-machine communication effortless. Wearable devices and the internet of things have made monitoring every aspect of our lives easier. This has given rise to the domain of the quantified self, where we can continuously record and quantify the various signals generated in everyday life. Sensors on smartphones can continuously record our location and motion profile. Sensors on wearable devices can track changes in our bodies’ physiological responses. This monitoring also has the capability to revolutionise the health care domain by creating more informed and involved patients. It has the potential to shift care management from a physician-centric approach to a patient-centric approach, creating more empowered individuals who are in better control of their health. However, the data deluge from all these sources can sometimes be overwhelming. There is a need for intelligent technology that can help us navigate the data and make informed decisions. The goal of this work is to develop a mobile, personal intelligent agent platform that can become a digital companion to the user. It can monitor the covert and overt signal streams of the user and identify activity and stress levels to help users make healthy choices regarding their lives. This thesis particularly targets patients suffering from or at risk of essential hypertension, since it is a difficult condition to detect and manage. This thesis delivers the following contributions: 1) An intelligent personal agent platform for on-the-go continuous monitoring of covert and overt signals. 2) A machine learning algorithm for accurate recognition of activities using smartphone signals recorded from in-the-wild scenarios. 
3) A machine learning pipeline to combine various physiological signal streams, motion profiles, and user annotations for on-the-go stress recognition. 4) A complete signal processing and classification system for hypertension prediction. 5) A small pilot study demonstrating that this system can distinguish between hypertensive and normotensive subjects with high accuracy.
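As a rough illustration of the first stage such a pipeline needs, the sketch below segments synchronized heart-rate and motion streams into overlapping windows and extracts simple statistical features; the window length, step and feature set are hypothetical choices, not those of the thesis:

```python
def window_features(hr, accel_mag, window=60, step=30):
    """Segment synchronized heart-rate and accelerometer-magnitude
    streams (one sample per second, assumed) into overlapping windows
    and extract simple statistical features per window -- a typical
    first stage before stress or activity classification."""
    feats = []
    for start in range(0, len(hr) - window + 1, step):
        h = hr[start:start + window]
        a = accel_mag[start:start + window]
        feats.append({
            "hr_mean": sum(h) / window,       # average heart rate
            "hr_range": max(h) - min(h),      # crude variability proxy
            "accel_mean": sum(a) / window,    # overall motion level
        })
    return feats
```

Each feature dictionary would then be passed, together with user annotations as labels, to whatever classifier the pipeline uses.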

    Exploring the Use of Online Social Network Activity and Smartphone Photography as an Intervention to Track and Influence Emotional Well-Being

    The proliferation of internet and mobile technologies has expanded the means of detecting and influencing mental health, with this thesis focusing on the affective phenomena associated with emotional well-being including mood, affect and emotion. Traditional detection techniques including surveys and self-reports are grounded in the psychological literature; however, they introduce an inhibiting burden on the participants. The ability to passively detect psychological state using technologies including online behavioural tracking and mobile sensors is a prevalent focus of the current literature. Traditional positive psychology interventions commonly involve emotionally expressive writing tasks which can also be tedious for participants. Augmenting traditional intervention techniques with technologies such as smartphone applications can be one method to modernise interventions. The first research study in this thesis aimed to utilise online social network (OSN) activity to detect mood changes. The study involved collecting the participants' behavioural activities such as likes, comments and tweets from their Facebook and Twitter profiles. Machine learning was used to create an algorithm to classify participants according to their online activity and their self-reported mood as ground truth. The findings indicated that participants can be grouped into those who displayed positive, negative or weak correlations with their online activity. Following the classification, the system used a sliding window of 7 days to track the participant's mood changes for those in the positive and negative groups. The second research study introduced a positive psychology intervention in the form of a smartphone application called SnapAppy which promotes positive thinking by integrating momentary smartphone photography with traditional intervention methodologies. 
Participants were required to take photos and write about positive moments, past events, acts of kindness and situations of gratitude, encouraging them to think more positively. The results indicated that features such as the number of photos taken, the effort applied to annotating the photos, the number of photos revisited and the number of photos containing people were positively correlated with an improvement in mood and affect. The product of this thesis is a novel method of passively tracking mood changes using online social network activity and an innovative smartphone intervention utilising photography to influence emotional well-being.
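The 7-day sliding-window mood tracking described in the first study can be sketched as follows. The use of Pearson correlation between daily activity counts and self-reported mood scores is an illustrative assumption about how the grouping was operationalised:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def sliding_mood_correlation(activity, mood, window=7):
    """Correlate daily online activity counts (likes, comments, tweets)
    with self-reported mood over a sliding window of `window` days,
    yielding one coefficient per window position."""
    return [pearson(activity[i:i + window], mood[i:i + window])
            for i in range(len(activity) - window + 1)]
```

Consistently positive, negative or near-zero coefficients over time would place a participant in the positive, negative or weak-correlation group respectively.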

    Ethics of lifelog technology

    In a lifelog, data from different digital sources are combined and processed to form a unified multimedia archive containing information about the quotidian activities of an individual. This dissertation aims to contribute to a responsible development of lifelog technology used by members of the general public for private reasons. Lifelog technology can benefit, but also harm, lifeloggers and their social environment. The guiding idea behind this dissertation is that if the ethical challenges can be met and the opportunities realised, the conditions will be optimised for a responsible development and application of the technology. To achieve this, it is important to reflect on these concerns at an early stage of development, before the existing rudimentary forms of lifelogs develop into more sophisticated devices with a broad societal application. For this research, a normative framework based on prima facie principles is used. Lifelog technology in its current form is a relatively novel invention and a consensus about its definition is still missing. The author therefore aims to clarify the characteristics of lifelog technology. Next, the ethical challenges and opportunities of lifelogs are analysed, as they have been discussed in the scholarly literature on the ethics of lifelog technology. Against this backdrop, ethical challenges and opportunities are identified and elaborated. The normative analysis concentrates on two areas of concern, namely (1) the ethical challenges and opportunities that result from the use of lifelog technology, and (2) the conditions under which one becomes a lifelogger. For the first, three sets of key issues are discussed, namely issues to do with (a) privacy, (b) autonomy, and (c) beneficence. For the second, one key set of issues is examined, namely issues to do with autonomy. The discussion of each set of issues is concluded with recommendations designed to tackle the challenges and realise the opportunities.

    Innovating Control and Emotional Expressive Modalities of User Interfaces for People with Locked-in Syndrome

    Patients with locked-in syndrome (LIS) have lost the ability to control any body part besides their eyes. Current solutions mainly use eye-tracking cameras to track patients' gaze as system input. However, despite the fact that interface design greatly impacts user experience, only a few guidelines have been proposed so far to ensure an easy, quick, fluid and non-tiresome computer system for these patients. On the other hand, the emergence of dedicated computer software has greatly increased patients' capabilities, but there is still a great need for improvement, as existing systems present low usability and limited capabilities. Most interfaces designed for LIS patients aim at providing internet browsing or communication abilities. State-of-the-art augmentative and alternative communication systems mainly focus on communicating sentences without considering the need for emotional expression, which is inextricable from human communication. This thesis aims at exploring new system control and expressive modalities for people with LIS. Firstly, existing gaze-based web-browsing interfaces were investigated. Page analysis and high mental workload appeared as recurring issues with common systems. To address this, a novel user interface was designed and evaluated against a commercial system. The results suggested that it is easier to learn and use, quicker, more satisfying, less frustrating, less tiring and less prone to error. Mental workload was greatly diminished with this system. Other types of system control for LIS patients were then investigated. It was found that galvanic skin response may be used as system input and that stress-related bio-feedback helped lower mental workload during stressful tasks. Improving communication, and in particular emotional communication, was one of the main goals of this research. A system including gaze-controlled emotional voice synthesis and a personal emotional avatar was developed for this purpose. 
Assessment of the proposed system highlighted the enhanced capability to have dialogues more similar to normal ones, and to express and identify emotions. Enabling emotion communication in parallel to sentences was found to help with conversation. Automatic emotion detection seemed to be the next step toward improving emotional communication. Several studies have established that physiological signals relate to emotions. The ability to use physiological signal sensors with LIS patients and their non-invasiveness made them an ideal candidate for this study. One of the main difficulties of emotion detection is the collection of high-intensity affect-related data. Studies in this field are currently mostly limited to laboratory investigations, using laboratory-induced emotions, and are rarely adapted for real-life applications. A virtual reality emotion elicitation technique based on appraisal theories was proposed here in order to study physiological signals of high-intensity emotions in a real-life-like environment. While this solution successfully elicited positive and negative emotions, it did not elicit the desired emotions for all subjects and was therefore not appropriate for the goals of this research. Collecting emotions in the wild appeared as the best methodology toward emotion detection for real-life applications. The state of the art in the field was therefore reviewed and assessed using a specifically designed method for evaluating datasets collected for emotion recognition in real-life applications. The proposed evaluation method provides guidelines for future researchers in the field. Based on the research findings, a mobile application was developed for physiological and emotional data collection in the wild. Based on appraisal theory, this application guides users to provide valuable emotion labelling and helps them differentiate moods from emotions. 
A sample dataset collected using this application was compared to one collected using a paper-based preliminary study. The dataset collected using the mobile application was found to be more valuable, with data consistent with the literature. This mobile application was used to create an open-source affect-related physiological signals database. While the path toward emotion detection usable in real-life applications is still long, we hope that the tools provided to the research community will represent a step toward achieving this goal in the future. Automatically detecting emotion could not only be used for LIS patients to communicate but also for total-LIS patients who have lost their ability to move their eyes. Indeed, giving family and caregivers the ability to visualize and therefore understand the patient's emotional state could greatly improve their quality of life. This research provided tools to LIS patients and the scientific community to improve augmentative and alternative communication technologies with better interfaces, emotion expression capabilities and real-life emotion detection. Emotion recognition methods for real-life applications could not only enhance health care but also robotics, domotics and many other fields of study. A complete, fully gaze-controlled system was made available open-source with all the developed solutions for LIS patients. This is expected to enhance their daily lives by improving their communication and by facilitating the development of novel assistive system capabilities.
