
    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through channels such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.

    Study and experimentation of cognitive decline measurements in a virtual reality environment

    At a time when digital technology has become an integral part of our daily lives, we can ask how our well-being is evolving. Highly immersive virtual reality allows the development of environments that promote relaxation and can improve the cognitive abilities and quality of life of many people. The first aim of this study is to reduce the negative emotions and improve the cognitive abilities of people suffering from subjective cognitive decline (SCD). To this end, we developed a virtual reality environment called Savannah VR, in which participants followed an avatar across a savannah. We recruited nineteen people with SCD to take part in the virtual savannah experience, and the Emotiv Epoc headset captured their emotions throughout. The results show that immersion in the virtual savannah reduced the participants' negative emotions and that the positive effects continued afterward. Participants also improved their cognitive performance. Confusion often occurs during learning when students do not understand new material; it is also very common in people with dementia because of the decline in their cognitive abilities. Detecting and overcoming confusion could thus improve the well-being and cognitive performance of people with cognitive impairment. The second objective of this thesis is therefore to develop a tool to detect confusion. We conducted two experiments and obtained a machine learning model based on brain signals that recognizes four levels of confusion (90% accuracy). In addition, we created another model to recognize the cognitive function related to the confusion (82% accuracy).
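To make the brain-signal pipeline concrete, here is a minimal sketch of how a four-level confusion classifier could be trained on band-power features extracted from Emotiv Epoc recordings. The feature layout, sample counts, and choice of a random forest are illustrative assumptions, not the authors' actual model.

```python
# Hypothetical sketch: four-class confusion-level classification from EEG
# band-power features. Feature extraction is assumed to have produced an
# (n_samples, n_features) matrix; random data stands in for real recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 70))       # e.g., 14 channels x 5 frequency bands
y = rng.integers(0, 4, size=200)     # four confusion levels (0..3)

clf = make_pipeline(StandardScaler(),
                    RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f}")
```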

    State of the art of audio- and video based solutions for AAL

    Working Group 3: Audio- and Video-based AAL Applications. It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to their high potential for enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments. Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive than the hindrance other wearable sensors may cause to one's activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals' activities and health status can be derived from processing audio signals (e.g., speech recordings). Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL that ensures ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethics-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted. The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to coping with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential coming from the silver economy is overviewed.

    Linking recorded data with emotive and adaptive computing in an eHealth environment

    Telecare, and particularly lifestyle monitoring, currently relies on the ability to detect and respond to changes in individual behaviour using data derived from sensors around the home. This means that a significant aspect of behaviour, that of an individual's emotional state, is not accounted for in reaching a conclusion as to the form of response required. The linked concepts of emotive and adaptive computing offer an opportunity to include information about emotional state, and the paper considers how current developments in this area have the potential to be integrated within telecare and other areas of eHealth. In doing so, it looks at the development and current state of the art of both emotive and adaptive computing, including their conceptual background, and places them into an overall eHealth context for application and development.

    Logging Stress and Anxiety Using a Gamified Mobile-based EMA Application, and Emotion Recognition Using a Personalized Machine Learning Approach

    According to the American Psychological Association (APA), more than 9 in 10 (94 percent) adults believe that stress can contribute to the development of major health problems, such as heart disease, depression, and obesity. Due to the subjective nature of stress and anxiety, it has been challenging to measure these psychological issues accurately by relying only on objective means. In recent years, researchers have increasingly utilized computer vision techniques and machine learning algorithms to develop scalable and accessible solutions for remote mental health monitoring via web and mobile applications. To further enhance accuracy in the field of digital health and precision diagnostics, there is a need for personalized machine-learning approaches that focus on recognizing mental states based on individual characteristics, rather than relying solely on general-purpose solutions. This thesis focuses on conducting experiments aimed at recognizing and assessing levels of stress and anxiety in participants. In the initial phase of the study, a mobile application compatible with both Android and iPhone platforms, which we call STAND, is introduced. This application serves the purpose of Ecological Momentary Assessment (EMA). Participants receive daily notifications through this smartphone-based app, which redirects them to a screen consisting of three components: a question that prompts participants to indicate their current levels of stress and anxiety, a rating scale ranging from 1 to 10 for quantifying their response, and the ability to capture a selfie. The responses to the stress and anxiety questions, along with the corresponding selfie photographs, are then analyzed on an individual basis. This analysis focuses on exploring the relationships between self-reported stress and anxiety levels and potential facial expressions indicative of stress and anxiety, eye features such as pupil size variation and eye closure, and specific action units (AUs) observed in the frames over time. In addition to its primary functions, the mobile app also gathers sensor data, including accelerometer and gyroscope readings, on a daily basis; this data holds potential for further analysis related to stress and anxiety. Furthermore, apart from capturing selfie photographs, participants have the option to upload video recordings of themselves while engaging in two neuropsychological games. These recorded videos are then analyzed to extract pertinent features that can be utilized for binary classification of stress and anxiety (i.e., stress and anxiety recognition). The participants to be selected for this phase are students aged between 18 and 38 who have received recent clinical diagnoses indicating specific stress and anxiety levels. To enhance user engagement in the intervention, gamified elements, an emerging trend for influencing user behavior and lifestyle, have been utilized. Incorporating gamified elements into non-game contexts (e.g., health-related ones) has gained overwhelming popularity during the last few years, making interventions more delightful, engaging, and motivating. In the subsequent phase of this research, we conducted an AI experiment employing a personalized machine learning approach to perform emotion recognition on an established dataset called Emognition. This experiment served as a simulation of the future analysis that will be conducted as part of a more comprehensive study focusing on stress and anxiety recognition. The outcomes of the emotion recognition experiment highlight the effectiveness of personalized machine learning techniques and bear significance for the development of future diagnostic endeavors. For training purposes, we selected three models: KNN, Random Forest, and MLP. The preliminary accuracy results for these models were 93%, 95%, and 87%, respectively.
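As an illustration of the per-participant ("personalized") training setup described above, the following sketch compares KNN, Random Forest, and MLP classifiers on one subject's feature matrix. Feature extraction from the Emognition dataset is assumed to have been done elsewhere; the arrays and hyperparameters are placeholders, not the study's exact configuration.

```python
# Minimal sketch of personalized emotion recognition: train and score three
# models (KNN, Random Forest, MLP) on a single participant's features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((300, 40))            # one participant's feature vectors
y = rng.integers(0, 2, 300)          # binary emotion labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "MLP": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, f"test accuracy: {model.score(X_te, y_te):.2f}")
```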

    A pediatric near-infrared spectroscopy brain-computer interface based on the detection of emotional valence

    Brain-computer interfaces (BCIs) are being investigated as an access pathway to communication for individuals with physical disabilities, as the technology obviates the need for voluntary motor control. However, to date, minimal research has investigated the use of BCIs for children. Traditional BCI communication paradigms may be suboptimal given that children with physical disabilities may face delays in cognitive development and acquisition of literacy skills. Instead, in this study we explored emotional state as an alternative access pathway to communication. We developed a pediatric BCI to identify positive and negative emotional states from changes in hemodynamic activity of the prefrontal cortex (PFC). To train and test the BCI, 10 neurotypical children aged 8-14 underwent a series of emotion-induction trials over four experimental sessions (one offline, three online) while their brain activity was measured with functional near-infrared spectroscopy (fNIRS). Visual neurofeedback was used to assist participants in regulating their emotional states and modulating their hemodynamic activity in response to the affective stimuli. Child-specific linear discriminant classifiers were trained on cumulatively available data from previous sessions and adaptively updated throughout each session. Average online valence classification exceeded chance across participants by the last two online sessions (with 7 and 8 of the 10 participants performing better than chance in Sessions 3 and 4, respectively). There was a small but significant positive correlation between online BCI performance and age, suggesting older participants were more successful at regulating their emotional state and/or brain activity. Variability was seen across participants with regard to BCI performance, hemodynamic response, and discriminatory features and channels. Retrospective offline analyses yielded accuracies comparable to those reported in adult affective BCI studies using fNIRS. Affective fNIRS-BCIs appear to be feasible for school-aged children, but to further gauge the practical potential of this type of BCI, replication with more training sessions, larger sample sizes, and end-users with disabilities is necessary.
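The session-wise training scheme described above (a child-specific linear discriminant classifier retrained on cumulatively available data before each new online session) could be sketched as follows. fNIRS feature extraction is assumed; the arrays, session sizes, and labels are placeholders rather than the study's data.

```python
# Hedged sketch: retrain an LDA valence classifier on all previously collected
# sessions, then evaluate it on the newest (online) session.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)
# Four sessions of (features, valence labels); random data stands in for fNIRS.
sessions = [(rng.normal(size=(60, 20)), rng.integers(0, 2, 60)) for _ in range(4)]

X_hist, y_hist = [], []
for i, (X_s, y_s) in enumerate(sessions, start=1):
    if X_hist:  # from Session 2 onward, train on all earlier sessions
        clf = LinearDiscriminantAnalysis().fit(np.vstack(X_hist), np.concatenate(y_hist))
        print(f"Session {i} online accuracy: {clf.score(X_s, y_s):.2f}")
    X_hist.append(X_s)   # accumulate this session's data for future training
    y_hist.append(y_s)
```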

    Sensor-based artificial intelligence to support people with cognitive and physical disorders

    A substantial portion of the world's population deals with disability. Many disabled people do not have equal access to healthcare, education, and employment opportunities, do not receive specific disability-related services, and experience exclusion from everyday life activities. One way to face these issues is through the use of healthcare technologies. Unfortunately, disabilities are numerous, diverse and heterogeneous, and require ad-hoc and personalized solutions. Moreover, the design and implementation of effective and efficient technologies is a complex and expensive process involving challenging issues, including usability and acceptability. The work presented in this thesis aims to improve the current state of technologies available to support people with disorders affecting the mind or the motor system by proposing the use of sensors coupled with signal processing methods and artificial intelligence algorithms. The first part of the thesis focused on mental state monitoring. We investigated the application of a low-cost portable electroencephalography sensor and supervised learning methods to evaluate a person's attention. Indeed, the analysis of attention has several purposes, including the diagnosis and rehabilitation of children with attention-deficit/hyperactivity disorder. A novel dataset was collected from volunteers during an image annotation task and used for the experimental evaluation with different machine learning techniques. Then, in the second part of the thesis, we focused on addressing limitations related to motor disability. We introduced the use of graph neural networks to process high-density electromyography data for recognizing the movement/grasping intentions of upper-limb amputees, enabling the use of robotic prostheses. High-density electromyography sensors can simultaneously acquire electromyography signals from different parts of the muscle, providing a large amount of spatio-temporal information that needs to be properly exploited to improve recognition accuracy. The investigation of the approach was conducted using a recent real-world dataset consisting of electromyography signals collected from 20 volunteers while performing 65 different gestures. In the final part of the thesis, we developed a prototype of a versatile interactive system that can be useful to people with different types of disabilities. The system can maintain a food diary for frail people with nutrition problems, such as people with neurocognitive diseases or frail elderly people, who may have difficulties due to forgetfulness or physical issues. The novel architecture automatically recognizes the preparation of food at home, in a privacy-preserving and unobtrusive way, exploiting air quality data acquired from a commercial sensor, statistical feature extraction, and a deep neural network. A robotic system prototype is used to simplify the interaction with the inhabitant. For this work, a large dataset of annotated sensor data acquired over a period of 8 months from different individuals in different homes was collected. Overall, the results achieved in the thesis are promising and pave the way for several real-world implementations and future research directions.
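For the food-preparation recognition component, a rough sketch of the idea (statistical features over windows of air-quality readings feeding a small neural network) is given below. The sensor signal, window length, and network size are assumptions for illustration, not the thesis implementation.

```python
# Illustrative sketch: detect cooking activity from an air-quality time series
# via per-window statistics and a small feed-forward classifier.
import numpy as np
from sklearn.neural_network import MLPClassifier

def window_features(signal, win=60):
    """Mean, std, min, max over non-overlapping windows of the raw signal."""
    n = len(signal) // win
    chunks = signal[: n * win].reshape(n, win)
    return np.column_stack([chunks.mean(1), chunks.std(1),
                            chunks.min(1), chunks.max(1)])

rng = np.random.default_rng(2)
voc = rng.normal(size=6000)                  # e.g., VOC readings sampled at 1 Hz
X = window_features(voc)                     # (100, 4) feature matrix
y = rng.integers(0, 2, len(X))               # 1 = food preparation in progress

clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000,
                    random_state=0).fit(X, y)
print(f"Training accuracy: {clf.score(X, y):.2f}")
```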

    Emotion and Stress Recognition Related Sensors and Machine Learning Technologies

    This book includes impactful chapters which present scientific concepts, frameworks, architectures and ideas on sensing technologies and machine learning techniques. These are relevant in tackling the following challenges: (i) the field readiness and use of intrusive sensor systems and devices for capturing biosignals, including EEG sensor systems, ECG sensor systems and electrodermal activity sensor systems; (ii) the quality assessment and management of sensor data; (iii) data preprocessing, noise filtering and calibration concepts for biosignals; (iv) the field readiness and use of nonintrusive sensor technologies, including visual sensors, acoustic sensors, vibration sensors and piezoelectric sensors; (v) emotion recognition using mobile phones and smartwatches; (vi) body area sensor networks for emotion and stress studies; (vii) the use of experimental datasets in emotion recognition, including dataset generation principles and concepts, quality assurance and emotion elicitation material and concepts; (viii) machine learning techniques for robust emotion recognition, including graphical models, neural network methods, deep learning methods, statistical learning and multivariate empirical mode decomposition; (ix) subject-independent emotion and stress recognition concepts and systems, including facial expression-based systems, speech-based systems, EEG-based systems, ECG-based systems, electrodermal activity-based systems, multimodal recognition systems and sensor fusion concepts; and (x) emotion and stress estimation and forecasting from a nonlinear dynamical system perspective.

    Multimodal approach for emotion recognition based on simulated flight experiments

    The present work tries to fill part of the gap regarding pilots' emotions and their bio-reactions during flight procedures such as takeoff, climbing, cruising, descent, initial approach, final approach and landing. A sensing architecture and a set of experiments were developed and applied to several simulated flights (N_flights = 13) using Microsoft Flight Simulator Steam Edition (FSX-SE). The approach was carried out with eight beginner users of the flight simulator (N_pilots = 8). It is shown that it is possible to recognize emotions from different pilots in flight, combining their present and previous emotions. Heart rate (HR), galvanic skin response (GSR) and electroencephalography (EEG) signals were used to extract emotions, together with the intensities of emotions detected from the pilot's face. We also considered five main emotions: happy, sad, angry, surprised and scared. The emotion recognition is based on Artificial Neural Networks and Deep Learning techniques. The Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE) were the main metrics used to measure the quality of the regression output models. The tests of the produced output models showed that the lowest recognition errors were reached when all data were considered or when the GSR datasets were omitted from the model training. They also showed that the emotion surprised was the easiest to recognize, with a mean RMSE of 0.13 and mean MAE of 0.01, while the emotion sad was the hardest to recognize, with a mean RMSE of 0.82 and mean MAE of 0.08. When only the highest emotion intensities over time were considered, the best matching accuracies were between 55% and 100%.
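For reference, the two error metrics quoted above can be computed as in this small sketch; the intensity values are placeholders, not data from the flight experiments.

```python
# RMSE and MAE for predicted emotion intensities versus observed ones.
import numpy as np

def rmse(y_true, y_pred):
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

y_true = [0.20, 0.50, 0.90, 0.10]   # observed intensities (0..1)
y_pred = [0.25, 0.40, 0.80, 0.15]   # model predictions
print(f"RMSE = {rmse(y_true, y_pred):.3f}, MAE = {mae(y_true, y_pred):.3f}")
```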