
    Designing interactive virtual environments with feedback in health applications.

    One of the most important factors influencing user experience in human-computer interaction is the user's emotional reaction. Interactive environments, including serious games, that respond to user emotions improve their effectiveness and user satisfaction. Testing and training user emotional competence is meaningful in the healthcare field, which motivated us to analyze immersive affective games that use emotional feedback. In this dissertation, a systematic model for designing interactive environments is presented, consisting of three essential modules: affect modeling, affect recognition, and affect control. To collect data for analysis and to construct these modules, a series of experiments was conducted using virtual reality (VR) to evoke user emotional reactions and to monitor those reactions through physiological data. The analysis results led to a novel framework for designing affective games in virtual reality, including descriptions of the interaction mechanism, the graph-based structure, and user modeling. An Oculus Rift was used in the experiments to provide immersive virtual reality with affective scenarios, and a sample application was implemented as a cross-platform VR physical-training serious game for elderly people to demonstrate the essential parts of the framework. Measurements of playability and effectiveness are discussed. The introduced framework is intended as a guiding principle for designing affective VR serious games. Possible healthcare applications include emotion competence training, educational software, and therapy methods.
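
    As a rough illustration of the three-module structure above (affect modeling, affect recognition, affect control), the Python sketch below shows a single iteration of a sense-recognize-adapt loop. All names, signals, and thresholds are illustrative assumptions, not the dissertation's actual model.

        from dataclasses import dataclass

        @dataclass
        class AffectState:
            """Affect modeling: a simple valence/arousal representation."""
            valence: float  # -1.0 (negative) .. 1.0 (positive)
            arousal: float  # 0.0 (calm) .. 1.0 (excited)

        def recognize_affect(heart_rate: float, skin_conductance: float) -> AffectState:
            """Affect recognition: map physiological readings to an AffectState.
            The formulas are crude placeholders, not validated mappings."""
            arousal = min(1.0, max(0.0, (heart_rate - 60.0) / 60.0))
            valence = 1.0 - min(1.0, skin_conductance / 10.0)
            return AffectState(valence=valence, arousal=arousal)

        def control_stimulus(state: AffectState) -> str:
            """Affect control: choose the next game stimulus to steer the
            user toward a target state (here: moderate arousal)."""
            if state.arousal > 0.8:
                return "calming_scene"
            if state.arousal < 0.2:
                return "challenge_event"
            return "maintain_current_scene"

        # One loop iteration: sense -> recognize -> adapt.
        state = recognize_affect(heart_rate=95.0, skin_conductance=4.2)
        print(state, "->", control_stimulus(state))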

    Emotion Communication System

    In today's increasingly rich material life, people are shifting their focus from the physical world to the spiritual world. Human-machine interaction systems have been created to identify and care for people's emotions. Currently available human-machine interaction systems typically support interaction between human and robot only in line-of-sight (LOS) propagation environments, while most human-to-human and human-to-machine communications are non-LOS (NLOS). To break this limitation of traditional human-machine interaction systems, we propose an emotion communication system based on the NLOS mode. Specifically, we first define emotion as a kind of multimedia, similar to voice and video: emotional information can not only be recognized but also transmitted over long distances. Then, considering the real-time requirements of communication between the involved parties, we propose an emotion communication protocol, which provides reliable support for realizing emotion communications. We design a pillow-robot speech emotion communication system, in which the pillow robot acts as a medium for mapping the user's emotions. Finally, we analyze the real-time performance of the whole communication process in a long-distance communication scenario between a mother and child, to evaluate the feasibility and effectiveness of emotion communication.
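
    Since the abstract treats emotion as a transmissible medium with real-time constraints, the sketch below illustrates what a minimal emotion frame for such a protocol might look like. The wire format, field names, and delay bound are assumptions for illustration; the paper's actual protocol is not specified here.

        import json
        import time
        from typing import Optional

        def encode_emotion_frame(sender: str, receiver: str,
                                 emotion: str, intensity: float) -> bytes:
            """Treat emotion like voice or video: a timestamped frame that
            can be serialized and sent over an ordinary transport."""
            frame = {
                "type": "emotion",         # distinguishes from voice/video frames
                "src": sender,
                "dst": receiver,
                "emotion": emotion,        # e.g. "joy", "sadness"
                "intensity": intensity,    # 0.0 .. 1.0
                "timestamp": time.time(),  # lets the receiver check delay
            }
            return json.dumps(frame).encode("utf-8")

        def decode_emotion_frame(data: bytes, max_delay_s: float = 0.5) -> Optional[dict]:
            """Drop frames that arrive too late to be useful in a live dialog."""
            frame = json.loads(data.decode("utf-8"))
            if time.time() - frame["timestamp"] > max_delay_s:
                return None  # a stale emotion violates the real-time requirement
            return frame

        packet = encode_emotion_frame("mother", "child", "joy", 0.8)
        print(decode_emotion_frame(packet))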

    MeditAid: a wearable adaptive neurofeedback-based system for training mindfulness state

    A recent interest in interaction design is the development of novel technologies emphasizing the value of mindfulness, monitoring, awareness, and self-regulation for both health and wellbeing. Whereas existing systems have focused mostly on relaxation and awareness of feelings, there has been little exploration of tools supporting the self-regulation of attention during mindfulness sitting meditation. This paper describes the design and initial evaluation of MeditAid, a wearable system integrating electroencephalography (EEG) technology with adaptive aural entrainment for real-time training of mindfulness states. The system identifies different meditative states and provides feedback to support users in deepening their meditation. We report on a study with 16 meditators about the perceived strengths and limitations of the MeditAid system. We demonstrate the benefits of binaural feedback in deepening meditative states, particularly for novice meditators.
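
    The adaptive entrainment idea lends itself to a short sketch: estimate meditation depth from EEG band power and lower the binaural beat frequency as the state deepens. The depth estimate and frequency mapping below are illustrative assumptions, not MeditAid's published parameters.

        def meditation_depth_from_eeg(theta_power: float, beta_power: float) -> float:
            """Crude depth estimate: relative theta dominance over beta,
            clamped to [0, 1]. A real system would use a trained classifier."""
            ratio = theta_power / (theta_power + beta_power + 1e-9)
            return max(0.0, min(1.0, ratio))

        def binaural_beat_hz(depth: float) -> float:
            """Map depth to a beat frequency: ~12 Hz when shallow,
            down toward ~4 Hz (theta range) when deep."""
            return 12.0 - 8.0 * depth

        # Each EEG window updates the entrainment target.
        for theta, beta in [(2.0, 6.0), (4.0, 4.0), (7.0, 2.0)]:
            depth = meditation_depth_from_eeg(theta, beta)
            print(f"depth={depth:.2f} -> beat={binaural_beat_hz(depth):.1f} Hz")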

    AffectiveViz: Designing Collective Stress Related Visualization


    Low-cost methodologies and devices applied to measure, model and self-regulate emotions for Human-Computer Interaction

    This thesis explores the different methodologies for analyzing user experience (UX) from a user-centered perspective. These classical, well-founded methodologies only allow the extraction of cognitive data, that is, the data that the user is capable of consciously communicating. The objective of the thesis is to propose a model based on the extraction of biometric data to complement the aforementioned cognitive information with emotional (and formal) data. The thesis is not only theoretical: alongside the proposed model (and its evolution), it presents the different tests, validations, and investigations in which the model has been applied, often successfully in conjunction with research groups from other areas.

    Innovating control and emotional expressive modalities of user interfaces for people with locked-in syndrome

    Patients with locked-in syndrome (LIS) have lost the ability to control any body part other than their eyes. Current solutions mainly use eye-tracking cameras to track patients' gaze as system input. However, despite the fact that interface design greatly impacts user experience, only a few guidelines have been proposed so far to ensure an easy, quick, fluid and non-tiresome computer system for these patients. On the other hand, the emergence of dedicated computer software has greatly increased patients' capabilities, but there is still considerable need for improvement, as existing systems present low usability and limited capabilities. Most interfaces designed for LIS patients aim at providing internet browsing or communication abilities. State-of-the-art augmentative and alternative communication systems mainly focus on sentence communication without considering the need for emotional expression, which is inextricable from human communication. This thesis aims at exploring new system control and expressive modalities for people with LIS. Firstly, existing gaze-based web-browsing interfaces were investigated. Page analysis and high mental workload emerged as recurring issues with common systems. To address this, a novel user interface was designed and evaluated against a commercial system. The results suggest that it is easier to learn and to use, quicker, more satisfying, less frustrating, less tiring and less error-prone, and that it greatly diminishes mental workload. Other types of system control for LIS patients were then investigated. It was found that galvanic skin response may be used as system input and that stress-related biofeedback helped lower mental workload during stressful tasks. Improving communication, and emotional communication in particular, was one of the main goals of this research. A system including gaze-controlled emotional voice synthesis and a personal emotional avatar was developed for this purpose. Assessment of the proposed system highlighted its enhanced capability to support dialogs more similar to normal ones and to express and identify emotions. Enabling emotion communication in parallel with sentences was found to help the conversation. Automatic emotion detection seemed to be the next step toward improving emotional communication. Several studies have established that physiological signals relate to emotions. The non-invasiveness of physiological signal sensors and their usability with LIS patients made them an ideal candidate for this study. One of the main difficulties of emotion detection is the collection of high-intensity affect-related data. Studies in this field are currently mostly limited to laboratory investigations, using laboratory-induced emotions, and are rarely adapted for real-life applications. A virtual reality emotion elicitation technique based on appraisal theories was proposed here in order to study the physiological signals of high-intensity emotions in a real-life-like environment. While this solution successfully elicited positive and negative emotions, it did not elicit the desired emotions for all subjects and was therefore not appropriate for the goals of this research. Collecting emotions in the wild appeared to be the best methodology for emotion detection in real-life applications. The state of the art in the field was therefore reviewed and assessed using a specifically designed method for evaluating datasets collected for emotion recognition in real-life applications. The proposed evaluation method provides guidelines for future researchers in the field. Based on the research findings, a mobile application was developed for physiological and emotional data collection in the wild. Grounded in appraisal theory, this application guides users to provide valuable emotion labels and helps them differentiate moods from emotions. A sample dataset collected with this application was compared to one collected in a paper-based preliminary study; the mobile-application dataset proved more valuable, with data consistent with the literature. The mobile application was used to create an open-source database of affect-related physiological signals. While the path toward emotion detection usable in real-life applications is still long, we hope that the tools provided to the research community represent a step toward achieving this goal. Automatic emotion detection could be used not only by LIS patients to communicate but also for total-LIS patients who have lost the ability to move their eyes: giving family and caregivers the ability to visualize, and therefore understand, the patient's emotional state could greatly improve quality of life. This research provided LIS patients and the scientific community with tools to improve augmentative and alternative communication technologies, with better interfaces, emotion expression capabilities and real-life emotion detection. Emotion recognition methods for real-life applications could enhance not only health care but also robotics, domotics and many other fields. A complete, fully gaze-controlled system with all the developed solutions for LIS patients was made available open-source. This is expected to enhance their daily lives by improving their communication and by facilitating the development of novel assistive system capabilities.
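
    As a hedged sketch of the galvanic-skin-response input idea mentioned above, the snippet below fires a binary "select" event when conductance rises sharply above a slowly adapting baseline. The threshold and smoothing constant are placeholders, not values from the thesis.

        class GSRSwitch:
            """Binary input from galvanic skin response: trigger when the
            signal jumps above an exponentially-smoothed baseline."""

            def __init__(self, threshold_us: float = 1.5, alpha: float = 0.01):
                self.baseline = None              # adapts to slow drift
                self.threshold_us = threshold_us  # rise (microsiemens) that triggers
                self.alpha = alpha                # baseline adaptation rate

            def update(self, conductance_us: float) -> bool:
                if self.baseline is None:
                    self.baseline = conductance_us
                    return False
                triggered = (conductance_us - self.baseline) > self.threshold_us
                # Exponential moving average tracks slow drift, not spikes.
                self.baseline += self.alpha * (conductance_us - self.baseline)
                return triggered

        switch = GSRSwitch()
        for sample in [2.0, 2.1, 2.0, 3.8, 2.2]:  # microsiemens
            if switch.update(sample):
                print("select event at", sample)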

    Adaptive architecture: Regulating human building interaction

    In this paper we explore the regulatory, technical and interactional implications of Adaptive Architecture, a novel trend emerging in the built environment. We provide a comprehensive description of the emergence and history of the term, with reference to the current state of the art and the policy foundations supporting it, e.g. smart city initiatives and building regulations. As Adaptive Architecture is underpinned by the Internet of Things (IoT), we are interested in how the regulatory and surveillance issues posed by the IoT manifest in buildings too. To support our analysis, we utilise a prominent concept from architecture, Stewart Brand's Shearing Layers model, which describes the different physical layers of a building and how they relate to temporal change. To ground our analysis, we use three cases of Adaptive Architecture: an IoT device (Nest Smart Cam IQ); an Adaptive Architecture research prototype (ExoBuilding); and a commercial deployment (the Edge). In bringing together Shearing Layers, Adaptive Architecture and the challenges therein, we frame our analysis under five key themes, guided by emerging information privacy and security regulations. We explore the issues Adaptive Architecture needs to face regarding: A – physical and information security; B – establishing responsibility; C – occupant rights over flows, collection, use and control of personal data; D – visibility of emotions and bodies; and E – surveillance of everyday routine activities. We conclude by summarising key challenges for Adaptive Architecture, regulation and the future of human building interaction.

    Adaptive Model for Biofeedback Data Flows Management in the Design of Interactive Immersive Environments

    The interactivity of an immersive environment arises from the relationship established between the user and the system, a relationship that results in a set of data exchanges between human and technological actors. Real-time biofeedback devices make it possible to collect, in real time, the biodata generated by the user during the experience. Analyzing, processing and converting these biodata into multimodal data makes it possible to relate stimuli to the emotions they trigger. This work describes an adaptive model for managing biofeedback data flows, used in the design of interactive immersive systems. An affective algorithm identifies the types of emotions felt by the user and their respective intensities. The mapping between stimuli and emotions creates a set of biodata that can be used as elements of interaction that readjust the stimuli generated by the system. The real-time interaction created by the evolution of the user's emotional state and the stimuli generated by the system allows users to adapt their attitudes and behaviors to the situations they face.
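
    The data flow described above (biodata in, emotion type and intensity out, stimuli readjusted in response) can be sketched as a small loop. The classification rules and stimulus adjustments below are illustrative placeholders, not the authors' affective algorithm.

        from typing import Dict, Tuple

        def classify_emotion(biodata: Dict[str, float]) -> Tuple[str, float]:
            """Affective step: map a window of biodata to (emotion, intensity)."""
            hr, gsr = biodata["heart_rate"], biodata["gsr"]
            if hr > 100 and gsr > 5.0:
                return "fear", min(1.0, (hr - 100) / 40.0)
            if hr < 70 and gsr < 2.0:
                return "calm", 0.5
            return "neutral", 0.2

        def readjust_stimuli(emotion: str, intensity: float) -> str:
            """Close the loop: the system changes its stimuli in response
            to the detected emotional state."""
            if emotion == "fear" and intensity > 0.6:
                return "dim_lights_and_soften_audio"
            if emotion == "calm":
                return "introduce_new_interactive_element"
            return "no_change"

        window = {"heart_rate": 112.0, "gsr": 6.3}
        emotion, intensity = classify_emotion(window)
        print(emotion, intensity, "->", readjust_stimuli(emotion, intensity))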

    A Review on Personalization in Mobile Learning

    Over the last decade, several studies have shown the importance and necessity of using mobile learning in the learning/teaching process. Mobile learning (ML) nowadays gains increasing attention, both technically and pedagogically. This literature review deals with the personalization issue in mobile learning and how agents can be used to help solve it. The main objective of this study is to review recent, up-to-date studies on personalization in mobile learning and to identify gaps in the existing literature. The review process started with a primary search that yielded 200 articles; a checklist was then prepared (aims, research design, framework, and justification of the findings); the 27 most relevant articles were selected according to some general questions; and the subsequent analysis identified several gaps in the existing literature. The results show that most studies concentrate on a single dimension of personalization, such as device capabilities, student level, student preferences, network issues, course subject, device operating system, or location; most also assert that agents are a solution for personalization in mobile learning. There is therefore a need for further investigation into how to deploy agents more effectively to support richer personalization in mobile learning.
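
    To make the personalization dimensions listed above concrete, the sketch below bundles them into a learner profile that an agent could weigh jointly rather than one at a time. The profile fields follow the review's list; the selection rule is a toy assumption.

        from dataclasses import dataclass

        @dataclass
        class LearnerProfile:
            device_capabilities: str  # e.g. "low", "high"
            student_level: str        # e.g. "beginner", "advanced"
            preferences: str          # e.g. "video", "text"
            network: str              # e.g. "3g", "wifi"
            course_subject: str
            operating_system: str
            location: str

        def choose_content_format(p: LearnerProfile) -> str:
            """A toy agent rule that combines several dimensions at once,
            rather than the single dimension most reviewed studies use."""
            if p.network == "3g" or p.device_capabilities == "low":
                return "text"         # cheap to deliver and render
            return p.preferences      # otherwise honor the stated preference

        profile = LearnerProfile("high", "beginner", "video", "wifi",
                                 "math", "android", "campus")
        print(choose_content_format(profile))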