187 research outputs found

    Preliminary Test of Affective Virtual Reality Scenes with Head Mount Display for Emotion Elicitation Experiment

    Emotion elicitation experiments are conducted to collect biological signals from a subject who is in a state of emotion. The recorded signals are used as training/test datasets for constructing an emotion recognition system by means of machine learning. Conventional emotion elicitation experiments present affective images or videos to draw out an emotion from the subject, but the authors have concerns about their effectiveness. To reliably evoke a specific emotion, we produced several Virtual Reality (VR) scenes and presented them to subjects through a Head Mount Display (HMD) in emotion elicitation experiments. The usability and effectiveness of the VR scenes with the HMD for emotion elicitation were experimentally verified. Experiencing the VR scenes through the HMD was confirmed to be effective in evoking emotions, although how subjects learn to operate the VR scenes must be improved and countermeasures against VR sickness are needed. Moreover, Support Vector Machine classifiers were constructed as an emotion recognition system using the biological signals measured from the subjects in these experiments. 2017 17th International Conference on Control, Automation and Systems (ICCAS 2017), Oct. 18-21, 2017, Ramada Plaza, Jeju, Korea
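The last step described above, training Support Vector Machine classifiers on biological signals recorded during elicitation, can be sketched as follows. This is a minimal illustration: the simulated signals, the two emotion labels, and the window statistics are assumptions for the example, not the authors' protocol.

```python
# Hedged sketch: SVM emotion classification from windowed biosignal
# features. Data and feature choices are illustrative only.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def window_features(sig):
    # Simple per-window statistics commonly used for biosignals.
    return [sig.mean(), sig.std(), np.sqrt(np.mean(sig ** 2))]

# Simulated windows for two elicited states: "calm" (low level, low
# variance) vs "fear" (higher level, higher variance).
calm = [window_features(rng.normal(0.2, 0.05, 256)) for _ in range(40)]
fear = [window_features(rng.normal(0.8, 0.20, 256)) for _ in range(40)]
X = np.array(calm + fear)
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X, y)
train_acc = clf.score(X, y)
```

In practice the labels would come from the elicited VR scene and the features from the recorded biosignals; a held-out test split would replace the training-set score used here.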

    Novel Muscle Monitoring by Radiomyography(RMG) and Application to Hand Gesture Recognition

    Conventional electromyography (EMG) measures the continuous neural activity during muscle contraction but lacks explicit quantification of the actual contraction. Mechanomyography (MMG) and accelerometers only measure body surface motion, while ultrasound, CT scans, and MRI are restricted to in-clinic snapshots. Here we propose radiomyography (RMG), a novel wearable and touchless modality for continuous muscle actuation sensing that captures both superficial and deep muscle groups. We verified RMG experimentally with a forearm wearable sensor for detailed hand gesture recognition. We first converted the radio sensing outputs to time-frequency spectrograms and then employed a vision transformer (ViT) deep learning network as the classification model, which recognizes 23 gestures with an average accuracy of up to 99% on 8 subjects. Through transfer learning, high adaptivity to user differences and sensor variation was achieved, with an average accuracy of up to 97%. We further demonstrated RMG on eye and leg muscles and achieved high accuracy in tracking eye movement and body postures. RMG can be used with synchronous EMG to derive stimulation-actuation waveforms for many future applications in kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
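The preprocessing stage described above, converting a raw sensing channel into a time-frequency spectrogram suitable for a vision model, can be sketched as below. The chirp signal and sampling rate are stand-in assumptions, not real RMG data, and the ViT classifier itself is omitted.

```python
# Sketch: raw 1-D signal -> time-frequency spectrogram (image-like array).
import numpy as np
from scipy.signal import spectrogram

fs = 1000  # assumed sampling rate, Hz
t = np.arange(0, 2.0, 1 / fs)
# Stand-in for a muscle actuation signature: frequency rises over time.
x = np.sin(2 * np.pi * (20 + 30 * t) * t)

f, seg_t, Sxx = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
# Sxx has shape (freq_bins, time_frames); after resizing/patching, such
# an array is the kind of input a vision transformer consumes.
```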

    Affective reactions towards socially interactive agents and their computational modeling

    Over the past 30 years, researchers have studied human reactions towards machines applying the Computers Are Social Actors paradigm, which contrasts reactions towards computers with reactions towards humans. The last 30 years have also seen improvements in technology that have led to tremendous changes in computer interfaces and the development of Socially Interactive Agents. This raises the question of how humans react to Socially Interactive Agents. To answer this question, knowledge from several disciplines is required, which is why this interdisciplinary dissertation is positioned within psychology and computer science. It aims to investigate affective reactions to Socially Interactive Agents and how these can be modeled computationally. After a general introduction and background, this thesis therefore first provides an overview of the Socially Interactive Agent system used in this work. Second, it presents a study comparing a human and a virtual job interviewer, which shows that both interviewers induce shame in participants to the same extent. Third, it reports on a study investigating obedience towards Socially Interactive Agents. The results indicate that participants obey human and virtual instructors in similar ways. Furthermore, both types of instructors evoke feelings of stress and shame to the same extent. Fourth, a stress management training using biofeedback with a Socially Interactive Agent is presented. The study shows that a virtual trainer can teach coping techniques for emotionally challenging social situations. Fifth, it introduces MARSSI, a computational model of user affect. The evaluation of the model shows that it is possible to relate sequences of social signals to affective reactions, taking into account emotion regulation processes. Finally, the Deep method is proposed as a starting point for deeper computational modeling of internal emotions.
    The method combines social signals, verbalized introspection information, context information, and theory-driven knowledge. An exemplary application to the emotion shame and a schematic dynamic Bayesian network for its modeling are illustrated. Overall, this thesis provides evidence that human reactions towards Socially Interactive Agents are very similar to those towards humans, and that it is possible to model these reactions computationally.
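The modeling idea above, relating a sequence of observed social signals to a latent affective state over time, can be sketched as forward filtering in a two-state hidden Markov model, a minimal special case of the dynamic Bayesian networks mentioned. The states, the gaze-aversion signal, and all probabilities below are invented for illustration; MARSSI itself is richer than this.

```python
# Minimal sketch: filter a latent affect state from a social-signal
# sequence. All numbers are illustrative assumptions.
import numpy as np

states = ["neutral", "shame"]
T = np.array([[0.9, 0.1],   # transitions from "neutral"
              [0.3, 0.7]])  # "shame" is somewhat sticky
p_avert = np.array([0.2, 0.8])  # P(gaze aversion | state)

def filter_affect(observations, prior=np.array([0.95, 0.05])):
    """Return P(state | signals so far) after each observation."""
    belief = prior.copy()
    trace = []
    for obs in observations:  # obs: 1 = gaze aversion, 0 = eye contact
        belief = belief @ T                      # predict next state
        like = p_avert if obs else 1 - p_avert   # weigh by observation
        belief = belief * like
        belief /= belief.sum()                   # renormalize
        trace.append(belief.copy())
    return np.array(trace)

# Repeated gaze aversion gradually shifts belief towards "shame".
trace = filter_affect([0, 1, 1, 1])
```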

    Blind Source Separation for the Processing of Contact-Less Biosignals

    (Spatio-temporal) Blind Source Separation (BSS) provides a large potential to process distorted multichannel biosignal measurements in the context of novel contact-less recording techniques, separating distortions from the cardiac signal of interest. This potential can only be practically utilized (1) if a BSS model is applied that matches the complexity of the measurement, i.e. the signal mixture, and (2) if permutation indeterminacy is solved among the BSS output components, i.e. the component of interest can be practically selected. The present work first designs a framework to assess the efficacy of BSS algorithms in the context of the camera-based photoplethysmogram (cbPPG) and characterizes multiple BSS algorithms accordingly. Algorithm selection recommendations for certain mixture characteristics are derived. Second, it develops and evaluates concepts to solve permutation indeterminacy for BSS outputs of contact-less electrocardiogram (ECG) recordings. The novel approach based on sparse coding is shown to outperform the existing concepts of higher-order moments and frequency-domain features.
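The two requirements above can be illustrated in a few lines: (1) apply a BSS model (FastICA here) to a simulated multichannel mixture, and (2) resolve the permutation indeterminacy by selecting the output component whose spectrum concentrates in the cardiac band. The band-power selector shown is the simple frequency-domain baseline; the thesis's sparse-coding selector is more elaborate, and the signals below are synthetic stand-ins.

```python
# Sketch: BSS on a synthetic cardiac + motion-artifact mixture, then
# component selection by cardiac-band power. Illustrative only.
import numpy as np
from sklearn.decomposition import FastICA

fs = 100
t = np.arange(0, 10, 1 / fs)
cardiac = np.sin(2 * np.pi * 1.2 * t)           # ~72 bpm pulse stand-in
motion = np.sign(np.sin(2 * np.pi * 0.3 * t))   # slow motion artifact

rng = np.random.default_rng(1)
A = rng.uniform(0.5, 1.5, (3, 2))               # unknown mixing matrix
X = (A @ np.vstack([cardiac, motion])).T        # 3 observed channels

S = FastICA(n_components=2, random_state=0).fit_transform(X)

def band_power(sig, lo, hi):
    # Fraction of spectral energy inside [lo, hi] Hz.
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    return spec[(freqs >= lo) & (freqs <= hi)].sum() / spec.sum()

# Permutation resolution: the component dominated by 0.7-3 Hz energy is
# taken as the cardiac signal of interest.
cardiac_idx = int(np.argmax([band_power(S[:, i], 0.7, 3.0)
                             for i in range(2)]))
```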

    Stress Reduction Using Bilateral Stimulation in Virtual Reality

    The goal of this research is to integrate Virtual Reality (VR) with the bilateral stimulation used in EMDR as a tool to relieve stress. We created a 15-minute relaxation training program for adults set in a relaxing virtual environment, a walk in the woods. The target platform is HTC Vive, but the tool can easily be ported to other VR platforms. An integral part of the tool is a set of sensors that provides physiological measures for evaluating the system's effectiveness. Moreover, the system integrates visual (a passing sphere), auditory (surround sound), and tactile (controller vibration) signals. A pilot treatment programme incorporating this VR system was carried out. The experimental group, consisting of 28 healthy adult volunteers (office workers), participated in three relaxation training sessions. Before each session, baseline measures such as subjectively perceived stress, mood, heart rate, galvanic skin response, and muscle response were recorded. Monitoring of the physiological indicators continued during the training session and for one minute after its completion. Before and after each session, volunteers filled in questionnaires on their current stress level and mood. The results were analyzed in terms of variability over time: before, during, and after the session.
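The before/during/after comparison described above can be sketched as follows. The heart-rate values are simulated under the assumption that relaxation lowers the mean; they are not the study's data.

```python
# Sketch: compare one physiological indicator across session phases.
# Simulated values; phase durations mirror the protocol loosely.
import numpy as np

rng = np.random.default_rng(2)
phases = {
    "before": rng.normal(78, 3, 60),    # baseline, ~1 sample/s
    "during": rng.normal(70, 3, 900),   # 15-minute session
    "after": rng.normal(68, 3, 60),     # 1 minute post-session
}
means = {phase: float(hr.mean()) for phase, hr in phases.items()}
# Relative change from baseline to post-session, in percent.
drop = 100 * (means["before"] - means["after"]) / means["before"]
```

The same pattern (per-phase summary statistics, then change relative to baseline) applies to the skin-conductance and muscle-response channels.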

    Shape Memory Actuators for Medical Rehabilitation and Neuroscience


    Psychophysiological analysis of a pedagogical agent and robotic peer for individuals with autism spectrum disorders.

    Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by ongoing problems in social interaction and communication, and engagement in repetitive behaviors. According to the Centers for Disease Control and Prevention, an estimated 1 in 68 children in the United States has ASD. Mounting evidence shows that many of these individuals display an interest in social interaction with computers and robots and, in general, feel comfortable spending time in such environments. It is known that the subtlety and unpredictability of people’s social behavior are intimidating and confusing for many individuals with ASD. Computerized learning environments and robots, however, provide a predictable, dependable, and less complicated environment, where the interaction complexity can be adjusted to account for these individuals’ needs. The first phase of this dissertation presents an artificial-intelligence-based tutoring system which uses an interactive computer character as a pedagogical agent (PA) that simulates a human tutor teaching sight word reading to individuals with ASD. This phase examines the efficacy of an instructional package comprised of an autonomous pedagogical agent, automatic speech recognition, and an evidence-based instructional procedure referred to as constant time delay (CTD). A concurrent multiple-baseline across-participants design is used to evaluate the efficacy of intervention. Additionally, post-treatment probes are conducted to assess maintenance and generalization. The results suggest that all three participants acquired and maintained new sight words and demonstrated generalized responding. The second phase of this dissertation describes the augmentation of the tutoring system developed in the first phase with an autonomous humanoid robot. In this tutoring paradigm, the robot adopts a peer metaphor, serving the instructional role of a fellow learner rather than a tutor.
    With the introduction of the robotic peer (RP), the traditional dyadic interaction in tutoring systems is augmented to a novel triadic interaction in order to enhance the social richness of the tutoring system and to facilitate learning through peer observation. This phase evaluates the feasibility and effects of using PA-delivered sight word instruction, based on a CTD procedure, within a small-group arrangement including a student with ASD and the robotic peer. A multiple-probe design across word sets, replicated across three participants, is used to evaluate the efficacy of intervention. The findings illustrate that all three participants acquired, maintained, and generalized all the words targeted for instruction. Furthermore, they learned a high percentage (94.44% on average) of the non-target words exclusively instructed to the RP. The data show that not only did the participants learn non-targeted words by observing the instruction to the RP, but they also acquired their target words more efficiently and with fewer errors through the addition of an observational component to the direct instruction. The third and fourth phases of this dissertation focus on physiology-based modeling of the participants’ affective experiences during naturalistic interaction with the developed tutoring system. While computers and robots have begun to co-exist with humans and cooperatively share various tasks, they are still deficient in interpreting and responding to humans as emotional beings. Wearable biosensors that can be used for computerized emotion recognition offer great potential for addressing this issue. The third phase presents a Bluetooth-enabled eyewear – EmotiGO – for unobtrusive acquisition of a set of physiological signals, i.e., skin conductivity, photoplethysmography, and skin temperature, which can be used as autonomic readouts of emotions.
    EmotiGO is unobtrusive and sufficiently lightweight to be worn comfortably without interfering with the users’ usual activities. This phase presents the architecture of the device and results from testing that verify its effectiveness against an FDA-approved system for physiological measurement. The fourth and final phase attempts to model the students’ engagement levels using their physiological signals collected with EmotiGO during naturalistic interaction with the tutoring system developed in the second phase. Several physiological indices are extracted from each of the signals. The students’ engagement levels during the interaction with the tutoring system are rated by two trained coders using the video recordings of the instructional sessions. Supervised pattern recognition algorithms are subsequently used to map the physiological indices to the engagement scores. The results indicate that the trained models are successful at classifying participants’ engagement levels with a mean classification accuracy of 86.50%. These models are an important step toward an intelligent tutoring system that can dynamically adapt its pedagogical strategies to the affective needs of learners with ASD.
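The fourth phase's pipeline, mapping per-session physiological indices to coder-rated engagement labels with a supervised classifier, can be sketched as below. The two features, the binary engagement labels, and the classifier choice are stand-in assumptions, not the study's actual feature set or model.

```python
# Sketch: supervised mapping from physiological indices to engagement
# ratings. Features, labels, and classifier are illustrative.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 120
# Two assumed indices per session window: mean skin conductance (uS)
# and a heart-rate-variability index (ms), per engagement level.
engaged = np.c_[rng.normal(6, 0.5, n), rng.normal(55, 5, n)]
disengaged = np.c_[rng.normal(3, 0.5, n), rng.normal(30, 5, n)]
X = np.vstack([engaged, disengaged])
y = np.r_[np.ones(n), np.zeros(n)]  # coder-rated labels (simulated)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
test_acc = clf.score(X_te, y_te)
```

With real data, subject-wise (rather than random) splits would be needed to estimate how the model generalizes to new students.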

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features allowing them to be used in everyday life and clinical applications, where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Roadmap on signal processing for next generation measurement systems

    Signal processing is a fundamental component of almost any sensor-enabled system, with a wide range of applications across different scientific disciplines. Time series data, images, and video sequences comprise representative forms of signals that can be enhanced and analysed for information extraction and quantification. The recent advances in artificial intelligence and machine learning are shifting the research attention towards intelligent, data-driven signal processing. This roadmap presents a critical overview of the state-of-the-art methods and applications, aiming to highlight future challenges and research opportunities towards next generation measurement systems. It covers a broad spectrum of topics ranging from basic to industrial research, organized in concise thematic sections that reflect the trends and the impacts of current and future developments per research field. Furthermore, it offers guidance to researchers and funding agencies in identifying new prospects.

    24th Bilateral Student Workshop CTU Prague and HTW Dresden - User Interfaces & Visualization

    This technical report publishes the proceedings of the 24th Bilateral Student Workshop CTU Prague and HTW Dresden -User Interfaces & Visualization-, which was held on 26th November 2021. The workshop offers young scientists a possibility to present their current research work in the fields of computer graphics, human-computer interaction, robotics, and usability. It is meant as a platform to bring together researchers from the Czech Technical University in Prague (CTU) and the University of Applied Sciences Dresden (HTW). The German Academic Exchange Service offered financial support to enable the bilateral exchange of student participants between Prague and Dresden.
    1) Robot assisted reminiscence therapy for people with dementia, p.4
    2) VENT-CONECT: System for remote monitoring of instruments used in intensive care, p.12
    3) Conversational assistant for smart home, p.17
    4) Perspectives and challenges of the research project ”SYNC ID”, p.23
    5) Music-based emotional biofeedback: the state of the art and challenges, p.26
    6) Ambient Assisted Living Lab - Smart Systems and CoCreation, p.30
    7) Board Game Playing and Consuming Beverages in VR, p.36
    8) An approach to measure and increase the level of participation of people with dementia in cognitive games, p.41
    9) Forced perspective illusions and scaling users in VR - state of the art, p.47
    10) Training Deep Learning Models for Punctuation Prediction, p.51
    11) Towards an Evaluation of Ambiguity in Point-Feature Labelling, p.56
    12) The ReZA method goes digital, p.60
    13) Haptic interface for spatial audio web player, p.6