
    Designing interactive virtual environments with feedback in health applications.

    One of the most important factors influencing user experience in human-computer interaction is the user's emotional reaction. Interactive environments, including serious games, that are responsive to user emotions improve their effectiveness and user satisfaction. Testing and training user emotional competence is meaningful in the healthcare field, which motivated us to analyze immersive affective games that use emotional feedback. In this dissertation, a systematic model for designing interactive environments is presented, consisting of three essential modules: affect modeling, affect recognition, and affect control. To collect data for analysis and to construct these modules, a series of experiments was conducted using virtual reality (VR) to evoke users' emotional reactions and to monitor those reactions through physiological data. The analysis results led to a novel framework for designing affective games in virtual reality, covering the interaction mechanism, a graph-based structure, and user modeling. The Oculus Rift was used in the experiments to provide immersive virtual reality with affective scenarios, and a sample application was implemented as a cross-platform VR physical-training serious game for elderly people to demonstrate the essential parts of the framework. Measurements of playability and effectiveness are discussed. The introduced framework is intended as a guiding principle for designing affective VR serious games. Possible healthcare applications include emotion competence training, educational software, and therapy methods.
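
    The dissertation describes the three modules only at the architectural level; as a rough illustration of how they could fit together in a game loop, the following Python sketch uses hypothetical names (classify_affect, adjust_game, sensor.read) that are not part of the original work.

```python
# Minimal sketch of an affect-adaptive game loop, assuming the three modules
# described above (affect modeling, affect recognition, affect control).
# All function names and thresholds are hypothetical illustrations.

def classify_affect(physio_sample):
    """Affect recognition: map a physiological sample to a coarse affect label."""
    # Placeholder rule; a real system would use a trained model on physiological data.
    if physio_sample["heart_rate"] > 100:
        return "stressed"
    if physio_sample["skin_conductance"] < 2.0:
        return "bored"
    return "engaged"

def adjust_game(state, affect):
    """Affect control: adapt game parameters to steer the user's emotional state."""
    if affect == "stressed":
        state["difficulty"] = max(1, state["difficulty"] - 1)
    elif affect == "bored":
        state["difficulty"] += 1
    return state

def game_loop(sensor, state, steps=1000):
    """Run the feedback loop: sense, recognize affect, adapt the game."""
    for _ in range(steps):
        sample = sensor.read()              # e.g. heart rate, skin conductance from a wearable
        affect = classify_affect(sample)    # affect recognition module
        state = adjust_game(state, affect)  # affect control module
    return state
```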

    Xylo-Bot: A Therapeutic Robot-Based Music Platform for Children with Autism

    Children with Autism Spectrum Disorder (ASD) experience deficits in verbal and nonverbal communication skills, including motor control, emotional facial expressions, and eye gaze / joint attention. This Ph.D. dissertation focuses on studying the feasibility and effectiveness of using a social robot, called NAO, and a toy musical instrument, a xylophone, in modeling and improving the social responses and behaviors of children with ASD. In our investigation, we designed an autonomous, socially interactive music teaching system to fulfill this mission. A novel modular robot-music teaching system consisting of three modules is presented. Module 1 provides an autonomous self-positioning system that lets the robot localize the instrument and make micro adjustments to its arm joints so it can strike the note bars properly. Module 2 allows the robot to play any customized song at the user's request. This design makes it possible to translate songs into C major or A minor using a set of hexadecimal numbers, without requiring musical experience; once the music score is converted, the robot can play it immediately. Module 3 is designed to provide a real-life music teaching experience for users. Two key features of this module are a) music detection and b) smart scoring and feedback. The short-time Fourier transform and the Levenshtein distance are adopted to fulfill the design requirements, allowing the robot to understand the music and provide an appropriate dosage of practice and oral feedback to users. A new instrument was designed to convey musical emotion more effectively, overcoming the limitations of the original xylophone: this programmable xylophone offers a wider frequency range of notes, switches easily between major and minor keys, and is easy to control, making it an engaging, advanced musical instrument. Because our initial intention was to study emotion in children with autism, an automated method for emotion classification in children using electrodermal activity (EDA) signals was also developed. The time-frequency analysis of the acquired raw EDA provides a feature space from which different emotions can be recognized. To this end, the complex Morlet (C-Morlet) wavelet function is applied to the recorded EDA signals. The dataset used in this research includes a set of multimodal recordings of social and communicative behavior, as well as EDA recordings, of 100 children younger than 30 months old. The dataset was annotated by two experts to extract the time sequences corresponding to three primary emotions: "Joy", "Boredom", and "Acceptance". Various experiments were conducted on the annotated EDA signals to classify emotions using a support vector machine (SVM) classifier. The quantitative results show that emotion classification performance improves markedly compared to other methods when the proposed wavelet-based features are used. Using this emotion classification, emotional engagement during sessions and responses to different pieces of music can be detected after data analysis. The NAO music education platform can be regarded as a useful tool for improving fine motor control, turn-taking skills, and engagement in social activities. Most of the children with ASD began to develop the striking movement within the first two intervention sessions; some even mastered the motor skill during these early sessions. More than half of the subjects demonstrated proper turn-taking after a few sessions. Music teaching is a good example of accomplishing social-skill tasks by taking advantage of customized songs selected by individuals. According to the researcher and the video annotator, the majority of subjects showed a high level of engagement in all music game activities, especially in free-play mode. Based on their conversations and music performances with NAO, the subjects showed a strong interest in challenging the robot in a friendly way.
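
    The abstract names the short-time Fourier transform for note detection and the Levenshtein distance for smart scoring; the sketch below illustrates only the scoring half, comparing a detected note sequence against a target melody. The note labels and the 0-100 scoring scale are illustrative assumptions, not the system's actual implementation.

```python
# Hedged sketch: score a played note sequence against a target melody using
# Levenshtein (edit) distance, as described for Module 3. The note labels and
# the 0-100 score mapping are illustrative assumptions.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n]

def performance_score(played_notes, target_notes):
    """Map edit distance to a 0-100 score used to dose practice and feedback."""
    dist = levenshtein(played_notes, target_notes)
    return max(0.0, 100.0 * (1.0 - dist / max(len(target_notes), 1)))

# Usage: notes detected via STFT-based pitch tracking would feed `played_notes`.
print(performance_score(["C4", "E4", "G4", "C5"], ["C4", "E4", "G4", "G4", "C5"]))
```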

    Multimodal Data Analysis of Dyadic Interactions for an Automated Feedback System Supporting Parent Implementation of Pivotal Response Treatment

    Parents fulfill a pivotal role in early childhood development of social and communication skills. In children with autism, the development of these skills can be delayed. Applied behavior analysis (ABA) techniques have been created to aid in skill acquisition. Among these, pivotal response treatment (PRT) has been empirically shown to foster improvements. Research into PRT implementation has also shown that parents can be trained to be effective interventionists for their children. The current difficulty in PRT training is how to disseminate training to parents who need it, and how to support and motivate practitioners after training. Evaluation of the parents' fidelity of implementation is often undertaken using video probes that depict the dyadic interaction occurring between the parent and the child during PRT sessions. These videos are time-consuming for clinicians to process and often result in only minimal feedback for the parents. Current trends in technology could be utilized to alleviate the manual cost of extracting data from the videos, affording greater opportunities for providing clinician-created feedback as well as automated assessments. The naturalistic context of the video probes, along with the dependence on ubiquitous recording devices, creates a difficult scenario for classification tasks. The domain of the PRT video probes can be expected to have high levels of both aleatory and epistemic uncertainty. Addressing these challenges requires examination of the multimodal data along with implementation and evaluation of classification algorithms. This is explored through the use of a new dataset of PRT videos. The relationship between the parent and the clinician is important: the clinician can provide support and help build self-efficacy in addition to providing knowledge and modeling of treatment procedures. Facilitating this relationship along with automated feedback not only provides the opportunity to present expert feedback to the parent, but also allows the clinician to aid in personalizing the classification models. By utilizing a human-in-the-loop framework, clinicians can help address the uncertainty in the classification models by providing additional labeled samples. This allows the system to improve classification and provides a person-centered approach to extracting multimodal data from PRT video probes.
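
    As a rough illustration of the human-in-the-loop idea described above, the sketch below routes low-confidence predictions to a clinician for labeling and retrains the classifier with those labels. The confidence threshold, choice of classifier, and the query_clinician callback are assumptions made for illustration, not the dissertation's actual pipeline.

```python
# Minimal human-in-the-loop sketch: samples the model is uncertain about are
# sent to a clinician for labeling, then added to the training set. The
# classifier, threshold, and `query_clinician` callback are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

def human_in_the_loop_round(model, X_train, y_train, X_new, query_clinician,
                            threshold=0.7):
    """One round: flag low-confidence samples, get clinician labels, retrain."""
    proba = model.predict_proba(X_new)
    confidence = proba.max(axis=1)
    uncertain = np.where(confidence < threshold)[0]
    if len(uncertain) == 0:
        return model, X_train, y_train
    # The clinician supplies labels only for the uncertain samples.
    new_labels = np.array([query_clinician(X_new[i]) for i in uncertain])
    X_train = np.vstack([X_train, X_new[uncertain]])
    y_train = np.concatenate([y_train, new_labels])
    model.fit(X_train, y_train)  # personalize the model with clinician input
    return model, X_train, y_train

# Usage sketch:
# model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
# model, X_train, y_train = human_in_the_loop_round(
#     model, X_train, y_train, X_probe_features, query_clinician=lambda x: 1)
```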

    Automated screening methods for mental and neuro-developmental disorders

    Mental and neuro-developmental disorders such as depression, bipolar disorder, and autism spectrum disorder (ASD) are critical healthcare issues which affect a large number of people. Depression, according to the World Health Organisation, is the largest cause of disability worldwide and affects more than 300 million people. Bipolar disorder affects more than 60 million individuals worldwide. ASD, meanwhile, affects more than 1 in 100 people in the UK. Not only do these disorders adversely affect the quality of life of affected individuals, they also have a significant economic impact. While brute-force approaches are potentially useful for learning new features which could be representative of these disorders, such approaches may not be best suited for developing robust screening methods. This is due to a myriad of confounding factors, such as age, gender, cultural background, and socio-economic status, which can affect the social signals of individuals in a similar way to the symptoms of these disorders. Brute-force approaches may learn to exploit the effects of these confounding factors on social signals in place of effects due to mental and neuro-developmental disorders. The main objective of this thesis is to develop, investigate, and propose computational methods to screen for mental and neuro-developmental disorders in accordance with descriptions given in the Diagnostic and Statistical Manual (DSM). The DSM is a guidebook published by the American Psychiatric Association which offers a common language on mental disorders. Our motivation is to alleviate, to an extent, the possibility of machine learning algorithms picking up one of the confounding factors to optimise performance for the dataset, something which we do not find uncommon in the research literature. To this end, we introduce three new methods for automated screening for depression from audio/visual recordings, namely: turbulence features, craniofacial movement features, and a Fisher Vector based representation of speech spectra. We surmise that psychomotor changes due to depression lead to uniqueness in an individual's speech pattern, which manifests as sudden and erratic changes in speech feature contours. The efficacy of these features is demonstrated as part of our solution to the Audio/Visual Emotion Challenge 2017 (AVEC 2017) depression severity prediction task. We also detail a methodology to quantify specific craniofacial movements, which we hypothesised could be indicative of psychomotor retardation, and hence depression. The efficacy of craniofacial movement features is demonstrated using datasets from the 2014 and 2017 editions of the AVEC depression severity prediction challenges. Finally, using the dataset provided as part of the AVEC 2016 depression classification challenge, we demonstrate that differences between the speech of individuals with and without depression can be quantified effectively using the Fisher Vector representation of speech spectra. For our work on automated screening of bipolar disorder, we propose methods to classify individuals with bipolar disorder into states of remission, hypo-mania, and mania. Here, we surmise that, as with depression, individuals with different levels of mania have a certain uniqueness to their social signals. Based on this understanding, we propose the use of turbulence features for audio/visual social signals (i.e. speech and facial expressions). We also propose the use of Fisher Vectors to create a unified representation of speech in terms of prosody, voice quality, and speech spectra. These methods were proposed as part of our solution to the AVEC 2018 bipolar disorder challenge. In addition, we find that the task of automated screening for ASD is much more complicated. Here, confounding factors can easily overwhelm the social signals which are affected by ASD. We discuss, in light of the research literature and our experimental analysis, that significant collaborative work is required between computer scientists and clinicians to discern social signals which are robust to common confounding factors.
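
    Fisher Vector encoding of frame-level speech features is one of the methods named above; a minimal sketch under the standard diagonal-covariance GMM formulation (gradients with respect to means and variances, followed by power and L2 normalisation) is shown below. The number of mixture components and the feature dimensionality are illustrative choices, not the thesis's exact configuration.

```python
# Minimal Fisher Vector sketch for frame-level speech features (e.g. spectral
# frames), using a diagonal-covariance GMM. Fit the GMM with
# covariance_type="diag"; component count and dimensions are illustrative.
import numpy as np
from sklearn.mixture import GaussianMixture

def fisher_vector(frames, gmm):
    """Encode a (T, D) matrix of frames as a 2*K*D Fisher Vector."""
    T, _ = frames.shape
    gamma = gmm.predict_proba(frames)                    # (T, K) posteriors
    mu, var, w = gmm.means_, gmm.covariances_, gmm.weights_
    diff = (frames[:, None, :] - mu[None, :, :]) / np.sqrt(var)[None, :, :]
    # Gradients w.r.t. component means and variances, weight-normalised.
    g_mu = (gamma[:, :, None] * diff).sum(0) / (T * np.sqrt(w)[:, None])
    g_var = (gamma[:, :, None] * (diff ** 2 - 1)).sum(0) / (T * np.sqrt(2 * w)[:, None])
    fv = np.concatenate([g_mu.ravel(), g_var.ravel()])
    fv = np.sign(fv) * np.sqrt(np.abs(fv))               # power normalisation
    return fv / (np.linalg.norm(fv) + 1e-12)             # L2 normalisation

# Usage sketch: fit the GMM on pooled training frames, then encode each recording.
# gmm = GaussianMixture(n_components=16, covariance_type="diag").fit(all_frames)
# fv = fisher_vector(recording_frames, gmm)
```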

    Machine Learning Methods for Structural Brain MRIs: Applications for Alzheimer's Disease and Autism Spectrum Disorder

    This thesis deals with the development of novel machine learning applications to automatically detect brain disorders based on magnetic resonance imaging (MRI) data, with a particular focus on Alzheimer's disease and autism spectrum disorder. Machine learning approaches are used extensively in neuroimaging studies of brain disorders to investigate abnormalities in various brain regions. However, there are many technical challenges in the analysis of neuroimaging data, for example high dimensionality, the limited amount of data, and high variance in that data due to many confounding factors. These limitations make the development of appropriate computational approaches more challenging. To deal with these challenges, we target multiple machine learning approaches, including supervised and semi-supervised learning, domain adaptation, and dimensionality reduction methods. In the current study, we aim to construct effective biomarkers with sufficient sensitivity and specificity that can help physicians better understand the diseases and make improved diagnoses or treatment choices. The main contributions are 1) the development of a novel biomarker for predicting Alzheimer's disease in mild cognitive impairment patients by integrating structural MRI data and neuropsychological test results, and 2) the development of a new computational approach for predicting disease severity in autistic patients from agglomerative data by automatically combining structural information obtained from different brain regions. In addition, we investigate various data-driven feature selection and classification methods for whole-brain, voxel-based classification analysis of structural MRI, and the use of semi-supervised learning approaches to predict Alzheimer's disease. We also analyze the relationship between disease-related structural changes and the cognitive states of patients with Alzheimer's disease. The positive results of this effort provide insights into how to construct better biomarkers based on multisource data analysis of patient and healthy cohorts, which may enable early diagnosis of brain disorders, detection of brain abnormalities, and understanding of effective processing in patient and healthy groups. Further, the methodologies and basic principles presented in this thesis are not only suited to the studied cases but are also applicable to other similar problems.
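
    Whole-brain, voxel-based classification of structural MRI typically pairs a feature selection step with a linear classifier to cope with the high dimensionality described above. The sketch below uses ANOVA F-test selection and a linear SVM with cross-validated evaluation; these specific choices are assumptions for illustration, not the thesis's exact pipeline.

```python
# Minimal sketch of whole-brain voxel-based classification: univariate feature
# selection followed by a linear SVM, evaluated with cross-validation. The
# selector, classifier, and fold count are illustrative choices.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

def evaluate_voxel_classifier(X, y, n_voxels=2000, folds=5):
    """X: (subjects, voxels) matrix, e.g. grey-matter densities; y: diagnosis labels."""
    pipe = Pipeline([
        ("scale", StandardScaler()),
        ("select", SelectKBest(f_classif, k=min(n_voxels, X.shape[1]))),
        ("svm", LinearSVC(C=1.0, max_iter=10000)),
    ])
    cv = StratifiedKFold(n_splits=folds, shuffle=True, random_state=0)
    return cross_val_score(pipe, X, y, cv=cv, scoring="accuracy")

# Usage sketch with synthetic data:
# X = np.random.randn(80, 50000); y = np.random.randint(0, 2, 80)
# print(evaluate_voxel_classifier(X, y).mean())
```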

    Emotion recognition from syllabic units using k-nearest-neighbor classification and energy distribution

    In this article, we present an automatic technique for recognizing emotional states from speech signals. The main focus of this paper is to present an efficient and reduced set of acoustic features that allows us to recognize the four basic human emotions (anger, sadness, joy, and neutral). The proposed feature vector is composed of twenty-eight measurements corresponding to standard acoustic features such as formants and fundamental frequency (obtained with the Praat software), as well as new features based on the calculation of the energies in specific frequency bands and their distributions (computed with MATLAB code). The measurements are extracted from consonant/vowel (CV) syllabic units derived from the Moroccan Arabic dialect emotional database (MADED) corpus. The collected data is then used to train a k-nearest-neighbor (KNN) classifier for the automated recognition phase. The results reach 64.65% in the multi-class classification and 94.95% for classification between positive and negative emotions.
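
    A minimal sketch of the recognition stage described above, training a KNN classifier on 28-dimensional acoustic feature vectors: the value of k, the train/test split, and the feature scaling step are illustrative assumptions, and the Praat/MATLAB feature extraction itself is not reproduced here.

```python
# Minimal sketch of the recognition stage: a KNN classifier trained on
# 28-dimensional acoustic feature vectors (formants, F0, band energies).
# The value of k, the split ratio, and the scaling step are assumptions.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_emotion_knn(X, y, k=5):
    """X: (n_units, 28) acoustic features per CV syllabic unit; y: emotion labels."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2,
                                              stratify=y, random_state=0)
    model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=k))
    model.fit(X_tr, y_tr)
    return model, accuracy_score(y_te, model.predict(X_te))

# Usage sketch with synthetic data standing in for MADED features:
# X = np.random.randn(400, 28)
# y = np.random.choice(["anger", "sadness", "joy", "neutral"], size=400)
# model, acc = train_emotion_knn(X, y)
```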

    Ubiquitous Technologies for Emotion Recognition

    Emotions play a very important role in how we think and behave. As such, the emotions we feel every day can compel us to act and influence the decisions and plans we make about our lives. Being able to measure, analyze, and better comprehend how or why our emotions may change is thus of much relevance for understanding human behavior and its consequences. Despite the great efforts made in the past in the study of human emotions, it is only now, with the advent of wearable, mobile, and ubiquitous technologies, that we can aim to sense and recognize emotions continuously and in real time. This book brings together the latest experiences, findings, and developments regarding ubiquitous sensing, modeling, and the recognition of human emotions.

    Models and Analysis of Vocal Emissions for Biomedical Applications

    The International Workshop on Models and Analysis of Vocal Emissions for Biomedical Applications (MAVEBA) came into being in 1999 from the keenly felt need to share know-how, objectives, and results among areas that until then had seemed quite distinct, such as bioengineering, medicine, and singing. MAVEBA deals with all aspects of the study of the human voice, with applications ranging from the newborn to the adult and elderly. Over the years, the initial topics have grown and spread into other fields of research, such as occupational voice disorders, neurology, rehabilitation, and image and video analysis. MAVEBA takes place every two years in Firenze, Italy. This edition celebrates twenty-two years of uninterrupted and successful research in the field of voice analysis.