
    Automatic Detection of Self-Adaptors for Psychological Distress

    Psychological distress is a significant and growing issue in society. Automatic detection, assessment, and analysis of such distress is an active area of research. Compared to modalities such as the face, head, and voice, research investigating the use of the body modality for these tasks is relatively sparse, due in part to the lack of available datasets and the difficulty of automatically extracting useful body features. Recent advances in pose estimation and deep learning have enabled new approaches to this modality and domain. We propose a novel method to automatically detect self-adaptors and fidgeting, a subset of self-adaptors that has been shown to correlate with psychological distress. We also propose a multi-modal approach that combines different feature representations using Multi-modal Deep Denoising Auto-Encoders and Improved Fisher Vector encoding. We demonstrate that our proposed model, combining audio-visual features with automatically detected fidgeting behavioral cues, can successfully predict distress levels in a dataset labeled with self-reported anxiety and depression levels. To enable this research we introduce a new dataset containing full-body videos of short interviews and self-reported distress labels. King's College, Cambridge
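The abstract does not specify the fidgeting detector's internals. As a hedged illustration of how repetitive motion might be flagged from pose-estimation output, the sketch below scores a single keypoint trajectory by the peak autocorrelation of its speed signal within a lag window typical of fidgeting. The function name, lag window, and synthetic trajectories are all invented for illustration; they are not the paper's method.

```python
import numpy as np

def fidgeting_score(xy, fps=30.0, min_lag=0.2, max_lag=2.0):
    """Score how repetitive a keypoint trajectory is.

    xy : (T, 2) array of one keypoint's (x, y) positions over time.
    Returns the peak normalized autocorrelation of the frame-to-frame
    speed signal within a lag window of 0.2-2 s; values near 1 indicate
    strongly periodic, fidget-like motion.
    """
    speed = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    speed = speed - speed.mean()
    denom = np.dot(speed, speed)
    if denom == 0.0:                      # keypoint never moved
        return 0.0
    lags = range(int(min_lag * fps), int(max_lag * fps))
    ac = [np.dot(speed[:-l], speed[l:]) / denom for l in lags]
    return float(max(ac))

# Synthetic check: a hand oscillating at 2 Hz vs. one drifting linearly.
t = np.arange(0, 10, 1 / 30.0)
fidget = np.stack([np.sin(2 * np.pi * 2 * t), np.zeros_like(t)], axis=1)
drift  = np.stack([0.01 * t, np.zeros_like(t)], axis=1)
print(fidgeting_score(fidget) > fidgeting_score(drift))  # True
```

A real system would aggregate such scores over hand, arm, and leg keypoints and over sliding windows before feeding them to the multi-modal model.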

    Stochastic modelling of transition dynamic of mixed mood episodes in bipolar disorder

    In the present state of health and wellness, mental illness is often deemed less important than physical illness. In reality, mental illness has serious, multi-dimensional adverse effects on the subject's personal life, social life, and financial stability. Among mental illnesses, bipolar disorder is one of the most prominent types and can be triggered by any external stimulus to the subject. Its diagnosis and treatment differ considerably from those of other illnesses, and the first impediment is correct diagnosis itself. According to the standard classification, there are discrete forms of bipolar disorder, viz. type-I, type-II, and cyclothymic, each characterized by specific moods associated with depression and mania. However, there is no study of mixed-mood episode detection, in which various symptoms of bipolar disorder combine in a random, unpredictable, and uncertain manner. Hence, the proposed model contributes granular information on the dynamics of mood transition. Simulation of the proposed system in MATLAB shows that the resulting model can detect mixed mood episodes precisely.
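The abstract describes stochastic modelling of mood-episode transitions without giving the model. A minimal sketch of one common choice, a discrete-time Markov chain over mood states, is shown below; the state set and every transition probability are invented for illustration, and real values would have to be estimated from clinical data (the paper itself works in MATLAB).

```python
import numpy as np

# Hypothetical mood states and a made-up daily transition matrix;
# the probabilities are illustrative only, not clinical estimates.
STATES = ["euthymic", "depressive", "manic", "mixed"]
P = np.array([
    [0.90, 0.05, 0.03, 0.02],   # from euthymic
    [0.10, 0.80, 0.02, 0.08],   # from depressive
    [0.12, 0.03, 0.75, 0.10],   # from manic
    [0.05, 0.15, 0.15, 0.65],   # from mixed
])

def simulate(days, start="euthymic", rng=None):
    """Sample one mood trajectory from the Markov chain."""
    rng = np.random.default_rng(rng)
    s = STATES.index(start)
    path = [s]
    for _ in range(days - 1):
        s = rng.choice(len(STATES), p=P[s])
        path.append(s)
    return [STATES[i] for i in path]

def stationary(P):
    """Long-run share of time in each state (left eigenvector for eigenvalue 1)."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmax(np.real(vals))])
    return v / v.sum()

print(dict(zip(STATES, np.round(stationary(P), 3))))
```

A mixed episode in such a model is simply a distinct state with its own in- and out-flows, which is what makes transition dynamics toward it analysable.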

    Integration of Wavelet and Recurrence Quantification Analysis in Emotion Recognition of Bilinguals

    Background: This study offers a robust framework for the classification of autonomic signals into five affective states during picture viewing. To this end, the following emotion categories were studied: five classes of the arousal-valence plane (5C), three classes of arousal (3A), and three classes of valence (3V). For the first time, linguality information was also incorporated into the recognition procedure. Precisely, the main objective of this paper was to present a fundamental approach for evaluating and classifying the emotions of monolingual and bilingual college students. Methods: Using nonlinear dynamics, recurrence quantification measures of the wavelet coefficients were extracted. To optimize the feature space, different feature-reduction approaches, including generalized discriminant analysis (GDA), principal component analysis (PCA), kernel PCA, and linear discriminant analysis (LDA), were examined. Finally, incorporating the linguality information, classification was performed using a probabilistic neural network (PNN). Results: Using LDA and the PNN, the highest recognition rates of 95.51%, 95.7%, and 95.98% were attained for the 5C, 3A, and 3V schemes, respectively. Incorporating the linguality information, a further improvement of the classification rates was accomplished. Conclusion: The proposed methodology can provide a valuable tool for discriminating affective states in practical applications within the area of human-computer interfaces.
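The recurrence-quantification half of such a pipeline can be sketched compactly. Below is a minimal implementation of two standard recurrence measures, recurrence rate and determinism, computed on a delay-embedded signal; the embedding parameters and threshold are illustrative, not those of the paper, and the wavelet-decomposition step is omitted.

```python
import numpy as np

def recurrence_matrix(x, dim=3, delay=1, eps=0.2):
    """Binary recurrence matrix of a delay-embedded 1-D signal."""
    n = len(x) - (dim - 1) * delay
    emb = np.stack([x[i * delay : i * delay + n] for i in range(dim)], axis=1)
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return (d < eps).astype(int)

def recurrence_rate(R):
    """Fraction of recurrent points, excluding the trivial main diagonal."""
    n = R.shape[0]
    return (R.sum() - n) / (n * n - n)

def determinism(R, lmin=2):
    """Share of recurrent points lying on diagonal lines of length >= lmin."""
    n = R.shape[0]
    det = 0
    for k in range(1, n):                       # upper off-diagonals
        run = 0
        for v in list(np.diagonal(R, k)) + [0]: # run lengths of consecutive 1s
            if v:
                run += 1
            else:
                if run >= lmin:
                    det += run
                run = 0
    total = (R.sum() - n) / 2                   # recurrent points above diagonal
    return det / total if total else 0.0

# Deterministic dynamics yield long diagonal lines; noise yields isolated dots.
t = np.linspace(0, 8 * np.pi, 400)
periodic = np.sin(t)
noise = np.random.default_rng(0).standard_normal(400)
print(determinism(recurrence_matrix(periodic)) >
      determinism(recurrence_matrix(noise)))  # True
```

In the study's setup, measures like these would be computed per wavelet sub-band and then reduced (e.g. by LDA) before the PNN classifier.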

    Embedded system for real-time emotional arousal classification

    We humans can distinguish the emotions of others with ease, and we expect some emotional response during a conversation. Machines, however, do not possess emotion-related skills, which makes human-machine interactions feel alien and soulless. The development of an efficient emotion recognition system is therefore one of the crucial steps towards human-like artificial intelligence. Ordinary people can also find uses for emotion recognition: it would be a great help to those who, for various reasons, either have weak control over their own emotions or lack the ability to perceive the emotions of others. This thesis focuses on creating a solution based on compact hardware to classify emotions by their level of arousal. For this, theory concerning emotions and their classification was gathered, after which numerous machine learning and feature description methods were reviewed and tried out. The methods include support vector machines, random forests, facial landmark feature extraction, and histograms of oriented gradients. The project came to a halt halfway through due to poor results: small-scale hardware proved unsuitable for extensive machine learning operations. It could be resumed by introducing a second set of hardware purely for training the recognition models, leaving the compact one to run the pre-trained model.
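Among the methods the thesis lists, the histogram of oriented gradients is easy to sketch. The following is a simplified HOG descriptor in NumPy, with the overlapping block normalisation of the full algorithm deliberately omitted; the cell size and bin count are common defaults, not necessarily those used in the thesis.

```python
import numpy as np

def hog(img, cell=8, bins=9):
    """Simplified histogram-of-oriented-gradients descriptor.

    img: 2-D grayscale array whose sides are multiples of `cell`.
    Returns one unsigned-orientation histogram per cell, L2-normalised.
    (Real HOG adds overlapping block normalisation; omitted for brevity.)
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned: 0-180 deg
    h, w = img.shape
    feats = []
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            m = mag[i:i+cell, j:j+cell].ravel()
            a = ang[i:i+cell, j:j+cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            norm = np.linalg.norm(hist)
            feats.append(hist / norm if norm else hist)
    return np.concatenate(feats)

# A vertical edge produces gradients concentrated in one orientation bin.
img = np.zeros((16, 16))
img[:, 8:] = 1.0
print(hog(img).shape)  # 2x2 cells * 9 bins -> (36,)
```

Descriptors like this would then be fed to a classifier such as an SVM or random forest, which is the pipeline the thesis attempted on the embedded hardware.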

    An empirical study of embodied music listening, and its applications in mediation technology


    Affective Brain-Computer Interfaces


    Automated screening methods for mental and neuro-developmental disorders

    Mental and neuro-developmental disorders such as depression, bipolar disorder, and autism spectrum disorder (ASD) are critical healthcare issues which affect a large number of people. Depression, according to the World Health Organisation, is the largest cause of disability worldwide and affects more than 300 million people. Bipolar disorder affects more than 60 million individuals worldwide. ASD, meanwhile, affects more than 1 in 100 people in the UK. Not only do these disorders adversely affect the quality of life of affected individuals, they also have a significant economic impact. While brute-force approaches are potentially useful for learning new features which could be representative of these disorders, such approaches may not be best suited for developing robust screening methods. This is due to a myriad of confounding factors, such as age, gender, cultural background, and socio-economic status, which can affect the social signals of individuals in a similar way to the symptoms of these disorders. Brute-force approaches may learn to exploit the effects of these confounding factors on social signals in place of effects due to mental and neuro-developmental disorders. The main objective of this thesis is to develop, investigate, and propose computational methods to screen for mental and neuro-developmental disorders in accordance with descriptions given in the Diagnostic and Statistical Manual (DSM), a guidebook published by the American Psychiatric Association which offers a common language on mental disorders. Our motivation is to alleviate, to an extent, the possibility of machine learning algorithms picking up one of the confounding factors to optimise performance for the dataset, something we do not find uncommon in the research literature.
To this end, we introduce three new methods for automated screening for depression from audio/visual recordings, namely: turbulence features, craniofacial movement features, and a Fisher Vector based representation of speech spectra. We surmise that psychomotor changes due to depression lead to uniqueness in an individual's speech pattern, which manifests as sudden and erratic changes in speech feature contours. The efficacy of these features is demonstrated as part of our solution to the Audio/Visual Emotion Challenge 2017 (AVEC 2017) on Depression severity prediction. We also detail a methodology to quantify specific craniofacial movements, which we hypothesised could be indicative of psychomotor retardation, and hence depression. The efficacy of craniofacial movement features is demonstrated using datasets from the 2014 and 2017 editions of the AVEC Depression severity prediction challenges. Finally, using the dataset provided as part of the AVEC 2016 Depression classification challenge, we demonstrate that differences between the speech of individuals with and without depression can be quantified effectively using the Fisher Vector representation of speech spectra. For our work on automated screening of bipolar disorder, we propose methods to classify individuals with bipolar disorder into states of remission, hypo-mania, and mania. Here, we surmise that, like depression, individuals with different levels of mania have a certain uniqueness to their social signals. Based on this understanding, we propose the use of turbulence features for audio/visual social signals (i.e. speech and facial expressions). We also propose the use of Fisher Vectors to create a unified representation of speech in terms of prosody, voice quality, and speech spectra. These methods have been proposed as part of our solution to the AVEC 2018 Bipolar disorder challenge. In addition, we find that the task of automated screening for ASD is much more complicated.
Here, confounding factors can easily overwhelm the social signals which are affected by ASD. We discuss, in light of the research literature and our experimental analysis, that significant collaborative work is required between computer scientists and clinicians to discern social signals which are robust to common confounding factors.
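The thesis defines turbulence features precisely; the sketch below is not that definition, only an illustrative proxy for the same intuition: statistics that grow when a feature contour (e.g. a pitch track) changes suddenly and erratically. All names, measures, and the synthetic contours here are invented.

```python
import numpy as np

def contour_turbulence(x):
    """Illustrative proxies for 'turbulence' in a feature contour.

    Not the thesis's actual definition -- just simple statistics that grow
    when a contour changes suddenly and erratically:
      * jerkiness : mean absolute second difference, scaled (sudden changes)
      * flip_rate : fraction of frames where the slope changes sign (erratic)
    """
    x = np.asarray(x, dtype=float)
    d1 = np.diff(x)
    d2 = np.diff(x, n=2)
    scale = np.std(x) or 1.0
    jerkiness = np.mean(np.abs(d2)) / scale
    signs = np.sign(d1)
    flip_rate = np.mean(signs[1:] * signs[:-1] < 0)
    return {"jerkiness": float(jerkiness), "flip_rate": float(flip_rate)}

# A smooth pitch-like contour vs. the same contour with erratic jumps.
t = np.linspace(0, 1, 200)
smooth = 120 + 10 * np.sin(2 * np.pi * t)
rng = np.random.default_rng(1)
erratic = smooth + 8 * rng.standard_normal(200)
print(contour_turbulence(smooth)["flip_rate"] <
      contour_turbulence(erratic)["flip_rate"])  # True
```

Per-contour statistics of this kind, pooled over many audio/visual feature tracks, would form the feature vector handed to a regressor or classifier.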

    Affective Computing

    This book provides an overview of state-of-the-art research in Affective Computing. It presents new ideas, original results, and practical experiences in this increasingly important research field. The book consists of 23 chapters categorized into four sections. Since one of the most important means of human communication is facial expression, the first section of this book (Chapters 1 to 7) presents research on the synthesis and recognition of facial expressions. Given that we use not only the face but also body movements to express ourselves, in the second section (Chapters 8 to 11) we present research on the perception and generation of emotional expressions using full-body motion. The third section of the book (Chapters 12 to 16) presents computational models of emotion, as well as findings from neuroscience research. In the last section of the book (Chapters 17 to 22) we present applications related to affective computing.