
    Towards longitudinal data analytics in Parkinson's Disease

    The CloudUPDRS app has been developed as a Class I medical device to assess the severity of motor symptoms for Parkinson’s Disease using a fully automated data capture and signal analysis process based on the standard Unified Parkinson’s Disease Rating Scale. In this paper we report on the design and development of the signal processing and longitudinal data analytics microservices developed to carry out these assessments and to forecast the long-term development of the disease. We also report on early findings from the application of these techniques in the wild with a cohort of early adopters.

    From wellness to medical diagnostic apps: the Parkinson's Disease case

    This paper presents the design and development of the CloudUPDRS app and supporting system developed as a Class I medical device to assess the severity of motor symptoms for Parkinson’s Disease. We report on lessons learnt towards meeting fidelity and regulatory requirements; effective procedures employed to structure user context and ensure data quality; a robust service provision architecture; a dependable analytics toolkit; and provisions to meet the mobility and social needs of people with Parkinson’s.

    Semi-supervised Multi-modal Emotion Recognition with Cross-Modal Distribution Matching

    Automatic emotion recognition is an active research topic with a wide range of applications. Due to the high cost of manual annotation and inevitable label ambiguity, the development of emotion recognition datasets is limited in both scale and quality. Therefore, one of the key challenges is how to build effective models with limited data resources. Previous works have explored different approaches to tackle this challenge, including data enhancement, transfer learning, and semi-supervised learning. However, these existing approaches suffer from weaknesses such as training instability, large performance loss during transfer, or only marginal improvement. In this work, we propose a novel semi-supervised multi-modal emotion recognition model based on cross-modality distribution matching, which leverages abundant unlabeled data to enhance model training under the assumption that the inner emotional status is consistent at the utterance level across modalities. We conduct extensive experiments to evaluate the proposed model on two benchmark datasets, IEMOCAP and MELD. The experimental results show that the proposed semi-supervised learning model can effectively utilize unlabeled data and combine multiple modalities to boost emotion recognition performance, outperforming other state-of-the-art approaches under the same conditions. The proposed model also achieves performance competitive with existing approaches that take advantage of additional auxiliary information such as speaker identity and interaction context.
    Comment: 10 pages, 5 figures, to be published at ACM Multimedia 2020.
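A common way to realise the distribution-matching idea described above is to penalise the discrepancy between per-utterance embeddings from two modalities, for example with a maximum mean discrepancy (MMD) term on unlabeled data. The sketch below is a generic illustration of such a penalty, not the paper's exact objective; the function name, kernel choice, and bandwidth are assumptions made for the example.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared maximum mean discrepancy (biased estimator) with an RBF kernel.

    x: (n, d) array of per-utterance embeddings from one modality.
    y: (m, d) array of per-utterance embeddings from another modality.
    A small value indicates the two embedding distributions match.
    """
    def gram(a, b):
        # Pairwise squared Euclidean distances, then RBF kernel values.
        d2 = (np.sum(a**2, axis=1)[:, None]
              + np.sum(b**2, axis=1)[None, :]
              - 2.0 * a @ b.T)
        return np.exp(-d2 / (2.0 * sigma**2))

    kxx, kyy, kxy = gram(x, x), gram(y, y), gram(x, y)
    return kxx.mean() + kyy.mean() - 2.0 * kxy.mean()
```

In a semi-supervised setup, a term like this on unlabeled utterances would be added to the supervised classification loss, encouraging the modality-specific encoders to agree at the utterance level.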

    Enhanced Processing of Threat Stimuli under Limited Attentional Resources

    The ability to process stimuli that convey potential threat under conditions of limited attentional resources confers adaptive advantages. This study examined the neurobiological underpinnings of this capacity. Employing an attentional blink paradigm in conjunction with functional magnetic resonance imaging, we manipulated the salience of the second of two face target stimuli (T2) by varying its emotionality. Behaviorally, fearful T2 faces were identified significantly more often than neutral faces. Activity in the fusiform face area increased with correct identification of T2 faces. Enhanced activity in the rostral anterior cingulate cortex (rACC) accounted for the benefit in detection of fearful stimuli, reflected in a significant interaction between target valence and correct identification. Thus, under conditions of limited attentional resources, activation in rACC correlated with enhanced processing of emotional stimuli. We suggest that these data support a model in which a prefrontal “gate” mechanism controls conscious access to emotional information under conditions of limited attentional resources.

    Feature selection for automatic analysis of emotional response based on nonlinear speech modeling suitable for diagnosis of Alzheimer's disease

    Alzheimer's disease (AD) is the most common type of dementia among the elderly. This work is part of a larger study that aims to identify novel technologies and biomarkers or features for the early detection of AD and its degree of severity. The diagnosis is made by analyzing several biomarkers and conducting a variety of tests (although only a post-mortem examination of the patients’ brain tissue is considered to provide definitive confirmation). Non-invasive intelligent diagnosis techniques would be a very valuable diagnostic aid. This paper concerns the Automatic Analysis of Emotional Response (AAER) in spontaneous speech based on classical and new emotional speech features: Emotional Temperature (ET) and fractal dimension (FD). This is a pre-clinical study aiming to validate tests and biomarkers for future diagnostic use. The method has the great advantage of being non-invasive, low cost, and without any side effects. The AAER shows very promising results for the definition of features useful in the early diagnosis of AD.
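The fractal dimension feature mentioned above is often estimated for speech frames with Higuchi's algorithm. The sketch below is a minimal, generic implementation of that estimator; the function name and the `k_max` default are choices made for the example, not values taken from the paper.

```python
import numpy as np

def higuchi_fd(x, k_max=10):
    """Estimate the Higuchi fractal dimension of a 1-D signal.

    For each scale k, the mean normalised curve length L(k) is computed
    over the k offset subsampled series; the fractal dimension is the
    slope of log L(k) versus log(1/k).
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    lengths = []
    for k in range(1, k_max + 1):
        lk = []
        for m in range(k):
            idx = np.arange(m, n, k)       # subsampled series x[m], x[m+k], ...
            if len(idx) < 2:
                continue
            # Normalisation factor from Higuchi's definition.
            norm = (n - 1) / ((len(idx) - 1) * k)
            lk.append(np.sum(np.abs(np.diff(x[idx]))) * norm / k)
        lengths.append(np.mean(lk))
    log_inv_k = np.log(1.0 / np.arange(1, k_max + 1))
    slope, _ = np.polyfit(log_inv_k, np.log(lengths), 1)
    return slope
```

As a sanity check, a smooth ramp yields a dimension near 1 while white noise yields a dimension near 2, which is the contrast such features exploit when separating speech signals by regularity.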

    Recruitment of Language-, Emotion- and Speech-Timing Associated Brain Regions for Expressing Emotional Prosody: Investigation of Functional Neuroanatomy with fMRI

    We aimed to progress understanding of prosodic emotion expression by establishing brain regions active when expressing specific emotions, those activated irrespective of the target emotion, and those whose activation intensity varied depending on individual performance. BOLD contrast data were acquired whilst participants spoke nonsense words in happy, angry or neutral tones, or performed jaw-movements. Emotion-specific analyses demonstrated that when expressing angry prosody, activated brain regions included the inferior frontal and superior temporal gyri, the insula, and the basal ganglia. When expressing happy prosody, the activated brain regions also included the superior temporal gyrus, insula, and basal ganglia, with additional activation in the anterior cingulate. Conjunction analysis confirmed that the superior temporal gyrus and basal ganglia were activated regardless of the specific emotion concerned. Nevertheless, disjunctive comparisons between the expression of angry and happy prosody established that anterior cingulate activity was significantly higher for angry prosody than for happy prosody production. The degree of inferior frontal gyrus activity correlated with the ability to express the target emotion through prosody. We conclude that expressing prosodic emotions (vs. neutral intonation) requires generic brain regions involved in comprehending numerous aspects of language, in emotion-related processes such as experiencing emotions, and in the time-critical integration of speech information.

    The Brain's Router: A Cortical Network Model of Serial Processing in the Primate Brain

    The human brain efficiently solves certain operations such as object recognition and categorization through a massively parallel network of dedicated processors. However, human cognition also relies on the ability to perform an arbitrarily large set of tasks by flexibly recombining different processors into a novel chain. This flexibility comes at the cost of a severe slowing down and a seriality of operations (100–500 ms per step). A limit on parallel processing is demonstrated in experimental setups such as the psychological refractory period (PRP) and the attentional blink (AB), in which the processing of one element either significantly delays the processing of (PRP), or impedes conscious access to (AB), a second, rapidly presented element. Here we present a spiking-neuron implementation of a cognitive architecture where a large number of local parallel processors assemble together to produce goal-driven behavior. The precise mapping of incoming sensory stimuli onto motor representations relies on a “router” network capable of flexibly interconnecting processors and rapidly changing its configuration from one task to another. Simulations show that, when presented with dual-task stimuli, the network exhibits parallel processing at peripheral sensory levels, a memory buffer capable of keeping the result of sensory processing on hold, and a slow serial performance at the router stage, resulting in a performance bottleneck. The network captures the detailed dynamics of human behavior during dual-task performance, including both mean RTs and RT distributions, and establishes concrete predictions on neuronal dynamics during dual-task experiments in humans and non-human primates.

    Theoretical and computational modelling of attention and emotions in the brain

    EThOS - Electronic Theses Online Service, United Kingdom.