    Does comorbid anxiety counteract emotion recognition deficits in conduct disorder?

    Background: Previous research has reported altered emotion recognition in both conduct disorder (CD) and anxiety disorders (ADs), but these effects appear to be of different kinds. Adolescents with CD often show a generalised pattern of deficits, while those with ADs show hypersensitivity to specific negative emotions. Although these conditions often co-occur, little is known regarding emotion recognition performance in comorbid CD+ADs. Here, we test the hypothesis that in the comorbid case, anxiety-related emotion hypersensitivity counteracts the emotion recognition deficits typically observed in CD. Method: We compared facial emotion recognition across four groups of adolescents aged 12-18 years: those with CD alone (n = 28), ADs alone (n = 23), co-occurring CD+ADs (n = 20) and typically developing controls (n = 28). The emotion recognition task we used systematically manipulated the emotional intensity of facial expressions as well as fixation location (eye, nose or mouth region). Results: Conduct disorder was associated with a generalised impairment in emotion recognition; however, this may have been modulated by group differences in IQ. AD was associated with increased sensitivity to low-intensity happiness, disgust and sadness. In general, the comorbid CD+ADs group performed similarly to typically developing controls. Conclusions: Although CD alone was associated with emotion recognition impairments, ADs and comorbid CD+ADs were associated with normal or enhanced emotion recognition performance. The presence of comorbid ADs appeared to counteract the effects of CD, suggesting a potentially protective role, although future research should examine the contribution of IQ and gender to these effects.

    Viewing the personality traits through a cerebellar lens. A focus on the constructs of novelty seeking, harm avoidance, and alexithymia

    The variance in the range of personality trait expression appears to be linked to structural variance in specific brain regions. Although associations between personality factors and neurobiological measures are well documented, the cerebellum has until now not been thought to play a key role in personality. This paper reviews the most recent structural and functional neuroimaging literature implicating the cerebellum in personality traits, such as novelty seeking and harm avoidance, and discusses the findings in the context of contemporary theories of affective and cognitive cerebellar function. Using region of interest (ROI)- and voxel-based approaches, we recently showed that cerebellar volumes correlate positively with novelty seeking scores and negatively with harm avoidance scores. Subjects who seek out new situations, as novelty seekers do (and harm avoiders do not), show a different engagement of their cerebellar circuitries in order to adapt rapidly to changing environments. The emerging model of cerebellar functionality may explain how the cerebellar abilities in planning, controlling, and executing behavior are associated with normal or abnormal personality constructs. In this framework, it is worth noting that increased cerebellar volumes are also associated with high scores in alexithymia, a personality construct characterized by impaired cognitive, emotional, and affective processing. On this basis, it seems necessary to move beyond the traditional cortico-centric view of personality constructs and to address the function of the cerebellar system in sustaining aspects of the motivational network that characterizes different temperamental traits.

    A knowledge-driven vowel-based approach of depression classification from speech using data augmentation

    We propose a novel explainable machine learning (ML) model that identifies depression from speech by modeling the temporal dependencies across utterances and utilizing spectrotemporal information at the vowel level. Our method first models variable-length utterances at the local level into a fixed-size vowel-based embedding using a convolutional neural network with a spatial pyramid pooling layer ("vowel CNN"). Following that, depression is classified at the global level from a group of vowel CNN embeddings that serve as the input of another 1D CNN ("depression CNN"). Different data augmentation methods are designed for the training of both the vowel CNN and the depression CNN. We investigate the performance of the proposed system at various temporal granularities when modeling short, medium, and long analysis windows, corresponding to 10, 21, and 42 utterances, respectively. The proposed method reaches performance comparable to previous state-of-the-art approaches and exhibits explainable properties with respect to the depression outcome. The findings from this work may benefit clinicians by providing additional intuitions during joint human-ML decision-making tasks.
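The key trick that lets the vowel CNN map variable-length utterances to a fixed-size embedding is spatial pyramid pooling. A minimal NumPy sketch of that pooling step follows; the pyramid levels and feature dimension here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def spatial_pyramid_pool(features, levels=(1, 2, 4)):
    """Max-pool a variable-length feature sequence into a fixed-size vector.

    features: array of shape (T, D) -- T frames (variable), D channels.
    Each pyramid level splits the T frames into `level` bins and max-pools
    each bin, so the output length is D * sum(levels) regardless of T.
    """
    T, D = features.shape
    pooled = []
    for level in levels:
        # Bin boundaries span all T frames, even when T % level != 0.
        edges = np.linspace(0, T, level + 1).astype(int)
        for i in range(level):
            lo = edges[i]
            hi = max(lo + 1, edges[i + 1])  # guard against empty bins
            pooled.append(features[lo:hi].max(axis=0))
    return np.concatenate(pooled)  # shape: (D * sum(levels),)

# Two "utterances" of different lengths map to embeddings of the same size.
short = np.random.default_rng(0).normal(size=(10, 8))
long_ = np.random.default_rng(1).normal(size=(37, 8))
assert spatial_pyramid_pool(short).shape == spatial_pyramid_pool(long_).shape == (8 * 7,)
```

Because every pooled vector has the same length, downstream layers (here, the "depression CNN" over groups of embeddings) can operate on fixed-size inputs regardless of utterance duration.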

    Summaries of plenary, symposia, and oral sessions at the XXII World Congress of Psychiatric Genetics, Copenhagen, Denmark, 12-16 October 2014

    The XXII World Congress of Psychiatric Genetics, sponsored by the International Society of Psychiatric Genetics, took place in Copenhagen, Denmark, on 12-16 October 2014. A total of 883 participants gathered to discuss the latest findings in the field. The following report was written by student and postdoctoral attendees, each of whom was assigned one or more sessions as a rapporteur. This manuscript represents topics covered in most, but not all, of the oral presentations during the conference, and contains some of the major notable new findings reported.

    Voice Analysis for Stress Detection and Application in Virtual Reality to Improve Public Speaking in Real-time: A Review

    Stress during public speaking is common and adversely affects performance and self-confidence. Extensive research has been carried out to develop various models to recognize emotional states. However, minimal research has been conducted to detect stress during public speaking in real time using voice analysis. In this context, the current review showed that the application of algorithms has not been properly explored, and it helped identify the main obstacles in creating a suitable testing environment while accounting for current complexities and limitations. In this paper, we present our main idea and propose a stress detection computational algorithmic model that could be integrated into a Virtual Reality (VR) application to create an intelligent virtual audience for improving public speaking skills. The developed model, when integrated with VR, will be able to detect excessive stress in real time by analysing voice features correlated to physiological parameters indicative of stress, and will help users gradually control excessive stress and improve public speaking performance.
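As a rough illustration of the kind of frame-level voice features such a model might analyse, the sketch below computes short-time energy and zero-crossing rate, two simple prosodic descriptors often used as proxies for vocal arousal. The specific features, frame sizes, and thresholds are illustrative assumptions, not the review's proposed algorithm:

```python
import numpy as np

def frame_features(signal, sr=16000, frame_ms=25, hop_ms=10):
    """Compute per-frame (energy, zero-crossing rate) for a mono signal.

    Returns an array of shape (n_frames, 2). Energy tracks loudness;
    zero-crossing rate roughly tracks spectral brightness/voicing.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    feats = []
    for start in range(0, len(signal) - frame + 1, hop):
        w = signal[start:start + frame]
        energy = float(np.mean(w ** 2))
        # Fraction of adjacent samples whose signs differ.
        zcr = float(np.mean(np.abs(np.diff(np.sign(w))) > 0))
        feats.append((energy, zcr))
    return np.array(feats)

# One second of a synthetic 220 Hz tone as a stand-in for speech.
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 220 * t)
f = frame_features(tone, sr)
assert f.shape == (98, 2)
```

In a real-time pipeline, features like these would be streamed per frame into a classifier or threshold rule to flag elevated-stress segments as the speaker talks.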