6 research outputs found

    Mexican Emotional Speech Database Based on Semantic, Frequency, Familiarity, Concreteness, and Cultural Shaping of Affective Prosody

    No full text
    In this paper, the Mexican Emotional Speech Database (MESD), which contains single-word emotional utterances for anger, disgust, fear, happiness, neutral, and sadness in adult (male and female) and child voices, is described. To validate the emotional prosody of the uttered words, a cubic Support Vector Machine classifier was trained on prosodic, spectral, and voice-quality features for each case study: (1) male adult, (2) female adult, and (3) child. In addition, cultural, semantic, and linguistic shaping of emotional expression was assessed by statistical analysis. This study was registered at BioMed Central and is part of the implementation of a published study protocol. Mean emotional classification accuracies were 93.3%, 89.4%, and 83.3% for male, female, and child utterances, respectively. Statistical analysis emphasized the shaping of emotional prosody by semantic and linguistic features. A cultural variation in emotional expression was highlighted by comparing the MESD with the INTERFACE database for Castilian Spanish. The MESD provides reliable content for linguistic emotional prosody shaped by the Mexican cultural environment. To facilitate further investigation, two additional corpora are provided: one controlled for linguistic features and emotional semantics, and one containing words repeated across voices and emotions. The MESD is made freely available.
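A "cubic" SVM is a support vector machine with a polynomial kernel of degree 3. A minimal sketch of that validation setup, assuming scikit-learn and synthetic stand-ins for the acoustic features (the actual MESD feature extraction and data are not reproduced here):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMOTIONS = ["anger", "disgust", "fear", "happiness", "neutral", "sadness"]

# Hypothetical feature matrix: one row per utterance, columns standing in
# for prosodic (pitch, energy), spectral, and voice-quality descriptors.
X = rng.normal(size=(288, 24))
y = rng.integers(0, len(EMOTIONS), size=288)

# Cubic SVM = polynomial kernel of degree 3; features are standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="poly", degree=3))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.3f}")
```

With random features and labels the cross-validated accuracy hovers around chance; on real acoustic features the same pipeline is what would yield the per-voice accuracies the abstract reports.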

    Improved emotion differentiation under reduced acoustic variability of speech in autism

    No full text
    Background Socio-emotional impairments are among the diagnostic criteria for autism spectrum disorder (ASD), but current evidence supports both altered and intact recognition of emotional prosody. Here, a Bayesian framework of perception is considered, suggesting that oversampling of sensory evidence would impair perception within highly variable environments, whereas reliable hierarchical structures for spectral and temporal cues would foster emotion discrimination by autistics. Methods Event-related spectral perturbations (ERSP) extracted from electroencephalographic (EEG) data indexed the perception of anger, disgust, fear, happiness, neutral, and sadness prosodies while listening to speech uttered by (a) human or (b) synthesized voices characterized by reduced volatility and variability of acoustic environments. The assessment of perceptual mechanisms was extended to the visual domain by analyzing behavioral accuracy in a non-social task that emphasized the dynamics of precision weighting between bottom-up evidence and top-down inferences. Eighty children (mean age 9.7 years; standard deviation 1.8) volunteered, including 40 autistics. Symptomatology was assessed at the time of the study via the Autism Diagnostic Observation Schedule, Second Edition, and parents' responses on the Autism Spectrum Rating Scales. A mixed within-between analysis of variance was conducted to assess the effects of group (autism versus typical development), voice, emotion, and interactions between factors. A Bayesian analysis was implemented to quantify the evidence in favor of the null hypothesis in cases of non-significance. Post hoc comparisons were corrected for multiple testing. Results Autistic children presented impaired emotion differentiation while listening to speech uttered by human voices, which improved when the acoustic volatility and variability of the voices were reduced. Divergent neural patterns were observed between neurotypical and autistic children, emphasizing different mechanisms of perception. Accordingly, behavioral measurements on the visual task were consistent with the over-precision ascribed to environmental variability (sensory processing) that weakened performance. Unlike autistic children, neurotypicals could differentiate the emotions induced by all voices. Conclusions This study outlines behavioral and neurophysiological mechanisms that underpin responses to sensory variability. Neurobiological insights into the processing of emotional prosody emphasize the potential of acoustically modified emotional prosodies to improve emotion differentiation by autistics. Trial registration BioMed Central ISRCTN Registry, ISRCTN18117434. Registered on September 20, 2020.
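An ERSP is a time-frequency decomposition of trial-averaged EEG power, expressed in dB relative to a pre-stimulus baseline. A minimal sketch with SciPy, assuming synthetic single-channel epochs and an arbitrary 250 Hz sampling rate with a 1 s baseline (the study's actual preprocessing and parameters are not reproduced here):

```python
import numpy as np
from scipy.signal import spectrogram

fs = 250  # sampling rate in Hz (assumed)
rng = np.random.default_rng(1)

# Synthetic epochs: trials x samples; 1 s pre-stimulus baseline + 2 s post-stimulus
n_trials, n_samples = 40, 3 * fs
epochs = rng.normal(size=(n_trials, n_samples))

# Per-trial time-frequency power, then average across trials
f, t, S = spectrogram(epochs, fs=fs, nperseg=64, noverlap=48, axis=-1)
power = S.mean(axis=0)  # shape: frequencies x time bins

# ERSP: power change relative to mean pre-stimulus (t < 1 s) power, in dB
baseline = power[:, t < 1.0].mean(axis=1, keepdims=True)
ersp_db = 10 * np.log10(power / baseline)
print(ersp_db.shape)
```

Positive values indicate event-related synchronization relative to baseline, negative values desynchronization; real pipelines typically use Morlet wavelets and per-trial baseline handling, which this sketch omits.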

    Systematic Review: Emotion Recognition Based on Electrophysiological Patterns for Emotion Regulation Detection

    No full text
    The electrophysiological basis of emotion regulation (ER) has gained increased attention, since efficient emotion recognition and ER allow humans to develop high emotional intelligence. However, no methodological standardization has been established yet. Therefore, this paper provides a critical systematic review to identify experimental methodologies that evoke emotions; record, analyze, and link electrophysiological signals with emotional experience through statistics and artificial intelligence; and, lastly, define a clear application for assessing emotion processing. A total of 42 articles were selected after a search of six scientific databases (Web of Science, EBSCO, PubMed, Scopus, ProQuest, and ScienceDirect) during the first semester of 2020. Studies were included if (1) electrophysiological signals recorded on human subjects were correlated with emotion recognition and/or regulation, and (2) statistical models or machine- or deep-learning methods based on electrophysiological signals were used to analyze the data. Studies were excluded if they met one or more of the following criteria: (1) emotions were not described in terms of continuous dimensions (valence and arousal) or by discrete variables, (2) a control group or neutral state was not implemented, or (3) results were not obtained from an experimental paradigm that aimed to elicit emotions. No distinction was made as to whether participants presented a pathological or non-pathological condition, but the condition of the subjects had to be efficiently detailed for the study to be included. The risk of bias was limited by extracting and organizing information on spreadsheets and through discussions between the authors. However, data size considerations, such as sample size, were not taken into account, leading to bias in the validity of the analysis. This systematic review is presented as a consulting source to accelerate the development of neuroengineering-based systems to regulate the trajectory of emotional experiences early on.

    Towards the neurophysiological characterization of immersive attributes in auditory events

    No full text
    Three-dimensional audio technology has expanded into cognitive and sensory approaches in entertainment and health, since it augments the perception of space. Still, rigorous methodologies for evaluating the immersive experience are needed. This work aimed to study the relationship between neurophysiological and behavioral responses to concert hall and urban park soundscapes in 3D audio and stereo downmix formats, and to highlight considerations for the measurement of the immersive experience. Brain, cardiac, and immersion-related responses from two samples of 24 subjects were analyzed. Results showed that the theta/beta neural band ratio from prefrontal-frontal and temporal-central regions followed a consistent tendency with median spatial presence scores, while the parietal-occipital region showed an inverse trend. Still, the theta/beta ratio could not model the behavior of spatial presence scores in any condition, contrary to previous findings. Only mean heart rate variability could model the behavior of involvement scores while listening to the concert hall performance in stereo downmix, but not arousal or other soundscapes and formats. Lastly, considerations on the neurophysiological and behavioral assessment of immersive experience are given for future work.
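The theta/beta ratio is the ratio of EEG band power in the theta and beta ranges. A minimal sketch using Welch's method on synthetic single-channel data, assuming a 256 Hz sampling rate and conventional band limits of 4–8 Hz (theta) and 13–30 Hz (beta), which are assumptions rather than the study's stated parameters:

```python
import numpy as np
from scipy.signal import welch

fs = 256  # sampling rate in Hz (assumed)
rng = np.random.default_rng(2)
eeg = rng.normal(size=8 * fs)  # 8 s of synthetic single-channel EEG

# Power spectral density via Welch's method (2 s segments -> 0.5 Hz resolution)
f, psd = welch(eeg, fs=fs, nperseg=2 * fs)
df = f[1] - f[0]

def band_power(lo, hi):
    """Approximate band power by summing PSD bins over [lo, hi) Hz."""
    mask = (f >= lo) & (f < hi)
    return psd[mask].sum() * df

theta = band_power(4, 8)    # theta band (assumed bounds)
beta = band_power(13, 30)   # beta band (assumed bounds)
print(f"theta/beta ratio: {theta / beta:.2f}")
```

In a per-region analysis like the one described, this computation would be repeated per channel and the ratios averaged within each scalp region.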

    Perception of task-irrelevant affective prosody by typically developed and diagnosed children with Autism Spectrum Disorder under attentional loads: electroencephalographic and behavioural data

    No full text
    The relevance of affective information triggers cognitive prioritisation, dictated by both the attentional load of the relevant task and socio-emotional abilities. This dataset provides electroencephalographic (EEG) signals related to implicit emotional speech perception under low, intermediate, and high attentional demands. Demographic and behavioural data are also provided. Atypical social-emotional reciprocity and verbal communication characterise Autism Spectrum Disorder (ASD) and may influence the processing of affective prosodies. Therefore, 62 children and their parents or legal guardians participated in data collection, including 31 children with high autistic traits (mean age 9.6 years, SD 1.5) who had previously received a diagnosis of ASD from a medical specialist, and 31 typically developed children (mean age 10.2 years, SD 1.2). Assessments of the scope of autistic behaviours using the Autism Spectrum Rating Scales (ASRS, parent report) are provided for every child. During the experiment, children listened to task-irrelevant affective prosodies (anger, disgust, fear, happiness, neutral, and sadness) while performing three visual tasks: neutral image viewing (low attentional load), one-target 4-disc Multiple Object Tracking (MOT; intermediate), and one-target 8-disc MOT (high). The EEG data recorded during all three tasks and the tracking capacity (behavioural data) from the MOT conditions are included in the dataset. In particular, the tracking capacity was computed as a standardised index of attentional abilities during MOT, corrected for guessing. Beforehand, children answered the Edinburgh Handedness Inventory, and resting-state EEG activity was recorded for 2 minutes with eyes open; those data are also provided. The present dataset can be used to investigate the electrophysiological correlates of implicit emotion and speech perception and their interaction with attentional load and autistic traits. In addition, the resting-state EEG data may be used to characterise inter-individual heterogeneity at rest and, in turn, associate it with attentional capacities during MOT and with autistic behavioural patterns. Finally, the tracking capacity may be useful for exploring dynamic and selective attentional mechanisms under emotional constraints.
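A guessing correction for MOT accuracy can be sketched with a standard high-threshold model, which rescales accuracy so that chance performance maps to zero capacity and perfect performance to the number of targets. The dataset's exact formula is not reproduced here; in particular, treating chance as 1/k for a one-target, k-disc trial is an assumption of this sketch:

```python
def tracking_capacity(accuracy: float, n_discs: int, n_targets: int = 1) -> float:
    """High-threshold guessing correction for one-target MOT accuracy.

    Assumes chance level 1/n_discs (observer picks one disc at random when
    not tracking); rescales so chance -> 0 and perfect accuracy -> n_targets.
    """
    chance = 1.0 / n_discs
    return n_targets * (accuracy - chance) / (1.0 - chance)

# Example: 80% correct on the one-target 4-disc condition
print(tracking_capacity(0.80, n_discs=4))  # prints ~0.733
```

Under this sketch, the same raw accuracy yields a higher corrected capacity in the 8-disc condition than in the 4-disc one, since chance performance is lower with more distractors.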