5 research outputs found

    Spatiotemporal neural network dynamics for the processing of dynamic facial expressions

    èĄšæƒ…ă‚’ć‡Šç†ă™ă‚‹ç„žç”ŒăƒăƒƒăƒˆăƒŻăƒŒă‚Żăźæ™‚ç©șé–“ăƒ€ă‚€ăƒŠăƒŸă‚Żă‚čă‚’è§Łæ˜Ž. äșŹéƒœć€§ć­Šăƒ—ăƒŹă‚čăƒȘăƒȘăƒŒă‚č. 2015-07-24.The dynamic facial expressions of emotion automatically elicit multifaceted psychological activities; however, the temporal profiles and dynamic interaction patterns of brain activities remain unknown. We investigated these issues using magnetoencephalography. Participants passively observed dynamic facial expressions of fear and happiness, or dynamic mosaics. Source-reconstruction analyses utilizing functional magnetic-resonance imaging data revealed higher activation in broad regions of the bilateral occipital and temporal cortices in response to dynamic facial expressions than in response to dynamic mosaics at 150-200 ms and some later time points. The right inferior frontal gyrus exhibited higher activity for dynamic faces versus mosaics at 300-350 ms. Dynamic causal-modeling analyses revealed that dynamic faces activated the dual visual routes and visual-motor route. Superior influences of feedforward and feedback connections were identified before and after 200 ms, respectively. These results indicate that hierarchical, bidirectional neural network dynamics within a few hundred milliseconds implement the processing of dynamic facial expressions

    Widespread and lateralized social brain activity for processing dynamic facial expressions

    Dynamic facial expressions of emotion constitute natural and powerful means of social communication in daily life. A number of previous neuroimaging studies have explored the neural mechanisms underlying the processing of dynamic facial expressions and have reported activation of certain social brain regions (e.g., the amygdala) during such tasks. However, the activated regions were inconsistent across studies, and their laterality was rarely evaluated. To investigate these issues, we measured brain activity using functional magnetic resonance imaging in a relatively large sample (n = 51) during the observation of dynamic facial expressions of anger and happiness and their corresponding dynamic mosaic images. The observation of dynamic facial expressions, compared with dynamic mosaics, elicited stronger activity in the bilateral posterior cortices, including the inferior occipital gyri, fusiform gyri, and superior temporal sulci. The dynamic facial expressions also activated the bilateral limbic regions, including the amygdalae and ventromedial prefrontal cortices, more strongly than the mosaics, and likewise elicited stronger activation in the right inferior frontal gyrus (IFG) and left cerebellum. Laterality analyses comparing original and flipped images revealed right-hemispheric dominance in the superior temporal sulcus and IFG and left-hemispheric dominance in the cerebellum. These results indicate that the neural mechanisms underlying the processing of dynamic facial expressions comprise widespread social brain regions associated with perceptual, emotional, and motor functions, and form a clearly lateralized (right cortical and left cerebellar) network like that involved in language processing.
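
    The flip-based laterality analysis can be illustrated with a short sketch: mirror an activation map across the midline and contrast each voxel with its flipped counterpart, so that positive values mark dominance over the opposite hemisphere. The map below is synthetic and the choice of left-right axis is an assumption; the actual analysis operates on spatially normalized group fMRI maps.

```python
# Minimal sketch of a flip-based laterality contrast on a synthetic map.
import numpy as np

rng = np.random.default_rng(1)
act = rng.normal(0.0, 1.0, (64, 64, 40))   # synthetic activation map
act[40:50, 30:40, 15:25] += 3.0            # stronger "right-hemisphere" cluster
                                           # (axis 0 assumed to be left-right)

flipped = act[::-1, :, :]                  # mirror across the midline
laterality = act - flipped                 # > 0 where original beats its mirror

# Positive values on the right half indicate right-hemispheric dominance,
# as reported here for the STS and IFG; the sign convention is arbitrary.
right_half = laterality[32:, :, :]
print(f"mean right-minus-mirrored contrast: {right_half.mean():.3f}")
```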

    Towards Computer-Assisted Regulation of Emotions

    Emotions are intimately connected with our lives. They are essential for motivating behaviour, for reasoning effectively, and for facilitating interactions with other people. Consequently, the ability to regulate the tone and intensity of emotions is important for leading a life of success and well-being. Intelligent computer perception of human emotions and effective expression of virtual emotions provide a basis for assisting emotion regulation with technology. State-of-the-art technologies already allow computers to recognize and imitate human social and emotional cues accurately and in great detail. In the present work, pressure sensors installed in a regular-looking office chair covertly measured body-movement responses to artificial expressions of proximity and facial cues: participants leaned toward the virtual characters presented to them. Such artificial cues from visual agents significantly affected heart, sweat-gland, and facial-muscle activity, as well as subjective experiences of emotion and attention. Finally, the perceptual and expressive capabilities were combined in an interactive setup in which a person regulated his or her spontaneous reactions by voluntarily smiling or frowning at an approaching virtual humanlike character. These results highlight the potential of future emotion-sensitive technologies for creating supportive and even healthy interactions between humans and computers.
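
    The closed loop described above can be sketched as a simple control cycle: an expression detector drives whether an approaching virtual character advances or withdraws. Both functions below are hypothetical stand-ins invented for illustration; the thesis names no such API.

```python
# Hypothetical closed-loop sketch: voluntary facial expressions steer the
# proximity of a virtual character. Detector and update rule are stand-ins.
import random

def detect_expression() -> str:
    """Stand-in for a real facial-EMG or vision-based expression classifier."""
    return random.choice(["smile", "frown", "neutral"])

def update_distance(distance: float, expression: str) -> float:
    """Advance or withdraw the character based on the user's expression."""
    if expression == "smile":
        return max(0.5, distance - 0.1)    # smiling lets the character approach
    if expression == "frown":
        return min(3.0, distance + 0.1)    # frowning pushes it away
    return distance                        # neutral: hold position

distance = 2.0                             # metres, arbitrary starting point
for step in range(10):
    expr = detect_expression()
    distance = update_distance(distance, expr)
    print(f"step {step}: user={expr:<7s} character at {distance:.1f} m")
```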

    Sex Differences in Orienting to Pictures with and without Humans: Evidence from the Cardiac Evoked Response (ECR) and the Cortical Long Latency Parietal Positivity (LPP)

    OBJECTIVE: This study investigated the effect of the social relevance of affective pictures on two orienting responses, the evoked cardiac response (ECR) and the long-latency parietal positivity (LPP), a cortical evoked potential, and whether this effect differs between males and females. Assuming that orienting to affective social information is fundamental to experiencing affective empathy, associations between self-report measures of empathy and the two orienting responses were also investigated. METHOD: ECRs were obtained from 34 female and 30 male students, and LPPs from 25 female and 27 male students, while they viewed 414 pictures from the International Affective Picture System. The pictures portrayed pleasant, unpleasant, and neutral scenes with and without humans. RESULTS: Both the ECR and the LPP showed the largest response to pictures of humans in unpleasant situations. For both measures, the responses to pictures with humans correlated with self-report measures of empathy. In the ECR, males responded more strongly than females to pictures without humans, whereas in the LPP, females responded more strongly than males to pictures with humans. CONCLUSION AND SIGNIFICANCE: The sensitivity of these orienting responses to social relevance, and their differential contributions to the prediction of individual differences, underline the validity of their combined use in clinical studies of individuals with social disabilities.
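
    As a rough illustration of the ECR measure, the sketch below computes the mean second-by-second heart-rate change after picture onset, averaged over trials. The deceleration profile and all numbers are synthetic assumptions, not the study's data; real ECRs are derived from R-R interval series.

```python
# Illustrative ECR computation on synthetic post-onset heart-rate data.
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_seconds = 60, 6
baseline_hr = 70.0                          # bpm, assumed pre-stimulus level

# Synthetic post-onset heart rate: a transient deceleration around 2-3 s
# (the classic orienting pattern) plus trial-to-trial noise.
t = np.arange(n_seconds)
deceleration = -3.0 * np.exp(-((t - 2.5) / 1.0) ** 2)
hr = baseline_hr + deceleration + rng.normal(0.0, 1.5, (n_trials, n_seconds))

# ECR: per-second change from baseline, averaged over trials.
ecr = (hr - baseline_hr).mean(axis=0)
for sec, delta in enumerate(ecr):
    print(f"second {sec}: {delta:+.2f} bpm")
```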

    Naturalistic Emotional Speech Corpora with Large Scale Emotional Dimension Ratings

    The investigation of the emotional dimensions of speech depends on large sets of reliable data. Existing work has been carried out on the creation of emotional speech corpora and the acoustic analysis of emotional speech, and this research seeks to build upon that work while suggesting new methods and areas of potential. A review of the literature determined that a two-dimensional emotional model of activation and evaluation was the ideal method for representing the emotional states expressed in speech. Two case studies were carried out to investigate methods of obtaining natural underlying emotional speech in a high-quality audio environment, the results of which were used to design a final experimental procedure to elicit natural underlying emotional speech. The speech obtained in this experiment was used in the creation of a speech corpus underpinned by a persistent backend database that incorporated a three-tiered annotation methodology. This methodology was used to comprehensively annotate the metadata, acoustic data, and emotional data of the recorded speech. Structuring the three levels of annotation and the assets in a persistent backend database allowed interactive web-based tools to be developed; a web-based listening tool was developed to obtain a large number of ratings for the assets, which were then written back to the database for analysis. Once a large number of ratings had been obtained, statistical analysis was used to determine the dimensional rating for each asset. Acoustic analysis of the underlying emotional speech was then carried out and determined that certain acoustic parameters were correlated with the activation dimension of the dimensional model. This substantiated some of the findings in the literature review and further determined that spectral energy was strongly correlated with the activation dimension in underlying emotional speech. The lack of a correlation between certain acoustic parameters and the evaluation dimension was also determined, again substantiating some of the findings in the literature. The work contained in this thesis makes a number of contributions to the field: the development of an experimental design to elicit natural underlying emotional speech in a high-quality audio environment; the development and implementation of a comprehensive three-tiered corpus annotation methodology; the development and implementation of large-scale web-based listening tests to rate the emotional dimensions of emotional speech; the determination that certain acoustic parameters are correlated with the activation dimension of a dimensional emotional model in relation to natural underlying emotional speech; and the determination that certain acoustic parameters are not correlated with the evaluation dimension of a two-dimensional emotional model in relation to natural underlying emotional speech.
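
    The final analysis step, correlating an acoustic parameter with the activation dimension, can be sketched as follows. The per-asset values are synthetic, and the positive relationship is built in to mimic the reported finding for spectral energy; nothing here is derived from the actual corpus.

```python
# Sketch: correlate one acoustic parameter (spectral energy) with each
# asset's mean activation rating. All values are synthetic.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
n_assets = 200

# Mean activation rating per asset (scale assumed: -1 passive .. +1 active),
# as would be aggregated from the web-based listening tests.
activation = rng.uniform(-1.0, 1.0, n_assets)

# Synthetic spectral energy that co-varies with activation, plus noise.
spectral_energy = 0.8 * activation + rng.normal(0.0, 0.4, n_assets)

r, p = pearsonr(spectral_energy, activation)
print(f"spectral energy vs. activation: r = {r:.2f}, p = {p:.2g}")
```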