
    The Effects of Parental Behavior on Infants' Neural Processing of Emotion Expressions

    Infants become sensitive to emotion expressions early in the first year of life, and such sensitivity is likely crucial for social development and adaptation. Social interactions with primary caregivers may play a key role in the development of this complex ability. This study investigated how variations in parenting behavior affect infants' neural responses to emotional faces. Event-related potentials (ERPs) to emotional faces were recorded from 40 healthy 7-month-old infants (24 males). Parental behavior was assessed during free-play interaction and coded using the Emotional Availability Scales. Sensitive parenting was associated with increased amplitudes of the face-sensitive negative central (Nc) ERP component in response to positive facial expressions. Findings are discussed in relation to the interactive mechanisms influencing how infants neurally encode positive emotions.
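
    The core measurement here is a mean-amplitude readout of an ERP component, related to a continuous parenting score. A minimal sketch of that logic in Python, using synthetic data; the Nc time window, sampling rate, and correlation approach are illustrative assumptions, not the study's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
sfreq = 250                                   # Hz, assumed sampling rate
times = np.arange(-0.2, 0.8, 1 / sfreq)

# Synthetic single-subject ERPs to positive faces: (n_infants, n_times)
n_infants = 40
erps_happy = rng.normal(0, 1, (n_infants, times.size))
sensitivity = rng.normal(5, 1, n_infants)     # EA Scales sensitivity scores

# Nc is a negative-going fronto-central deflection; a common readout is the
# mean amplitude over a mid-latency window (window here is an assumption)
win = (times >= 0.30) & (times <= 0.60)
nc_amp = erps_happy[:, win].mean(axis=1)

# Relate Nc amplitude to parenting sensitivity (Pearson correlation)
r = np.corrcoef(sensitivity, nc_amp)[0, 1]
print(f"r(sensitivity, Nc amplitude) = {r:.2f}")
```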

    Cracking the code of oscillatory activity

    Neural oscillations are ubiquitously measured correlates of cognitive processing and of the dynamic routing and gating of information. A fundamental and so far unresolved problem for neuroscience is to understand how oscillatory activity in the brain codes information for human cognition. In a biologically relevant cognitive task, we instructed six human observers to categorize facial expressions of emotion while we measured the observers' EEG. We combined state-of-the-art stimulus control with statistical information-theoretic analysis to quantify how the three parameters of an oscillation (power, phase, and frequency) code the visual information relevant for behavior in a cognitive task. We make three points. First, we demonstrate that phase codes considerably more task-related information (2.4 times) than power. Second, we show that the conjunction of power and phase coding reflects detailed visual features relevant for the behavioral response, that is, features of facial expressions predicted by behavior. Third, we demonstrate, in analogy to communication technology, that oscillatory frequencies in the brain multiplex the coding of visual features, increasing coding capacity. Together, our findings about the fundamental coding properties of neural oscillations will redirect the research agenda in neuroscience by establishing the differential roles of frequency, phase, and amplitude in coding behaviorally relevant information in the brain.
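
    The central quantitative claim is that phase carries more task information than power. A toy sketch of how such a comparison can be set up with an information-theoretic measure, using synthetic trials in which only phase depends on the stimulus category; the Hilbert-transform feature extraction and quantile binning are illustrative choices, not the study's actual pipeline:

```python
import numpy as np
from scipy.signal import hilbert
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(1)

# Synthetic trials: two stimulus categories; only phase carries the signal
n_trials, n_times, sfreq = 200, 500, 500
labels = rng.integers(0, 2, n_trials)
t = np.arange(n_times) / sfreq
# 10 Hz oscillation whose phase offset depends on the category
phase_offset = np.where(labels == 1, np.pi / 2, 0.0)
trials = (np.sin(2 * np.pi * 10 * t + phase_offset[:, None])
          + rng.normal(0, 1, (n_trials, n_times)))

analytic = hilbert(trials, axis=1)
sample = n_times // 2                       # one time point of interest
phase = np.angle(analytic[:, sample])
power = np.abs(analytic[:, sample]) ** 2

def mi_with_labels(x, labels, n_bins=8):
    """Discretize a continuous variable and compute MI with the labels."""
    edges = np.quantile(x, np.linspace(0, 1, n_bins + 1)[1:-1])
    return mutual_info_score(labels, np.digitize(x, edges))

# Phase should carry clearly more category information than power here
print("MI(phase, category):", round(mi_with_labels(phase, labels), 3))
print("MI(power, category):", round(mi_with_labels(power, labels), 3))
```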

    Nägemistaju automaatsete protsesside eksperimentaalne uurimine [Experimental investigation of automatic processes in visual perception]

    The electronic version of this thesis does not include the publications. The research presented and discussed in the thesis is an experimental exploration of processes in visual perception that all display a considerable amount of automaticity. These processes are targeted from different angles using different experimental paradigms and stimuli, and by measuring both behavioural and brain responses.
In the first three empirical studies, the focus is on motion detection, which is regarded as one of the most basic processes shaped by evolution. Study I investigated how the motion information of an object is processed in the presence of background motion. Although it is widely believed that no motion can be perceived without establishing a frame of reference with other objects or with motion on the background, our results found no support for this relative-motion principle. This finding speaks in favour of a simple and automatic process of motion detection that is largely insensitive to the surrounding context. Study II shows that the visual system automatically processes motion information that is outside of our attentional focus: even when we are concentrating on some task, our brain constantly monitors the surrounding environment. Study III addressed what happens when multiple stimulus qualities (motion and colour) are present and varied, which is the everyday reality of our visual input. We showed that velocity facilitated the detection of colour changes, suggesting that the processing of motion and colour is not entirely isolated. These results also indicate that motion information is hard to ignore and its processing is initiated rather automatically. The fourth empirical study focuses on another example of visual input that is processed in a rather automatic way and carries high survival value: emotional expressions. In Study IV, participants detected emotional facial expressions faster and more easily than neutral facial expressions, with a tendency towards more automatic attention to angry faces. In addition, we investigated the emergence of visual mismatch negativity (vMMN), which reflects the brain's ability to automatically detect deviations from its internal model of the surrounding environment and is one of the most objective and efficient measures of automatic processing in the brain. Study II and Study IV proposed several methodological gains for registering this automatic change-detection mechanism. Study V is an important contribution to the vMMN research field, as it is the first comprehensive review and meta-analysis of vMMN studies in psychiatric and neurological disorders.
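
    The vMMN discussed above is conventionally quantified as a deviant-minus-standard difference wave from an oddball sequence. A minimal sketch with synthetic epochs; the time window and effect size are assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)
sfreq = 500
times = np.arange(-0.1, 0.5, 1 / sfreq)

# Synthetic epochs from an oddball sequence: (n_epochs, n_times)
standards = rng.normal(0, 1, (400, times.size))
deviants = rng.normal(0, 1, (80, times.size))
deviants[:, (times > 0.15) & (times < 0.35)] -= 0.8   # extra negativity

# vMMN is conventionally the deviant-minus-standard difference wave
vmmn = deviants.mean(axis=0) - standards.mean(axis=0)
win = (times >= 0.15) & (times <= 0.35)               # assumed window
print(f"mean vMMN amplitude in window: {vmmn[win].mean():.2f} (a.u.)")
```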

    Social and emotional processing in Prader-Willi syndrome: genetic subtype differences

    BACKGROUND: People with Prader-Willi syndrome (PWS) demonstrate social dysfunction and an increased risk of autism spectrum disorder, especially those with the maternal uniparental disomy (mUPD) versus the paternal deletion genetic subtype. This study compared the neural processing of social (faces) and nonsocial stimuli, varying in emotional valence, across genetic subtypes in 24 adolescents and adults with PWS. METHODS: Upright and inverted faces, and nonsocial objects with positive and negative emotional valence, were presented to participants with PWS in an oddball paradigm with smiling faces serving as targets. Behavioral and event-related potential (ERP) data were recorded. RESULTS: There were no genetic subtype differences in accuracy, and all participants performed above chance level. ERP responses revealed genetic subtype differences in face versus object processing. In those with deletions, the face-specific posterior N170 response varied in size across upright faces, inverted faces, and nonsocial objects. Participants with mUPD generated N170 responses of smaller amplitude and showed no stimulus differentiation. Brain responses to emotional content did not vary by subtype: positive objects elicited larger posterior and anterior late positive potential (LPP) responses than negative objects in all participants. Emotion-related differences in response to faces were limited to inverted faces, in the form of larger anterior LPP amplitudes to negative emotions over the right hemisphere. Detection of the target smiling faces was evident in the increased amplitude of frontal and central P3 responses, but only for inverted smiling faces. CONCLUSION: Persons with the mUPD subtype of PWS may show atypical face versus object processing, yet both subtypes demonstrated potentially altered processing of, attention to, and/or recognition of faces and their expressions.
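
    The subtype comparison hinges on whether the posterior N170 differentiates stimulus classes. A small sketch of a typical N170 peak readout on synthetic grand averages; the window limits and the injected face-sensitive dip are assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(3)
sfreq = 512
times = np.arange(-0.1, 0.6, 1 / sfreq)

def n170_peak(erp, times, tmin=0.13, tmax=0.20):
    """Most negative amplitude in the N170 window (window is an assumption)."""
    win = (times >= tmin) & (times <= tmax)
    return erp[win].min()

# Synthetic posterior-channel grand averages for the three stimulus classes
erps = {c: rng.normal(0, 0.5, times.size)
        for c in ("upright face", "inverted face", "object")}
erps["upright face"][(times > 0.14) & (times < 0.19)] -= 3.0  # face-sensitive dip

for cond, erp in erps.items():
    print(f"{cond:14s} N170 peak: {n170_peak(erp, times):6.2f} uV")
```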

    Contextual information resolves uncertainty about ambiguous facial emotions: Behavioral and magnetoencephalographic correlates

    Environmental conditions bias our perception of other people's facial emotions. This becomes quite relevant in potentially threatening situations, when a fellow's facial expression might indicate potential danger. The present study tested the prediction that a threatening environment biases the recognition of facial emotions. To this end, low- and medium-expressive happy and fearful faces (morphed to 10%, 20%, 30%, or 40% emotional) were presented within a context of instructed threat-of-shock or safety. Self-report data revealed that instructed threat led to a biased recognition of fearful, but not happy, facial expressions. Magnetoencephalographic correlates revealed spatio-temporal clusters of neural network activity associated with emotion recognition and contextual threat/safety in early to mid-latency time intervals in the left parietal cortex, bilateral prefrontal cortex, and the left temporal pole. Early parietal activity revealed a double dissociation of face-context information as a function of the expressive level of facial emotions: when facial expressions were difficult to recognize (low-expressive), contextual threat enhanced fear processing and contextual safety enhanced the processing of subtle happy faces. However, for rather easily recognizable faces (medium-expressive), the left hemisphere (parietal cortex, PFC, and temporal pole) showed enhanced activity to happy faces during contextual threat and to fearful faces during safety. Thus, contextual settings reduce the salience threshold and boost early face processing of low-expressive congruent facial emotions, whereas face-context incongruity or mismatch effects drive the neural activity of more easily recognizable facial emotions. These results elucidate how environmental settings help us recognize facial emotions, and the brain mechanisms underlying the recognition of subtle nuances of fear.
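
    Spatio-temporal cluster effects of this kind are commonly established with non-parametric cluster-based permutation tests. A simplified, time-only sketch using MNE-Python's permutation_cluster_test on synthetic condition data (the real analysis was spatio-temporal over MEG sensors/sources; sample sizes and the effect window here are assumptions):

```python
import numpy as np
from mne.stats import permutation_cluster_test

rng = np.random.default_rng(4)
n_subj, n_times = 20, 300

# Synthetic evoked time courses for two context conditions (threat vs. safety)
threat = rng.normal(0, 1, (n_subj, n_times))
safety = rng.normal(0, 1, (n_subj, n_times))
threat[:, 100:150] += 1.0          # injected effect in a mid-latency window

# Non-parametric cluster-based permutation test across time samples
t_obs, clusters, cluster_pv, _ = permutation_cluster_test(
    [threat, safety], n_permutations=1000, seed=0)

print("number of clusters:", len(clusters))
print("cluster p-values:", np.round(cluster_pv, 3))
```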

    Brain Computer Interfaces and Emotional Involvement: Theory, Research, and Applications

    This reprint is dedicated to the study of brain activity related to emotional and attentional involvement as measured by brain–computer interface (BCI) systems designed for different purposes. A BCI system can translate brain signals (e.g., electric or hemodynamic indicators of brain activity) into a command to execute an action in the BCI application (e.g., a wheelchair, the cursor on a screen, a spelling device, or a game). These tools have the advantage of real-time access to the ongoing brain activity of the individual, which can provide insight into the user's emotional and attentional states by training a classification algorithm to recognize mental states. The success of BCI systems in contemporary neuroscientific research relies on the fact that they allow one to “think outside the lab”. The integration of technological solutions, artificial intelligence, and cognitive science has allowed, and will continue to allow, researchers to envision more and more applications for the future. Clinical and everyday uses are described with the aim of inviting readers to imagine potential further developments.
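
    The translation step described above (brain signal to mental-state label to command) is, at its core, feature extraction plus classification. A minimal sketch of one common recipe, alpha-band power features with a linear classifier, on synthetic EEG epochs; the frequency band, channel layout, and classifier are illustrative assumptions rather than any specific BCI's design:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
sfreq, n_trials, n_chan, n_times = 250, 120, 8, 500

# Synthetic EEG epochs for two mental states (e.g., engaged vs. relaxed)
X_raw = rng.normal(0, 1, (n_trials, n_chan, n_times))
y = rng.integers(0, 2, n_trials)
# Alpha (10 Hz) power increase over posterior channels for the "relaxed" class
X_raw[y == 1, -2:, :] += np.sin(2 * np.pi * 10 * np.arange(n_times) / sfreq)

def bandpower(epochs, sfreq, fmin=8, fmax=13):
    """Mean alpha-band PSD per channel via Welch's method."""
    freqs, psd = welch(epochs, fs=sfreq, nperseg=256, axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., band].mean(axis=-1)

X = bandpower(X_raw, sfreq)                 # (n_trials, n_chan) features
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated state-decoding accuracy: {scores.mean():.2f}")
```

    In an online BCI, the same trained classifier would be applied to each incoming epoch and its predicted label mapped to an application command.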

    Facing facts: neuronal mechanisms of face perception

    The face is one of the most important stimuli carrying social meaning. Thanks to the fast analysis of faces, we are able to judge the physical attractiveness of their owners and features of their personality, intentions, and mood. From a facial expression we can gain information about danger present in the environment. The ability to process faces efficiently is clearly crucial for survival, so it seems natural that structures specialized for face processing exist in the human brain. In this article, we present recent findings from studies on the neuronal mechanisms of face perception and recognition in the light of current theoretical models. Results from brain imaging (fMRI, PET) and electrophysiology (ERP, MEG) show that particular regions (i.e., FFA, STS, IOA, AMTG, and prefrontal and orbitofrontal cortex) are involved in face perception. These results are confirmed by behavioral data and clinical observations, as well as by animal studies. The developmental findings reviewed in this article suggest that the ability to analyze face-like stimuli is hard-wired and improves during development. Still, experience with faces is not sufficient for an individual to become an expert in face perception. This thesis is supported by investigations of individuals with developmental disabilities, especially autism spectrum disorders (ASD).

    Spatiotemporal dynamics of covert versus overt processing of happy, fearful and sad facial expressions

    Behavioral and electrophysiological correlates of the influence of task demands on the processing of happy, sad, and fearful expressions were investigated in a within-subjects study that compared a perceptual distraction condition with task-irrelevant faces (i.e., a covert emotion task) to an emotion categorization condition with task-relevant faces (i.e., an overt emotion task). A state-of-the-art non-parametric mass univariate analysis method was used to address the limitations of previous studies. Behaviorally, participants responded faster to overtly categorized happy faces and were slower and less accurate when categorizing sad and fearful faces; there were no behavioral differences in the covert task. Event-related potential (ERP) responses to the emotional expressions included the N170 (140-180 ms), which was enhanced by emotion irrespective of task, with happy and sad expressions eliciting greater amplitudes than neutral expressions. EPN (200-400 ms) amplitude was modulated by task, with greater voltages in the overt condition, and by emotion; however, there was no interaction between emotion and task. ERP activity was modulated by emotion as a function of task only at a late processing stage, which included the LPP (500-800 ms), with fearful and sad faces showing greater amplitude enhancements than happy faces. This study reveals that affective content does not necessarily require attention in the early stages of face processing, supporting recent evidence that the core and extended parts of the face-processing system act in parallel rather than serially. The role of voluntary attention starts at an intermediate stage, and fully modulates the response to emotional content in the final stage of processing.
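
    The component windows named here (N170, EPN, LPP) lend themselves to a simple windowed mean-amplitude readout per task-by-emotion cell, which is the kind of input that mass univariate or ANOVA-style statistics would then test. A minimal sketch with synthetic subject-average ERPs; the conditions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(6)
sfreq = 500
times = np.arange(-0.2, 1.0, 1 / sfreq)

# Component windows reported in the abstract (in seconds)
windows = {"N170": (0.14, 0.18), "EPN": (0.20, 0.40), "LPP": (0.50, 0.80)}

# Synthetic subject-average ERPs: condition -> (n_subjects, n_times)
conds = {c: rng.normal(0, 1, (30, times.size)) for c in
         ("happy_overt", "happy_covert", "fear_overt", "fear_covert")}

def window_amplitude(erps, times, tmin, tmax):
    """Mean amplitude per subject in a component window."""
    win = (times >= tmin) & (times <= tmax)
    return erps[:, win].mean(axis=1)

# Grand-mean window amplitude per condition, per component
for comp, (tmin, tmax) in windows.items():
    amps = {c: window_amplitude(e, times, tmin, tmax).mean()
            for c, e in conds.items()}
    print(comp, {c: round(a, 2) for c, a in amps.items()})
```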

    Investigating Age-Related Neural Compensation During Emotion Perception Using Electroencephalography

    Previous research suggests declines in emotion perception in older as compared to younger adults, but the underlying neural mechanisms remain unclear. Here, we address this by investigating how “face-age” and “face emotion intensity” affect younger and older participants' behavioural and neural responses using event-related potentials (ERPs). Sixteen young and fifteen older adults viewed and judged the emotion type of facial images of old or young face-age and of high or low emotion intensity while EEG was recorded. The ERP results revealed that young and older participants exhibited significant ERP differences in two neural clusters, over the left frontal and centromedial regions (100–200 ms post-stimulus onset) and the frontal region (250–900 ms), when perceiving neutral faces. Older participants also exhibited significantly higher ERPs within these two neural clusters during anger and happiness perception tasks. However, while this pattern of activity supported neutral emotion processing, it was not sufficient to support the effective processing of facial expressions of anger and happiness, as older adults showed reduced performance when perceiving these emotions. These age-related changes are consistent with theoretical models of age-related changes in neurocognitive abilities and may reflect a general age-related cognitive neural compensation in older adults, rather than a specific emotion-processing neural compensation.
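
    The group contrast reported here boils down to comparing window-averaged cluster amplitudes between independent samples of young and older adults. A minimal sketch with synthetic data; the amplitude elevation and the simple t-test are illustrative assumptions:

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(7)
sfreq = 500
times = np.arange(-0.1, 1.0, 1 / sfreq)

# Synthetic frontal-cluster ERPs to neutral faces: (subjects, time samples)
young = rng.normal(0, 1, (16, times.size))
older = rng.normal(0, 1, (15, times.size)) + 0.5   # assumed amplitude elevation

# Compare groups in the early window reported in the abstract (100-200 ms)
win = (times >= 0.10) & (times <= 0.20)
t, p = ttest_ind(older[:, win].mean(axis=1), young[:, win].mean(axis=1))
print(f"older vs. young, early frontal window: t = {t:.2f}, p = {p:.3f}")
```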