
    Fear Modulates Visual Awareness Similarly for Facial and Bodily Expressions

    Background: Social interaction depends on a multitude of signals carrying information about the emotional state of others, but the relative importance of facial and bodily signals is still poorly understood. Past research has focused on the perception of facial expressions, whereas the perception of whole-body signals has only been studied recently. To better understand the relative contribution of affective signals from the face alone or from the whole body, we performed two experiments using binocular rivalry. This method is well suited to contrasting two classes of stimuli, testing our processing sensitivity to either stimulus, and addressing the question of how emotion modulates this sensitivity. Method: In the first experiment we directly contrasted fearful, angry, and neutral bodies and faces. We always presented bodies to one eye and faces to the other simultaneously for 60 s and asked participants to report what they perceived. In the second experiment we focused specifically on the role of fearful expressions of faces and bodies. Results: Taken together, the two experiments show that there is no clear bias toward either the face or the body when the expressions of the body and face are neutral or angry. However, the perceptual dominance in favor of either the face or the body is a function of the stimulus class expressing fear.
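
    Binocular rivalry quantifies this perceptual dominance as the share of viewing time during which each percept is reported. The sketch below is a minimal illustration of that measure, assuming a report stream of (onset, percept) events per 60 s trial; the event format and function name are hypothetical, not the authors' analysis code.

```python
def dominance_fractions(reports, trial_duration=60.0):
    """reports: list of (onset_s, percept) tuples, percept in
    {'face', 'body', 'mixed'}; each report is assumed to hold
    until the next report or the end of the trial."""
    totals = {'face': 0.0, 'body': 0.0, 'mixed': 0.0}
    events = reports + [(trial_duration, None)]
    for (onset, percept), (next_onset, _) in zip(events, events[1:]):
        totals[percept] += next_onset - onset
    return {p: t / trial_duration for p, t in totals.items()}

# Example: body dominates for 33 s, face for 21 s, mixed percept for 6 s.
print(dominance_fractions([(0.0, 'mixed'), (6.0, 'body'), (39.0, 'face')]))
```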

    Emotional Voice and Emotional Body Postures Influence Each Other Independently of Visual Awareness

    Multisensory integration may occur independently of visual attention, as previously shown with compound face-voice stimuli. In two experiments, we investigated whether the perception of whole-body expressions and the perception of voices influence each other when observers are not aware of seeing the bodily expression. In the first experiment, participants categorized masked happy and angry bodily expressions while ignoring congruent or incongruent emotional voices. The onset asynchrony between target and mask varied from −50 to +133 ms. Results show that the congruency between the emotion in the voice and the bodily expression influences audiovisual perception independently of the visibility of the stimuli. In the second experiment, participants categorized as fearful or happy emotional voices that were combined with masked bodily expressions. This experiment showed that bodily expressions presented outside visual awareness still influence prosody perception. Our experiments show that audiovisual integration between bodily expressions and affective prosody can take place outside of, and independently of, visual awareness.
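
    A minimal sketch of the congruency analysis described above, assuming each trial records whether the voice and the masked bodily expression conveyed the same emotion ('congruent') and whether the voice was categorized correctly ('correct'); the field names are illustrative, not the authors' data format.

```python
def congruency_effect(trials):
    """trials: list of dicts with keys 'congruent' (bool) and
    'correct' (bool). Returns (congruent accuracy, incongruent
    accuracy, difference); a positive difference means the unseen
    bodily expression biased voice categorization."""
    def accuracy(subset):
        subset = list(subset)
        return sum(t['correct'] for t in subset) / len(subset)
    acc_con = accuracy(t for t in trials if t['congruent'])
    acc_inc = accuracy(t for t in trials if not t['congruent'])
    return acc_con, acc_inc, acc_con - acc_inc

trials = [
    {'congruent': True, 'correct': True},
    {'congruent': True, 'correct': True},
    {'congruent': False, 'correct': True},
    {'congruent': False, 'correct': False},
]
print(congruency_effect(trials))  # (1.0, 0.5, 0.5)
```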

    A Computational Feedforward Model Predicts Categorization of Masked Emotional Body Language for Longer, but Not for Shorter, Latencies

    Given the presence of massive feedback loops in brain networks, it is difficult to disentangle the contributions of feedforward and feedback processing to the recognition of visual stimuli, in this case of emotional body expressions. The aim of the work presented in this letter is to shed light on how well feedforward processing explains rapid categorization of this important class of stimuli. By means of parametric masking, the contribution of feedback activity in human participants can be controlled. A close comparison is presented between human recognition performance and the performance of a computational neural model that exclusively modeled feedforward processing and was engineered to fulfill the computational requirements of recognition. Results show that the longer the stimulus onset asynchrony (SOA), the closer the performance of the human participants was to the values predicted by the model, with an optimum at an SOA of 100 ms. At short SOA latencies, human performance deteriorated, but categorization of the emotional expressions was still above baseline. The data suggest that, although feedback arising from inferotemporal cortex is theoretically likely to be blocked at an SOA of 100 ms, human participants still seem to rely on more local visual feedback processing to equal the model's performance.
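
    The model-human comparison described above can be pictured as a per-SOA gap between human accuracy and the feedforward model's predicted accuracy. The sketch below uses invented accuracy values purely for illustration; only the qualitative pattern (a gap that shrinks as SOA grows and is minimal near 100 ms) follows the abstract.

```python
# Hypothetical categorization accuracies per SOA (ms); invented values,
# for illustration only.
human = {33: 0.58, 50: 0.66, 67: 0.74, 100: 0.82, 133: 0.81}
model = {33: 0.80, 50: 0.81, 67: 0.81, 100: 0.82, 133: 0.82}

# Gap between human performance and the feedforward prediction,
# expected to shrink as SOA grows and to be minimal near 100 ms.
for soa in sorted(human):
    gap = human[soa] - model[soa]
    print(f"SOA {soa:>3} ms: human-model gap = {gap:+.2f}")
```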

    Virtual lesion of right posterior superior temporal sulcus modulates conscious visual perception of fearful expressions in faces and bodies

    The posterior Superior Temporal Sulcus (pSTS) represents a central hub in the complex cerebral network for person perception and emotion recognition, as also suggested by its heavy connections with face- and body-specific cortical structures (e.g., the fusiform face area, FFA, and the extrastriate body area, EBA) and subcortical structures (e.g., the amygdala). Information on whether the pSTS is causally involved in sustaining conscious visual perception of emotions expressed by faces and bodies is lacking. We explored this issue by combining a binocular rivalry procedure (in which emotional and neutral face and body postures rivaled with house images) with off-line, 1-Hz repetitive transcranial magnetic stimulation (rTMS). We found that temporary inhibition of the right pSTS reduced the perceptual dominance of fearful faces and increased the perceptual dominance of fearful bodies, while leaving the perception of neutral face and body images unaffected. Inhibition of the vertex had no effect on conscious visual perception of neutral or emotional face or body stimuli. Thus, the right pSTS plays a causal role in shortening conscious vision of fearful faces and in prolonging conscious vision of fearful bodies. These results suggest that the pSTS selectively modulates the activity of segregated networks involved in the conscious visual perception of emotional faces or bodies. We speculate that the opposite roles of the right pSTS in the conscious perception of fearful faces and bodies may be explained by the different connections that this region entertains with face- and body-selective visual areas as well as with the amygdalae and premotor regions.

    Results of experiment 2.

    Left: Fear responses as a function of morphed emotional spoken sentences when masked neutral actions, fearful bodily expressions, or no bodies were shown. Right: Fear responses corrected for baseline performance (no-body trials) as a function of morphed emotional spoken sentences when masked neutral actions or masked fearful bodily expressions were shown. Error bars represent the standard error of the mean. Asterisks indicate p < .001.

    Illustration of an example trial and example stimuli (experiment 2).

    An example of a trial of experiment 2 (left), an example of a fearful bodily expression and a neutral action (upper right), and the mask (below right).

    Illustration of an example trial and example stimuli (experiment 1).

    An example trial (left), an example of an angry and a happy bodily posture (upper right), and the mask (below right).

    Results of experiment 1.

    Left: Mean categorization performance plotted as a function of SOA latency, corrected for chance (50 percent). Right: Mean confidence ratings plotted as a function of SOA latency. Error bars represent the standard error of the mean. SOA = Stimulus Onset Asynchrony; TO = Target Only.
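
    A minimal sketch of the chance correction mentioned for the left panel, assuming a two-alternative categorization so that chance is 50 percent. Whether performance was rescaled or simply shifted is not stated here, so plain subtraction is shown.

```python
def chance_corrected(percent_correct, chance=50.0):
    """Return performance above chance, in percentage points."""
    return percent_correct - chance

print(chance_corrected(72.5))  # 22.5 points above chance
```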

    Breaking sitting with light activities vs structured exercise: a randomised crossover study demonstrating benefits for glycaemic control and insulin sensitivity in type 2 diabetes

    AIMS/HYPOTHESIS: We aimed to examine the effects of breaking sitting with standing and light-intensity walking vs an energy-matched bout of structured exercise on 24 h glucose levels and insulin resistance in patients with type 2 diabetes. METHODS: In a randomised crossover study, 19 patients with type 2 diabetes (13 men/6 women, 63 ± 9 years old) who were not using insulin each followed three regimens under free-living conditions, each lasting 4 days: (1) Sitting: 4415 steps/day with 14 h sitting/day; (2) Exercise: 4823 steps/day with 1.1 h/day of sitting replaced by moderate- to vigorous-intensity cycling (at an intensity of 5.9 metabolic equivalents [METs]); and (3) Sit Less: 17,502 steps/day with 4.7 h/day of sitting replaced by standing and light-intensity walking (an additional 2.5 h and 2.2 h, respectively, compared with the hours spent doing these activities in the Sitting regimen). Blocked randomisation of regimen orders was performed using a block size of six, with sealed, non-translucent envelopes. Individuals who assessed the outcomes were blinded to group assignment. Meals were standardised during each intervention. Physical activity and glucose levels were assessed for 24 h/day by accelerometry (activPAL) and a glucose monitor (iPro2), respectively. The incremental AUC (iAUC) for 24 h glucose (the primary outcome) and insulin resistance (HOMA2-IR) were assessed on days 4 and 5, respectively. RESULTS: The iAUC for 24 h glucose (mean ± SEM) was significantly lower during the Sit Less regimen than during Sitting (1263 ± 189 min × mmol/l vs 1974 ± 324 min × mmol/l; p = 0.002), and was similar between Sit Less and Exercise (Exercise: 1383 ± 194 min × mmol/l; p = 0.499). Exercise failed to improve HOMA2-IR compared with Sitting (2.06 ± 0.28 vs 2.16 ± 0.26; p = 0.177). In contrast, Sit Less (1.89 ± 0.26) significantly reduced HOMA2-IR compared with both Exercise (p = 0.015) and Sitting (p = 0.001). CONCLUSIONS/INTERPRETATION: Breaking sitting with standing and light-intensity walking improved 24 h glucose levels and insulin sensitivity in individuals with type 2 diabetes to a greater extent than structured exercise. Thus, our results suggest that breaking sitting with standing and light-intensity walking may be an alternative to structured exercise in promoting glycaemic control in patients with type 2 diabetes. TRIAL REGISTRATION: ClinicalTrials.gov NCT02371239. FUNDING: The study was supported by a Kootstra grant from Maastricht University Medical Centre+ and the Dutch Heart Foundation. Financial support was also provided by Novo Nordisk BV; Medtronic and Roche made the equipment available for continuous glucose monitoring.
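
    As a back-of-the-envelope illustration of the primary outcome, the sketch below computes an incremental AUC (iAUC) as the trapezoidal area of a glucose trace above a fasting baseline, in min × mmol/l. The baseline convention and the clipping of dips below baseline are assumptions; the trial's exact iAUC definition may differ.

```python
def iauc(times_min, glucose_mmol_l, baseline):
    """Trapezoidal area of (glucose - baseline), clipped at zero,
    in min x mmol/l."""
    area = 0.0
    for i in range(1, len(times_min)):
        inc0 = max(glucose_mmol_l[i - 1] - baseline, 0.0)
        inc1 = max(glucose_mmol_l[i] - baseline, 0.0)
        area += 0.5 * (inc0 + inc1) * (times_min[i] - times_min[i - 1])
    return area

# Five-minute CGM samples over one hour; fasting baseline 5.5 mmol/l.
times = list(range(0, 65, 5))
glucose = [5.5, 5.9, 6.8, 7.9, 8.6, 8.9, 8.7, 8.1, 7.3, 6.6, 6.1, 5.8, 5.6]
print(round(iauc(times, glucose, baseline=5.5)))  # area in min x mmol/l
```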