
    In the Blink of an Eye: Neural Responses Elicited to Viewing the Eye Blinks of Another Individual

    Facial movements have the potential to be powerful social signals. Previous studies have shown that eye gaze changes and simple mouth movements can elicit robust neural responses, which can be altered as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited to observing non-task-relevant blinks, eye closure, and eye gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1) that blinks would produce robust ERPs (N170 and later ERP components) was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s to all eye change conditions were observed and did not significantly differ between blinks and other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller relative to the gaze conditions. Our data indicate that small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks, which, in turn, has implications for the study of social cognition and the use of real-life social scenarios.
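
    The ERP measures used here follow the standard epoch-and-average logic: segment the continuous EEG around each stimulus onset, baseline-correct, average across trials, and quantify components such as the N170 in a fixed time window over temporo-occipital electrodes. The sketch below illustrates that generic pipeline on placeholder data; the sampling rate, channel index and time windows are assumptions rather than values from the study.

```python
# Minimal sketch of how an event-related potential (ERP) such as the N170 is
# typically derived: epoch the continuous EEG around stimulus onsets,
# baseline-correct, average across trials, then measure the peak in a
# component-specific time window. All parameters here (sampling rate,
# channel index, windows) are illustrative, not taken from the study.
import numpy as np

fs = 500                                        # sampling rate in Hz (assumed)
eeg = np.random.randn(64, 300 * fs)             # channels x samples (placeholder data)
onsets = np.arange(5 * fs, 295 * fs, 2 * fs)    # stimulus onset samples (placeholder)

tmin, tmax = -0.2, 0.6                          # epoch window in seconds
pre, post = int(round(-tmin * fs)), int(round(tmax * fs))

# Epoch the data: trials x channels x time
epochs = np.stack([eeg[:, s - pre:s + post] for s in onsets])

# Baseline-correct each epoch using the pre-stimulus interval
baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
epochs -= baseline

# Average across trials to obtain the ERP waveform
erp = epochs.mean(axis=0)                       # channels x time

# N170 amplitude: most negative value 130-200 ms post-stimulus at a
# temporo-occipital channel (index 60 is a stand-in for e.g. PO8)
times = np.arange(-pre, post) / fs
win = (times >= 0.13) & (times <= 0.20)
n170_amplitude = erp[60, win].min()
print(f"N170 amplitude: {n170_amplitude:.2f} (arbitrary units)")
```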

    Multiple faces elicit augmented neural activity

    How do our brains respond when we are being watched by a group of people? Despite the large volume of literature devoted to face processing, this question has received very little attention. Here, in two experiments, we measured the face-sensitive N170 and other ERPs elicited to viewing displays of one, two and three faces. In Experiment 1, overall image brightness and contrast were adjusted to be constant, whereas in Experiment 2 the local contrast and brightness of individual faces were not manipulated. A robust positive-negative-positive (P100-N170-P250) ERP complex and an additional late positive ERP, the P400, were elicited to all stimulus types. As the number of faces in the display increased, N170 amplitude increased for both stimulus sets, and latency increased in Experiment 2. P100 latency and P250 amplitude were affected by changes in overall brightness and contrast, but not by the number of faces in the display per se. In Experiment 1, when overall brightness and contrast were adjusted to be constant, later ERP (P250 and P400) latencies differed as a function of hemisphere. Hence, our data indicate that the N170 increases in magnitude when multiple faces are seen, apparently impervious to basic low-level stimulus features, including stimulus size. Outstanding questions remain regarding the category-sensitive neural activity elicited to viewing multiple items of stimulus categories other than faces.
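
    Equating overall image brightness and contrast, as in Experiment 1, is commonly done by matching each image's mean pixel intensity (brightness) and the standard deviation of intensities (RMS contrast) to shared target values. The sketch below shows that generic normalisation; the target values and clipping step are illustrative and may differ from the procedure actually used.

```python
# Illustrative sketch of equating overall brightness and contrast across
# stimulus images by matching each image's mean (brightness) and standard
# deviation of pixel intensities (RMS contrast) to common target values.
# This is one standard approach, not necessarily the exact procedure used
# in Experiment 1.
import numpy as np

def match_brightness_contrast(img, target_mean=0.5, target_std=0.15):
    """Rescale a grayscale image (values in [0, 1]) to a target mean and std."""
    img = img.astype(float)
    std = img.std()
    if std == 0:                                  # guard against uniform images
        return np.full_like(img, target_mean)
    normed = (img - img.mean()) / std             # zero mean, unit std
    out = normed * target_std + target_mean       # impose target statistics
    return np.clip(out, 0.0, 1.0)                 # clipping can slightly perturb the final stats

# Example: equate a set of displays containing one, two or three faces
displays = [np.random.rand(256, 256) for _ in range(3)]   # placeholder images
equated = [match_brightness_contrast(d) for d in displays]
for d in equated:
    print(f"mean={d.mean():.3f}  std={d.std():.3f}")
```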

    Meta-analyses support a taxonomic model for representations of different categories of audio-visual interaction events in the human brain

    Our ability to perceive meaningful action events involving objects, people and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including (1) living versus non-living audio-visual events, (2) audio-visual events involving vocalizations versus actions by living sources, (3) emotionally valent events, and (4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
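
    In outline, the ALE procedure models each reported focus as a 3D Gaussian probability distribution, combines an experiment's foci into a modelled activation (MA) map, and unites MA maps across experiments as ALE = 1 - prod(1 - MA_i). The sketch below shows that core computation in simplified form; the grid, kernel width and within-experiment combination rule are illustrative assumptions (real implementations scale kernels by sample size and add permutation-based thresholding, which is omitted here).

```python
# Rough sketch of the core activation likelihood estimation (ALE) computation:
# each reported focus is modelled as a 3D Gaussian, foci are combined into a
# modelled activation (MA) map per experiment, and MA maps are united across
# experiments as ALE = 1 - prod(1 - MA). Grid size, kernel width and the
# within-experiment combination rule are illustrative; significance testing
# (permutation-based thresholding) is omitted.
import numpy as np

GRID = (40, 48, 40)          # voxel grid (placeholder, e.g. a coarse MNI grid)
SIGMA = 2.5                  # Gaussian width in voxels (illustrative)

def gaussian_blob(center, grid=GRID, sigma=SIGMA):
    """3D Gaussian centred on a focus, normalised to peak 1 for illustration."""
    ii, jj, kk = np.indices(grid)
    d2 = (ii - center[0])**2 + (jj - center[1])**2 + (kk - center[2])**2
    return np.exp(-d2 / (2 * sigma**2))

def modelled_activation(foci):
    """Combine an experiment's foci into one MA map (voxel-wise union)."""
    blobs = np.stack([gaussian_blob(f) for f in foci])
    return 1.0 - np.prod(1.0 - blobs, axis=0)

def ale_map(experiments):
    """Union of MA maps across experiments: ALE = 1 - prod(1 - MA_i)."""
    ma_maps = np.stack([modelled_activation(foci) for foci in experiments])
    return 1.0 - np.prod(1.0 - ma_maps, axis=0)

# Toy example: two experiments with foci already converted to voxel coordinates
experiments = [[(20, 24, 20), (22, 25, 19)], [(21, 24, 21)]]
ale = ale_map(experiments)
print(ale.shape, ale.max())
```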

    Breastfeeding Duration Is Associated with Regional, but Not Global, Differences in White Matter Tracts

    Extended breastfeeding through infancy confers benefits on neurocognitive performance and intelligence tests, though few studies have examined the biological basis of these effects. To investigate correlations with breastfeeding, we examined the major white matter tracts in 4–8-year-old children using diffusion tensor imaging and volumetric measurements of the corpus callosum. We found a significant correlation between the duration of infant breastfeeding and fractional anisotropy scores in left-lateralized white matter tracts, including the left superior longitudinal fasciculus and left angular bundle, indicative of greater intrahemispheric connectivity. However, in contrast to expectations from earlier studies, no correlation was observed with corpus callosum size, and thus none with this measure of global interhemispheric white matter connectivity. These findings suggest a complex but significant positive association between breastfeeding duration and white matter connectivity, including in pathways known to be functionally relevant for reading and language development.
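
    For context, fractional anisotropy (FA) is computed from the three eigenvalues of the diffusion tensor, and the reported association amounts to a correlation between per-child tract FA and breastfeeding duration. The sketch below shows the standard FA formula and a toy correlation; every number in it is invented for illustration and none comes from the study.

```python
# Sketch of the two quantities underlying the reported association:
# (1) fractional anisotropy (FA) computed from the three eigenvalues of the
#     diffusion tensor in a voxel/tract, and
# (2) a simple correlation between per-child mean FA in a tract and
#     breastfeeding duration. All numbers are made up for illustration.
import numpy as np

def fractional_anisotropy(l1, l2, l3):
    """Standard FA formula from the diffusion tensor eigenvalues."""
    md = (l1 + l2 + l3) / 3.0                      # mean diffusivity
    num = np.sqrt((l1 - md)**2 + (l2 - md)**2 + (l3 - md)**2)
    den = np.sqrt(l1**2 + l2**2 + l3**2)
    return np.sqrt(1.5) * num / den

# Example voxel: strongly anisotropic white matter
print(fractional_anisotropy(1.7e-3, 0.3e-3, 0.3e-3))   # ~0.80

# Hypothetical per-child data: breastfeeding duration (months) vs. mean FA
# in the left superior longitudinal fasciculus (values invented)
duration = np.array([2, 4, 6, 9, 12, 15, 18, 24])
mean_fa  = np.array([0.42, 0.43, 0.44, 0.44, 0.46, 0.45, 0.47, 0.48])
r = np.corrcoef(duration, mean_fa)[0, 1]
print(f"Pearson r = {r:.2f}")
```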

    Why Should We Study Experience More Systematically: Neurophenomenology and Modern Cognitive Science

    In this article I defend the view that cognitive science needs to use first- and second-person methods more systematically, as part of everyday research practice, if it wants to understand the human mind in its full scope. The neurophenomenological programme proposed by Varela as a remedy for the hard problem of consciousness (i.e. the problem of experience) does not solve it at the ontological level. Nevertheless, it represents a good starting point for tackling the phenomenon of experience in a more systematic, methodologically sound way. On the other hand, Varela’s criterion of phenomenological reduction as a necessary condition for the systematic investigation of experience is too strong. Notwithstanding this and some other problems that research on experience faces (e.g. the problem of training, the question of what kind of participants we want to study), it is becoming clear that investigating experience seriously – from the first- and second-person perspectives – is a necessary step cognitive science must take. This holds especially when researching phenomena that involve consciousness and/or where the differentiation between conscious and unconscious processing is crucial. Furthermore, gathering experiential data is essential for interpreting experimental results gained purely by quantitative methods – especially when we implicitly or explicitly refer to experience in our conclusions and interpretations. To support these claims, some examples from the broader area of decision making will be given (the deliberation-without-attention effect, the cognitive reflection test).

    Primary visual cortex activity along the apparent-motion trace reflects illusory perception

    The illusion of apparent motion can be induced when visual stimuli are successively presented at different locations. It has been shown in previous studies that motion-sensitive regions in extrastriate cortex are relevant for the processing of apparent motion, but it is unclear whether primary visual cortex (V1) is also involved in the representation of the illusory motion path. We investigated, in human subjects, apparent-motion-related activity in patches of V1 representing locations along the path of illusory stimulus motion using functional magnetic resonance imaging. Here we show that apparent motion caused a blood-oxygenation-level-dependent response along the V1 representations of the apparent-motion path, including regions that were not directly activated by the apparent-motion-inducing stimuli. This response was unaltered when participants had to perform an attention-demanding task that diverted their attention away from the stimulus. With a bistable motion quartet, we confirmed that the activity was related to the conscious perception of movement. Our data suggest that V1 is part of the network that represents the illusory path of apparent motion. The activation in V1 can be explained either by lateral interactions within V1 or by feedback mechanisms from higher visual areas, especially the motion-sensitive human MT/V5 complex.

    Concept of an Upright Wearable Positron Emission Tomography Imager in Humans

    Background: Positron Emission Tomography (PET) is traditionally used to image patients in restrictive positions, with few devices allowing for upright, brain-dedicated imaging. Our team has explored the concept of wearable PET imagers that could provide functional brain imaging of freely moving subjects. To test feasibility and determine future considerations for development, we built a rudimentary proof-of-concept prototype (Helmet_PET) and conducted tests in phantoms and four human volunteers. Methods: Twelve Silicon Photomultiplier-based detectors were assembled in a ring with exterior weight support and an interior mechanism that could be adjustably fitted to the head. We conducted brain phantom tests and scanned four patients scheduled for diagnostic F18-FDG PET/CT imaging. For the human subjects, the imager was angled such that the field of view included the basal ganglia and visual cortex, to test for the typical resting-state pattern. Imaging in two subjects was performed ~4 hr after PET/CT imaging to simulate a lower injected F18-FDG dose by taking advantage of the natural radioactive decay of the tracer (F18 half-life of 110 min), with an estimated imaging dose of ~25% of the standard. Results: We found that imaging with a simple lightweight ring of detectors was feasible using a fraction of the standard radioligand dose. Activity levels in the human participants were quantitatively similar to standard PET in a set of anatomical ROIs. Typical resting-state brain pattern activation was demonstrated even in a 1-min scan of active head rotation. Conclusion: To our knowledge, this is the first demonstration of imaging a human subject with a novel wearable PET imager that moves with the head, even during robust head movements. We discuss potential research and clinical applications that will drive the design of a fully functional device. Designs will need to consider trade-offs between a low-weight device with high mobility and a heavier device with greater sensitivity and a larger field of view.
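
    The ~25% dose estimate follows from the exponential decay law A(t) = A0 * 0.5^(t / T_half) with the 110-minute F18 half-life: a delay of about 4 hours leaves roughly a fifth to a quarter of the original activity. The short calculation below, a sketch assuming a 240-minute delay (the exact injection and scan timings are not given in the abstract), makes this explicit.

```python
# Worked version of the dose estimate in the abstract: F18 decays with a
# 110-minute half-life, so the activity remaining after a ~4-hour delay is
# A(t) = A0 * 0.5 ** (t / T_half). The 240-minute delay used here is an
# approximation, since exact injection/scan timings are not reported.
T_HALF_MIN = 110.0       # F18 half-life in minutes
delay_min = 4 * 60       # ~4 hours between PET/CT and Helmet_PET imaging

remaining_fraction = 0.5 ** (delay_min / T_HALF_MIN)
print(f"Remaining activity: {remaining_fraction:.0%}")   # roughly 22%, i.e. about a fifth to a quarter
```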

    Neural mechanisms underlying visual attention to health warnings on branded and plain cigarette packs

    AIMS: To (1) test whether activation in brain regions related to reward (nucleus accumbens) and emotion (amygdala) differs when branded and plain packs of cigarettes are viewed, (2) test whether these activation patterns differ by smoking status and (3) examine whether activation patterns differ as a function of visual attention to health warning labels on cigarette packs. DESIGN: Cross-sectional observational study combining functional magnetic resonance imaging (fMRI) with eye-tracking. Non-smokers, weekly smokers and daily smokers performed a memory task on branded and plain cigarette packs with pictorial health warnings presented in an event-related design. SETTING: Clinical Research and Imaging Centre, University of Bristol, UK. PARTICIPANTS: Non-smokers, weekly smokers and daily smokers (n = 72) were tested. After exclusions, data from 19 non-smokers, 19 weekly smokers and 20 daily smokers were analysed. MEASUREMENTS: Brain activity was assessed in whole-brain analyses and in pre-specified masked analyses in the amygdala and nucleus accumbens. Online eye-tracking during scanning recorded visual attention to health warnings. FINDINGS: There was no evidence for a main effect of pack type or smoking status in either the nucleus accumbens or amygdala, and this was unchanged when taking account of visual attention to health warnings. However, there was evidence for an interaction, such that we observed increased activation in the right amygdala when viewing branded as compared with plain packs among weekly smokers (P = 0.003). When taking into account visual attention to health warnings, we observed higher levels of activation in the visual cortex in response to plain packaging compared with branded packaging of cigarettes (P = 0.020). CONCLUSIONS: Based on functional magnetic resonance imaging and eye-tracking data, health warnings appear to be more salient on ‘plain’ cigarette packs than branded packs.

    Color responses of the human lateral geniculate nucleus: selective amplification of S-cone signals between the lateral geniculate nucleus and primary visual cortex measured with high-field fMRI

    The lateral geniculate nucleus (LGN) is the primary thalamic nucleus that relays visual information from the retina to the primary visual cortex (V1) and has been extensively studied in non-human primates. A key feature of the LGN is the segregation of retinal inputs into different cellular layers characterized by their differential responses to red-green (RG) color (L/M opponent), blue-yellow (BY) color (S-cone opponent) and achromatic (Ach) contrast. In this study we use high-field functional magnetic resonance imaging (4 tesla, 3.6 × 3.6 × 3 mm³) to record simultaneously the responses of the human LGN and V1 to chromatic and Ach contrast, in order to investigate the LGN responses to color and how these are modified as information is transferred from the LGN to the cortex. We find that the LGN has a robust response to RG color contrast, equal to or greater than the Ach response, but a significantly poorer sensitivity to BY contrast. In V1 at low temporal rates (2 Hz), however, the sensitivity of the BY color pathway is selectively enhanced, rising in relation to the RG and Ach responses. We find that this effect generalizes across different stimulus contrasts and spatial stimuli (1-d and 2-d patterns), but is selective for temporal frequency, as it is not found for stimuli at 8 Hz. While the mechanism of this cortical enhancement of BY color vision and its dynamic component is unknown, its role may be to compensate for a weak BY signal originating from the sparse distribution of neurons in the retina and LGN.
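
    The RG (L/M opponent), BY (S-cone opponent) and Ach conditions correspond to modulations along the cardinal directions of cone-contrast space: L and M cones driven in opposition, S cones modulated in isolation, and all three cone classes modulated together. The sketch below writes these directions as (dL/L, dM/M, dS/S) vectors and builds a sinusoidal modulation at a chosen temporal rate; the vectors and scaling are a simplified illustration, not the study's calibrated stimuli.

```python
# Sketch of the three cardinal stimulus directions in cone-contrast space:
# an L/M-opponent ("red-green") direction, an S-cone-isolating ("blue-yellow")
# direction, and an achromatic direction that modulates all three cone classes
# together. Vectors give (dL/L, dM/M, dS/S) cone contrasts; the scaling is
# illustrative and does not reproduce the study's calibrated contrasts.
import numpy as np

directions = {
    "RG (L/M opponent)": np.array([1.0, -1.0, 0.0]),
    "BY (S-cone)":       np.array([0.0,  0.0, 1.0]),
    "Ach (luminance)":   np.array([1.0,  1.0, 1.0]),
}

def cone_contrast_modulation(direction, contrast, t, freq_hz=2.0):
    """Sinusoidal cone-contrast modulation along a given direction.

    contrast scales the unit-normalised direction; freq_hz is the temporal
    rate (2 Hz and 8 Hz were the rates compared in the study).
    """
    unit = direction / np.linalg.norm(direction)
    return contrast * unit[:, None] * np.sin(2 * np.pi * freq_hz * t)[None, :]

t = np.linspace(0, 1, 1000)                       # one second of modulation
rg = cone_contrast_modulation(directions["RG (L/M opponent)"], 0.1, t)
print(rg.shape)                                   # (3 cone classes, 1000 samples)
```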