235 research outputs found
In the Blink of an Eye: Neural Responses Elicited to Viewing the Eye Blinks of Another Individual
Facial movements have the potential to be powerful social signals. Previous studies have shown that eye gaze changes and simple mouth movements can elicit robust neural responses, which can be altered as a function of potential social significance. Eye blinks are frequent events and are usually not deliberately communicative, yet blink rate is known to influence social perception. Here, we studied event-related potentials (ERPs) elicited to observing non-task-relevant blinks, eye closure, and eye gaze changes in a centrally presented natural face stimulus. Our first hypothesis (H1), that blinks would produce robust ERPs (N170 and later ERP components), was validated, suggesting that the brain may register and process all types of eye movement for potential social relevance. We also predicted an amplitude gradient for ERPs as a function of gaze change, relative to eye closure and then blinks (H2). H2 was only partly validated: large temporo-occipital N170s to all eye change conditions were observed and did not significantly differ between blinks and other conditions. However, blinks elicited late ERPs that, although robust, were significantly smaller relative to gaze conditions. Our data indicate that small and task-irrelevant facial movements such as blinks are measurably registered by the observer's brain. This finding is suggestive of the potential social significance of blinks which, in turn, has implications for the study of social cognition and the use of real-life social scenarios.
Meta-analyses support a taxonomic model for representations of different categories of audio-visual interaction events in the human brain
Our ability to perceive meaningful action events involving objects, people and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical “hubs”) preferentially involved in multisensory processing along different stimulus category dimensions, including (1) living versus non-living audio-visual events, (2) audio-visual events involving vocalizations versus actions by living sources, (3) emotionally valent events, and (4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
Multiple faces elicit augmented neural activity
How do our brains respond when we are being watched by a group of people? Despite the large volume of literature devoted to face processing, this question has received very little attention. Here we measured the effects of viewing displays of one, two and three faces on the face-sensitive N170 and other ERPs in two experiments. In Experiment 1, overall image brightness and contrast were adjusted to be constant, whereas in Experiment 2 local contrast and brightness of individual faces were not manipulated. A robust positive-negative-positive (P100-N170-P250) ERP complex and an additional late positive ERP, the P400, were elicited to all stimulus types. As the number of faces in the display increased, N170 amplitude increased for both stimulus sets, and latency increased in Experiment 2. P100 latency and P250 amplitude were affected by changes in overall brightness and contrast, but not by the number of faces in the display per se. In Experiment 1, when overall brightness and contrast were adjusted to be constant, later ERP (P250 and P400) latencies showed differences as a function of hemisphere. Hence, our data indicate that the N170 increases in magnitude when multiple faces are seen, apparently impervious to basic low-level stimulus features including stimulus size. Outstanding questions remain regarding category-sensitive neural activity elicited to viewing multiple items of stimulus categories other than faces.
Breastfeeding Duration Is Associated with Regional, but Not Global, Differences in White Matter Tracts
Extended breastfeeding through infancy confers benefits on neurocognitive performance and intelligence tests, though few studies have examined the biological basis of these effects. To investigate correlations with breastfeeding, we examined the major white matter tracts in 4–8 year-old children using diffusion tensor imaging and volumetric measurements of the corpus callosum. We found a significant correlation between the duration of infant breastfeeding and fractional anisotropy scores in left-lateralized white matter tracts, including the left superior longitudinal fasciculus and left angular bundle, which is indicative of greater intrahemispheric connectivity. However, in contrast to expectations from earlier studies, no correlations were observed with corpus callosum size, and thus none with this measure of global interhemispheric white matter connectivity. These findings suggest a complex but significant positive association between breastfeeding duration and white matter connectivity, including in pathways known to be functionally relevant for reading and language development.
Concept of an Upright Wearable Positron Emission Tomography Imager in Humans
Background: Positron Emission Tomography (PET) is traditionally used to image patients in restrictive positions, with few devices allowing for upright, brain-dedicated imaging. Our team has explored the concept of wearable PET imagers which could provide functional brain imaging of freely moving subjects. To test feasibility and determine future considerations for development, we built a rudimentary proof-of-concept prototype (Helmet_PET) and conducted tests in phantoms and four human volunteers. Methods: Twelve Silicon Photomultiplier-based detectors were assembled in a ring with exterior weight support and an interior mechanism that could be adjustably fitted to the head. We conducted brain phantom tests as well as scanned four patients scheduled for diagnostic F18-FDG PET/CT imaging. For human subjects, the imager was angled such that the field of view included the basal ganglia and visual cortex to test for a typical resting-state pattern. Imaging in two subjects was performed ~4 hr after PET/CT imaging to simulate a lower injected F18-FDG dose by taking advantage of the natural radioactive decay of the tracer (F18 half-life of 110 min), with an estimated imaging dosage of 25% of the standard. Results: We found that imaging with a simple lightweight ring of detectors was feasible using a fraction of the standard radioligand dose. Activity levels in the human participants were quantitatively similar to standard PET in a set of anatomical ROIs. A typical resting-state brain activation pattern was demonstrated even in a 1 min scan with active head rotation. Conclusion: To our knowledge, this is the first demonstration of imaging a human subject with a novel wearable PET imager that moves with robust head movements. We discuss potential research and clinical applications that will drive the design of a fully functional device. Designs will need to consider trade-offs between a low-weight device with high mobility and a heavier device with greater sensitivity and a larger field of view.
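The dose estimate in the abstract above follows directly from exponential decay: with an F18 half-life of 110 min, waiting ~4 hr (240 min) leaves roughly a quarter of the injected activity. A minimal sketch of this arithmetic (the function name is illustrative, not from the study):

```python
def remaining_fraction(delay_min: float, half_life_min: float = 110.0) -> float:
    """Fraction of radiotracer activity left after delay_min minutes of decay."""
    return 0.5 ** (delay_min / half_life_min)

# ~240 min after injection, about 22% of the F18 activity remains,
# consistent with the abstract's estimated dose of ~25% of the standard.
print(f"{remaining_fraction(240):.2f}")  # prints 0.22
```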
Why Should We Study Experience More Systematically: Neurophenomenology and Modern Cognitive Science
In this article I defend the view that cognitive science needs to use first- and second-person methods more systematically, as part of everyday research practice, if it wants to understand the human mind in its full scope. The neurophenomenological programme proposed by Varela as a remedy for the hard problem of consciousness (i.e. the problem of experience) does not solve it at the ontological level. Nevertheless, it represents a good starting point for tackling the phenomenon of experience in a more systematic, methodologically sound way. On the other hand, Varela’s criterion of phenomenological reduction as a necessary condition for systematic investigation of experience is too strong. Regardless of that and some other problems that research of experience faces (e.g. the problem of training, the question of what kind of participants we want to study), it is becoming clear that investigating experience seriously – from first- and second-person perspectives – is a necessary step cognitive science must take. This holds especially when researching phenomena that involve consciousness and/or where differentiation between conscious and unconscious processing is crucial. Furthermore, gathering experiential data is essential for interpreting experimental results gained purely by quantitative methods – especially when we are implicitly or explicitly referring to experience in our conclusions and interpretations. To support these claims, some examples from the broader area of decision making will be given (the effect of deliberation-without-attention, the cognitive reflection test).
Mindful breath awareness meditation facilitates efficiency gains in brain networks: A steady-state visually evoked potentials study
The beneficial effects of mindfulness-based therapeutic interventions have stimulated a rapidly growing body of scientific research into underlying psychological processes. Resulting evidence indicates that engaging with mindfulness meditation is associated with increased performance on a range of cognitive tasks. However, the mechanisms promoting these improvements require further investigation. We studied changes in the behavioural performance of 34 participants during a multiple object tracking (MOT) task that taps core cognitive processes, namely sustained selective visual attention and spatial working memory. Concurrently, we recorded the steady-state visually evoked potential (SSVEP), an EEG signal elicited by the continuously flickering moving objects and an indicator of attentional engagement. Participants were tested before and after eight weeks of practicing mindful breath awareness meditation or progressive muscle relaxation as an active control condition. The meditation group improved their MOT performance and exhibited a reduction of SSVEP amplitudes, whereas no such changes were observed in the relaxation group. Neither group changed in self-reported positive affect and mindfulness, while a marginal increase in negative affect was observed in the mindfulness group. This novel way of combining MOT and SSVEP provides the important insight that mindful breath awareness meditation may lead to refinements of attention networks, enabling more efficient use of attentional resources.
Time perception and the experience of agency in meditation and hypnosis
Mindfulness meditation and hypnosis are related in opposing ways to awareness of intentions. The cold control theory of hypnosis proposes that hypnotic responding involves the experience of involuntariness while performing an actually intentional action. Hypnosis therefore relies upon inaccurate metacognition about intentional actions and experiences. Mindfulness meditation centrally involves awareness of intentions and is associated with improved metacognitive access to intentions. Therefore, mindfulness meditators and highly hypnotizable people may lie at opposite ends of a spectrum with regard to metacognitive access to intention-related information. Here we review the theoretical background and evidence for differences in the metacognition of intentions in these groups, as revealed by chronometric measures of the awareness of voluntary action: the timing of an intention to move (Libet's “W” judgments) and the compressed perception of time between an intentional action and its outcome (“intentional binding”). We review these measures and critically evaluate their proposed connection to the experience of volition and sense of agency.
