Machine Learning Methods for Fusion and Inference of Simultaneous EEG and fMRI
Simultaneous electroencephalogram (EEG) and functional magnetic resonance imaging (fMRI) have gained increasing popularity in studying human cognition due to their potential to map brain dynamics with high spatial and temporal fidelity. Such detailed mapping of the brain is crucial for understanding the neural mechanisms by which humans make perceptual decisions. Despite recent advances in data acquisition and analysis of simultaneous EEG-fMRI, the lack of effective computational tools for optimal fusion of the two modalities remains a major challenge. The goal of this dissertation is to provide a recipe of machine learning methods for the fusion of simultaneous EEG-fMRI data. Specifically, we investigate three types of fusion approaches and apply them to study the whole-brain spatiotemporal dynamics during a rapid object recognition task where subjects discriminate face, car, and house images under ambiguity. We first use an asymmetric fusion approach capitalizing on temporal single-trial EEG variability to identify early and late neural subsystems selective to categorical choice of faces versus nonfaces. We find that the degree of interaction in these networks accounts for a substantial fraction of our bias to see faces. Based on computational modeling of behavioral measures, we further dissociate separate neural correlates of the face decision bias modulated by varying levels of stimulus evidence. Second, we develop a state-space-model-based symmetric fusion approach to integrate EEG and fMRI in a probabilistic generative framework. We use a variational Bayesian method to infer the network connectivity among latent neural states shared by EEG and fMRI. Finally, we use a data-driven symmetric fusion approach to compare representations from EEG and fMRI against those of a deep convolutional neural network (CNN) in a common similarity space. We show a spatiotemporal hierarchical correspondence in visual processing stages between the human brain and the CNN.
Collectively, our results show that the spatiotemporal properties of neural circuits revealed by the analysis of simultaneous EEG-fMRI data can reflect the choice behavior of subjects during rapid perceptual decision making.
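The data-driven fusion step described above is typically carried out with representational similarity analysis (RSA): each modality (or CNN layer) is summarized by a representational dissimilarity matrix (RDM) over the same stimuli, and RDMs are then compared in that common similarity space. A minimal sketch with synthetic data; all array sizes and variable names here are hypothetical, not taken from the dissertation:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Synthetic "responses": rows = stimuli (e.g. face/car/house images),
# columns = features (fMRI voxels, EEG sensors, or CNN units).
n_stimuli = 20
fmri_patterns = rng.normal(size=(n_stimuli, 100))    # hypothetical voxel patterns
cnn_activations = rng.normal(size=(n_stimuli, 512))  # hypothetical CNN layer activations

def rdm(patterns):
    """Representational dissimilarity matrix in condensed form:
    1 - Pearson correlation between every pair of stimulus patterns."""
    return pdist(patterns, metric="correlation")

# Compare the two representational geometries in the common similarity space:
# rank-correlate the pairwise dissimilarities of the two RDMs.
rho, p = spearmanr(rdm(fmri_patterns), rdm(cnn_activations))
print(f"RDM similarity (Spearman rho): {rho:.3f}")
```

Repeating this comparison per CNN layer against region-wise fMRI RDMs (space) and per EEG time point (time) yields a spatiotemporal correspondence map of the kind the abstract reports.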
The Relation of Ongoing Brain Activity, Evoked Neural Responses, and Cognition
Ongoing brain activity has been observed since the earliest neurophysiological recordings and is found over a wide range of temporal and spatial scales. It is characterized by remarkably large spontaneous modulations. Here, we review evidence for the functional role of these ongoing activity fluctuations and argue that they constitute an essential property of the neural architecture underlying cognition. The role of spontaneous activity fluctuations is probably best understood when considering both their spatiotemporal structure and their functional impact on cognition. We first briefly argue against a “segregationist” view on ongoing activity, both in time and space, which would selectively associate certain frequency bands or levels of spatial organization with specific functional roles. Instead, we emphasize the functional importance of the full range, from differentiation to integration, of intrinsic activity within a hierarchical spatiotemporal structure. We then highlight the flexibility and context-sensitivity of intrinsic functional connectivity that suggest its involvement in functionally relevant information processing. We pursue this role in information processing by reviewing how ongoing brain activity interacts with the brain’s afferent and efferent information exchange with its environment. We focus on the relationship between the variability of ongoing and evoked brain activity, and review recent reports that tie ongoing brain activity fluctuations to variability in human perception and behavior. Finally, these observations are discussed within the framework of the free-energy principle which – applied to human brain function – provides a theoretical account for a non-random, coordinated interaction of ongoing and evoked activity in perception and behavior.
a methodological approach
In natural environments, visual and auditory stimulation elicit responses
across a large set of brain regions in a fraction of a second, yielding
representations of the multimodal scene and its properties. The rapid and
complex neural dynamics underlying visual and auditory information processing
pose major challenges to human cognitive neuroscience. Brain signals measured
non-invasively are inherently noisy, the format of neural representations is
unknown, and transformations between representations are complex and often
nonlinear. Further, no single non-invasive brain measurement technique
provides a spatio-temporally integrated view. In this opinion piece, we argue
that progress can be made by a concerted effort based on three pillars of
recent methodological development: (i) sensitive analysis techniques such as
decoding and cross-classification, (ii) complex computational modelling using
models such as deep neural networks, and (iii) integration across imaging
methods (magnetoencephalography/electroencephalography, functional magnetic
resonance imaging) and models, e.g. using representational similarity
analysis. We showcase two recent efforts that have been undertaken in this
spirit and provide novel results about visual and auditory scene analysis.
Finally, we discuss the limits of this perspective and sketch a concrete
roadmap for future research.
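Of the three pillars above, (i) is the most self-contained to illustrate. Decoding asks whether stimulus information can be read out from measured brain signals; cross-classification asks whether a decoder trained in one condition generalizes to another, which would indicate a shared representational format. A toy sketch with synthetic data, using a simple nearest-centroid decoder as a stand-in for the linear classifiers typically used (all sizes and variable names are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical sensor patterns: 100 trials x 50 channels per context
# (e.g. two stimulus conditions), with binary stimulus labels.
n_trials, n_channels = 100, 50
X_ctx_a = rng.normal(size=(n_trials, n_channels))
X_ctx_b = rng.normal(size=(n_trials, n_channels))
y = rng.integers(0, 2, size=n_trials)

def fit_centroids(X, labels):
    # One mean pattern per class: the decoder's "template".
    return np.stack([X[labels == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    # Assign each trial to the class with the nearest template.
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

centroids = fit_centroids(X_ctx_a, y)

# (i) Decoding: accuracy within context A (here on the training set;
# real analyses would cross-validate).
acc_within = (predict(centroids, X_ctx_a) == y).mean()

# (ii) Cross-classification: train in context A, test in context B.
acc_cross = (predict(centroids, X_ctx_b) == y).mean()
print(f"within: {acc_within:.2f}, cross: {acc_cross:.2f}")
```

With random data both accuracies hover around chance (0.5); above-chance cross-classification in real recordings is the signature of a representation shared across contexts.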
Decoding the consumer’s brain: Neural representations of consumer experience
Understanding consumer experience – what consumers think about brands, how they feel about services, whether they like certain products – is crucial to marketing practitioners. ‘Neuromarketing’, as the application of neuroscience in marketing research is called, has generated excitement with the promise of understanding consumers’ minds by probing their brains directly. Recent advances in neuroimaging analysis leverage machine learning and pattern classification techniques to uncover patterns from neuroimaging data that can be associated with thoughts and feelings. In this dissertation, I measure brain responses of consumers by functional magnetic resonance imaging (fMRI) in order to ‘decode’ their minds. In three studies, I demonstrate how different aspects of consumer experience can be studied with fMRI recordings. First, I study how consumers think about brand image by comparing their brain responses during passive viewing of visual templates (photos depicting various social scenarios) to those during active visualizing of a brand’s image. Second, I use brain responses during viewing of affective pictures to decode emotional responses during viewing of movie trailers. Lastly, I examine whether marketing videos that evoke s
Location representations of objects in cluttered scenes in the human brain
When we perceive a visual scene, we usually see an arrangement of multiple cluttered and
partly overlapping objects, like a park with trees and people in it. Spatial attention helps us
to prioritize relevant portions of such scenes to efficiently interact with our environments. In
previous experiments on object recognition, objects were often presented in isolation, and these
studies found that the location of objects is encoded early in time (before ∼150 ms) and in early
visual cortex or in the dorsal stream. However, in real life objects rarely appear in isolation but
are instead embedded in cluttered scenes. Encoding the location of an object in clutter might
require fundamentally different neural computations. Therefore, this dissertation addressed the
question of how location representations of objects on cluttered backgrounds are encoded in
the human brain. To answer this question, we investigated where in cortical space and when in
neural processing time location representations emerge when objects are presented on cluttered
backgrounds, and what role spatial attention plays in the encoding of object location. We
addressed these questions in two studies, both including fMRI and EEG experiments. The results
of the first study showed that location representations of objects on cluttered backgrounds emerge
along the ventral visual stream, peaking in region LOC with a temporal delay that was linked to
recurrent processing. The second study showed that spatial attention modulated those location
representations in mid- and high-level regions along the ventral stream and late in time (after
∼150 ms), independently of whether backgrounds were cluttered or not. These findings show
that location representations emerge during late stages of processing both in cortical space and
in neural processing time when objects are presented on cluttered backgrounds and that they
are enhanced by spatial attention. Our results provide a new perspective on visual information
processing in the ventral visual stream and on the temporal dynamics of location processing.
Finally, we discuss how shared neural substrates of location and category representations in the
brain might improve object recognition for real-world vision.
Change blindness: eradication of gestalt strategies
Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval, and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research, 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this, we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it adds further weight to the argument that objects may be stored in and retrieved from a pre-attentional store during this task.
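As a consistency check, the bracketed statistic can be reproduced numerically: the p-value is the upper-tail probability of an F distribution with (1, 4) degrees of freedom evaluated at the observed F.

```python
from scipy.stats import f

# P(F(1, 4) > 2.565): upper-tail probability of the reported F statistic.
# This should agree with the p = 0.185 reported in the abstract.
p = f.sf(2.565, dfn=1, dfd=4)
print(round(p, 3))
```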
The spatiotemporal neural dynamics underlying perceived similarity for real-world objects.
The degree to which we perceive real-world objects as similar or dissimilar structures our perception and guides categorization behavior. Here, we investigated the neural representations enabling perceived similarity using behavioral judgments, fMRI and MEG. As different object dimensions co-occur and partly correlate, to understand the relationship between perceived similarity and brain activity it is necessary to assess the unique role of multiple object dimensions. We thus behaviorally assessed perceived object similarity in relation to shape, function, color and background. We then used representational similarity analyses to relate these behavioral judgments to brain activity. We observed a link between each object dimension and representations in visual cortex. These representations emerged rapidly within 200 ms of stimulus onset. Assessing the unique role of each object dimension revealed partly overlapping and distributed representations: while color-related representations distinctly preceded shape-related representations both in the processing hierarchy of the ventral visual pathway and in time, several dimensions were linked to high-level ventral visual cortex. Further analysis singled out the shape dimension as neither fully accounted for by supra-category membership nor by a deep neural network trained on object categorization. Together, our results comprehensively characterize the relationship between perceived similarity along key object dimensions and neural activity.
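Assessing the "unique role" of correlated object dimensions, as described above, is commonly done by variance partitioning: regress the brain RDM on all behavioral model RDMs, then measure how much explained variance is lost when one dimension is dropped. A sketch on synthetic RDM vectors; the dimension names, sizes, and effect sizes are invented for illustration and are not the study's data:

```python
import numpy as np

rng = np.random.default_rng(2)
n_pairs = 4950  # e.g. upper triangle of a hypothetical 100 x 100 RDM

# Behavioral model RDMs for two partly correlated dimensions (synthetic),
# mirroring the abstract's point that dimensions co-occur and correlate.
shape = rng.normal(size=n_pairs)
color = 0.5 * shape + rng.normal(size=n_pairs)
# Synthetic "brain" RDM driven by both dimensions plus noise.
brain = 0.6 * shape + 0.3 * color + rng.normal(size=n_pairs)

def r_squared(y, *predictors):
    """Proportion of variance in y explained by an OLS fit on predictors."""
    X = np.column_stack([np.ones_like(y), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

full = r_squared(brain, shape, color)
# Unique variance of a dimension = drop in R^2 when that dimension is removed.
unique_shape = full - r_squared(brain, color)
unique_color = full - r_squared(brain, shape)
print(f"full R^2: {full:.3f}, unique shape: {unique_shape:.3f}, "
      f"unique color: {unique_color:.3f}")
```

In practice the same logic is applied per region (fMRI) and per time point (MEG), and rank-based or semipartial-correlation variants are often used instead of plain OLS.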
Psyche, Signals and Systems
For a century or so, the multidisciplinary nature of neuroscience has left the field fractured into distinct areas of research. In particular, the subjects of consciousness and perception present unique challenges in the attempt to build a unifying understanding bridging between the micro-, meso-, and macro-scales of the brain and psychology. This chapter outlines an integrated view of the neurophysiological systems, psychophysical signals, and theoretical considerations related to consciousness. First, we review the signals that correlate to consciousness during psychophysics experiments. We then review the underlying neural mechanisms giving rise to these signals. Finally, we discuss the computational and theoretical functions of such neural mechanisms, and begin to outline ways in which these relate to ongoing theoretical research.