31 research outputs found

    Application Patterns

    CoSMoMVPA: Multi-Modal Multivariate Pattern Analysis of Neuroimaging Data in Matlab/GNU Octave

    Recent years have seen an increase in the popularity of multivariate pattern (MVP) analysis of functional magnetic resonance imaging (fMRI) data and, to a much lesser extent, magneto- and electro-encephalography (M/EEG) data. We present CoSMoMVPA, a lightweight MVPA (MVP analysis) toolbox implemented in the intersection of the Matlab and GNU Octave languages that treats both fMRI and M/EEG data as first-class citizens. CoSMoMVPA supports all state-of-the-art MVP analysis techniques, including searchlight analyses, classification, correlations, representational similarity analysis, and the time generalization method. These can be used to address both data-driven and hypothesis-driven questions about neural organization and representations, both within and across space, time, frequency bands, neuroimaging modalities, individuals, and species. It uses a uniform data representation of fMRI data in the volume or on the surface, and of M/EEG data at the sensor and source level. Through various external toolboxes, it directly supports reading and writing a variety of fMRI and M/EEG neuroimaging formats and, where applicable, can convert between them. As a result, it can be integrated readily into existing pipelines and used with existing preprocessed datasets. CoSMoMVPA overloads the traditional volumetric searchlight concept to support neighborhoods for M/EEG and surface-based fMRI data, allowing localization of multivariate effects of interest across space, time, and frequency dimensions. CoSMoMVPA also provides a generalized approach to multiple-comparison correction across these dimensions using Threshold-Free Cluster Enhancement with state-of-the-art clustering and permutation techniques. CoSMoMVPA is highly modular and uses abstractions to provide a uniform interface for a variety of MVP measures. Typical analyses require a few lines of code, making it accessible to beginner users. At the same time, expert programmers can easily extend its functionality. CoSMoMVPA comes with extensive documentation, including a variety of runnable demonstration scripts and analysis exercises (with example data and solutions). It uses best software engineering practices, including version control, distributed development, an automated test suite, and continuous integration testing. It can be used with the proprietary Matlab and the free GNU Octave software, and it complies with open-source distribution platforms such as NeuroDebian. CoSMoMVPA is Free/Open Source Software under the permissive MIT license. Website: http://cosmomvpa.org; source code: https://github.com/CoSMoMVPA/CoSMoMVPA
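
    As a rough sketch of the kind of cross-validated MVP classification the abstract describes, the following Python example uses scikit-learn on simulated data. It does not use CoSMoMVPA itself, whose Matlab/GNU Octave API is documented at http://cosmomvpa.org; the variable names and the simulated dataset are illustrative assumptions only.

        # Conceptual sketch of cross-validated MVP classification on a
        # simulated trials-by-features pattern matrix (not CoSMoMVPA code).
        import numpy as np
        from sklearn.svm import LinearSVC
        from sklearn.model_selection import StratifiedKFold, cross_val_score

        rng = np.random.default_rng(0)

        # 60 trials x 200 features (e.g. voxels or sensors), two classes,
        # with a weak multivariate signal injected into the first features.
        n_trials, n_features = 60, 200
        labels = np.repeat([0, 1], n_trials // 2)
        patterns = rng.standard_normal((n_trials, n_features))
        patterns[labels == 1, :20] += 0.5

        # Mean classification accuracy over stratified cross-validation folds.
        cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
        accuracy = cross_val_score(LinearSVC(), patterns, labels, cv=cv).mean()
        print(f"mean cross-validated accuracy: {accuracy:.2f}")

    In CoSMoMVPA itself, the analogous analysis would be expressed through the toolbox's Matlab/GNU Octave dataset and measure functions, as covered by its documentation and runnable demonstration scripts.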

    MEG Multivariate Analysis Reveals Early Abstract Action Representations in the Lateral Occipitotemporal Cortex

    Understanding other people's actions is a fundamental prerequisite for social interactions. Whether action understanding relies on simulating the actions of others in the observers' motor system or on the access to conceptual knowledge stored in nonmotor areas is strongly debated. It has been argued previously that areas that play a crucial role in action understanding should (1) distinguish between different actions, (2) generalize across the ways in which actions are performed (Dinstein et al., 2008; Oosterhof et al., 2013; Caramazza et al., 2014), and (3) have access to action information around the time of action recognition (Hauk et al., 2008). Whereas previous studies focused on the first two criteria, little is known about the dynamics underlying action understanding. We examined which human brain regions are able to distinguish between pointing and grasping, regardless of reach direction (left or right) and effector (left or right hand), using multivariate pattern analysis of magnetoencephalography data. We show that the lateral occipitotemporal cortex (LOTC) has the earliest access to abstract action representations, which coincides with the time point from which there was enough information to allow discriminating between the two actions. By contrast, precentral regions, though recruited early, have access to such abstract representations substantially later. Our results demonstrate that in contrast to the LOTC, the early recruitment of precentral regions does not contain the detailed information that is required to recognize an action. We discuss previous theoretical claims of motor theories and how they are incompatible with our data. SIGNIFICANCE STATEMENT: It is debated whether our ability to understand other people's actions relies on the simulation of actions in the observers' motor system, or is based on access to conceptual knowledge stored in nonmotor areas. Here, using magnetoencephalography in combination with machine learning, we examined where in the brain and at which point in time it is possible to distinguish between pointing and grasping actions regardless of the way in which they are performed (effector, reach direction). We show that, in contrast to the predictions of motor theories of action understanding, the lateral occipitotemporal cortex has access to abstract action representations substantially earlier than precentral regions.
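
    The cross-condition generalization logic described above (a region carries abstract action information only if a classifier trained on actions performed one way also decodes actions performed another way) can be sketched as follows. This is a simplified illustration on simulated sensor patterns, not the study's pipeline; the arrays, labels, and injected signal structure are assumptions.

        # Sketch of cross-effector generalization decoding: train on trials
        # from one effector, test on trials from the other, so only
        # effector-invariant action information can drive above-chance accuracy.
        # Simulated data for illustration only.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(1)
        n_per_cell, n_sensors = 30, 100

        def make_trials(action, effector):
            # Simulate sensor patterns for one action-by-effector cell.
            x = rng.standard_normal((n_per_cell, n_sensors))
            x[:, :10] += action       # effector-invariant action signal
            x[:, 10:20] += effector   # effector-specific signal (should not transfer)
            return x

        X, actions, effectors = [], [], []
        for action in (0, 1):         # e.g. 0 = pointing, 1 = grasping
            for effector in (0, 1):   # e.g. 0 = left hand, 1 = right hand
                X.append(make_trials(action, effector))
                actions += [action] * n_per_cell
                effectors += [effector] * n_per_cell
        X = np.vstack(X)
        actions, effectors = np.array(actions), np.array(effectors)

        # Train on one effector, test on the other, in both directions.
        scores = []
        for train_eff, test_eff in ((0, 1), (1, 0)):
            clf = LinearSVC().fit(X[effectors == train_eff], actions[effectors == train_eff])
            scores.append(clf.score(X[effectors == test_eff], actions[effectors == test_eff]))
        print(f"cross-effector decoding accuracy: {np.mean(scores):.2f}")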

    The Neural Dynamics of Attentional Selection in Natural Scenes

    The human visual system can only represent a small subset of the many objects present in cluttered scenes at any given time, such that objects compete for representation. Despite these processing limitations, the detection of object categories in cluttered natural scenes is remarkably rapid. How does the brain efficiently select goal-relevant objects from cluttered scenes? In the present study, we used multivariate decoding of magneto-encephalography (MEG) data to track the neural representation of within-scene objects as a function of top-down attentional set. Participants detected categorical targets (cars or people) in natural scenes. The presence of these categories within a scene was decoded from MEG sensor patterns by training linear classifiers on differentiating cars and people in isolation and testing these classifiers on scenes containing one of the two categories. The presence of a specific category in a scene could be reliably decoded from MEG response patterns as early as 160 ms, despite substantial scene clutter and variation in the visual appearance of each category. Strikingly, we find that these early categorical representations fully depend on the match between visual input and top-down attentional set: only objects that matched the current attentional set were processed to the category level within the first 200 ms after scene onset. A sensor-space searchlight analysis revealed that this early attention bias was localized to lateral occipitotemporal cortex, reflecting top-down modulation of visual processing. These results show that attention quickly resolves competition between objects in cluttered natural scenes, allowing for the rapid neural representation of goal-relevant objects. SIGNIFICANCE STATEMENT: Efficient attentional selection is crucial in many everyday situations. For example, when driving a car, we need to quickly detect obstacles, such as pedestrians crossing the street, while ignoring irrelevant objects. How can humans efficiently perform such tasks, given the multitude of objects contained in real-world scenes? Here we used multivariate decoding of magnetoencephalography data to characterize the neural underpinnings of attentional selection in natural scenes with high temporal precision. We show that brain activity quickly tracks the presence of objects in scenes, but crucially only for those objects that were immediately relevant for the participant. These results provide evidence for fast and efficient attentional selection that mediates the rapid detection of goal-relevant objects in real-world environments.
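
    A time-resolved cross-decoding scheme of the kind described above (train classifiers on responses to isolated objects, test them on responses to cluttered scenes at each time point) might be sketched as follows; the simulated arrays, the number of time samples, and the onset of the injected signal are assumptions standing in for real MEG recordings.

        # Sketch of time-resolved cross-decoding: classifiers trained on
        # isolated-object trials are tested on scene trials at matching time
        # points, tracing when within-scene category information emerges.
        # Simulated data for illustration only.
        import numpy as np
        from sklearn.svm import LinearSVC

        rng = np.random.default_rng(2)
        n_trials, n_sensors, n_times = 80, 100, 120
        labels = np.repeat([0, 1], n_trials // 2)   # e.g. 0 = car, 1 = person

        # Isolated-object trials: category signal present throughout.
        isolated = rng.standard_normal((n_trials, n_sensors, n_times))
        isolated[labels == 1, :15, :] += 0.6

        # Scene trials: category signal emerges only from time sample 50 onward.
        scenes = rng.standard_normal((n_trials, n_sensors, n_times))
        scenes[labels == 1, :15, 50:] += 0.6

        # Train and test at each time point; the accuracy curve shows when
        # category information about within-scene objects becomes decodable.
        accuracy = np.empty(n_times)
        for t in range(n_times):
            clf = LinearSVC().fit(isolated[:, :, t], labels)
            accuracy[t] = clf.score(scenes[:, :, t], labels)
        print(f"peak cross-decoding accuracy: {accuracy.max():.2f}")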

    Differential activation of frontoparietal attention networks by social and symbolic spatial cues

    Perception of both gaze-direction and symbolic directional cues (e.g. arrows) orients an observer’s attention toward the indicated location. It is unclear, however, whether these similar behavioral effects are examples of the same attentional phenomenon and, therefore, subserved by the same neural substrate. It has been proposed that gaze, given its evolutionary significance, constitutes a ‘special’ category of spatial cue. As such, it is predicted that the neural systems supporting spatial reorienting will be different for gaze than for non-biological symbols. We tested this prediction using functional magnetic resonance imaging to measure the brain’s response during target localization in which laterally presented targets were preceded by uninformative gaze or arrow cues. Reaction times were faster during valid than invalid trials for both arrow and gaze cues. However, differential patterns of activity were evoked in the brain. Trials including invalid rather than valid arrow cues resulted in a stronger hemodynamic response in the ventral attention network. No such difference was seen during trials including valid and invalid gaze cues. This differential engagement of the ventral reorienting network is consistent with the notion that the facilitation of target detection by gaze cues and arrow cues is subserved by different neural substrates.

    Task-invariant brain responses to the social value of faces

    In two fMRI experiments (n = 44) using tasks with different demands (approach-avoidance versus one-back recognition decisions), we measured the responses to the social value of faces. The face stimuli were produced by a parametric model of face evaluation that reduces multiple social evaluations to two orthogonal dimensions of valence and power [Oosterhof, N. N., & Todorov, A. The functional basis of face evaluation

    Shared perceptual basis of emotional expressions and trustworthiness impressions from faces

    Using a dynamic stimuli paradigm, in which faces expressed either happiness or anger, the authors tested the hypothesis that perceptions of trustworthiness are related to these expressions. Although the same emotional intensity was added to both trustworthy and untrustworthy faces, trustworthy faces that expressed happiness were perceived as happier than untrustworthy faces, and untrustworthy faces that expressed anger were perceived as angrier than trustworthy faces. The authors also manipulated changes in face trustworthiness simultaneously with the change in expression. Whereas transitions in face trustworthiness in the direction of the expressed emotion (e.g., high-to-low trustworthiness and anger) increased the perceived intensity of the emotion, transitions in the opposite direction decreased this intensity. For example, changes from high to low trustworthiness increased the intensity of perceived anger but decreased the intensity of perceived happiness. These findings support the hypothesis that changes along the trustworthiness dimension correspond to subtle changes resembling expressions signaling whether the person displaying the emotion should be avoided or approached.