139 research outputs found

    An introduction to time-resolved decoding analysis for M/EEG

    The human brain is constantly processing and integrating information in order to make decisions and interact with the world, for tasks ranging from recognizing a familiar face to playing a game of tennis. These complex cognitive processes require communication between large populations of neurons. The non-invasive neuroimaging methods of electroencephalography (EEG) and magnetoencephalography (MEG) provide population measures of neural activity with millisecond precision that allow us to study the temporal dynamics of cognitive processes. However, multi-sensor M/EEG data are inherently high-dimensional, making it difficult to parse important signal from noise. Multivariate pattern analysis (MVPA) or "decoding" methods offer vast potential for understanding high-dimensional M/EEG neural data. MVPA can be used to distinguish between different conditions and map the time courses of various neural processes, from basic sensory processing to high-level cognitive processes. In this chapter, we discuss the practical aspects of performing decoding analyses on M/EEG data as well as the limitations of the method, and then we discuss some applications for understanding representational dynamics in the human brain.
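    The time-resolved decoding scheme this chapter introduces (train and test a classifier independently at each time point) can be sketched in a few lines. The following is a minimal illustration on simulated data, not the chapter's code: it assumes a simple nearest-centroid classifier with k-fold cross-validation, where a real analysis would typically use a toolbox such as MNE-Python with a regularised linear classifier.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_sensors, n_times = 80, 32, 50
X = rng.normal(size=(n_trials, n_sensors, n_times))  # trials x sensors x time
y = np.repeat([0, 1], n_trials // 2)                 # two experimental conditions
X[y == 1, :5, 20:] += 1.0  # inject a condition difference from time index 20 onward

def nearest_centroid_cv(Xt, y, n_folds=5):
    """Mean cross-validated accuracy of a nearest-centroid classifier at one time point."""
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    accs = []
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        c0 = Xt[train][y[train] == 0].mean(axis=0)
        c1 = Xt[train][y[train] == 1].mean(axis=0)
        pred = (np.linalg.norm(Xt[test] - c1, axis=1)
                < np.linalg.norm(Xt[test] - c0, axis=1)).astype(int)
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))

# decode the condition at every time point; chance level is 0.5 for two classes
acc = np.array([nearest_centroid_cv(X[:, :, t], y) for t in range(n_times)])
print(f"pre-onset accuracy  {acc[:15].mean():.2f}")   # should hover around chance
print(f"post-onset accuracy {acc[25:].mean():.2f}")   # should sit well above chance
```

    The resulting accuracy time course is the basic object of a time-resolved decoding analysis: accuracy at chance before the injected effect, rising above chance once condition information is present in the sensor patterns.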

    Overfitting the literature to one set of stimuli and data

    The fast-growing field of Computational Cognitive Neuroscience is on track to meet its first crisis. A large number of papers in this nascent field are developing and testing novel analysis methods using the same stimuli and neuroimaging datasets. Publication bias and confirmatory exploration will result in overfitting to the limited available data. The field urgently needs to collect more good-quality open neuroimaging data using a variety of experimental stimuli, to test the generalisability of current published results and to allow for more robust results in future work.

    Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time-series neuroimaging data

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analysing fMRI data. Although decoding methods have been extensively applied in Brain-Computer Interfaces (BCI), these methods have only recently been applied to time-series neuroimaging data such as MEG and EEG to address experimental questions in Cognitive Neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time-series decoding studies from a Cognitive Neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to 'decode' different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data including representational similarity analysis, temporal generalisation, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time-series decoding experiments.
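    One of the preprocessing choices this tutorial examines, trial averaging, is easy to demonstrate: averaging groups of single trials into "pseudo-trials" trades trial count for signal-to-noise. The following is a hedged toy sketch on simulated single-time-point data, not the tutorial's pipeline; a nearest-centroid rule stands in for whatever classifier the pipeline uses.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n, n_features=20, effect=0.3):
    """n single trials per class, with a small mean difference on every feature."""
    return rng.normal(0.0, 1.0, (n, n_features)), rng.normal(effect, 1.0, (n, n_features))

def average_trials(X, k):
    """Average non-overlapping groups of k trials into pseudo-trials."""
    n = (len(X) // k) * k
    return X[:n].reshape(-1, k, X.shape[1]).mean(axis=1)

def split_half_accuracy(X0, X1):
    """Train a nearest-centroid classifier on the first half of each class, test on the rest."""
    h0, h1 = len(X0) // 2, len(X1) // 2
    c0, c1 = X0[:h0].mean(axis=0), X1[:h1].mean(axis=0)
    test = np.vstack([X0[h0:], X1[h1:]])
    labels = np.r_[np.zeros(len(X0) - h0), np.ones(len(X1) - h1)]
    pred = np.linalg.norm(test - c1, axis=1) < np.linalg.norm(test - c0, axis=1)
    return (pred == labels).mean()

X0, X1 = simulate(n=400)
acc_single = split_half_accuracy(X0, X1)
acc_avg4 = split_half_accuracy(average_trials(X0, 4), average_trials(X1, 4))
print(acc_single, acc_avg4)  # averaging should raise accuracy
```

    Averaging k trials shrinks the noise standard deviation by roughly the square root of k while leaving the class means unchanged, which is why decoding accuracy typically rises even though the number of available (pseudo-)trials falls.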

    Unique contributions of perceptual and conceptual humanness to object representations in the human brain

    The human brain is able to quickly and accurately identify objects in a dynamic visual world. Objects evoke different patterns of neural activity in the visual system, which reflect object category memberships. However, the underlying dimensions of object representations in the brain remain unclear. Recent research suggests that objects' similarity to humans is one of the main dimensions used by the brain to organise objects, but the nature of the human-similarity features driving this organisation is still unknown. Here, we investigate the relative contributions of perceptual and conceptual features of humanness to the representational organisation of objects in the human visual system. We collected behavioural judgements of human-similarity of various objects, which were compared with time-resolved neuroimaging responses to the same objects. The behavioural judgement tasks targeted either perceptual or conceptual humanness features to determine their respective contribution to perceived human-similarity. Behavioural and neuroimaging data revealed significant and unique contributions of both perceptual and conceptual features of humanness, each explaining unique variance in neuroimaging data. Furthermore, our results showed distinct spatio-temporal dynamics in the processing of conceptual and perceptual humanness features, with later and more lateralised brain responses to conceptual features. This study highlights the critical importance of social requirements in information processing and organisation in the human brain.

    An empirically driven guide on using Bayes factors for M/EEG decoding

    Bayes factors can be used to provide quantifiable evidence for contrasting hypotheses and have thus become increasingly popular in cognitive science. However, Bayes factors are rarely used to statistically assess the results of neuroimaging experiments. Here, we provide an empirically driven guide on implementing Bayes factors for time-series neural decoding results. Using real and simulated magnetoencephalography (MEG) data, we examine how parameters such as the shape of the prior and data size affect Bayes factors. Additionally, we discuss the benefits Bayes factors bring to analysing multivariate pattern analysis data and show how Bayes factors can be used instead of, or in addition to, traditional frequentist approaches.
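    For a flavour of what a Bayes factor for decoding results looks like, here is a minimal, self-contained sketch. It is not the guide's method: it tests a single decoding accuracy against chance under a binomial model with a uniform Beta(1, 1) prior on the true accuracy, whereas the guide works with time-resolved statistics and examines the effect of the prior's shape in detail.

```python
from math import exp, lgamma, log

def log_beta(a, b):
    """Log of the Beta function, computed via log-gamma for numerical stability."""
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def bf10_accuracy(k, n, a=1.0, b=1.0, chance=0.5):
    """Bayes factor for H1 (accuracy ~ Beta(a, b)) versus H0 (accuracy == chance),
    given k correct classifications out of n trials. The binomial coefficient
    is identical in both marginal likelihoods, so it cancels."""
    log_m1 = log_beta(k + a, n - k + b) - log_beta(a, b)      # marginal likelihood under H1
    log_m0 = k * log(chance) + (n - k) * log(1.0 - chance)    # likelihood under H0
    return exp(log_m1 - log_m0)

print(bf10_accuracy(52, 100))  # near chance: evidence favours H0 (BF10 < 1)
print(bf10_accuracy(70, 100))  # well above chance: strong evidence for H1
```

    A BF10 above 1 favours above-chance decoding and below 1 favours chance-level decoding, giving graded evidence in both directions, which is the key advantage over a frequentist p-value that the abstract highlights.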

    Decoding images in the mind's eye: the temporal dynamics of visual imagery

    Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information was decodable from brain activity during mental imagery, but the timing was delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, which was cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results in the context of prior findings of mental imagery.

    Temporal dissociation of neural activity underlying synesthetic and perceptual colors

    Grapheme-color synesthetes experience color when seeing achromatic symbols. We examined whether similar neural mechanisms underlie color perception and synesthetic colors using magnetoencephalography. Classification models trained on neural activity from viewing colored stimuli could distinguish synesthetic color evoked by achromatic symbols after a delay of ∼100 ms. Our results provide an objective neural signature for synesthetic experience and temporal evidence consistent with higher-level processing in synesthesia.

    A humanness dimension to visual object coding in the brain

    Neuroimaging studies investigating human object recognition have primarily focused on a relatively small number of object categories, in particular, faces, bodies, scenes, and vehicles. More recent studies have taken a broader focus, investigating hypothesized dichotomies, for example, animate versus inanimate, and continuous feature dimensions, such as biological similarity. These studies typically have used stimuli that are identified as animate or inanimate, neglecting objects that may not fit into this dichotomy. We generated a novel stimulus set including standard objects and objects that blur the animate-inanimate dichotomy, for example, robots and toy animals. We used MEG time-series decoding to study the brain's emerging representation of these objects. Our analysis examined contemporary models of object coding such as dichotomous animacy, as well as several new higher order models that take into account an object's capacity for agency (i.e., its ability to move voluntarily) and capacity to experience the world. We show that early (0–200 ms) responses are predicted by the stimulus shape, assessed using a retinotopic model and shape similarity computed from human judgments. Thereafter, higher order models of agency/experience provided a better explanation of the brain's representation of the stimuli. Strikingly, a model of human similarity provided the best account for the brain's representation after an initial perceptual processing phase. Our findings provide evidence for a new dimension of object coding in the human brain – one that has a "human-centric" focus.

    Overlapping neural representations for the position of visible and imagined objects

    Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain 'fills in' information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and to move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus utilises similar perceptual and attentional processes as monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at https://osf.io/8v47t.
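    The multiclass position decoding in this study can be caricatured in a few lines. This is a hedged sketch on synthetic data, not the authors' pipeline: six stimulus positions, each assumed to evoke its own (here random) sensor pattern, decoded with a split-half nearest-centroid rule against a chance level of one in six.

```python
import numpy as np

rng = np.random.default_rng(2)
n_pos, n_rep, n_sensors = 6, 40, 24
# each of the six positions evokes a distinct (random) sensor pattern plus noise
patterns = rng.normal(size=(n_pos, n_sensors))
X = np.repeat(patterns, n_rep, axis=0) + rng.normal(scale=2.0, size=(n_pos * n_rep, n_sensors))
y = np.repeat(np.arange(n_pos), n_rep)

# split half within each class: first 20 repeats train, last 20 test
train = np.tile(np.arange(n_rep) < n_rep // 2, n_pos)
centroids = np.stack([X[train & (y == c)].mean(axis=0) for c in range(n_pos)])

# assign each test trial to the class with the nearest centroid
dists = np.linalg.norm(X[~train][:, None, :] - centroids[None, :, :], axis=2)
acc = (dists.argmin(axis=1) == y[~train]).mean()
print(f"6-way position decoding accuracy: {acc:.2f} (chance = {1/6:.2f})")
```

    In the actual study the six positions are decoded from time-resolved EEG and the same logic is applied separately to visible and imagined locations, which is what allows the authors to compare their temporal dynamics.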