    Feature-specific reaction times reveal a semanticisation of memories over time and with repeated remembering

    Memories are thought to undergo an episodic-to-semantic transformation in the course of their consolidation. Here we test whether repeated recall induces a similar semanticisation, and whether the resulting qualitative changes in memories can be measured using simple feature-specific reaction time probes. Participants studied associations between verbs and object images, and then repeatedly recalled the objects when cued with the verb, both immediately and after a two-day delay. Reaction times during immediate recall demonstrate that conceptual features are accessed faster than perceptual features. Consistent with a semanticisation process, this perceptual-conceptual gap increases significantly across the delay. A significantly smaller perceptual-conceptual gap is found in the delayed recall data of a control group who repeatedly studied the verb-object pairings on the first day instead of actively recalling them. Our findings suggest that wakeful recall and offline consolidation interact to transform memories over time, strengthening meaningful semantic information over perceptual detail.
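    A minimal sketch, not the study's analysis code, of how such a feature-specific perceptual-conceptual reaction-time gap could be computed from trial-level data; the column names, session labels, and simulated effect sizes below are illustrative assumptions:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Simulated trial-level RTs (ms): conceptual probes answered faster,
# with the perceptual-conceptual gap widening after the two-day delay.
trials = pd.DataFrame({
    "session": rng.choice(["immediate", "delayed"], size=400),
    "feature": rng.choice(["perceptual", "conceptual"], size=400),
})
trials["rt"] = (
    900
    + (trials["feature"] == "perceptual") * 80           # baseline gap
    + ((trials["session"] == "delayed")
       & (trials["feature"] == "perceptual")) * 40       # gap grows with delay
    + rng.normal(0, 100, size=len(trials))
)

# Mean RT per session x feature type, then the perceptual-conceptual gap.
means = trials.groupby(["session", "feature"])["rt"].mean().unstack()
gap = means["perceptual"] - means["conceptual"]
print(gap)  # expect a larger gap for the delayed session
```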

    Decoding dynamic brain patterns from evoked responses: A tutorial on multivariate pattern analysis applied to time-series neuroimaging data

    Multivariate pattern analysis (MVPA) or brain decoding methods have become standard practice in analysing fMRI data. Although decoding methods have been extensively applied in brain-computer interfaces (BCIs), these methods have only recently been applied to time-series neuroimaging data such as MEG and EEG to address experimental questions in Cognitive Neuroscience. In a tutorial-style review, we describe a broad set of options to inform future time-series decoding studies from a Cognitive Neuroscience perspective. Using example MEG data, we illustrate the effects that different options in the decoding analysis pipeline can have on experimental results where the aim is to 'decode' different perceptual stimuli or cognitive states over time from dynamic brain activation patterns. We show that decisions made at both the preprocessing (e.g., dimensionality reduction, subsampling, trial averaging) and decoding (e.g., classifier selection, cross-validation design) stages of the analysis can significantly affect the results. In addition to standard decoding, we describe extensions to MVPA for time-varying neuroimaging data, including representational similarity analysis, temporal generalisation, and the interpretation of classifier weight maps. Finally, we outline important caveats in the design and interpretation of time-series decoding experiments.
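    A minimal sketch of the time-resolved decoding approach such tutorials describe: a classifier is fit and cross-validated independently at each time point of a trials x channels x time array. The data shapes, the LDA classifier, and the injected signal are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 200, 64, 120
X = rng.normal(size=(n_trials, n_channels, n_times))   # e.g. MEG epochs
y = rng.integers(0, 2, size=n_trials)                  # two stimulus classes

# Inject a decodable signal into the first 10 channels of class-1 trials
# during a mid-trial window, so decodability varies over time.
X[y == 1, :10, 40:80] += 0.3

clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
accuracy = np.array([
    cross_val_score(clf, X[:, :, t], y, cv=5).mean()   # 5-fold CV per time point
    for t in range(n_times)
])
print(f"peak accuracy {accuracy.max():.2f} at sample {accuracy.argmax()}")
```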

    Emerging object representations in the visual system predict reaction times for categorization

    Recognizing an object takes just a fraction of a second, less than the blink of an eye. Applying multivariate pattern analysis, or “brain decoding”, methods to magnetoencephalography (MEG) data has allowed researchers to characterize, with high temporal resolution, the emerging representations of object categories that underlie our capacity for rapid recognition. Shortly after stimulus onset, object exemplars cluster by category in a high-dimensional activation space in the brain. In this emerging activation space, the decodability of exemplar category varies over time, reflecting the brain’s transformation of visual inputs into coherent category representations. How do these emerging representations relate to categorization behaviour? Recently it has been proposed that the distance of an exemplar representation from a categorical boundary in activation space is critical for perceptual decision-making, and that reaction times should therefore correlate with distance from the boundary. The predictions of this distance hypothesis have been borne out in human inferior temporal cortex (IT), an area of the brain crucial for the representation of object categories. When viewed in the context of a time-varying neural signal, the optimal time to “read out” category information is when category representations in the brain are most decodable. Here, we show that the distance from a decision boundary through activation space, as measured using MEG decoding methods, correlates with reaction times for visual categorization during the period of peak decodability. Our results suggest that the brain begins to read out information about exemplar category at the optimal time for use in choice behaviour, and support the hypothesis that the structure of the representation of objects in the visual system is partially constitutive of the decision process in recognition.
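    A minimal sketch of how the distance hypothesis could be tested: train a linear classifier on activation patterns at the time of peak decodability, take each exemplar's distance from the decision boundary, and correlate it with categorization reaction times. The simulated data, effect sizes, and the LinearSVC choice are assumptions for illustration:

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_exemplars, n_features = 100, 64
X = rng.normal(size=(n_exemplars, n_features))   # patterns at peak decodability
y = rng.integers(0, 2, size=n_exemplars)         # exemplar category
X[y == 1] += 0.5                                 # make categories separable

clf = LinearSVC(C=1.0, max_iter=10000).fit(X, y)
distance = np.abs(clf.decision_function(X))      # unsigned distance from boundary

# Simulated RTs embodying the hypothesis: exemplars far from the
# boundary are categorized faster, so RT falls with distance.
rt = 600 - 40 * distance + rng.normal(0, 30, size=n_exemplars)

rho, p = spearmanr(distance, rt)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # expect a negative correlation
```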

    Shades Of Meaning: Capturing Meaningful Context-Based Variations In Neural Patterns

    When cognitive psychologists and psycholinguists consider the variability that arises during the retrieval of conceptual information, this variability is often understood to arise from the dynamic interactions between concepts and contexts. When cognitive neuroscientists and neurolinguists think about this variability, it is typically treated as noise and discarded from the analyses. In this dissertation, we bridge these two traditions by asking: can the variability in neural patterns evoked by word meanings reflect the contextual variation that occurs during conceptual processing? We employ functional magnetic resonance imaging (fMRI) to measure, quantify, and predict brain activity during context-dependent retrieval of word meanings. Across three experiments, we test the ways in which word-evoked neural variability is influenced by the sentence context in which the word appears (Chapter 2), the current set of task demands (Chapter 3), or even undirected thoughts about other concepts (Chapter 4). Our findings indicate not only that the neural patterns evoked by the same stimulus word vary over time, but that we can predict the degree to which these patterns vary using meaningful, theoretically motivated variables. These results demonstrate that cross-context, within-concept variations in neural responses are not exclusively due to statistical noise or measurement error. Rather, a concept’s degree of neural variability varies in a manner that accords with a context-dependent view of semantic representation. In addition, we present preliminary evidence that prefrontally mediated cognitive control processes are involved in the expression of context-appropriate neural patterns. In sum, these studies provide a novel perspective on the flexibility of word meanings and the variable brain activity patterns associated with them.
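    A minimal sketch of one way such within-concept, cross-context variability could be quantified: correlate the voxel pattern evoked by the same word across contexts and take one minus the mean off-diagonal correlation as a variability score. The array shapes and the correlation-based measure are assumptions, not the dissertation's method:

```python
import numpy as np

rng = np.random.default_rng(2)
n_contexts, n_voxels = 12, 500

# Patterns for one word across 12 sentence contexts (simulated):
# a shared "core" pattern plus context-specific variation.
core = rng.normal(size=n_voxels)
patterns = core + 0.8 * rng.normal(size=(n_contexts, n_voxels))

corr = np.corrcoef(patterns)                     # context x context correlations
off_diag = corr[np.triu_indices(n_contexts, k=1)]
variability = 1.0 - off_diag.mean()              # higher = less stable concept
print(f"cross-context variability: {variability:.2f}")
```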

    Comparing computational models of vision to human behaviour

    Biological vision and computational models of vision can be split into three independent components (image description, decision process, and image set). The thesis presented here aimed to investigate the influence of each of these core components on computational models’ similarity to human behaviour. Chapter 3 investigated the similarity of different computational image descriptors to their biological counterparts using an image matching task. The results showed that several of the computational models could explain a significant amount of the variance in human performance on individual images. The deep supervised convolutional neural net explained the most variance, followed by GIST, HMAX, and then PHOW. Chapter 4 investigated which computational decision process best explained observers’ behaviour on an image categorization task. The results showed that Decision Bound theory produced behaviour closest to that of the observers, followed by Exemplar theory and Prototype theory. Chapter 5 examined whether the naturally differing image sets between computational models and observers could partially account for the difference in their behaviour. The results showed that the differing image sets did indeed affect the similarity of their behaviour. This gap did not alter which image descriptor best fit observers’ behaviour, and could be reduced by training observers on the image set the computational models were using. Chapter 6 investigated, using computational models of vision, the impact of the neighbouring (masking) images on the target images in an RSVP task. This was done by combining the neighbouring images with the target image in the computational models’ simulation of each trial. The results showed that the models’ behaviour became closer to that of the human observers when the neighbouring mask images were included in the simulations, as would be expected given an integration period for neural mechanisms. This thesis has shown that computational models can exhibit quite similar behaviour to human observers, even at the level of how they perform on individual images. While this shows the potential utility of computational models as a tool to study visual processing, it also shows the need to take into account many aspects of the overall model of the visual process and task: not only the image description, but the task requirements, the decision processes, the images being used as stimuli, and even the sequence in which they are presented.
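    A minimal sketch contrasting two of the decision processes compared in Chapter 4: a prototype rule (distance to each category's mean descriptor) and an exemplar rule (summed similarity to all stored category members, in the style of the generalized context model). The feature vectors stand in for any image-descriptor output; all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(3)
n_per_cat, n_dims = 50, 128
cat_a = rng.normal(0.0, 1.0, size=(n_per_cat, n_dims))  # stored descriptors, category A
cat_b = rng.normal(0.7, 1.0, size=(n_per_cat, n_dims))  # stored descriptors, category B

def prototype_choice(x):
    # Choose the category whose mean descriptor (prototype) is nearest.
    d_a = np.linalg.norm(x - cat_a.mean(axis=0))
    d_b = np.linalg.norm(x - cat_b.mean(axis=0))
    return "A" if d_a < d_b else "B"

def exemplar_choice(x, c=1.0):
    # Choose the category with greater summed exponential similarity
    # to all of its stored exemplars (a GCM-style rule).
    s_a = np.exp(-c * np.linalg.norm(cat_a - x, axis=1)).sum()
    s_b = np.exp(-c * np.linalg.norm(cat_b - x, axis=1)).sum()
    return "A" if s_a > s_b else "B"

probe = rng.normal(0.35, 1.0, size=n_dims)       # an ambiguous test image
print(prototype_choice(probe), exemplar_choice(probe))
```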

    Electrophysiological Correlates of Reversal Processes in Ambiguous Figure Perception

    Ambiguous figures provide an opportunity to study fluctuations in perceptual experience without corresponding changes in sensory input. Researchers have taken great interest in studying the mechanisms that generate these fluctuations with electrophysiology, because of the potential to track the underlying processes across time as perceptual reversals unfold. This work has highlighted brain activity both before and after perceptual reversals, implicating a wide range of mechanisms. Some of the known electrophysiological correlates of perceptual reversals, like the Reversal Negativity (RN) and Reversal Positivity (RP), could potentially be explained by the demands of the tasks used to elicit them. In addition, many findings on perceptual reversals and ambiguous figure interpretation originate from studies using univariate analyses that do not take full advantage of the multivariate nature of EEG data. In four experiments, I used psychophysics, ERPs, and multivariate pattern analysis (MVPA) of EEG data to address the interpretation of two reversal-related ERP components and to identify new multivariate correlates of perceptual reversals. First, I found that the reversal-related RP appears only when reversals are response targets. This result suggests that the RP is not a pure correlate of the perceptual processing of endogenous perceptual reversals, but rather may reflect response-related monitoring processes. Second, using MVPA, I found that activity linked to perceptual reversals in the post-stimulus period is an ongoing process that involves a wide range of frequency bands (1-30 Hz) and spans a substantial amount of time (~550 ms). Finally, I was able to isolate, in time and frequency, pre-stimulus activity that is predictive of the upcoming subjective interpretation of the ambiguous stimulus and of perceptual reversals. Overall, these results provide a new interpretation for some extant reversal-related electrophysiological markers and identify a set of new phenomena to guide neural theories of perceptual reversals going forward.
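    A minimal sketch of the kind of pre-stimulus analysis described above: band-limited power in a pre-stimulus window used as features to predict the upcoming percept. The filter bands, window length, and classifier are illustrative assumptions, not the dissertation's exact pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
sfreq = 250
n_trials, n_channels, n_times = 160, 32, 250     # 1 s pre-stimulus epochs
X = rng.normal(size=(n_trials, n_channels, n_times))
y = rng.integers(0, 2, size=n_trials)            # upcoming interpretation

def band_power(data, lo, hi):
    # Band-pass filter, then average squared amplitude per channel.
    b, a = butter(4, [lo, hi], btype="band", fs=sfreq)
    filtered = filtfilt(b, a, data, axis=-1)
    return (filtered ** 2).mean(axis=-1)

# Features: pre-stimulus power in bands spanning the 1-30 Hz range.
bands = [(1, 4), (4, 8), (8, 13), (13, 30)]
features = np.concatenate([band_power(X, lo, hi) for lo, hi in bands], axis=1)

clf = LogisticRegression(max_iter=1000)
acc = cross_val_score(clf, features, y, cv=5).mean()
print(f"pre-stimulus decoding accuracy: {acc:.2f}")  # ~0.5 for this random data
```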