
    Decoding images in the mind's eye: the temporal dynamics of visual imagery

    Mental imagery is the ability to generate images in the mind in the absence of sensory input. Both perceptual visual processing and internally generated imagery engage large, overlapping networks of brain regions. However, it is unclear whether they are characterized by similar temporal dynamics. Recent magnetoencephalography work has shown that object category information is decodable from brain activity during mental imagery, but with timing delayed relative to perception. The current study builds on these findings, using electroencephalography to investigate the dynamics of mental imagery. Sixteen participants viewed two images of the Sydney Harbour Bridge and two images of Santa Claus. On each trial, they viewed a sequence of the four images and were asked to imagine one of them, cued retroactively by its temporal location in the sequence. Time-resolved multivariate pattern analysis was used to decode the viewed and imagined stimuli. Although category and exemplar information was decodable for viewed stimuli, there were no informative patterns of activity during mental imagery. The current findings suggest that stimulus complexity, task design and individual differences may influence the ability to successfully decode imagined images. We discuss the implications of these results in the context of prior findings on mental imagery.
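    As a rough illustration of the time-resolved multivariate pattern analysis described above, the following Python sketch trains and cross-validates a classifier on the channel pattern at each time point separately. The array shapes, labels and simulated data are illustrative assumptions, not the authors' actual pipeline.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times = 200, 64, 120                # assumed epoch dimensions
    epochs = rng.normal(size=(n_trials, n_channels, n_times))   # stand-in for preprocessed EEG epochs
    labels = rng.integers(0, 2, n_trials)                       # e.g., 0 = Santa Claus, 1 = Harbour Bridge

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    accuracy = np.zeros(n_times)
    for t in range(n_times):
        pattern = epochs[:, :, t]                               # channel pattern at one time point
        accuracy[t] = cross_val_score(LinearDiscriminantAnalysis(), pattern, labels, cv=cv).mean()
    # accuracy traces category decodability over time; chance is 0.5 for two classes.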

    Overlapping neural representations for the position of visible and imagined objects

    Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain 'fills in' information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher-order mechanism, for example an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and to move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes but emerged earlier than the corresponding perceptual responses, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than for visible stimuli. Monitoring the position of imagined objects thus utilises perceptual and attentional processes similar to those used for monitoring objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and that their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at https://osf.io/8v47t
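    The abstract refers to both time-resolved decoding and encoding models of stimulus position. The Python sketch below shows one common form of spatial encoding model, fit by least squares and inverted on held-out trials; the basis functions, array shapes and simulated data are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(1)
    n_train, n_test, n_sensors = 300, 60, 64
    angles = np.arange(6) * 2 * np.pi / 6                  # six discrete positions around fixation

    def channel_responses(pos):
        # Raised-cosine tuning channel centred on each position (assumed basis set).
        d = np.angle(np.exp(1j * (angles - angles[pos])))
        return np.maximum(np.cos(d), 0) ** 2

    train_pos = rng.integers(0, 6, n_train)
    test_pos = rng.integers(0, 6, n_test)
    C_train = np.array([channel_responses(p) for p in train_pos])   # trials x tuning channels
    C_test = np.array([channel_responses(p) for p in test_pos])

    true_W = rng.normal(size=(6, n_sensors))               # simulated channel-to-sensor mapping
    B_train = C_train @ true_W + rng.normal(size=(n_train, n_sensors))
    B_test = C_test @ true_W + rng.normal(size=(n_test, n_sensors))

    # Fit weights W so that C_train @ W approximates B_train, then invert on held-out data.
    W, *_ = np.linalg.lstsq(C_train, B_train, rcond=None)
    C_hat = B_test @ np.linalg.pinv(W)                     # estimated channel response profiles
    decoded_position = C_hat.argmax(axis=1)                # peak tuning channel as decoded position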

    Untangling featural and conceptual object representations

    How are visual inputs transformed into conceptual representations by the human visual system? The contents of human perception, such as objects presented on a visual display, can reliably be decoded from voxel activation patterns in fMRI and from evoked sensor activations in MEG and EEG. A prevailing question is the extent to which brain activation associated with object categories is due to statistical regularities of visual features within those categories. Here, we assessed the contribution of mid-level features to conceptual category decoding using EEG and a novel fast periodic decoding paradigm. Our stimulus set consisted of intact objects from the animate (e.g., fish) and inanimate (e.g., chair) categories, and scrambled versions of the same objects that were unrecognisable but preserved their visual features (Long et al., 2018). By presenting the images at different periodic rates, we biased processing towards different levels of the visual hierarchy. We found that scrambled objects and their intact counterparts elicited similar patterns of activation, which could be used to decode the conceptual category (animate or inanimate), even for the unrecognisable scrambled objects. Animacy decoding for the scrambled objects, however, was only possible at the slowest periodic presentation rate. Animacy decoding for intact objects was faster, more robust, and could be achieved at faster presentation rates. Our results confirm that the mid-level visual features preserved in the scrambled objects contribute to animacy decoding, but also demonstrate that the dynamics vary markedly for intact versus scrambled objects. Our findings suggest a complex interplay between visual feature coding and categorical representations that is mediated by the visual system's capacity to use image features to resolve a recognisable object.
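    The fast periodic decoding paradigm mentioned above can be pictured as cutting a continuous recording into epochs time-locked to every image in the stream, and then decoding those epochs as in the other examples. In this Python sketch the sampling rate, presentation rate and data are assumptions rather than the study's actual parameters.

    import numpy as np

    rng = np.random.default_rng(2)
    fs = 250                                     # assumed sampling rate (Hz)
    rate = 6.67                                  # assumed stimulus presentation rate (Hz)
    n_channels, duration_s = 64, 60.0
    continuous = rng.normal(size=(n_channels, int(fs * duration_s)))   # stand-in for continuous EEG

    stim_onsets_s = np.arange(0.5, duration_s - 1.0, 1.0 / rate)       # one onset per image in the stream
    epoch_window = (-0.1, 0.6)                   # seconds relative to each image onset

    epochs = []
    for onset in stim_onsets_s:
        start = int((onset + epoch_window[0]) * fs)
        stop = int((onset + epoch_window[1]) * fs)
        epochs.append(continuous[:, start:stop])
    epochs = np.stack(epochs)                    # images x channels x timepoints, ready for decoding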

    The neural dynamics underlying prioritisation of task-relevant information

    The human brain prioritises relevant sensory information to perform different tasks. Enhancement of task-relevant information requires flexible allocation of attentional resources, but how this is implemented in the brain remains unclear. We investigated how attentional mechanisms operate when multiple stimuli are presented in the same location at the same time. In two experiments, participants performed a challenging two-back task on different types of visual stimuli that were presented simultaneously, superimposed over each other. Using electroencephalography and multivariate decoding, we analysed the effect of attention on the neural responses to each individual stimulus. Whole-brain neural responses contained considerable information about both the attended and unattended stimuli, even though they were presented simultaneously and represented in overlapping receptive fields. As expected, attention increased the decodability of stimulus-related information in the neural responses, but this effect was evident earlier for stimuli presented at smaller sizes. Our results show that early neural responses to stimuli in fast-changing displays contain substantial information about the sensory environment but are also modulated by attention in a manner dependent on the perceptual characteristics of the relevant stimuli. Stimuli, code, and data for this study can be found at https://osf.io/7zhwp/
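    One way to picture the analysis: the same epochs are decoded twice, once labelled by the attended stimulus and once by the simultaneously presented, unattended stimulus. The Python sketch below uses simulated data and assumed binary labels purely to illustrate this logic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(3)
    n_trials, n_features = 240, 64
    patterns = rng.normal(size=(n_trials, n_features))   # sensor pattern at one time point
    attended_id = rng.integers(0, 2, n_trials)           # identity of the attended stimulus
    unattended_id = rng.integers(0, 2, n_trials)         # identity of the superimposed, unattended stimulus

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for name, y in [("attended", attended_id), ("unattended", unattended_id)]:
        acc = cross_val_score(LogisticRegression(max_iter=1000), patterns, y, cv=cv).mean()
        print(name, round(acc, 2))                       # attention typically increases decodability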

    Feature conjunction analyses.

    A) Logic of the conjunction decoding analysis. The decoding classes were drawn from two levels of two different features (e.g., contrast and orientation), so that the classes could not be separated by differences in either feature alone. Classification was performed on groups of stimuli that together contained the same visual features (e.g., specific contrasts and orientations) but differed in their feature combinations. In this example, both classes contain two contrast levels and two orientations (and four colours and four spatial frequencies), so the only way to distinguish between them is by a neural response to the conjunction of contrast and orientation. B) Dynamics of feature conjunction coding for each feature combination. Onsets and peaks of individual features and conjunctions are plotted with 95% confidence intervals; onsets are shown below the chance level and peaks above it. Bayes Factors below each plot reflect the evidence for above-chance decoding; black circles indicate BF > 10 and grey circles BF < 0.1. All results are from the 6.67 Hz presentation rate. Note the different y-axis scales per row.
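    The conjunction logic in panel A amounts to an XOR-style grouping: each decoding class contains both levels of both features, so neither feature alone can separate the classes. The Python sketch below uses simulated data and an assumed linear classifier to make this concrete.

    import numpy as np
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(4)
    n_trials, n_features = 400, 64
    contrast = rng.integers(0, 2, n_trials)        # two contrast levels
    orientation = rng.integers(0, 2, n_trials)     # two orientations

    # XOR grouping: each class is balanced for contrast and for orientation alone,
    # so only the contrast x orientation conjunction can drive classification.
    conjunction_class = contrast ^ orientation

    patterns = rng.normal(size=(n_trials, n_features))
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    acc = cross_val_score(SVC(kernel="linear"), patterns, conjunction_class, cv=cv).mean()
    print(round(acc, 2))                           # chance level is 0.5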

    Dynamics of visual coding for orientation, spatial frequency, colour, and contrast at different stimulus presentation rates.

    A) The time course of decoding accuracy at the 6.67 Hz and 20 Hz presentation rates. Confidence intervals for the onsets and peaks of individual features are plotted above the decoding traces. The head maps show the channel clusters with the highest feature information at the peak of decoding, based on a channel searchlight analysis (window size: 5 channels). Bayes Factors for classification evidence relative to chance (0.25) are plotted below. Feature coding peaked in order: contrast first, then colour and spatial frequency, followed by orientation. The dynamics of feature coding were very similar regardless of presentation rate, though classification accuracy was numerically higher at 6.67 Hz than at 20 Hz. B) Time x time generalisation analyses for the 6.67 Hz condition show different above-chance and below-chance dynamics for each feature.
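    The time x time generalisation analysis in panel B can be sketched as training a classifier at each time point and testing it at every other time point, yielding a training-time by testing-time accuracy matrix. In this Python sketch the shapes, classifier and simulated data are assumptions.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(5)
    n_train, n_test, n_channels, n_times = 160, 80, 64, 60
    X_train = rng.normal(size=(n_train, n_channels, n_times))
    X_test = rng.normal(size=(n_test, n_channels, n_times))
    y_train = rng.integers(0, 4, n_train)          # four feature levels, chance = 0.25
    y_test = rng.integers(0, 4, n_test)

    generalisation = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf = LinearDiscriminantAnalysis().fit(X_train[:, :, t_train], y_train)
        for t_test in range(n_times):
            generalisation[t_train, t_test] = clf.score(X_test[:, :, t_test], y_test)
    # Rows index training time, columns testing time; sustained off-diagonal accuracy
    # indicates neural patterns that generalise across time.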

    Representational similarity analysis.

    A) Models (representational dissimilarity matrices) based on stimulus orientation, spatial frequency, colour and contrast, and the perceptual model based on perceptual similarity judgements. B) Correlations between the feature models (and their conjunctions) and the perceptual similarity model. C) Similarity between stimuli based on perceptual similarity judgements, plotted using multi-dimensional scaling. The distance between stimuli reflects perceptual similarity: images plotted closer together tended to be judged as more perceptually similar. D) Correlations between neural RDMs and the behavioural model for the 6.67 Hz and 20 Hz presentation rates. From early stages of processing onwards, there were high correlations with the perceptual similarity model.
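    The representational similarity analysis described above can be sketched as building a neural representational dissimilarity matrix from pairwise pattern distances and correlating it with a model RDM such as the perceptual similarity judgements. In this Python sketch the data are simulated, and the distance and correlation measures are assumptions.

    import numpy as np
    from scipy.spatial.distance import pdist
    from scipy.stats import spearmanr

    rng = np.random.default_rng(6)
    n_stimuli, n_channels = 16, 64
    mean_patterns = rng.normal(size=(n_stimuli, n_channels))   # per-stimulus mean response pattern

    neural_rdm = pdist(mean_patterns, metric="correlation")    # condensed pairwise dissimilarities
    model_rdm = rng.random(neural_rdm.shape)                   # stand-in for the perceptual model RDM

    rho, _ = spearmanr(neural_rdm, model_rdm)                  # neural-model correlation
    print(round(rho, 2))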

    Sensor searchlight decoding of each feature across time, 20 Hz condition (window size: 5 channels).

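    A channel searchlight of the kind behind these maps repeats the decoding analysis within small clusters of neighbouring channels (window size 5), so that accuracy can be mapped across the scalp. In this Python sketch the neighbourhoods and data are simulated stand-ins; a real analysis would use the EEG montage geometry to define neighbouring channels.

    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import StratifiedKFold, cross_val_score

    rng = np.random.default_rng(7)
    n_trials, n_channels = 200, 64
    patterns = rng.normal(size=(n_trials, n_channels))   # sensor patterns at one time point
    labels = rng.integers(0, 4, n_trials)                # e.g., four levels of one feature

    # Stand-in neighbourhoods: each channel plus the next four by index.
    neighbourhoods = [np.arange(c, c + 5) % n_channels for c in range(n_channels)]

    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    searchlight_acc = np.array([
        cross_val_score(LinearDiscriminantAnalysis(), patterns[:, nb], labels, cv=cv).mean()
        for nb in neighbourhoods
    ])
    # One accuracy per channel cluster; plotting these on the sensor layout gives the head maps.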