
    Position representations of moving objects align with real-time position in the early visual response

    When interacting with the dynamic world, the brain receives outdated sensory information, due to the time required for neural transmission and processing. In motion perception, the brain may overcome these fundamental delays through predictively encoding the position of moving objects using information from their past trajectories. In the present study, we evaluated this proposition using multivariate analysis of high temporal resolution electroencephalographic data. We tracked neural position representations of moving objects at different stages of visual processing, relative to the real-time position of the object. During early stimulus-evoked activity, position representations of moving objects were activated substantially earlier than the equivalent activity evoked by unpredictable flashes, aligning the earliest representations of moving stimuli with their real-time positions. These findings indicate that the predictability of straight trajectories enables full compensation for the neural delays accumulated early in stimulus processing, but that delays still accumulate across later stages of cortical processing.
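
    As an illustration of the kind of analysis described above, the sketch below decodes stimulus position from epoched EEG separately at every time sample and reads off a crude onset latency. It is a minimal, hypothetical example using simulated data and scikit-learn, not the authors' pipeline; the array shapes, position labels, and classifier choice are all assumptions.

    # A minimal sketch (not the authors' pipeline) of time-resolved position decoding
    # from epoched EEG. The data below are simulated placeholders; a real analysis
    # would load preprocessed epochs instead.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_trials, n_channels, n_times = 200, 64, 120          # hypothetical epoch dimensions
    X = rng.normal(size=(n_trials, n_channels, n_times))  # trials x sensors x time
    y = rng.integers(0, 8, size=n_trials)                 # stimulus position label (8 positions)

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

    # Decode the position label separately at every time sample; the time course of
    # above-chance accuracy indexes when position information emerges in the signal.
    accuracy = np.array([
        cross_val_score(clf, X[:, :, t], y, cv=5).mean()
        for t in range(n_times)
    ])

    onset = np.argmax(accuracy > 1 / 8)  # first sample exceeding chance (crude onset estimate)
    print(f"decoding first exceeds chance at sample {onset}")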

    Overlapping neural representations for the position of visible and imagined objects

    Humans can covertly track the position of an object, even if the object is temporarily occluded. What are the neural mechanisms underlying our capacity to track moving objects when there is no physical stimulus for the brain to track? One possibility is that the brain 'fills-in' information about imagined objects using internally generated representations similar to those generated by feed-forward perceptual mechanisms. Alternatively, the brain might deploy a higher-order mechanism, for example using an object tracking model that integrates visual signals and motion dynamics. In the present study, we used EEG and time-resolved multivariate pattern analyses to investigate the spatial processing of visible and imagined objects. Participants tracked an object that moved in discrete steps around fixation, occupying six consecutive locations. They were asked to imagine that the object continued on the same trajectory after it disappeared and to move their attention to the corresponding positions. Time-resolved decoding of EEG data revealed that the location of the visible stimuli could be decoded shortly after image onset, consistent with early retinotopic visual processes. For processing of unseen/imagined positions, the patterns of neural activity resembled stimulus-driven mid-level visual processes, but were detected earlier than perceptual mechanisms, implicating an anticipatory and more variable tracking mechanism. Encoding models revealed that spatial representations were much weaker for imagined than visible stimuli. Monitoring the position of imagined objects thus relies on perceptual and attentional processes similar to those engaged by objects that are actually present, but with different temporal dynamics. These results indicate that internally generated representations rely on top-down processes, and that their timing is influenced by the predictability of the stimulus. All data and analysis code for this study are available at https://osf.io/8v47t
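
    The comparison between visible and imagined positions suggests a cross-decoding analysis: train a position classifier on visible-stimulus epochs and test it at every time point of imagined-position epochs. The sketch below is a hypothetical illustration of that idea with simulated data; the dimensions, labels, and classifier are assumptions, not the study's actual code (which is available at the OSF link above).

    # A minimal cross-decoding sketch (assumed, not the authors' exact analysis):
    # train on visible-stimulus epochs, test on imagined-position epochs at every
    # time point, to ask when similar spatial patterns recur.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(1)
    n_trials, n_channels, n_times = 150, 64, 60              # hypothetical dimensions
    X_visible = rng.normal(size=(n_trials, n_channels, n_times))
    X_imagined = rng.normal(size=(n_trials, n_channels, n_times))
    y_visible = rng.integers(0, 6, size=n_trials)             # six tracked locations
    y_imagined = rng.integers(0, 6, size=n_trials)

    clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())

    # Generalization matrix: rows = training time (visible), columns = testing time (imagined).
    # Off-diagonal structure would indicate that imagined positions are represented
    # with a different latency than stimulus-driven ones.
    gen = np.zeros((n_times, n_times))
    for t_train in range(n_times):
        clf.fit(X_visible[:, :, t_train], y_visible)
        for t_test in range(n_times):
            gen[t_train, t_test] = clf.score(X_imagined[:, :, t_test], y_imagined)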

    Expectation suppression across sensory modalities: a MEG investigation

    In the last few decades, much research has focused on understanding how the human brain generates expectations about incoming sensory input and how it deals with surprising or unpredictable input. The predictive processing literature shows that the human brain suppresses neural responses to predictable/expected stimuli (termed the expectation suppression effect). This thesis provides evidence for how expectation suppression is affected by content-based expectations (what) and temporal uncertainty (when) across sensory modalities (visual and auditory), using state-of-the-art magnetoencephalography (MEG) imaging. The results show that the visual domain is more sensitive to content-based expectations (what) than to timing (when), and is sensitive to timing (when) only if the content (what) was predictable. The auditory domain, however, is equally sensitive to what and when features, showing enhanced expectation suppression compared to the visual domain. This thesis concludes that the sensory modalities deal differently with contextual expectations and temporal predictability. This suggests that modality-specific differences should be considered when investigating predictive processing in the human brain, since the predictive mechanisms at work in one domain cannot necessarily be generalized to other domains.
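
    A minimal way to quantify expectation suppression is to compare post-stimulus response amplitude between expected and unexpected conditions within participants. The sketch below illustrates that comparison on simulated MEG-like data; the sampling rate, time window, sensor count, and the paired test are assumptions rather than the thesis's actual analysis.

    # A minimal sketch (hypothetical data, not the thesis pipeline) of quantifying
    # expectation suppression: compare mean evoked MEG amplitude in a post-stimulus
    # window between expected and unexpected stimuli, per participant.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    n_subjects, n_sensors, n_times = 20, 102, 300   # assumed dimensions, 600 Hz, epoch start -0.1 s

    # Simulated subject-level evoked responses (subjects x sensors x time) per condition.
    evoked_expected = rng.normal(size=(n_subjects, n_sensors, n_times))
    evoked_unexpected = rng.normal(loc=0.2, size=(n_subjects, n_sensors, n_times))

    # Average absolute amplitude in a 100-200 ms post-stimulus window
    # (samples 120-180 at the assumed 600 Hz sampling rate and -0.1 s epoch start).
    window = slice(120, 180)
    amp_expected = np.abs(evoked_expected[:, :, window]).mean(axis=(1, 2))
    amp_unexpected = np.abs(evoked_unexpected[:, :, window]).mean(axis=(1, 2))

    # Expectation suppression predicts smaller responses to expected stimuli.
    t, p = stats.ttest_rel(amp_expected, amp_unexpected)
    print(f"suppression effect: t({n_subjects - 1}) = {t:.2f}, p = {p:.3f}")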

    Predictive coding of visual motion in both monocular and binocular human visual processing

    Neural processing of sensory input in the brain takes time, and for that reason our awareness of visual events lags behind their actual occurrence. One way the brain might compensate to minimize the impact of the resulting delays is through extrapolation. Extrapolation mechanisms have been argued to underlie perceptual illusions in which moving and static stimuli are mislocalised relative to one another (such as the flash-lag and related effects). However, where in the visual hierarchy such extrapolation processes take place remains unknown. Here, we address this question by identifying monocular and binocular contributions to the flash-grab illusion. In this illusion, a brief target is flashed on a moving background that reverses direction. As a result, the perceived position of the target is shifted in the direction of the reversal. We show that the illusion is attenuated, but not eliminated, when the motion reversal and the target are presented dichoptically to separate eyes. This reveals that extrapolation mechanisms at both monocular and binocular processing stages contribute to the illusion. We interpret the results in a hierarchical predictive coding framework, and argue that prediction errors in this framework manifest directly as perceptual illusions.

    Change blindness: eradication of gestalt strategies

    Arrays of eight texture-defined rectangles were used as stimuli in a one-shot change blindness (CB) task where there was a 50% chance that one rectangle would change orientation between two successive presentations separated by an interval. CB was eliminated by cueing the target rectangle in the first stimulus, reduced by cueing in the interval and unaffected by cueing in the second presentation. This supports the idea that a representation was formed that persisted through the interval before being 'overwritten' by the second presentation (Landman et al., 2003, Vision Research 43, 149–164). Another possibility is that participants used some kind of grouping or Gestalt strategy. To test this we changed the spatial position of the rectangles in the second presentation by shifting them along imaginary spokes (by ±1 degree) emanating from the central fixation point. There was no significant difference in performance between this and the standard task [F(1,4)=2.565, p=0.185]. This may suggest two things: (i) Gestalt grouping is not used as a strategy in these tasks, and (ii) it gives further weight to the argument that objects may be stored and retrieved from a pre-attentional store during this task.
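
    For the reported statistic, a single within-subject factor with two levels means the repeated-measures F(1, n-1) is equivalent to the square of a paired t statistic. The sketch below illustrates that equivalence with hypothetical per-participant accuracies; the numbers are invented, not the study's data.

    # A minimal sketch (illustrative numbers, not the study's data) of the reported
    # comparison: with one within-subject factor at two levels, the repeated-measures
    # F(1, n-1) equals the squared paired t statistic.
    import numpy as np
    from scipy import stats

    # Hypothetical per-participant accuracy in the standard vs position-shifted tasks.
    standard = np.array([0.78, 0.81, 0.74, 0.80, 0.76])
    shifted = np.array([0.75, 0.79, 0.73, 0.74, 0.75])

    t, p = stats.ttest_rel(standard, shifted)
    F = t ** 2                                  # F(1, len(standard) - 1)
    print(f"F(1,{len(standard) - 1}) = {F:.3f}, p = {p:.3f}")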

    The cognitive neuroscience of visual working memory

    Visual working memory allows us to temporarily maintain and manipulate visual information in order to solve a task. The study of the brain mechanisms underlying this function began more than half a century ago, with Scoville and Milner’s (1957) seminal discoveries with amnesic patients. This timely collection of papers brings together diverse perspectives on the cognitive neuroscience of visual working memory from multiple fields that have traditionally been fairly disjointed: human neuroimaging, electrophysiological, behavioural and animal lesion studies, investigating both the developing and the adult brain.

    Getting ahead: Prediction as a window into language, and language as a window into the predictive brain


    Estimating the subjective perception of object size and position through brain imaging and psychophysics

    Perception is subjective and context-dependent. Size and position perception are no exceptions. Studies have shown that apparent object size is represented by the retinotopic location of peak response in V1. Such representation is likely supported by a combination of V1 architecture and top-down driven retinotopic reorganisation. Are apparent object size and position encoded via a common mechanism? Using functional magnetic resonance imaging and a model-based reconstruction technique, the first part of this thesis sets out to test if retinotopic encoding of size percepts can be generalised to apparent position representation and whether neural signatures could be used to predict an individual’s perceptual experience. Here, I present evidence that static apparent position – induced by a dot-variant Müller-Lyer illusion – is represented retinotopically in V1. However, there is mixed evidence for retinotopic representation of motion-induced position shifts (e.g. curveball illusion) in early visual areas. My findings could be reconciled by assuming dual representation of veridical and percept-based information in early visual areas, which is consistent with the larger framework of predictive coding. The second part of the thesis sets out to compare different psychophysical methods for measuring size perception in the Ebbinghaus illusion. Consistent with the idea that psychophysical methods are not equally susceptible to cognitive factors, my experiments reveal a consistent discrepancy in illusion magnitude estimates between a traditional forced choice (2AFC) task and a novel perceptual matching (PM) task – a variant of a comparison-of-comparisons (CoC) task, a design widely seen as the gold standard in psychophysics. Further investigation reveals the difference was not driven by greater 2AFC susceptibility to cognitive factors, but by a tendency for PM to skew illusion magnitude estimates towards the underlying stimulus distribution. I show that this dependency can be largely corrected using adaptive stimulus sampling.
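
    One standard way to express the illusion magnitude measured by a 2AFC task is as the shift of the point of subjective equality (PSE) obtained by fitting a psychometric function to the choice proportions. The sketch below illustrates that fit with hypothetical data; the stimulus sizes, response proportions, and cumulative-Gaussian model are assumptions, not the thesis's procedure.

    # A minimal sketch (hypothetical responses, not the thesis data) of estimating
    # illusion magnitude from 2AFC data: fit a cumulative Gaussian to the proportion
    # of "test appears larger" responses and read off the PSE.
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.stats import norm

    # Physical test sizes (relative to a 1.0-deg reference) and simulated choice proportions.
    test_size = np.array([0.90, 0.94, 0.98, 1.02, 1.06, 1.10])
    p_larger = np.array([0.05, 0.15, 0.40, 0.70, 0.90, 0.98])

    def psychometric(x, pse, slope):
        """Cumulative Gaussian psychometric function."""
        return norm.cdf(x, loc=pse, scale=slope)

    (pse, slope), _ = curve_fit(psychometric, test_size, p_larger, p0=[1.0, 0.05])

    # If context made the reference look larger, the PSE is pushed above 1.0; the shift
    # from physical equality is one common estimate of the illusion magnitude.
    print(f"PSE = {pse:.3f} deg, illusion magnitude = {pse - 1.0:+.3f} deg")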

    Decoding motor intentions from human brain activity

    “You read my mind.” Although this simple everyday expression implies ‘knowledge or understanding’ of another’s thinking, true ‘mind-reading’ capabilities implicitly seem constrained to the domains of Hollywood and science-fiction. In the field of sensorimotor neuroscience, however, significant progress in this area has come from mapping characteristic changes in brain activity that occur prior to an action being initiated. For instance, invasive neural recordings in non-human primates have significantly increased our understanding of how highly cognitive and abstract processes like intentions and decisions are represented in the brain by showing that it is possible to decode or ‘predict’ upcoming sensorimotor behaviors (e.g., movements of the arm/eyes) based on preceding changes in the neuronal output of parieto-frontal cortex, a network of areas critical for motor planning. In the human brain, however, a successful counterpart for this predictive ability and a similarly detailed understanding of intention-related signals in parieto-frontal cortex have remained largely unattainable due to the limitations of non-invasive brain mapping techniques like functional magnetic resonance imaging (fMRI). Knowing how and where in the human brain intentions or plans for action are coded is not only important for understanding the neuroanatomical organization and cortical mechanisms that govern goal-directed behaviours like reaching, grasping and looking – movements critical to our interactions with the world – but also for understanding homologies between human and non-human primate brain areas, allowing the transfer of neural findings between species. In the current thesis, I employed multi-voxel pattern analysis (MVPA), a new fMRI technique that has made it possible to examine the coding of neural information at a more fine-grained level than that previously available. I used fMRI MVPA to examine how and where movement intentions are coded in human parieto-frontal cortex and specifically asked the question: What types of predictive information about a subject’s upcoming movement can be decoded from preceding changes in neural activity? Project 1 first used fMRI MVPA to determine, largely as a proof-of-concept, whether or not specific object-directed hand actions (grasps and reaches) could be predicted from intention-related brain activity patterns. Next, Project 2 examined whether effector-specific (arm vs. eye) movement plans along with their intended directions (left vs. right) could also be decoded prior to movement. Lastly, Project 3 examined exactly where in the human brain higher-level movement goals were represented independently from how those goals were to be implemented. To this aim, Project 3 had subjects either grasp or reach toward an object (two different motor goals) using either their hand or a novel tool (with kinematics opposite to those of the hand). In this way, the goal of the action (grasping vs. reaching) could be maintained across actions, but the way in which those actions were kinematically achieved changed in accordance with the effector (hand or tool). All three projects employed a similar event-related delayed-movement fMRI paradigm that separated planning- and execution-related neural responses in time, allowing us to isolate the preparatory patterns of brain activity that form prior to movement. Project 1 found that the plan-related activity patterns in several parieto-frontal brain regions were predictive of different upcoming hand movements (grasps vs. reaches).
Moreover, we found that several parieto-frontal brain regions could be characterized according to the types of movements they can decode, a distinction previously demonstrated only in non-human primates. Project 2 found a variety of functional subdivisions: some parieto-frontal areas discriminated movement plans for the different reach directions, some for the different eye movement directions, and a few areas accurately predicted upcoming directional movements for both the hand and eye. This latter finding demonstrates, as shown previously in non-human primates, that some brain areas code for the end motor goal (i.e., target location) independent of the effector used. Project 3 identified regions that decoded upcoming hand actions only, upcoming tool actions only, and, rather interestingly, areas that predicted actions with both effectors (hand and tool). Notably, some of these latter areas were found to represent the higher-level goals of the movement (grasping vs. reaching) rather than the specific lower-level kinematics (hand vs. tool) necessary to implement those goals. Taken together, these findings offer substantial new insights into the types of intention-related signals contained in human brain activity patterns and specify a hierarchical neural architecture spanning parieto-frontal cortex that guides the construction of complex object-directed behaviors.
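
    The plan-phase decoding described above is conceptually a cross-validated classification of delay-period voxel patterns. The sketch below illustrates that step on simulated data with scikit-learn; the region-of-interest size, trial counts, run structure, and classifier are assumptions rather than the thesis's actual pipeline.

    # A minimal sketch (simulated voxel patterns, not the thesis data) of plan-phase MVPA:
    # classify the upcoming action (grasp vs reach) from delay-period voxel patterns in a
    # region of interest, cross-validating across scanner runs.
    import numpy as np
    from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n_trials, n_voxels, n_runs = 120, 300, 8
    X = rng.normal(size=(n_trials, n_voxels))            # plan-phase beta estimates per trial
    y = rng.integers(0, 2, size=n_trials)                # 0 = reach, 1 = grasp
    runs = np.repeat(np.arange(n_runs), n_trials // n_runs)

    clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))

    # Leave-one-run-out cross-validation keeps training and test trials in separate runs.
    scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
    print(f"plan-phase decoding accuracy: {scores.mean():.2f} (chance = 0.50)")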