2,476 research outputs found

    Attentional gain is modulated by probabilistic feature expectations in a spatial cueing task: ERP evidence

    Several theoretical and empirical studies suggest that attention and perceptual expectations influence perception in an interactive manner, whereby attentional gain is enhanced for predicted stimuli. The current study assessed whether attention and perceptual expectations interact when they are fully orthogonal, i.e., when each relates to a different stimulus feature. We used a spatial cueing task with block-wise spatial attention cues that directed attention to either the left or right visual field, in which Gabor gratings of either predicted (more likely) or unpredicted (less likely) orientation were presented. The lateralised posterior N1pc component was additively influenced by attention and perceptual expectations. Bayesian analysis showed no reliable evidence for an interactive effect of attention and expectations on the N1pc amplitude. However, attention and perceptual expectations interactively influenced the frontally distributed anterior N1 component (N1a). The attention effect (i.e., enhanced N1a amplitude in the attended compared to the unattended condition) was observed only for gratings of the predicted orientation, not in the unpredicted condition. These findings suggest that attention and perceptual expectations interactively influence visual processing within 200 ms after stimulus onset, and that such joint influence may lead to enhanced endogenous attentional control in the dorsal fronto-parietal attention network.

    A computational account of threat-related attentional bias

    Visual selective attention acts as a filter on perceptual information, facilitating learning and inference about important events in an agent’s environment. A role for visual attention in reward-based decisions has previously been demonstrated, but it remains unclear how visual attention is recruited during aversive learning, particularly when learning about multiple stimuli concurrently. This question is of particular importance in psychopathology, where enhanced attention to threat is a putative feature of pathological anxiety. Using an aversive reversal learning task that required subjects to learn, and exploit, predictions about multiple stimuli, we show that the allocation of visual attention is influenced significantly by aversive value but not by uncertainty. Moreover, this relationship is bidirectional, in that attention biases value updates for attended stimuli, resulting in heightened value estimates. Our findings have implications for understanding biased attention in psychopathology and support a role for learning in the expression of threat-related attentional biases in anxiety.
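    The bidirectional relationship described in this abstract can be illustrated with an attention-weighted delta-rule update. This is a minimal sketch of the general idea, not the paper's fitted model; the function name, learning rate, and attention weights are illustrative assumptions.

    ```python
    # Sketch: attention scales how strongly an aversive outcome updates a
    # stimulus's value (Rescorla-Wagner-style delta rule). Parameter values
    # are assumptions for illustration only.
    def update_value(value, outcome, attention, base_lr=0.3):
        """Return the updated value after one attention-weighted learning step."""
        prediction_error = outcome - value
        return value + base_lr * attention * prediction_error

    # Two stimuli receive the same aversive outcome; the attended stimulus
    # (attention=1.0) acquires a heightened value estimate relative to the
    # unattended one (attention=0.2).
    v_attended = update_value(0.0, 1.0, attention=1.0)    # -> 0.3
    v_unattended = update_value(0.0, 1.0, attention=0.2)  # -> 0.06
    ```

    In this toy form, attention multiplies the effective learning rate, so attended stimuli accumulate value faster, which is one simple way the reported "heightened value estimates" could arise.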

    Probabilistic modeling of eye movement data during conjunction search via feature-based attention

    Where the eyes fixate during search is not random; rather, gaze reflects the combination of information about the target and the visual input. It is not clear, however, what information about a target is used to bias the underlying neuronal responses. Here we engage subjects in a variety of simple conjunction search tasks while tracking their eye movements. We derive a generative model that reproduces these eye movements and calculate the conditional probabilities that observers fixate, given the target, on or near an item in the display sharing a specific feature with the target. We use these probabilities to infer which features were biased by top-down attention: color appears to be the dominant stimulus dimension for guiding search, followed by object size, and lastly orientation. We use the number of fixations needed to find the target as a measure of task difficulty. We find that only a model that biases multiple feature dimensions in a hierarchical manner can account for the data. Contrary to common assumptions, memory plays almost no role in search performance. Our model can be fit to the average data of multiple subjects or to individual subjects. Small variations of a few key parameters account well for the intersubject differences. The model is compatible with neurophysiological findings on V4 and frontal eye field (FEF) neurons and predicts the gain modulation of these cells.
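    The core computation sketched in this abstract — fixation probability rising with shared target features, weighted by feature dimension — can be written down compactly. The weights and feature names below are illustrative assumptions (ordered color > size > orientation, as the abstract reports), not the paper's fitted parameters.

    ```python
    # Toy sketch of feature-based fixation bias: each display item's fixation
    # probability is proportional to the summed weights of the features it
    # shares with the target. Weights are assumed, reflecting only the
    # reported ordering color > size > orientation.
    WEIGHTS = {"color": 3.0, "size": 1.5, "orientation": 0.5}

    def fixation_probs(items, target):
        """Return P(fixate item) for each item, normalised over the display."""
        scores = []
        for item in items:
            shared = sum(w for f, w in WEIGHTS.items() if item[f] == target[f])
            scores.append(shared + 1e-6)  # small floor: any item can be fixated
        total = sum(scores)
        return [s / total for s in scores]

    target = {"color": "red", "size": "big", "orientation": "vertical"}
    items = [
        {"color": "red",   "size": "small", "orientation": "horizontal"},  # shares color
        {"color": "green", "size": "big",   "orientation": "vertical"},    # shares size + orientation
        {"color": "green", "size": "small", "orientation": "horizontal"},  # shares nothing
    ]
    probs = fixation_probs(items, target)
    ```

    With these assumed weights, the item sharing only the target's color draws more fixations than the item sharing both size and orientation, mirroring the claim that color dominates guidance.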

    Changing ideas about others' intentions: updating prior expectations tunes activity in the human motor system

    Predicting intentions from observing another agent’s behaviours is often thought to depend on motor resonance (i.e., the motor system’s response to a perceived movement through activation of its stored motor counterpart), but observers might also rely on prior expectations, especially when actions take place in perceptually uncertain situations. Here we assessed motor resonance during an action prediction task using transcranial magnetic stimulation to probe corticospinal excitability (CSE) and report that experimentally induced updates in observers’ prior expectations modulate CSE when predictions are made under perceptual uncertainty. We show that prior expectations are updated on the basis of both biomechanical and probabilistic prior information, and that the magnitude of the CSE modulation observed across participants is explained by the magnitude of change in their prior expectations. These findings provide the first evidence that when observers predict others’ intentions, motor resonance mechanisms adapt to changes in their prior expectations. We propose that this adaptive adjustment might reflect a regulatory control mechanism that shares some similarities with that observed during action selection. Such a mechanism could help arbitrate the competition between biomechanical and probabilistic prior information when appropriate for prediction.

    Sensorimotor Differences in Autism Spectrum Disorder: An evaluation of potential mechanisms.

    This thesis examined the aetiology of sensorimotor impairments in Autism Spectrum Disorder: a neurodevelopmental condition that affects an individual’s socio-behavioural preferences, personal independence, and quality of life. Issues relating to clumsiness and movement coordination are common features of autism that contribute to wide-ranging daily living difficulties. However, these characteristics are relatively understudied and there is an absence of evidence-based practical interventions. To pave the way for new, scientifically-focused programmes, a series of studies investigated the mechanistic underpinnings of sensorimotor differences in autism. Following a targeted review of previous research, study one explored links between autistic-like traits and numerous conceptually-significant movement control functions. Eye-tracking analyses were integrated with force transducers and motion capture technology to examine how participants interacted with uncertain lifting objects. Upon identifying a link between autistic-like traits and context-sensitive predictive action control, study two replicated these procedures with a sample of clinically-diagnosed participants. Results illustrated that autistic people are able to use predictions to guide object interactions, but that uncertainty-related adjustments in sensorimotor integration are atypical. Such findings were advanced within a novel virtual-reality paradigm in study three, which systematically manipulated environmental uncertainty during naturalistic interception actions. Here, data supported proposals that precision weighting functions are aberrant in autistic people, and suggested that these individuals have difficulties with processing volatile sensory information. These difficulties were not alleviated by the experimental provision of explicit contextual cues in study four. Together, these studies implicate the role of implicit neuromodulatory mechanisms that regulate dynamic sensorimotor behaviours. 
    Results support the development of evidence-based programmes that ‘make the world more predictable’ for autistic people, with various theoretical and practical implications presented. Possible applications of these findings are discussed in relation to recent multi-disciplinary research and conceptual advances in the field, which could help improve daily living skills and functional quality of life.
    Economic and Social Research Council (ESRC)

    Stochastic Prediction of Multi-Agent Interactions from Partial Observations

    We present a method that learns to integrate temporal information, from a learned dynamics model, with ambiguous visual information, from a learned vision model, in the context of interacting agents. Our method is based on a graph-structured variational recurrent neural network (Graph-VRNN), which is trained end-to-end to infer the current state of the (partially observed) world, as well as to forecast future states. We show that our method outperforms various baselines on two sports datasets, one based on real basketball trajectories, and one generated by a soccer game engine.
    Comment: ICLR 2019 camera ready
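    The two ingredients named in this abstract — graph-structured message passing between agents and a VRNN-style latent sample per step, with forecasting when observations are missing — can be sketched in a toy scalar form. This is purely illustrative of the structure, not the authors' implementation: states are single floats, there are no learned weights, and all coefficients are assumptions.

    ```python
    # Toy structural sketch of a Graph-VRNN-style rollout: each agent's state is
    # updated from (a) messages aggregated from the other agents (graph part),
    # (b) a latent sample from a state-conditioned prior (VRNN part), and
    # (c) the observation when available, else the model's own prediction
    # (partial observability / forecasting). All coefficients are assumed.
    import math
    import random

    random.seed(1)

    def step(hidden, obs):
        """Advance all agents one time step; obs entries may be None (unobserved)."""
        n = len(hidden)
        new_hidden = []
        for i in range(n):
            # Graph part: mean message from every other agent.
            msg = sum(hidden[j] for j in range(n) if j != i) / max(n - 1, 1)
            # VRNN part: latent sampled from a prior centred on the current state.
            z = random.gauss(hidden[i], 0.1)
            # Use the observation if present; otherwise forecast from the latent.
            x = obs[i] if obs[i] is not None else z
            new_hidden.append(math.tanh(0.5 * hidden[i] + 0.3 * msg + 0.2 * x))
        return new_hidden

    # Two observed steps, then one forecast step with no observations at all.
    h = [0.0, 0.0, 0.0]
    for o in [[1.0, -1.0, 0.5], [0.8, -0.9, 0.4], [None, None, None]]:
        h = step(h, o)
    ```

    The real model replaces these scalars with learned neural encoders, decoders, and graph networks, but the control flow — message passing, a latent prior, and falling back on the prior when an agent is unobserved — follows the same shape.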