
    Contextual modulation of primary visual cortex by auditory signals

    Early visual cortex receives non-feedforward input from lateral and top-down connections (Muckli & Petro 2013 Curr. Opin. Neurobiol. 23, 195–201. (doi:10.1016/j.conb.2013.01.020)), including long-range projections from auditory areas. Early visual cortex can code for high-level auditory information, with neural patterns representing natural sound stimulation (Vetter et al. 2014 Curr. Biol. 24, 1256–1262. (doi:10.1016/j.cub.2014.04.020)). We discuss a number of questions arising from these findings. What is the adaptive function of bimodal representations in visual cortex? What type of information projects from auditory to visual cortex? What are the anatomical constraints on auditory information in V1, for example, periphery versus fovea, superficial versus deep cortical layers? Is there a putative neural mechanism we can infer from human neuroimaging data and recent theoretical accounts of cortex? We also present data showing that we can read out high-level auditory information from the activation patterns of early visual cortex even when visual cortex receives simple visual stimulation, suggesting independent channels for visual and auditory signals in V1. We speculate which cellular mechanisms allow V1 to be contextually modulated by auditory input to facilitate perception, cognition and behaviour. Beyond cortical feedback that facilitates perception, we argue that there is also feedback serving counterfactual processing during imagery, dreaming and mind wandering, which is not relevant for immediate perception but for behaviour and cognition over a longer time frame. This article is part of the themed issue ‘Auditory and visual scene analysis’.

    Impaired contextual modulation of memories in PTSD: an fMRI and psychophysiological study of extinction retention and fear renewal

    Post-traumatic stress disorder (PTSD) patients display pervasive fear memories, expressed indiscriminately. Proposed mechanisms include enhanced fear learning and impaired extinction or extinction recall. Documented extinction recall deficits and failure to use safety signals could result from a general failure to use contextual information, a hippocampus-dependent process. This can be probed by adding a renewal phase to standard conditioning and extinction paradigms. Human subjects with PTSD and combat controls were conditioned (skin conductance response), extinguished, and tested for extinction retention and renewal in a scanner (fMRI). Fear conditioning (light paired with shock) occurred in one context, followed by extinction in another, to create danger and safety contexts. The next day, the extinguished conditioned stimulus (CS+E) was re-presented to assess extinction recall (safety context) and fear renewal (danger context). PTSD patients showed impaired extinction recall, with increased skin conductance and heightened amygdala activity to the extinguished CS+ in the safety context. However, they also showed impaired fear renewal; in the danger context, they had a smaller skin conductance response to the CS+E and lower activity in amygdala and ventromedial prefrontal cortex compared with combat controls. Control subjects displayed appropriate contextual modulation of memory recall, with extinction (safety) memory prevailing in the safety context, and fear memory prevailing in the danger context. PTSD patients could not use the safety context to sustain suppression of the extinguished fear memory, but they also used the danger context less effectively to enhance fear. They did not display globally enhanced fear expression, but rather a globally diminished capacity to use contextual information to modulate fear expression.

    Perception of 3D Slant Out of the Box

    Evidence for contextual effects is widespread in visual perception. Although this suggests that contextual effects are the result of a generic property of the visual system, current explanations are limited to the domain in which they occur. In this paper we propose a more general mechanism of global influences on the perception of slant. We review empirical data and evaluate proposed explanations of contextual biases. By assessing not only a model of three-dimensional slant perception but also more generic mechanisms of contextual modulation, we show that surround suppression of neural responses explains the major phenomena in the empirical data on contextual biases. Moreover, contextual biases may be part of a mechanism of grouping and segmentation. © 2011 van der Kooij and te Pas

    Brain rhythms of pain

    Pain is an integrative phenomenon that results from dynamic interactions between sensory and contextual (i.e., cognitive, emotional, and motivational) processes. In the brain, the experience of pain is associated with neuronal oscillations and synchrony at different frequencies. However, an overarching framework for the significance of oscillations for pain is still lacking. Recent concepts relate oscillations at different frequencies to the routing of information flow in the brain and the signaling of predictions and prediction errors. The application of these concepts to pain promises insights into how flexible routing of information flow coordinates the diverse processes that merge into the experience of pain. Such insights might have implications for the understanding and treatment of chronic pain.

    Attentional Enhancement of Auditory Mismatch Responses: a DCM/MEG Study.

    Despite similar behavioral effects, attention and expectation influence evoked responses differently: attention typically enhances event-related responses, whereas expectation reduces them. This dissociation has been reconciled under predictive coding, where prediction errors are weighted by precision associated with attentional modulation. Here, we tested the predictive coding account of attention and expectation using magnetoencephalography and modeling. Temporal attention and sensory expectation were orthogonally manipulated in an auditory mismatch paradigm, revealing opposing effects on evoked response amplitude. Mismatch negativity (MMN) was enhanced by attention, speaking against its supposedly pre-attentive nature. This interaction effect was modeled in a canonical microcircuit using dynamic causal modeling, comparing models with modulation of extrinsic and intrinsic connectivity at different levels of the auditory hierarchy. While the MMN was explained by a recursive interplay of sensory predictions and prediction errors, attention was linked to the gain of inhibitory interneurons, consistent with its modulation of sensory precision.

    Predictive Encoding of Contextual Relationships for Perceptual Inference, Interpolation and Prediction

    We propose a new neurally inspired model that can learn to encode the global relationship context of visual events across time and space, and to use this contextual information to modulate the analysis-by-synthesis process in a predictive coding framework. The model learns latent contextual representations by maximizing the predictability of visual events based on local and global contextual information through both top-down and bottom-up processes. In contrast to standard predictive coding models, the prediction error in this model is used to update the contextual representation but does not alter the feedforward input to the next layer, which is more consistent with neurophysiological observations. We establish the computational feasibility of this model by demonstrating its abilities in several respects. We show that our model can outperform state-of-the-art gated Boltzmann machines (GBMs) in the estimation of contextual information. Our model can also interpolate missing events or predict future events in image sequences while simultaneously estimating contextual information. It achieves state-of-the-art prediction accuracy in a variety of tasks and can interpolate missing frames, a capability that GBMs lack.
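    The distinguishing update rule described in this abstract — prediction error refines the contextual representation without modifying the feedforward signal — can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the dimensions, weight matrices (`G`, `F`), learning rate, and linear generative model are all illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical sizes: input patch and latent context vector.
    n_in, n_ctx = 8, 4

    # Top-down (generative) weights: predict the input from the context.
    G = rng.normal(scale=0.1, size=(n_in, n_ctx))
    # Feedforward weights: drive the next layer directly from the input.
    F = rng.normal(scale=0.1, size=(n_ctx, n_in))

    def step(x, c, lr=0.5):
        """One inference step: the prediction error updates the contextual
        representation c, but the feedforward signal F @ x is untouched."""
        pred = G @ c                  # top-down prediction of the input
        err = x - pred                # prediction error
        c = c + lr * (G.T @ err)      # context moves to explain the input better
        feedforward = F @ x           # independent of the error signal
        return c, err, feedforward

    x = rng.normal(size=n_in)
    c = np.zeros(n_ctx)
    errors = []
    for _ in range(50):
        c, err, ff = step(x, c)
        errors.append(np.linalg.norm(err))
    ```

    After these iterations the prediction-error norm shrinks as the context comes to explain the input, while `ff` stays equal to `F @ x` throughout — the contrast with standard predictive coding, where the error itself is what is passed forward.
    
    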