
    Cortical feedback signals generalise across different spatial frequencies of feedforward inputs

    Visual processing in cortex relies on feedback projections contextualising feedforward information flow. Primary visual cortex (V1) has small receptive fields and processes feedforward information at a fine-grained spatial scale, whereas higher visual areas have larger, spatially invariant receptive fields. Therefore, feedback could provide coarse information about the global scene structure or, alternatively, recover fine-grained structure by targeting small receptive fields in V1. We tested whether feedback signals generalise across different spatial frequencies of feedforward inputs, or whether they are tuned to the spatial scale of the visual scene. Using a partial occlusion paradigm, functional magnetic resonance imaging (fMRI) and multivoxel pattern analysis (MVPA), we investigated whether feedback to V1 contains coarse or fine-grained information by manipulating the spatial frequency of the scene surround outside an occluded image portion. We show that feedback transmits both coarse and fine-grained information, as it carries information about both low (LSF) and high (HSF) spatial frequencies. Further, feedback signals containing LSF information are similar to feedback signals containing HSF information, even without a large overlap in the spatial frequency bands of the HSF and LSF scenes. Lastly, we found that feedback carries similar information about the spatial frequency band across different scenes. We conclude that cortical feedback signals contain information which generalises across different spatial frequencies of feedforward inputs.
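    The generalisation test described here maps naturally onto a cross-classification analysis. Below is a minimal sketch of that logic using scikit-learn on simulated voxel patterns; the array names, trial counts, and classifier choice are illustrative assumptions, not the authors' pipeline.

```python
# Cross-decoding sketch: train on feedback patterns evoked by LSF surrounds,
# test on patterns evoked by HSF surrounds. Simulated data; shapes are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_voxels = 120, 200

# V1 voxel patterns from the occluded (non-stimulated) region,
# labelled by scene identity, recorded under LSF and HSF surrounds.
X_lsf = rng.standard_normal((n_trials, n_voxels))
X_hsf = rng.standard_normal((n_trials, n_voxels))
y = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))

# Within-frequency decoding: is scene identity present in LSF feedback at all?
clf.fit(X_lsf[:100], y[:100])
within_acc = clf.score(X_lsf[100:], y[100:])

# Cross-frequency decoding: above-chance transfer from LSF to HSF would indicate
# feedback information that generalises across spatial frequencies.
clf.fit(X_lsf, y)
cross_acc = clf.score(X_hsf, y)
print(f"within-LSF: {within_acc:.2f}  LSF->HSF transfer: {cross_acc:.2f}")
```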

    Forecasting faces in the cortex: Comment on ‘High-level prediction signals in a low-level area of the macaque face-processing hierarchy’, by Schwiedrzik and Freiwald, Neuron (2017)

    Although theories of predictive coding in the brain abound, we lack key pieces of neuronal data to support these theories. Recently, Schwiedrzik and Freiwald found neurophysiological evidence for predictive codes throughout the face-processing hierarchy in macaque cortex. We highlight how these data advance our knowledge of cortical information processing, and their broader impact.

    Influence of scene surround on cortical feedback to non-stimulated primary visual cortex

    Most of the time we are not passively viewing scenes but want to extract behaviourally relevant information. In addition, objects do not often occur in isolation outside the visual scientist’s laboratory but are embedded in complex visual scenes. If the brain is to be adaptive, it needs to process visual information with regard to its context. Thus, perception is not purely determined by the specific input to the retina but depends on the surrounding scene, objects, attention, memory, prior knowledge, expectations and predictions. Traditionally, the visual system in the human brain has been viewed as having a hierarchical organisation with signals travelling in one direction: input from the eyes arrives at "lower" order areas, which then transmit their computations to "higher" order areas. As one moves up the hierarchy, visual areas code more complex and more abstract information, and after the final processing stage, the system gives an output. However, in reality things are not so simple. In fact, in the primary visual cortex (V1), which is one of the first visual processing stages in the brain, external stimuli constitute less than 10% of the total input. The rest of the input originates from internal connections, either within V1 itself or via signals arriving from "higher" areas back down to V1. In this way, "higher" areas can tell "lower" ones about the bigger picture and the neighbouring elements. This internal processing in the brain is the mechanism which provides context and enriches the information reaching us from the external world.

    The signals arriving at V1 from the retina are referred to as feedforward, while the signals going in the opposite direction, from higher areas back to V1, are called feedback. Each neuron responds to its preferred stimulus in a specific region of the visual field, called the receptive field. Feedforward signals act on the central region of a neuron’s receptive field, while feedback signals act on a larger surround region and are thus able to inform the centre about the surrounding context. However, it is not well established which aspects of the surrounding scene define these contextual interactions.

    This thesis investigated the influence of the scene surround on feedback to V1. We aimed to establish how the scene surround contributes to informative feedback signals. An introduction to what is already known about the function of feedback and the information it transmits is provided in Chapter 1. I give an overview of previous studies which highlight the various contextual roles of feedback, such as perceptual grouping, contour and object completion, expectation, attention and prediction, as well as its role as the mechanism allowing visual imagery.

    Chapter 2 aimed to address whether feedback provides coarse or fine-grained information about the surrounding scene. Since during normal viewing both feedback and feedforward signals are present, we investigated feedback signals in isolation by using a partial occlusion paradigm to remove meaningful feedforward input from a specific region of the scene. We filtered the scene surrounding the occluded region into a fine-grained and a coarse version (a minimal sketch of this filtering step follows the abstract). We also varied how much information was shared between the fine-grained and coarse versions of the same scene. This was done to investigate whether the information feedback carried was tightly tuned to the spatial scale of the surrounding scene, or whether the information it contained was similar across the two types of scene surround. We found that feedback contained signals about both coarse and fine-grained surrounds, but there was also some overlap between these feedback signals. In addition, we found that the feedback information did not correspond to a direct "filling-in" of the missing feedforward input, suggesting that feedback and feedforward signals represent the scene in different ways.

    In Chapter 3 we took a closer look at the amount of meaningful scene surround that is necessary to elicit informative feedback signals. The results showed that increasing the amount of scene information in the surround resulted in more meaningful feedback signals. We confirmed our earlier finding that the feedback information in the occluded region is dissimilar to the corresponding feedforward input when the feedforward region is isolated from the scene surround. Adding the scene surround to the feedforward stimulus increased this feedback/feedforward similarity. Overall, these findings point to the notion that feedback signals combine with feedforward input under normal visual processing. Isolated feedforward input in the absence of the surround provides V1 neurons with impoverished information.

    Neighbouring elements of the scene or its overall global structure can both be sources of context. In Chapter 4 we explored which regions of the scene surround contribute most to the contextual feedback signals arriving at V1: is this limited to local neighbouring regions only, or does feedback directly contain information about the overall global image structure, taking distant retinotopic regions into account as well? In the first experiment, we used simple global structures made up of four Gabor elements and showed that such simplistic shapes failed to induce contextual feedback into the occluded region. However, in the presence of feedforward information, we saw that feedback from the local surround combined with identical feedforward input to give rise to different activity patterns in that feedforward region. This suggests that feedback may be recruited differentially depending on whether feedforward stimulation is present or absent. In the second experiment, we used natural scenes and tested whether contextual feedback can originate from a distant retinotopic region when the local scene surround is not informative. We manipulated scene information in a distant retinotopic region (in the opposite hemisphere) while keeping the local neighbouring surround information the same. The results showed a lack of meaningful feedback in the occluded region, and that feedback from the distant surround had a negligible effect on the identical feedforward information, in contrast to the finding obtained previously with the local surround. These findings suggest that feedback preferentially originates from nearby regions and provides context to disambiguate local feedforward elements. Context about the global scene structure may therefore arise from a series of local surround interactions.

    Chapter 5 summarises these findings and discusses the overarching themes regarding the content of feedback and its role in visual processing as a whole. At the end, I propose some future research directions.
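    As a concrete illustration of the Chapter 2 manipulation, the sketch below low-pass or high-pass filters an image in the Fourier domain and replaces the occluded region with a uniform patch. The cutoff frequencies, image size, and pixels-per-degree conversion are illustrative assumptions, not the thesis’ stimulus parameters.

```python
# Splitting a scene into coarse (low-pass) and fine-grained (high-pass) versions,
# then occluding a fixed region so only the filtered surround drives feedback.
import numpy as np

def frequency_filter(image, cutoff_cpd, pixels_per_degree, lowpass=True):
    """Keep spatial frequencies below (lowpass) or above (highpass) the cutoff."""
    h, w = image.shape
    fy = np.fft.fftfreq(h) * pixels_per_degree   # cycles per degree along y
    fx = np.fft.fftfreq(w) * pixels_per_degree   # cycles per degree along x
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = radius <= cutoff_cpd if lowpass else radius > cutoff_cpd
    return np.real(np.fft.ifft2(np.fft.fft2(image) * mask))

scene = np.random.rand(512, 512)                 # stand-in for a natural scene
lsf_scene = frequency_filter(scene, cutoff_cpd=1.0, pixels_per_degree=32, lowpass=True)
hsf_scene = frequency_filter(scene, cutoff_cpd=4.0, pixels_per_degree=32, lowpass=False)

# Occlude a central patch: feedforward input is removed there, so any scene
# information measured in the corresponding V1 region must arrive via feedback.
occluded = lsf_scene.copy()
occluded[192:320, 192:320] = lsf_scene.mean()
```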

    Computational roles of cortico-cerebellar loops in temporal credit assignment

    Animal survival depends on behavioural adaptation to the environment. This is thought to be enabled by plasticity in the neural circuit. However, the laws which govern neural plasticity are unclear. From a functional perspective, it is desirable to correctly identify, or assign “credit” to, the neurons or synapses responsible for the task decision and subsequent performance. In the biological circuit, the intricate, non-linear interactions involved in neural networks make appropriately assigning credit to neurons highly challenging. In the temporal domain, this is known as the temporal credit assignment (TCA) problem. This Thesis considers the role of the cerebellum, a powerful subcortical structure with strong error-guided plasticity rules, as a solution to TCA in the brain. In particular, I use artificial neural networks as a means to model and understand the mechanisms by which the cerebellum can support learning in the neocortex via the cortico-cerebellar loop. I introduce two distinct but compatible computational models of cortico-cerebellar interaction. The first model asserts that the cerebellum provides the neocortex with predictive feedback, modelled as error gradients with respect to its current activity. This predictive feedback enables better credit assignment in the neocortex and effectively removes the lock between feedforward and feedback processing in cortical networks. This model captures observed long-term deficits associated with cerebellar dysfunction, namely cerebellar dysmetria, in both the motor and non-motor domains. Predictions are also made with respect to the alignment of cortico-cerebellar activity during learning and the optimal task conditions for cerebellar contribution. The second model also looks at the role of the cerebellum in learning, but now considers its ability to instantaneously drive the cortex towards desired task dynamics. Unlike the first model, this model does not assume that any local cortical plasticity takes place at all; task-directed learning can effectively be outsourced to the cerebellum. This model captures recent optogenetic studies in mice which show that the cerebellum is a necessary component for the maintenance of desired cortical dynamics and ensuing behaviour. I also show that this driving input can eventually be used as a teaching signal for the cortical circuit, thereby conceptually unifying the two models. Overall, this Thesis explores the computational role of the cerebellum and cortico-cerebellar loops in task acquisition and maintenance in the brain.
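    The first model’s decoupling idea can be illustrated with a small sketch in which a "cerebellar" module learns to predict the error gradient on cortical activity, so the cortical weights can update without waiting for the true error signal. The network sizes, learning rates, linear task, and plain gradient rules below are illustrative assumptions, not the thesis’ architecture.

```python
# Sketch of cerebellar "predictive feedback": a module trained to predict the
# gradient of the task error with respect to cortical activity, used in place of
# the true (delayed) gradient. All sizes and rules are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_in, n_cortex, n_out = 10, 50, 3
A = rng.standard_normal((n_out, n_in))            # fixed linear task: target = A @ x

W_cortex = rng.standard_normal((n_cortex, n_in)) * 0.1
W_out = rng.standard_normal((n_out, n_cortex)) * 0.1
W_cereb = np.zeros((n_cortex, n_cortex))          # cerebellum: cortical activity -> predicted gradient
lr, lr_cereb = 0.01, 0.01

for step in range(2000):
    x = rng.standard_normal(n_in)
    target = A @ x

    h = np.tanh(W_cortex @ x)                     # cortical activity
    y = W_out @ h                                 # behavioural output
    err = y - target                              # task error (in principle only available later)

    true_grad = W_out.T @ err                     # true gradient w.r.t. cortical activity
    pred_grad = W_cereb @ h                       # cerebellar prediction of that gradient

    # Cortex updates immediately from the *predicted* gradient
    # (removing the lock between feedforward and feedback processing).
    W_cortex -= lr * np.outer(pred_grad * (1 - h ** 2), x)

    # Cerebellum and readout learn from the true error once it arrives.
    W_cereb -= lr_cereb * np.outer(pred_grad - true_grad, h)
    W_out -= lr * np.outer(err, h)
```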

    Auditory cortex modelled as a dynamical network of oscillators: Understanding event-related fields and their adaptation

    Adaptation, the reduction of neuronal responses by repetitive stimulation, is a ubiquitous feature of auditory cortex (AC). It is not clear what causes adaptation, but short-term synaptic depression (STSD) is a potential candidate for the underlying mechanism. We examined this hypothesis via a computational model based on AC anatomy, which includes serially connected core, belt, and parabelt areas. The model replicates the event-related field (ERF) of the magnetoencephalogram as well as ERF adaptation. The model dynamics are described by excitatory and inhibitory state variables of cell populations, with the excitatory connections modulated by STSD. We analysed the system dynamics by linearizing the firing rates and solving the STSD equation using time-scale separation. This allows for characterization of AC dynamics as a superposition of damped harmonic oscillators, so-called normal modes. We show that repetition suppression of the N1m is due to a mixture of causes, with stimulus repetition modifying both the amplitudes and the frequencies of the normal modes. In this view, adaptation results from a complete reorganization of AC dynamics rather than a reduction of activity in discrete sources. Further, both the network structure and the balance between excitation and inhibition contribute significantly to the rate at which AC recovers from adaptation. This lifetime of adaptation is longer in the belt and parabelt than in the core area, despite the time constants of STSD being spatially constant. Finally, we critically evaluate the use of a single exponential function to describe recovery from adaptation.
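    The normal-mode picture can be made concrete with a small linearised example: the eigenvalues of the Jacobian of a rate network define damped oscillators whose imaginary parts set the oscillation frequency and whose real parts set the lifetime. The connectivity values and time constant below are illustrative assumptions, not the fitted auditory-cortex model.

```python
# Linearise a small excitatory-inhibitory rate network around its fixed point
# and read off damped harmonic oscillators (normal modes) from the Jacobian.
import numpy as np

tau = 0.01                      # population time constant (s), assumed value
# State: [e1, i1, e2, i2] -- two serially connected excitatory/inhibitory pairs.
W = np.array([
    [ 1.2, -2.0,  0.0,  0.0],
    [ 1.0, -0.5,  0.0,  0.0],
    [ 0.8,  0.0,  1.2, -2.0],   # feedforward drive from area 1 to area 2
    [ 0.0,  0.0,  1.0, -0.5],
])

# Linearised dynamics: tau * dx/dt = -x + W @ x  =>  dx/dt = J @ x
J = (W - np.eye(4)) / tau
eigvals = np.linalg.eigvals(J)

for lam in eigvals:
    freq = abs(lam.imag) / (2 * np.pi)                        # oscillation frequency (Hz)
    lifetime = -1.0 / lam.real if lam.real < 0 else np.inf    # decay time (s)
    print(f"mode: {freq:6.1f} Hz, lifetime {lifetime * 1000:6.1f} ms")
```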

    Evaluating the neurophysiological evidence for predictive processing as a model of perception

    For many years, the dominant theoretical framework guiding research into the neural origins of perceptual experience has been provided by hierarchical feedforward models, in which sensory inputs are passed through a series of increasingly complex feature detectors. However, the long‐standing orthodoxy of these accounts has recently been challenged by a radically different set of theories that contend that perception arises from a purely inferential process supported by two distinct classes of neurons: those that transmit predictions about sensory states and those that signal sensory information that deviates from those predictions. Although these predictive processing (PP) models have become increasingly influential in cognitive neuroscience, they are also criticized for lacking the empirical support to justify their status. This limited evidence base partly reflects the considerable methodological challenges that are presented when trying to test the unique predictions of these models. However, a confluence of technological and theoretical advances has prompted a recent surge in human and nonhuman neurophysiological research seeking to fill this empirical gap. Here, we review this new research and evaluate the degree to which its findings support the key claims of PP.
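    As a reference point for the two neuron classes central to PP, here is a minimal Rao-and-Ballard-style inference loop in which error units signal the mismatch between input and prediction, and prediction units are updated by that error. The dimensions, random weights, and update rate are illustrative assumptions, not a claim about any specific study reviewed here.

```python
# Minimal predictive-coding iteration: prediction units (r) generate a top-down
# prediction of the input; error units (e) carry what the prediction fails to explain.
import numpy as np

rng = np.random.default_rng(2)
n_input, n_latent = 16, 4
W = rng.standard_normal((n_input, n_latent)) * 0.1   # generative weights: latent -> predicted input

x = rng.standard_normal(n_input)    # sensory input
r = np.zeros(n_latent)              # prediction (representation) units
lr = 0.1

for _ in range(200):
    pred = W @ r                    # top-down prediction of the input
    e = x - pred                    # error units: residual not explained by the prediction
    r += lr * (W.T @ e - r)         # prediction units updated by the error they receive

# After inference, only the residual error would be passed up the hierarchy.
print("remaining prediction error:", np.linalg.norm(x - W @ r))
```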