
    Deep gated Hebbian predictive coding accounts for emergence of complex neural response properties along the visual cortical hierarchy

    Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
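
    As a rough illustration of the kind of computation described above, the sketch below implements a single predictive-coding layer with a gated Hebbian weight update in the spirit of Rao-and-Ballard-style models: latent units are settled by descending the prediction error, and the generative weights are updated from the outer product of error and latent activity, scaled by a gating factor. The layer sizes, learning rates, gating signal g, and decay term are illustrative assumptions, not the architecture or parameters of the paper.

```python
# Minimal sketch of one predictive-coding layer with a gated Hebbian
# weight update, loosely following a Rao & Ballard style formulation.
# The gating signal g, learning rates, and layer sizes are illustrative
# assumptions, not the paper's exact parameters.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_latent = 256, 64                           # e.g. flattened 16x16 patch -> 64 latents
W = 0.1 * rng.standard_normal((n_input, n_latent))    # top-down prediction weights

def infer(x, W, n_steps=50, lr_r=0.05):
    """Settle the latent representation r by descending the prediction error."""
    r = np.zeros(W.shape[1])
    for _ in range(n_steps):
        e = x - W @ r                        # prediction error at the lower layer
        r += lr_r * (W.T @ e - 0.01 * r)     # error feedback plus decay (encourages sparseness)
    return r, e

def learn(x, W, g=1.0, lr_w=0.01):
    """Gated Hebbian update: outer product of error (pre) and latent (post),
    scaled by a gating factor g (assumed here to be a scalar modulator)."""
    r, e = infer(x, W)
    W += lr_w * g * np.outer(e, r)
    return W

# one toy training step on a random "image patch"
x = rng.standard_normal(n_input)
W = learn(x, W)
```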

    Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception

    The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: perceived objects must be mapped to high-level representations invariantly of the precise viewing conditions, and a generative model must be learned that allows, for instance, to fill in occluded information guided by visual experience. Here, we show how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors. Trained on sequences of continuously transformed objects, neurons in the highest network area become tuned to object identity invariant of precise position, comparable to inferotemporal neurons in macaques. Drawing on this, the dynamic properties of invariant object representations reproduce experimentally observed hierarchies of timescales from low to high levels of the ventral processing stream. The predicted faster decorrelation of error-neuron activity compared to representation neurons is of relevance for the experimental search for neural correlates of prediction errors. Lastly, the generative capacity of the network is confirmed by reconstructing specific object images, robust to partial occlusion of the inputs. By learning invariance from temporal continuity within a generative model, the approach generalizes the predictive coding framework to dynamic inputs in a more biologically plausible way than self-supervised networks with non-local error-backpropagation. This was achieved simply by shifting the training paradigm to dynamic inputs, with little change in architecture and learning rule from static input-reconstructing Hebbian predictive coding networks.
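
    The sketch below illustrates, under loose assumptions, how temporal continuity can be exploited in such a layer: the latent state is carried over between consecutive frames of a transforming object with leaky dynamics, so slowly changing causes (object identity) come to dominate the representation while fast ones (position) are absorbed into the prediction errors. The toy sequence, time constant, and sizes are hypothetical and not taken from the paper.

```python
# Minimal sketch of learning invariance from temporal continuity in a
# predictive-coding layer. Sequence generation, time constants and sizes
# are illustrative assumptions, not the paper's implementation.
import numpy as np

rng = np.random.default_rng(1)
n_input, n_latent = 100, 20
W = 0.1 * rng.standard_normal((n_input, n_latent))

def train_on_sequence(frames, W, lr_r=0.1, lr_w=0.005, tau=0.9):
    r = np.zeros(W.shape[1])                 # latent state persists across frames
    for x in frames:                         # frames of one continuously moving object
        for _ in range(20):                  # settle r against the current frame
            e = x - W @ r
            r = tau * r + lr_r * (W.T @ e)   # leaky dynamics keep r slow across frames
        W += lr_w * np.outer(e, r)           # local Hebbian update from error and latent
    return W

# toy sequence: the same random "object" vector shifted (rolled) across positions
obj = rng.standard_normal(n_input)
frames = [np.roll(obj, shift) for shift in range(10)]
W = train_on_sequence(frames, W)
```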

    Multimodal Representation Learning for Place Recognition Using Deep Hebbian Predictive Coding

    Recognising familiar places is a competence required in many engineering applications that interact with the real world, such as robot navigation. Combining information from different sensory sources promotes robustness and accuracy of place recognition. However, mismatches in data registration, dimensionality, and timing between modalities remain challenging problems in multisensory place recognition. Spurious data generated by sensor drop-out in multisensory environments is particularly problematic and often resolved through ad hoc and brittle solutions. An effective approach to these problems is demonstrated by animals as they gracefully move through the world. Therefore, we take a neuro-ethological approach by adopting self-supervised representation learning based on a neuroscientific model of visual cortex known as predictive coding. We demonstrate how this parsimonious network algorithm, which is trained using a local learning rule, can be extended to combine visual and tactile sensory cues from a biomimetic robot as it naturally explores a visually aliased environment. The place recognition performance obtained using joint latent representations generated by the network is significantly better than contemporary representation learning techniques. Further, we see evidence of improved robustness in place recognition in the face of unimodal sensor drop-out. The proposed multimodal deep predictive coding algorithm is also linearly extensible to accommodate more than two sensory modalities, thereby providing an intriguing example of the value of neuro-biologically plausible representation learning for multimodal navigation.
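
    A minimal sketch of the fusion idea, under assumed shapes and rates: a shared joint code predicts both modality-specific inputs, inference descends the summed prediction errors, and a dropped sensor simply contributes no error. The names W_vis and W_tac and the drop-out handling are illustrative, not the paper's implementation.

```python
# Minimal sketch of a joint latent layer that fuses two modality-specific
# inputs (visual, tactile) by predicting both from a shared code, in the
# spirit of a multimodal predictive-coding hierarchy. Sensor drop-out is
# handled by masking that modality's prediction error. Shapes and learning
# rates are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_vis, n_tac, n_joint = 64, 32, 48
W_vis = 0.1 * rng.standard_normal((n_vis, n_joint))   # joint -> visual prediction
W_tac = 0.1 * rng.standard_normal((n_tac, n_joint))   # joint -> tactile prediction

def infer_joint(vis, tac, vis_ok=True, tac_ok=True, n_steps=50, lr=0.05):
    """Settle the joint code; a dropped modality contributes no error."""
    z = np.zeros(n_joint)
    for _ in range(n_steps):
        e_vis = (vis - W_vis @ z) if vis_ok else np.zeros(n_vis)
        e_tac = (tac - W_tac @ z) if tac_ok else np.zeros(n_tac)
        z += lr * (W_vis.T @ e_vis + W_tac.T @ e_tac - 0.01 * z)
    return z

vis = rng.standard_normal(n_vis)
tac = rng.standard_normal(n_tac)
z_full = infer_joint(vis, tac)                  # both sensors available
z_drop = infer_joint(vis, tac, tac_ok=False)    # tactile drop-out: visual-only inference
```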

    The Hippocampus Is Coupled with the Default Network during Memory Retrieval but Not during Memory Encoding

    The brain's default mode network (DMN) is activated during internally-oriented tasks and shows strong coherence in spontaneous rest activity. Despite a surge of recent interest, the functional role of the DMN remains poorly understood. Interestingly, the DMN activates during retrieval of past events but deactivates during encoding of novel events into memory. One hypothesis is that these opposing effects reflect a difference between attentional orienting towards internal events, such as retrieved memories, vs. external events, such as to-be-encoded stimuli. Another hypothesis is that hippocampal regions are coupled with the DMN during retrieval but decoupled from the DMN during encoding. The present fMRI study investigated these two hypotheses by combining a resting-state coherence analysis with a task that measured the encoding and retrieval of both internally-generated and externally-presented events. Results revealed that the main DMN regions were activated during retrieval but deactivated during encoding. Counter to the internal orienting hypothesis, this pattern was not modulated by whether memory events were internal or external. Consistent with the hippocampal coupling hypothesis, the hippocampus behaved like other DMN regions during retrieval but not during encoding. Taken together, our findings clarify the relationship between the DMN and the neural correlates of memory retrieval and encoding.

    Allostatic Control for Robot Behavior Regulation: A Comparative Rodent-Robot Study

    Rodents are optimal real-world foragers that regulate internal states, maintaining a dynamic stability with their surroundings. How these internal drive-based behaviors are regulated remains unclear. Based on the physiological notion of allostasis, we investigate a minimal control system able to approximate their behavior. Allostasis is the process of achieving stability with the environment through change, as opposed to homeostasis, which achieves it through constancy. Following this principle, the so-called allostatic control system orchestrates the interaction of the homeostatic modules by changing their desired values in order to achieve stability. We use a minimal number of subsystems and estimate the model parameters from rat behavioral data in three experimental setups: free exploration, presence of reward, and delivery of cues with reward-predictive value. From this analysis, we show that a rat is influenced by the shape of the arena in terms of its openness. We then use the estimated model configurations to control a simulated and a real robot, capturing essential properties of the observed rat behavior. The allostatic reactive control model is proposed as an augmentation of the Distributed Adaptive Control architecture and provides a further contribution towards the realization of an artificial rodent.
    Keywords: homeostasis, allostasis, rodent behavior, behavioral-based robotics
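
    A minimal sketch of the allostasis-over-homeostasis idea, with hypothetical drives and contexts: each homeostatic module produces a drive toward its setpoint, and an allostatic layer shifts those setpoints with context (e.g. when a reward cue appears) rather than keeping them constant. The specific modules, contexts, and gains below are assumptions, not the parameters fitted to the rat data.

```python
# Minimal sketch of allostatic control over homeostatic modules: each module
# drives behaviour toward its setpoint, while an allostatic layer shifts the
# setpoints depending on context. Drives, contexts and gains are illustrative
# assumptions, not the fitted model parameters from the rat experiments.
from dataclasses import dataclass

@dataclass
class HomeostaticModule:
    level: float      # current internal state (e.g. satiety, safety)
    setpoint: float   # desired value; the allostatic layer may change this
    gain: float = 0.1

    def drive(self) -> float:
        """Error signal pushing behaviour toward the setpoint."""
        return self.gain * (self.setpoint - self.level)

def allostatic_update(modules: dict, context: str) -> None:
    """Shift setpoints to achieve stability through change, e.g. raise the
    foraging setpoint when a cue with reward-predictive value is present."""
    if context == "reward_cue":
        modules["forage"].setpoint = 0.9
        modules["explore"].setpoint = 0.3
    elif context == "open_arena":
        modules["explore"].setpoint = 0.8

modules = {
    "forage": HomeostaticModule(level=0.5, setpoint=0.5),
    "explore": HomeostaticModule(level=0.2, setpoint=0.6),
}
allostatic_update(modules, "reward_cue")
behaviour = max(modules, key=lambda k: modules[k].drive())  # strongest drive wins
```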