    A Deep Predictive Coding Network for Inferring Hierarchical Causes Underlying Sensory Inputs

    Predictive coding has been proposed as a mechanism underlying sensory processing in the brain. In computational models of predictive coding, the brain is described as a machine that constructs and continuously adapts a generative model based on the stimuli received from the external environment. It uses this model to infer the causes that generated the received stimuli. However, it is not clear how predictive coding can be used to construct deep neural network models of the brain while complying with the architectural constraints imposed by the brain. Here, we describe an algorithm to construct a deep generative model that can be used to infer causes behind the stimuli received from the external environment. Specifically, we train a deep neural network on real-world images in an unsupervised learning paradigm. To understand the capacity of the network with regard to modeling the external environment, we studied the causes inferred using the trained model on images of objects that were not used in training. Despite the novel features of these objects, the model is able to infer their causes. Furthermore, the reconstructions of the original images obtained from the generative model using these inferred causes preserve important details of these objects.
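
    As a toy illustration of the inference step described above (a minimal sketch assuming a single linear generative layer, not the authors' implementation; all names and sizes are invented), latent causes can be inferred by gradient descent on the prediction error of the generative model:

        import numpy as np

        rng = np.random.default_rng(0)
        n_input, n_causes = 16, 4
        W = rng.normal(scale=0.1, size=(n_input, n_causes))  # generative weights
        x = rng.normal(size=n_input)                         # sensory input

        r = np.zeros(n_causes)       # inferred causes, initialized at zero
        lr = 0.1
        for _ in range(100):
            error = x - W @ r        # prediction error at the input layer
            r += lr * W.T @ error    # adapt causes to better explain the input

        print("residual error:", float(np.linalg.norm(x - W @ r)))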

    The Construction of Semantic Memory: Grammar-Based Representations Learned from Relational Episodic Information

    After acquisition, memories undergo a process of consolidation, making them more resistant to interference and brain injury. Memory consolidation involves systems-level interactions, most importantly between the hippocampus and associated structures, which take part in the initial encoding of memories, and the neocortex, which supports long-term storage. This dichotomy parallels the contrast between episodic memory (tied to the hippocampal formation), collecting an autobiographical stream of experiences, and semantic memory, a repertoire of facts and statistical regularities about the world, involving the neocortex at large. Experimental evidence points to a gradual transformation of memories, following encoding, from an episodic to a semantic character. This may require an exchange of information between different memory modules during inactive periods. We propose a theory for such interactions and for the formation of semantic memory, in which episodic memory is encoded as relational data. Semantic memory is modeled as a modified stochastic grammar, which learns to parse episodic configurations expressed as an association matrix. The grammar produces tree-like representations of episodes, describing the relationships between their main constituents at multiple levels of categorization, based on its current knowledge of world regularities. These regularities are learned by the grammar from episodic memory information, through an expectation-maximization procedure analogous to the inside-outside algorithm for stochastic context-free grammars. We propose that a Monte Carlo sampling version of this algorithm can be mapped onto the dynamics of “sleep replay” of previously acquired information in the hippocampus and neocortex, and that the model can reproduce several properties of semantic memory such as decontextualization, top-down processing, and the creation of schemata.
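
    As a loose, toy analogue of parsing an association matrix into a tree-like representation (this greedy pairwise merging is far simpler than the paper's expectation-maximization/inside-outside procedure; the episode constituents and association values are invented for illustration):

        import numpy as np

        labels = ["kitchen", "coffee", "mug", "desk"]   # hypothetical constituents
        A = np.array([[0.0, 0.9, 0.6, 0.1],
                      [0.9, 0.0, 0.7, 0.2],
                      [0.6, 0.7, 0.0, 0.3],
                      [0.1, 0.2, 0.3, 0.0]])            # symmetric associations

        nodes, M = list(labels), A.astype(float)
        while len(nodes) > 1:
            S = M.copy()
            np.fill_diagonal(S, -np.inf)                # ignore self-association
            i, j = np.unravel_index(np.argmax(S), S.shape)
            merged = (nodes[i], nodes[j])               # new internal tree node
            row = (M[i] + M[j]) / 2                     # average its associations
            M = np.delete(np.delete(M, [i, j], 0), [i, j], 1)
            rest = np.delete(row, [i, j])
            M = np.vstack([np.hstack([M, rest[:, None]]),
                           np.append(rest, 0.0)])
            nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]

        print(nodes[0])   # nested tuple: ('desk', ('mug', ('kitchen', 'coffee')))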

    Towards a Unified View on Pathways and Functions of Neural Recurrent Processing

    There are three neural feedback pathways to the primary visual cortex (V1): corticocortical, pulvinocortical, and cholinergic. What are the respective functions of these three projections? Possible functions range from contextual modulation of stimulus processing and feedback of high-level information to predictive processing (PP). How are these functions subserved by the different pathways, and can they be integrated into an overarching theoretical framework? We propose that corticocortical and pulvinocortical connections are involved in all three functions, whereas the role of cholinergic projections is limited by their slow response to stimuli. PP provides a broad explanatory framework under which stimulus-context modulation and high-level processing are subsumed, involving multiple feedback pathways that provide mechanisms for inferring and interpreting what sensory inputs are about.

    Sensory Processing Across Conscious and Nonconscious Brain States: From Single Neurons to Distributed Networks for Inferential Representation

    Neuronal activity is markedly different across brain states: it varies from desynchronized activity during wakefulness to the synchronous alternation between active and silent states characteristic of deep sleep. Surprisingly, limited attention has been paid to investigating how brain states affect sensory processing. While it was long assumed that the brain was mostly disconnected from external stimuli during sleep, an increasing number of studies indicate that sensory stimuli continue to be processed across all brain states, albeit differently. In this review article, we first discuss what constitutes a brain state. We argue that, next to global behavioral states such as wakefulness and sleep, there is a concomitant need to distinguish bouts of oscillatory dynamics with specific global/local activity patterns and lasting a few hundred milliseconds, as these can lead to the same sensory stimulus being either perceived or not. We define these short-lasting bouts as micro-states. We proceed to characterize how sensory-evoked neural responses vary between conscious and nonconscious states, focusing on two complementary aspects: neuronal ensembles and inter-areal communication. First, we review which features of ensemble activity are conducive to perception and how these features vary across brain states. Properties of neuronal ensembles such as heterogeneity, sparsity, and synchronicity are especially considered as essential correlates of conscious processing. Second, we discuss how inter-areal communication varies across brain states and how this may affect brain operations and sensory processing. Finally, we discuss predictive coding (PC) and the concept of multi-level representations as a key framework for understanding conscious sensory processing. In this framework, the brain implements conscious representations as inferences about world states across multiple representational levels. In this representational hierarchy, low-level inference may be carried out nonconsciously, whereas high levels integrate across different sensory modalities and larger spatial scales, correlating with conscious processing. This inferential framework is used to interpret several cellular and population-level findings in the context of brain states, and we briefly compare its implications to two other theories of consciousness. In conclusion, this review article provides foundations to guide future studies aiming to uncover the mechanisms of sensory processing and perception across brain states.

    Deep Gated Hebbian Predictive Coding Accounts for Emergence of Complex Neural Response Properties Along the Visual Cortical Hierarchy

    Predictive coding provides a computational paradigm for modeling perceptual processing as the construction of representations accounting for the causes of sensory inputs. Here, we developed a scalable, deep network architecture for predictive coding that is trained using a gated Hebbian learning rule and mimics the feedforward and feedback connectivity of the cortex. After training on image datasets, the models formed latent representations in higher areas that allowed reconstruction of the original images. We analyzed low- and high-level properties such as orientation selectivity, object selectivity, and sparseness of neuronal populations in the model. As reported experimentally, image selectivity increased systematically across ascending areas in the model hierarchy. Depending on the strength of regularization factors, sparseness also increased from lower to higher areas. The results suggest a rationale as to why experimental results on sparseness across the cortical hierarchy have been inconsistent. Finally, representations for different object classes became more distinguishable from lower to higher areas. Thus, deep neural networks trained using a gated Hebbian formulation of predictive coding can reproduce several properties associated with neuronal responses along the visual cortical hierarchy.
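
    A minimal sketch of what such a learning scheme might look like (assumptions, not the paper's exact model or gating mechanism: a single layer, linear predictions, and a binary gate that enables plasticity only once inference has settled):

        import numpy as np

        rng = np.random.default_rng(1)
        n_in, n_rep = 64, 16
        W = rng.normal(scale=0.1, size=(n_in, n_rep))   # generative weights

        def infer(x, W, steps=50, lr_r=0.1):
            """Settle a latent representation by reducing the prediction error."""
            r = np.zeros(W.shape[1])
            for _ in range(steps):
                r += lr_r * W.T @ (x - W @ r)
            return r

        lr_w = 0.01
        for _ in range(200):                    # random inputs stand in for images
            x = rng.normal(size=n_in)
            r = infer(x, W)
            gate = 1.0                          # plasticity gate: open after settling
            e = x - W @ r                       # residual prediction error
            W += lr_w * gate * np.outer(e, r)   # gated Hebbian update (error * activity)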

    Neural Signatures of Intransitive Preferences

    It is often assumed that decisions are made by rank-ordering, and thus comparing, the available choice options based on their subjective values. Rank-ordering requires that the alternatives’ subjective values are mentally represented at least on an ordinal scale. Because one alternative cannot be at the same time better and worse than another alternative, choices should satisfy transitivity (if alternative A is preferred over B, and B is preferred over C, A should be preferred over C). Yet, individuals often demonstrate striking violations of transitivity (preferring C over A). We used functional magnetic resonance imaging to study the neural correlates of intransitive choices between gambles varying in the magnitude and probability of financial gains. Behavioral intransitivities were common. They occurred because participants did not evaluate the gambles independently, but in comparison with the alternative gamble presented. Neural value signals in prefrontal and parietal cortex were not ordinal-scaled and transitive, but reflected fluctuations in the gambles’ local, pairing-dependent preference ranks. Detailed behavioral analysis of gamble preferences showed that, depending on the difference in the offered gambles’ attributes, participants gave variable priority to magnitude or probability and thus shifted between preferring richer or safer gambles. The variable, context-dependent priority given to magnitude and probability was tracked by the insula (magnitude) and posterior cingulate (probability). Their activation balance may reflect the individual decision rules leading to intransitivities. Thus, the phenomenon of intransitivity is reflected in the organization of the neural systems involved in risky decision-making.
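
    The transitivity notion at stake can be made concrete with a small check (illustrative only; the options and choices below are invented): a set of pairwise choices is transitive exactly when some single rank order reproduces every observed choice.

        from itertools import permutations

        # observed pairwise choices: A over B, B over C, but C over A
        prefs = {("A", "B"): "A", ("B", "C"): "B", ("A", "C"): "C"}

        def is_transitive(prefs, options=("A", "B", "C")):
            # transitive iff some total order reproduces every observed choice
            for order in permutations(options):
                rank = {o: i for i, o in enumerate(order)}
                if all(rank[winner] < rank[a if winner != a else b]
                       for (a, b), winner in prefs.items()):
                    return True
            return False

        print(is_transitive(prefs))   # False: these choices form a cycle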

    Local minimization of prediction errors drives learning of invariant object representations in a generative network model of visual perception

    The ventral visual processing hierarchy of the cortex needs to fulfill at least two key functions: perceived objects must be mapped to high-level representations invariant to the precise viewing conditions, and a generative model must be learned that allows, for instance, occluded information to be filled in, guided by visual experience. Here, we show how a multilayered predictive coding network can learn to recognize objects from the bottom up and to generate specific representations via a top-down pathway through a single learning rule: the local minimization of prediction errors. Trained on sequences of continuously transformed objects, neurons in the highest network area become tuned to object identity invariant of precise position, comparable to inferotemporal neurons in macaques. Drawing on this, the dynamic properties of invariant object representations reproduce experimentally observed hierarchies of timescales from low to high levels of the ventral processing stream. The predicted faster decorrelation of error-neuron activity compared to representation neurons is of relevance for the experimental search for neural correlates of prediction errors. Lastly, the generative capacity of the network is confirmed by reconstructing specific object images, robust to partial occlusion of the inputs. By learning invariance from temporal continuity within a generative model, the approach generalizes the predictive coding framework to dynamic inputs in a more biologically plausible way than self-supervised networks with non-local error backpropagation. This was achieved simply by shifting the training paradigm to dynamic inputs, with little change in architecture and learning rule relative to static input-reconstructing Hebbian predictive coding networks.
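
    A minimal sketch of the training idea (an assumed setup with a single linear layer and a crude circular-shift "transformation"; the real model is multilayered and trained on object image sequences): the latent state is carried over between consecutive frames, so it is nudged toward features that remain stable across the transformation, while the weights follow a purely local error-times-activity update.

        import numpy as np

        rng = np.random.default_rng(2)
        n_in, n_rep, lr_r, lr_w = 32, 8, 0.05, 0.005
        W = rng.normal(scale=0.1, size=(n_in, n_rep))

        base = rng.normal(size=n_in)              # one 'object'
        for epoch in range(100):
            r = np.zeros(n_rep)                   # reset between sequences
            for t in range(10):                   # continuously transformed views
                x = np.roll(base, t)              # stand-in for a smooth transform
                for _ in range(20):               # settle r, warm-started from t-1
                    r += lr_r * W.T @ (x - W @ r)
                W += lr_w * np.outer(x - W @ r, r)  # local prediction-error rule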