11 research outputs found

    Automated mapping of virtual environments with visual predictive coding

    Full text link
    Humans construct internal cognitive maps of their environment directly from sensory inputs, without access to a system of explicit coordinates or distance measurements. While machine learning algorithms such as SLAM use specialized visual inference procedures to identify visual features and construct spatial maps from visual and odometry data, the general nature of cognitive maps in the brain suggests a unified algorithmic mapping strategy that can generalize to auditory, tactile, and linguistic inputs. Here, we demonstrate that predictive coding provides a natural and versatile neural network algorithm for constructing spatial maps using sensory data. We introduce a framework in which an agent navigates a virtual environment while engaging in visual predictive coding using a self-attention-equipped convolutional neural network. While learning a next-image prediction task, the agent automatically constructs an internal representation of the environment that quantitatively reflects distances. The internal map enables the agent to pinpoint its location relative to landmarks using only visual information. The predictive coding network generates a vectorized encoding of the environment that supports vector navigation, in which individual latent-space units delineate localized, overlapping neighborhoods in the environment. Broadly, our work introduces predictive coding as a unified algorithmic framework for constructing cognitive maps that can naturally extend to the mapping of auditory, sensorimotor, and linguistic inputs.
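
    The mapping recipe described in this abstract (a convolutional encoder with self-attention trained on next-image prediction, whose latent code doubles as an internal map) can be sketched in a few lines of PyTorch. The architecture, layer sizes, and L2 objective below are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class PredictiveCoder(nn.Module):
        """Convolutional encoder + self-attention + decoder for next-frame prediction."""
        def __init__(self, latent_dim=128):
            super().__init__()
            # Encoder: 64x64 RGB view -> 8x8 grid of latent_dim-dimensional features
            self.encoder = nn.Sequential(
                nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 32x32
                nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 16x16
                nn.Conv2d(64, latent_dim, 4, stride=2, padding=1),     # 8x8
            )
            # Self-attention over the 64 spatial positions of the latent grid
            self.attn = nn.MultiheadAttention(latent_dim, num_heads=4, batch_first=True)
            # Decoder predicts the *next* view from the attended latent grid
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(latent_dim, 64, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
            )

        def forward(self, frame):
            z = self.encoder(frame)                       # (B, C, 8, 8)
            b, c, h, w = z.shape
            tokens = z.flatten(2).transpose(1, 2)         # (B, 64, C)
            attended, _ = self.attn(tokens, tokens, tokens)
            z_map = attended.transpose(1, 2).reshape(b, c, h, w)
            return self.decoder(z_map), z_map             # prediction + latent "map" code

    model = PredictiveCoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    frames = torch.rand(9, 3, 64, 64)                     # stand-in trajectory of views
    for step in range(3):                                 # next-image prediction objective
        pred, latent = model(frames[:-1])                 # predict frame t+1 from frame t
        loss = nn.functional.mse_loss(pred, frames[1:])
        opt.zero_grad()
        loss.backward()
        opt.step()

    After training on trajectories from the virtual environment, the latent codes (z_map here) are what one would probe for map-like structure, e.g. whether distances in latent space track physical distances between viewpoints.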

    Neural Networks with Recurrent Generative Feedback

    Get PDF
    Neural networks are vulnerable to input perturbations such as additive noise and adversarial attacks. In contrast, human perception is much more robust to such perturbations. The Bayesian brain hypothesis states that human brains use an internal generative model to update the posterior beliefs of the sensory input. This mechanism can be interpreted as a form of self-consistency between the maximum a posteriori (MAP) estimation of an internal generative model and the external environment. Inspired by this hypothesis, we enforce self-consistency in neural networks by incorporating generative recurrent feedback. We instantiate this design on convolutional neural networks (CNNs). The proposed framework, termed Convolutional Neural Networks with Feedback (CNN-F), introduces generative feedback with latent variables into existing CNN architectures, where consistent predictions are made through alternating MAP inference under a Bayesian framework. In the experiments, CNN-F shows considerably improved adversarial robustness over conventional feedforward CNNs on standard benchmarks.
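
    A minimal sketch of the self-consistency loop described above, under stated assumptions: a feedforward CNN proposes a latent code, a generative decoder maps it back to image space, and inference alternates between the two passes. The architectures are illustrative, and the simple input-blending update stands in for the paper's alternating MAP inference; this is not the CNN-F implementation.

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(                  # recognition path: image -> latent code
        nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(64 * 8 * 8, 128),
    )
    decoder = nn.Sequential(                  # generative feedback: latent code -> image
        nn.Linear(128, 64 * 8 * 8), nn.ReLU(),
        nn.Unflatten(1, (64, 8, 8)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )
    classifier = nn.Linear(128, 10)           # label head on the shared latent code

    def alternating_inference(x, steps=5, mix=0.5):
        """Alternate bottom-up and top-down passes on a (possibly perturbed) input x."""
        x_hat = x
        for _ in range(steps):
            z = encoder(x_hat)                # bottom-up: infer latent from current estimate
            x_gen = decoder(z)                # top-down: regenerate the input from the latent
            x_hat = mix * x + (1 - mix) * x_gen   # pull the estimate toward self-consistency
        return classifier(z), x_hat

    noisy = torch.rand(4, 3, 32, 32)          # stand-in perturbed inputs
    logits, cleaned = alternating_inference(noisy)

    The intuition is that an adversarially perturbed input is unlikely to be reproduced by the generative path, so iterating toward agreement between the two paths should suppress the perturbation before classification.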

    A brain network basis of Fragile X syndrome behavioral penetrance determined by X chromosome inactivation in female mice

    No full text
    X-chromosome inactivation (XCI) in females is vital for normal brain function and cognition, as many X-linked genetic mutations lead to mental retardation and autism spectrum disorders, such as the fragile X syndrome (FXS). However, the degree to which XCI regulates disease presentation has been poorly investigated. To study this regulation in the mouse, here we quantified the brain-wide composition of active-X cells at single-cell resolution using an X-linked MECP2-EGFP allele with known parent-of-origin. We present evidence that whole brains, including all regions, on average favor maternal-X-active cells by 20%, or 8 million cells. This bias was conserved in heterozygous FXS mutant mice and corresponded to disease penetrance in maternal, but not paternal, FMR1-null mice. To localize the physical source of behavioral penetrance, brain-wide correlational screens mapped mouse performance in the open-field sensorimotor and 3-chamber sociability assays to cell densities in putative sensorimotor (e.g. sensory hindbrain, thalamus, globus pallidus) and sociability (e.g. visual/entorhinal cortices, bed nucleus of the stria terminalis, medial preoptic area) behavioral circuits, respectively. Overall, 50%/50% healthy/mutant cell-density ratios in these brain networks were required for disease presentation in each behavior. These results suggest that female behavioral penetrance of X-linked disease is regulated at the distributed level of mutant cell density in behavioral circuits, which is set by XCI and is subject to parent-of-origin effects. This work provides a novel explanation for the broad and varied behavioral phenotypes commonly featured in female patients debilitated by X-linked mental disorders and may offer new entry points for behavioral therapeutics.
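
    The brain-wide correlational screen described above can be illustrated with a small sketch: for each brain region, correlate the per-animal mutant-cell density with a behavioral score, and keep regions whose correlation survives a multiple-comparisons threshold. The data, array shapes, and Bonferroni-style correction are hypothetical stand-ins, not the study's actual pipeline.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_mice, n_regions = 24, 300
    density = rng.uniform(0.2, 0.8, size=(n_mice, n_regions))  # mutant-cell density per region
    behavior = rng.normal(size=n_mice)                          # e.g. open-field or 3-chamber score

    results = []
    for r in range(n_regions):
        rho, p = stats.pearsonr(density[:, r], behavior)        # region-wise correlation
        results.append((r, rho, p))

    alpha = 0.05 / n_regions                                    # Bonferroni-corrected threshold
    candidate_regions = [(r, rho) for r, rho, p in results if p < alpha]
    print(f"{len(candidate_regions)} regions pass the corrected threshold")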