
    Specific disruption of hippocampal mossy fiber synapses in a mouse model of familial Alzheimer's disease.

    The earliest stages of Alzheimer's disease (AD) are characterized by deficits in memory and cognition indicating hippocampal pathology. While it is now recognized that synapse dysfunction precedes the hallmark pathological findings of AD, it is unclear whether specific hippocampal synapses are particularly vulnerable. Since the mossy fiber (MF) synapse between the dentate gyrus (DG) and CA3 regions underlies critical functions disrupted in AD, we utilized serial block-face electron microscopy (SBEM) to analyze MF microcircuitry in a mouse model of familial Alzheimer's disease (FAD). FAD mutant MF terminal complexes were severely disrupted compared to controls: they were smaller, contacted fewer postsynaptic spines, and had greater numbers of presynaptic filopodial processes. Multi-headed CA3 dendritic spines in the FAD mutant condition were reduced in complexity and had significantly smaller sites of synaptic contact. Notably, there was no change in the volume of classical dendritic spines at neighboring inputs to CA3 neurons, suggesting input-specific defects early in the course of AD-related pathology. These data indicate a specific vulnerability of the DG-CA3 network in AD pathogenesis and demonstrate the utility of SBEM for assessing circuit-specific alterations in mouse models of human disease.

    Robust spatial memory maps encoded in networks with transient connections

    The spiking activity of principal cells in the mammalian hippocampus encodes an internalized neuronal representation of the ambient space: a cognitive map. Once learned, such a map enables the animal to navigate a given environment for a long period. However, the neuronal substrate that produces this map remains transient: the synaptic connections in the hippocampus and in the downstream neuronal networks never cease to form and to deteriorate at a rapid rate. How can the brain maintain a robust, reliable representation of space using a network that constantly changes its architecture? Here, we demonstrate, using novel Algebraic Topology techniques, that the cognitive map's stability is a generic, emergent phenomenon. The model allows evaluating the effect produced by specific physiological parameters, e.g., the distribution of connections' decay times, on the properties of the cognitive map as a whole. It also points out that spatial memory deterioration caused by weakening or excessive loss of the synaptic connections may be compensated by stimulating the neuronal activity. Lastly, the model explicates the functional importance of the complementary learning systems for processing spatial information at different levels of spatiotemporal granularity, by establishing three complementary timescales at which spatial information unfolds. Thus, the model provides a principled insight into how the brain can develop a reliable representation of the world, learn and retain memories despite the complex plasticity of the underlying networks, and allows studying how instabilities and memory deterioration mechanisms may affect the learning process. Comment: 24 pages, 10 figures, 4 supplementary figures
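
    To make the idea concrete, here is a minimal Python sketch of a "flickering" coactivity graph: simulated place cells cofire as an animal explores a linear track, each coactivity link decays after a random lifetime, and the zeroth Betti number (the number of connected components) of the resulting graph is tracked over time. This is only a toy stand-in for the paper's Algebraic Topology machinery, which analyzes the full coactivity complex; the cell count, place-field width, and exponential link-decay distribution are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CELLS, T_STEPS = 60, 6000
centers = rng.uniform(0.0, 1.0, N_CELLS)  # place-field centers on a unit track (assumed)
FIELD_W = 0.08                             # place-field width (assumed)
MEAN_LIFE = 1500                           # mean link lifetime in steps (assumed)

def betti0(edges, n):
    """Zeroth Betti number = number of connected components (depth-first search)."""
    adj = [[] for _ in range(n)]
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)
    seen, comps = [False] * n, 0
    for s in range(n):
        if not seen[s]:
            comps += 1
            stack = [s]
            seen[s] = True
            while stack:
                for v in adj[stack.pop()]:
                    if not seen[v]:
                        seen[v] = True
                        stack.append(v)
    return comps

expiry = {}   # (i, j) -> time step at which this transient link decays
pos = 0.5
for t in range(T_STEPS):
    pos = float(np.clip(pos + rng.normal(0.0, 0.02), 0.0, 1.0))  # random-walk trajectory
    rates = np.exp(-((centers - pos) ** 2) / (2 * FIELD_W ** 2))
    active = np.flatnonzero(rng.random(N_CELLS) < rates)         # stochastic firing
    for a in range(len(active)):                                 # coactive pairs create or
        for b in range(a + 1, len(active)):                      # refresh a transient link
            expiry[(int(active[a]), int(active[b]))] = t + rng.exponential(MEAN_LIFE)
    expiry = {e: s for e, s in expiry.items() if s > t}          # prune decayed links
    if t % 1000 == 0:
        print(f"step {t}: {len(expiry)} live links, components = {betti0(expiry, N_CELLS)}")
```

    With settings like these, the component count tends to drop toward one as the track is covered and then stay there even though individual links keep turning over, which conveys the flavor of the stability result: the map's large-scale topology outlives any particular connection.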

    Replay as wavefronts and theta sequences as bump oscillations in a grid cell attractor network.

    Grid cells fire in sequences that represent rapid trajectories in space. During locomotion, theta sequences encode sweeps in position starting slightly behind the animal and ending ahead of it. During quiescence and slow wave sleep, bouts of synchronized activity represent long trajectories called replays, which are well established in place cells and have recently been reported in grid cells. Theta sequences and replay are hypothesized to facilitate many cognitive functions, but their underlying mechanisms are unknown. One mechanism proposed for grid cell formation is the continuous attractor network. We demonstrate that this established architecture naturally produces theta sequences and replay as distinct consequences of modulating external input. Driving inhibitory interneurons at the theta frequency causes attractor bumps to oscillate in speed and size, which gives rise to theta sequences and phase precession, respectively. Decreasing input drive to all neurons produces traveling wavefronts of activity that are decoded as replays.
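
    As a rough illustration of one part of this mechanism, the sketch below implements a one-dimensional ring attractor (the paper's model is a two-dimensional grid cell attractor) in which a sinusoidal, theta-frequency subtractive drive, standing in for rhythmic inhibitory input, makes the activity bump shrink and grow each cycle. All parameters are illustrative assumptions; the speed oscillation and the replay wavefronts are not modeled here.

```python
import numpy as np

N = 128                                            # neurons on a ring
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
d = np.cos(phi[:, None] - phi[None, :])
W = 2.0 * np.exp(8.0 * (d - 1.0)) - 1.0            # local excitation, broad inhibition (assumed)

rng = np.random.default_rng(1)
r = 0.1 * rng.random(N)                            # small random initial rates
dt, tau, f_theta = 0.001, 0.010, 8.0               # 1 ms step, 10 ms time constant, 8 Hz theta
scale = 2 * np.pi / N                              # continuum-integral normalization

for step in range(3000):
    t = step * dt
    # theta-rhythmic drive to the inhibitory population, modeled as a subtractive term
    inhib = 0.25 * (1.0 + np.sin(2 * np.pi * f_theta * t))
    inp = scale * (W @ r) + 0.6 - inhib
    r += dt / tau * (-r + np.maximum(inp, 0.0))    # rate dynamics with rectification
    if step % 300 == 0:
        width = int((r > 0.2 * r.max()).sum())
        print(f"t={t:.2f}s  peak rate={r.max():.2f}  bump width={width} cells")
```

    The printed bump width waxes and wanes at the theta period: the size oscillation the paper links to phase precession, while its full model additionally shows speed oscillations (theta sequences) and, under reduced drive, traveling wavefronts (replays).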

    Lifelong Learning of Spatiotemporal Representations with Dual-Memory Recurrent Self-Organization

    Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
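
    The growth rule at the heart of such architectures can be sketched compactly. Below is a toy, Growing-When-Required-style network in Python: a node is added whenever no existing prototype matches the input well enough, and two instances with different insertion thresholds play the roles of a fine-grained episodic memory and a compact semantic memory, consolidated by replaying episodic prototypes. This is an assumption-laden simplification (no recurrence, no habituation counters, no task-driven regulation of plasticity), not the authors' implementation.

```python
import numpy as np

class GWR:
    """Toy Growing-When-Required network: grow a node when no existing
    prototype matches the input well enough, otherwise adapt the winner."""
    def __init__(self, dim, a_thresh, eps=0.1, rng=None):
        self.rng = rng or np.random.default_rng(0)
        self.W = 0.01 * self.rng.normal(size=(2, dim))   # two seed prototypes
        self.a_thresh = a_thresh                         # insertion threshold
        self.eps = eps                                   # winner learning rate

    def step(self, x):
        dists = np.linalg.norm(self.W - x, axis=1)
        b = int(np.argmin(dists))
        if np.exp(-dists[b]) < self.a_thresh:            # poor match: grow a node
            self.W = np.vstack([self.W, (self.W[b] + x) / 2.0])
        else:                                            # good match: adapt winner
            self.W[b] += self.eps * (x - self.W[b])

rng = np.random.default_rng(42)
episodic = GWR(dim=2, a_thresh=0.9, rng=rng)   # high threshold -> fine-grained instances
semantic = GWR(dim=2, a_thresh=0.5, rng=rng)   # low threshold -> compact categories

# a stream of "object instances": noisy samples around three category centers
centers = np.array([[0.0, 0.0], [3.0, 0.0], [0.0, 3.0]])
for epoch in range(5):
    for c in centers:
        for _ in range(100):
            episodic.step(c + 0.3 * rng.normal(size=2))
    # consolidation: the episodic memory replays its prototypes to the
    # semantic memory in the absence of external sensory input
    for w in rng.permutation(episodic.W):
        semantic.step(w)

print(f"episodic nodes: {len(episodic.W)}, semantic nodes: {len(semantic.W)}")
```

    Running this prints many more episodic than semantic nodes, mirroring the division of labor described above: the episodic store grows freely to cover instances, while the semantic store compresses the replayed experience into a few category-level prototypes.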