1,826 research outputs found
The coordinating influence of thalamic nucleus reuniens on sleep oscillations in cortical and hippocampal structures – relevance to memory consolidation and sleep structure
Sleep is a fascinating and somewhat mysterious behavior. Not only do so-called "higher" animals such as mammals sleep; even simpler organisms like jellyfish display rhythmic periods of quiescence that are interpreted as sleep. Although sleep is nearly ubiquitous across the animal kingdom, its function is still not fully understood. We do know, however, that the brain is central to initiating and maintaining this state and that it remains highly active during sleep. Research on neural oscillations over the last 90 years has revealed that the brain displays distinct oscillatory patterns during sleep, and their specific functions, such as memory consolidation and communication between different brain regions, are slowly being brought to light. For example, it has been argued that newly formed memories are initially stored in, or at least dependent on, the hippocampus and are later transferred to the neocortex, or become hippocampus-independent as they are stabilized in the cortex. A part of the thalamus, the nucleus reuniens thalami, may be involved in this process, as it is an anatomical relay between cortex and hippocampus. The aim of my PhD project was to investigate the coupling of neural oscillations between prefrontal cortex, thalamus, and hippocampus in both a descriptive and a manipulative way. Specifically, we investigated the coupling between the prelimbic cortex, the nucleus reuniens of the thalamus, and the CA1 region of the hippocampus during unperturbed natural sleep, sleep after sleep deprivation, and sleep with increased mnemonic demands after a learning task. Lastly, we optogenetically manipulated the nucleus reuniens during sleep to assess its role as a synchronizing link between prefrontal cortex and hippocampus.
We described the coupling of corticothalamic slow waves and spindles with hippocampal ripples by quantifying how often these events co-occur, characterizing the phase-locking of ripples to slow waves and spindles, and determining which oscillation drives the other. Next, we found that the spiking activity of the nucleus reuniens is coupled to ripples and cortical slow waves. Lastly, optogenetic manipulation showed that the nucleus reuniens is involved in the precise phase-event coupling, in the co-occurrence of these events, and in the oscillatory drive between cortex and hippocampus. However, the effects we found on neuro-oscillatory coupling were not accompanied by a change in memory performance after a learning task.
Consolidation of long-term memory: Evidence and alternatives.
Memory loss in retrograde amnesia has long been held to be larger for recent periods than for remote periods, a pattern usually referred to as the Ribot gradient. One explanation for this gradient is consolidation of long-term memories. Several computational models of such a process have shown how consolidation can explain characteristics of amnesia, but they have not elucidated how consolidation must be envisaged. Here findings are reviewed that shed light on how consolidation may be implemented in the brain. Moreover, consolidation is contrasted with alternative theories of the Ribot gradient. Consolidation theory, multiple trace theory, and semantization can all handle some findings well but not others. Conclusive evidence for or against consolidation thus remains to be found.
Continual Lifelong Learning with Neural Networks: A Review
Humans and animals have the ability to continually acquire, fine-tune, and
transfer knowledge and skills throughout their lifespan. This ability, referred
to as lifelong learning, is mediated by a rich set of neurocognitive mechanisms
that together contribute to the development and specialization of our
sensorimotor skills as well as to long-term memory consolidation and retrieval.
Consequently, lifelong learning capabilities are crucial for autonomous agents
interacting in the real world and processing continuous streams of information.
However, lifelong learning remains a long-standing challenge for machine
learning and neural network models since the continual acquisition of
incrementally available information from non-stationary data distributions
generally leads to catastrophic forgetting or interference. This limitation
represents a major drawback for state-of-the-art deep neural network models
that typically learn representations from stationary batches of training data,
thus without accounting for situations in which information becomes
incrementally available over time. In this review, we critically summarize the
main challenges linked to lifelong learning for artificial learning systems and
compare existing neural network approaches that alleviate, to different
extents, catastrophic forgetting. We discuss well-established and emerging
research motivated by lifelong learning factors in biological systems such as
structural plasticity, memory replay, curriculum and transfer learning,
intrinsic motivation, and multisensory integration.
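Of the biologically motivated strategies the review surveys, memory replay is perhaps the simplest to illustrate. Below is a minimal sketch, not taken from the review itself, of rehearsal with a reservoir-sampled replay buffer; the class name, capacity, and toy task loop are illustrative assumptions.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling.

    Rehearsal interleaves stored old examples with new data, approximating
    training on the joint (stationary) distribution, which is one way to
    reduce catastrophic forgetting under non-stationary data streams.
    """

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0  # total examples observed so far
        self.rng = random.Random(seed)

    def add(self, example):
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            # Reservoir sampling: every example seen so far is kept
            # with equal probability capacity / seen.
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

# During continual training, each gradient step would mix the current
# task's batch with a batch replayed from the buffer.
buf = ReplayBuffer(capacity=100)
for task_id in range(5):          # five sequential "tasks"
    for i in range(1000):
        buf.add((task_id, i))
mixed = buf.sample(32)            # replayed batch spanning old tasks
```

Because the reservoir keeps each past example with equal probability, the replayed batch remains an unbiased sample of the whole stream even though the buffer is small.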
Delayed Onset of a Daytime Nap Facilitates Retention of Declarative Memory
BACKGROUND: Learning followed by a period of sleep, even as little as a nap, promotes memory consolidation. It is now generally recognized that sleep facilitates the stabilization of information acquired prior to sleep. However, the temporal nature of the effect of sleep on retention of declarative memory is yet to be understood. We examined the impact of a delayed nap onset on the recognition of neutral pictorial stimuli with an added spatial component. METHODOLOGY/PRINCIPAL FINDINGS: Participants completed an initial study session involving 150 neutral pictures of people, places, and objects. Immediately following the picture presentation, participants were asked to make recognition judgments on a subset of "old", previously seen, pictures versus intermixed "new" pictures. Participants were then assigned to one of four groups, who either took a 90-minute nap immediately, 2 hours, or 4 hours after learning, or remained awake for the duration of the experiment. Six hours after initial learning, participants were again tested on the remaining "old" pictures, with "new" pictures intermixed. CONCLUSIONS/SIGNIFICANCE: Interestingly, we found a stabilizing benefit of sleep on the memory trace, reflected as a significant negative correlation between the average time elapsed before napping and the decline in performance from test to retest (p = .001). We found a significant interaction between the groups and their performance from test to retest (p = .010), with the 4-hour delay group performing significantly better than both those who slept immediately and those who remained awake (p = .044, p = .010, respectively). Analysis of sleep data revealed a significant positive correlation between the amount of slow wave sleep (SWS) achieved and the length of the delay before sleep onset (p = .048).
The findings add to the understanding of memory processing in humans, suggesting that factors such as waking processing and homeostatic increases in need for sleep over time modulate the importance of sleep to consolidation of neutral declarative memories
Slow wave sleep in naps supports episodic memories in early childhood
Naps have been shown to benefit visuospatial learning in early childhood. This benefit has been associated with sleep spindles during the nap. However, whether young children's naps and their accompanying physiology benefit other forms of declarative learning is unknown. Using a novel storybook task, we found performance in children (N = 22, mean age = 51.23 months) was better following a nap compared to performance following an equivalent interval spent awake. Moreover, performance remained better the following day if a nap followed learning. Change in post-nap performance was positively associated with the amount of time spent in slow wave sleep during the nap. This suggests that slow wave sleep in naps may support episodic memory consolidation in early childhood. Taken in conjunction with prior work, these results suggest that multiple features of brain physiology during naps may contribute to declarative memory processing in early childhood.
FedET: A Communication-Efficient Federated Class-Incremental Learning Framework Based on Enhanced Transformer
Federated Learning (FL) has attracted wide attention because it enables
decentralized learning while ensuring data privacy. However, most existing
methods unrealistically assume that the classes encountered by local clients
are fixed over time. Once new classes are learned, this assumption makes the
model's catastrophic forgetting of old classes significantly more severe. Moreover,
due to the limitation of communication cost, it is challenging to use
large-scale models in FL, which will affect the prediction accuracy. To address
these challenges, we propose a novel framework, Federated Enhanced Transformer
(FedET), which simultaneously achieves high accuracy and low communication
cost. Specifically, FedET uses Enhancer, a tiny module, to absorb and
communicate new knowledge, and applies pre-trained Transformers combined with
different Enhancers to ensure high precision on various tasks. To address local
forgetting caused by new classes of new tasks and global forgetting brought by
non-i.i.d (non-independent and identically distributed) class imbalance across
different local clients, we propose an Enhancer distillation method to correct
the imbalance between old and new knowledge and mitigate the non-i.i.d. problem.
Experimental results demonstrate that FedET's average accuracy on
representative benchmark datasets is 14.1% higher than the state-of-the-art
method, while FedET saves 90% of the communication cost compared to the
previous method. (Comment: Accepted by the 2023 International Joint Conference
on Artificial Intelligence, IJCAI 2023.)
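FedET's Enhancer distillation is more elaborate than plain knowledge distillation (it must handle the old/new class imbalance across non-i.i.d. clients), but its core ingredient, matching a student's softened predictions to a teacher's, can be sketched as follows. The function names and temperature value are illustrative assumptions, not FedET's actual implementation.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    z = z - z.max(axis=-1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) between temperature-softened class
    probabilities, averaged over the batch. A higher temperature T
    exposes more of the teacher's "dark knowledge" about non-target
    classes."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return float(kl.mean())

# Identical predictions incur zero loss; diverging ones are penalized.
same = distillation_loss([[1.0, 2.0, 0.5]], [[1.0, 2.0, 0.5]])
diff = distillation_loss([[0.0, 0.0, 5.0]], [[5.0, 0.0, 0.0]])
```

In a distillation setup for forgetting, the "teacher" would be the frozen model (or Enhancer) trained on old classes, so the loss anchors the updated model's outputs on old-class behavior while new classes are learned.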
Deciphering the Firing Patterns of Hippocampal Neurons During Sharp-Wave Ripples
The hippocampus is essential for learning and memory. Neurons in the rat hippocampus fire selectively when the animal is at specific locations, called place fields, within an environment. The place fields of such place cells tile the entire environment, forming a stable spatial map that supports navigation and planning. Remarkably, the same place cells reactivate together outside of their place fields, in coincidence with sharp-wave ripples (SWRs), dominant electrical field oscillations (150-250 Hz) in the hippocampus. These offline SWR events frequently occur during quiet wake periods interspersed within exploration and during the subsequent slow-wave sleep, and they are associated with spatial memory performance and the stabilization of spatial maps. Deciphering the firing patterns during these events is therefore essential to understanding offline memory processing. In this dissertation project, I provide two novel methods to analyze SWR firing patterns. The first method uses hidden Markov models (HMMs), in which I model the dynamics of neural activity during SWRs as transitions between distinct states of neuronal ensemble activity. This method detects consistent temporal structure over many instances of SWRs and, in contrast to standard approaches, relaxes the dependence on positional data recorded during behavior to interpret temporal patterns during SWRs. To validate the method, I applied it to quiet-wake SWRs. In a simple spatial memory task in which the animal ran on a linear track or in an open arena, the individual states corresponded to the activation of distinct groups of neurons, with inter-state transitions that resembled the animal's trajectories during exploration. In other words, this method enabled us to identify the topology and spatial map of the explored environment by dissecting the firing occurring during SWRs in quiescence periods.
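The state-sequence decoding behind such an HMM analysis can be illustrated with a minimal Viterbi decoder for a discrete-emission HMM. The toy transition and emission matrices below are illustrative assumptions, not fitted values from the dissertation, where the model would be learned from binned ensemble spike data.

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for a discrete-emission HMM.

    obs : sequence of observation symbols (e.g. index of the most
          active neuronal ensemble in each SWR time bin)
    pi  : (S,) initial state probabilities
    A   : (S, S) state-transition matrix
    B   : (S, O) emission probabilities
    """
    logA, logB = np.log(A), np.log(B)
    n = len(obs)
    delta = np.log(pi) + logB[:, obs[0]]       # best log-prob ending in each state
    back = np.zeros((n, len(pi)), dtype=int)   # backpointers
    for t in range(1, n):
        scores = delta[:, None] + logA         # scores[i, j]: come from i, go to j
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + logB[:, obs[t]]
    path = [int(delta.argmax())]
    for t in range(n - 1, 0, -1):              # trace the best path backwards
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: 3 latent ensemble states that tend to persist, each
# preferentially emitting its own observation symbol.
pi = np.array([1/3, 1/3, 1/3])
A = np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]])
B = np.array([[.9, .05, .05], [.05, .9, .05], [.05, .05, .9]])
obs = [0, 0, 0, 1, 1, 1, 2, 2, 2]
states = viterbi(obs, pi, A, B)
```

In the dissertation's setting, the decoded state sequence plays the role the position trajectory would play in standard replay analyses, which is what lets the method work without positional data.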
This result indicated that downstream brain regions may rely on SWRs alone to read out the hippocampal code as a substrate for memory processing. I developed a second analysis method based on the principles of Bayesian learning. This method enabled us to track spatial tunings over the sleep following exploration of an environment, taking the neurons' place fields in the environment as the prior belief and updating it using the dynamic ensemble firing patterns unfolding over time. The method introduces a neuronal-ensemble-based approach that calculates tunings to the position encoded by ensemble firings during sleep rather than to the animal's actual position during exploration. When I applied this method to several datasets, I found that during early slow-wave sleep after an experience, but not during late hours of sleep or during sleep before the exploration, the spatial tunings highly resembled the place fields on the track. Furthermore, the fidelity of the spatial tunings to the place fields predicted the stability of the place fields when the animal was re-exposed to the same environment after ~9 h. Moreover, even for neurons whose place fields shifted during re-exposure, the spatial tunings during early sleep were predictive of the place fields during re-exposure. These results indicate that early sleep actively maintains or retunes the place fields of neurons, which may explain the representational drift of place fields across multiple exposures.
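The prior-plus-update logic of this second method can be illustrated with a conjugate Gamma-Poisson update of a per-position firing-rate belief. This is only a sketch of the Bayesian ingredient: the prior parameters and the binning of decoded positions are illustrative assumptions, and the dissertation's estimator operates on ensemble-decoded positions rather than this simplified per-bin count model.

```python
import numpy as np

def update_tuning(prior_alpha, prior_beta, spike_counts, occupancy):
    """Conjugate Gamma-Poisson update of a per-position firing-rate belief.

    prior_alpha, prior_beta : Gamma prior per spatial bin, encoding the
        place field measured during exploration (mean rate = alpha/beta)
    spike_counts : spikes the cell emitted per bin while that bin's
        position was decoded from ensemble activity during sleep
    occupancy    : total decoded time per bin, in seconds
    """
    post_alpha = np.asarray(prior_alpha, float) + np.asarray(spike_counts, float)
    post_beta = np.asarray(prior_beta, float) + np.asarray(occupancy, float)
    return post_alpha, post_beta, post_alpha / post_beta  # posterior mean rate

# Prior from exploration: a cell firing ~5 Hz in bin 1, ~1 Hz elsewhere.
alpha = np.array([2.0, 10.0, 2.0])
beta = np.array([2.0, 2.0, 2.0])
# Sleep observations: strong firing when bin 2's position is replayed.
counts = np.array([1.0, 4.0, 20.0])
occ = np.array([2.0, 2.0, 2.0])
_, _, rate = update_tuning(alpha, beta, counts, occ)
```

The posterior mean shifts toward the sleep data in proportion to how much decoded evidence accumulates, which is the sense in which sleep firing can "retune" the belief inherited from the waking place field.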