A roadmap to integrate astrocytes into Systems Neuroscience.
Systems neuroscience is still mainly a neuronal field, despite the plethora of evidence supporting the fact that astrocytes modulate local neural circuits, networks, and complex behaviors. In this article, we sought to identify which types of studies are necessary to establish whether astrocytes, beyond their well-documented homeostatic and metabolic functions, perform computations implementing mathematical algorithms that subserve coding and higher-brain functions. First, we reviewed Systems-like studies that include astrocytes in order to identify computational operations that these cells may perform, using Ca2+ transients as their encoding language. The analysis suggests that astrocytes may carry out canonical computations in a time scale of subseconds to seconds in sensory processing, neuromodulation, brain state, memory formation, fear, and complex homeostatic reflexes. Next, we propose a list of actions to gain insight into the outstanding question of which variables are encoded by such computations. The application of statistical analyses based on machine learning, such as dimensionality reduction and decoding in the context of complex behaviors, combined with connectomics of astrocyte-neuronal circuits, is, in our view, a fundamental undertaking. We also discuss technical and analytical approaches to study neuronal and astrocytic populations simultaneously, and the inclusion of astrocytes in advanced modeling of neural circuits, as well as in theories currently under exploration such as predictive coding and energy-efficient coding. Clarifying the relationship between astrocytic Ca2+ and brain coding may represent a leap forward toward novel approaches in the study of astrocytes in health and disease.
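As a toy illustration of the decoding analyses proposed here, the sketch below fits a linear least-squares readout of a stimulus variable from simulated astrocytic Ca2+ population responses. All quantities (cell counts, tuning weights, noise level) are synthetic inventions for the example, not data from the article:

```python
import numpy as np

# Toy decoding example: read out a stimulus variable from simulated
# astrocytic Ca2+ responses. All quantities here are synthetic.
rng = np.random.default_rng(0)

n_cells, n_trials = 50, 200
stimulus = rng.uniform(0.0, 1.0, n_trials)   # latent variable to decode
tuning = rng.normal(0.0, 1.0, n_cells)       # per-cell tuning (assumed linear)

# Each trial's population response: tuning * stimulus + noise
ca = np.outer(stimulus, tuning) + 0.1 * rng.normal(size=(n_trials, n_cells))

# Fit a linear decoder on half the trials, evaluate on the other half
train, test = slice(0, 100), slice(100, 200)
coef, *_ = np.linalg.lstsq(ca[train], stimulus[train], rcond=None)
pred = ca[test] @ coef

rmse = float(np.sqrt(np.mean((pred - stimulus[test]) ** 2)))
print(f"held-out decoding RMSE: {rmse:.3f}")
```

A low held-out error on synthetic data like this is the kind of evidence the authors argue would be needed, on real recordings, to show that a behavioral variable is actually encoded in astrocytic Ca2+.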
Training Spiking Neural Networks Using Lessons From Deep Learning
The brain is the perfect place to look for inspiration to develop more
efficient neural networks. The inner workings of our synapses and neurons
provide a glimpse at what the future of deep learning might look like. This
paper serves as a tutorial and perspective showing how to apply the lessons
learnt from several decades of research in deep learning, gradient descent,
backpropagation and neuroscience to biologically plausible spiking neural
networks. We also explore the delicate interplay between encoding data
as spikes and the learning process; the challenges and solutions of applying
gradient-based learning to spiking neural networks; the subtle link between
temporal backpropagation and spike-timing-dependent plasticity; and how deep
learning might move towards biologically plausible online learning. Some ideas
are well accepted and commonly used amongst the neuromorphic engineering
community, while others are presented or justified for the first time here. A
series of interactive tutorials complementing this paper, built on our
Python package snnTorch, is also available:
https://snntorch.readthedocs.io/en/latest/tutorials/index.htm
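To make the encoding step concrete, here is a minimal numpy sketch, independent of snnTorch itself, of two ideas the tutorial covers: rate-encoding an input intensity as Bernoulli spike trains, and the discrete-time leaky integrate-and-fire neuron those spikes drive. The decay, threshold, and intensity values are arbitrary choices for the example:

```python
import numpy as np

# Minimal numpy sketch (not snnTorch itself) of rate coding plus a
# discrete-time leaky integrate-and-fire (LIF) neuron. Parameter values
# are arbitrary choices for the example.
rng = np.random.default_rng(42)

intensity = 0.5                               # normalized input in [0, 1]
n_steps = 100
spikes_in = rng.random(n_steps) < intensity   # Bernoulli rate-coded spikes

beta, threshold = 0.9, 2.0                    # membrane decay, firing threshold
v = 0.0
spikes_out = []
for s in spikes_in:
    v = beta * v + float(s)   # leaky integration of the input spike
    fired = v >= threshold
    spikes_out.append(bool(fired))
    if fired:
        v -= threshold        # soft reset: subtract the threshold

output_rate = sum(spikes_out) / n_steps
print(f"input rate {spikes_in.mean():.2f} -> output rate {output_rate:.2f}")
```

The hard threshold here is exactly what makes naive backpropagation fail (its derivative is zero almost everywhere), which is the motivation for the surrogate-gradient techniques the paper surveys.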
Pain: A Precision Signal for Reinforcement Learning and Control.
Since noxious stimulation usually leads to the perception of pain, pain has traditionally been considered sensory nociception. But its variability and sensitivity to a broad array of cognitive and motivational factors have meant it is commonly viewed as inherently imprecise and intangibly subjective. However, the core function of pain is motivational: to direct both short- and long-term behavior away from harm. Here, we illustrate that a reinforcement learning model of pain offers a mechanistic understanding of how the brain supports this, illustrating the underlying computational architecture of the pain system. Importantly, it explains why pain is tuned by multiple factors and necessarily supported by a distributed network of brain regions, recasting pain as a precise and objectifiable control signal.
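A minimal sketch of the general idea, not the authors' specific model: treat pain as a negative reinforcement signal whose temporal-difference prediction errors teach the agent which states predict harm. State names and parameter values below are invented for illustration:

```python
# Sketch of pain as a negative reinforcement signal driving TD(0) value
# learning. States, rewards, and parameters are invented for illustration.
alpha, gamma = 0.1, 0.9          # learning rate and discount factor
values = {"safe": 0.0, "hot_surface": 0.0}

def td_update(state, reward, next_state):
    """One TD(0) step: move V(state) toward reward + gamma * V(next_state)."""
    target = reward + gamma * values[next_state]
    values[state] += alpha * (target - values[state])

# Repeated contact with the hot surface delivers nociceptive (negative)
# reward, so its value converges to a learned prediction of harm.
for _ in range(200):
    td_update("hot_surface", -1.0, "safe")

print(values)  # V(hot_surface) is now close to -1.0
```

In this framing, the learned negative value is precisely the kind of control signal the abstract describes: a quantity the agent can use to steer behavior away from states that predict harm.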
DREAM Architecture: a Developmental Approach to Open-Ended Learning in Robotics
Robots are still limited to controlled conditions that the robot designer
knows in enough detail to endow the robot with the appropriate models or
behaviors. Learning algorithms add some flexibility with the ability to
discover the appropriate behavior given either some demonstrations or a reward
to guide exploration with a reinforcement learning algorithm. Reinforcement
learning algorithms rely on the definition of state and action spaces that
define reachable behaviors. Their adaptation capability critically depends on
the representations of these spaces: small and discrete spaces result in fast
learning while large and continuous spaces are challenging and either require a
long training period or prevent the robot from converging to an appropriate
behavior. Besides the operational cycle of policy execution and the learning
cycle, which works at a slower time scale to acquire new policies, we introduce
the redescription cycle, a third cycle working at an even slower time scale to
generate or adapt the required representations to the robot, its environment
and the task. We introduce the challenges raised by this cycle and we present
DREAM (Deferred Restructuring of Experience in Autonomous Machines), a
developmental cognitive architecture to bootstrap this redescription process
stage by stage, build new state representations with appropriate motivations,
and transfer the acquired knowledge across domains or tasks or even across
robots. We describe results obtained so far with this approach and conclude
with a discussion of the questions it raises for Neuroscience.
Tonic Dopamine Modulates Exploitation of Reward Learning
The impact of dopamine on adaptive behavior in a naturalistic environment is largely unexamined. Experimental work suggests that phasic dopamine is central to reinforcement learning whereas tonic dopamine may modulate performance without altering learning per se; however, this idea has not been developed formally or integrated with computational models of dopamine function. We quantitatively evaluate the role of tonic dopamine in these functions by studying the behavior of hyperdopaminergic DAT knockdown mice in an instrumental task in a semi-naturalistic homecage environment. In this "closed economy" paradigm, subjects earn all of their food by pressing either of two levers, but the relative cost for food on each lever shifts frequently. Compared to wild-type mice, hyperdopaminergic mice allocate more lever presses on high-cost levers, thus working harder to earn a given amount of food and maintain their body weight. However, both groups show a similarly quick reaction to shifts in lever cost, suggesting that the hyperdopaminergic mice are not slower at detecting changes, as would be expected with a learning deficit. We fit the lever choice data using reinforcement learning models to assess the distinction between acquisition and expression that the models formalize. In these analyses, hyperdopaminergic mice displayed normal learning from recent reward history but diminished capacity to exploit this learning: a reduced coupling between choice and reward history. These data suggest that dopamine modulates the degree to which prior learning biases action selection and consequently alters the expression of learned, motivated behavior.
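The acquisition/expression distinction can be sketched with a generic Q-learning agent, not the authors' fitted model: the learning rate governs how reward history updates values (acquisition), while the softmax inverse temperature governs how tightly choice is coupled to those values (expression). All parameter values below are illustrative:

```python
import math
import random

# Generic Q-learning sketch (not the authors' fitted model): alpha controls
# acquisition from reward history; the softmax inverse temperature beta
# controls how strongly values are expressed in choice.
random.seed(1)
alpha = 0.2
p_reward = [0.8, 0.2]   # lever 0 is currently the better (low-cost) option

def choose(q, beta):
    """Softmax choice: larger beta couples choice more tightly to values."""
    z = [math.exp(beta * v) for v in q]
    return 0 if random.random() < z[0] / sum(z) else 1

def run(beta, n_trials=2000):
    q = [0.0, 0.0]
    picks_best = 0
    for _ in range(n_trials):
        a = choose(q, beta)
        r = 1.0 if random.random() < p_reward[a] else 0.0
        q[a] += alpha * (r - q[a])   # identical acquisition rule in both runs
        picks_best += (a == 0)
    return picks_best / n_trials

normal = run(beta=5.0)    # strong value-to-choice coupling
blunted = run(beta=1.0)   # weakened coupling: same learning, less exploitation
print(f"best lever chosen: beta=5 -> {normal:.2f}, beta=1 -> {blunted:.2f}")
```

Both agents learn identical values, yet the low-beta agent allocates more presses to the worse lever, mirroring the hyperdopaminergic phenotype of intact learning but reduced exploitation.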
Effectiveness of Two Keyboarding Instructional Approaches on the Keyboarding Speed, Accuracy, and Technique of Elementary Students
Background: Keyboarding skill development is important for elementary students. Limited research exists to inform practice on effective keyboarding instruction methods.
Method: Using a quasi-experimental design, we examined the effectiveness of Keyboarding Without Tears® (n = 786) in the experimental schools compared to the control schools, which used the district's standard instructional approach of free web-based activities (n = 953), on improving keyboarding skills (speed, accuracy, and technique) in elementary students.
Results: The results showed significant improvements in keyboarding speed and accuracy across all schools and grades, with greater gains in the experimental schools than in the control schools. Significant differences in improvements in keyboarding technique were found with large effect sizes favoring the experimental schools for kindergarten to the second grade and small effect sizes favoring the control schools for the third to fifth grade.
Conclusion: Professionals involved in assisting with keyboarding skill development in children are recommended to begin training in these skills in early elementary grades, especially to assist in proper keyboarding technique development. While using free web-based activities is beneficial to improving keyboarding speed and accuracy, as well as keyboarding technique, using a developmentally-based curriculum, such as Keyboarding Without Tears®, may further enhance improvements in the keyboarding skills of elementary students.
Forgetting and the Value of Social Information
Information is everywhere in nature; however, it can be deceitful or incorrect, so not all information should be used. Foraging pollinators utilize variable and ephemeral resources, so learning about patch quality and nectar replenishment rates is essential to success and survival. However, remembering information after it is no longer relevant is not advantageous. It has been theorized that a pollinator's memory should reflect its environment. Bumblebees are known to use both personal information (information gathered through trial and error) and social information (information gained through observations of or interactions with other animals or their products) in foraging decisions; however, it is currently unknown how social and personal information are valued in bumblebee memory. We conducted an experiment to illuminate the rate at which bumblebees (Bombus impatiens) learn and forget personal and social information. We manipulated the value of social and personal information by varying their reliabilities, and tested the retention of that learned information after 4, 8, and 24 hours. We found that social information is retained better than personal information, and retention decreases as time since learning increases. This experiment is a first step toward elucidating when social or personal information is more valuable to a forager.
If deep learning is the answer, then what is the question?
Neuroscience research is undergoing a minor revolution. Recent advances in
machine learning and artificial intelligence (AI) research have opened up new
ways of thinking about neural computation. Many researchers are excited by the
possibility that deep neural networks may offer theories of perception,
cognition and action for biological brains. This perspective has the potential
to radically reshape our approach to understanding neural systems, because the
computations performed by deep networks are learned from experience, not
endowed by the researcher. If so, how can neuroscientists use deep networks to
model and understand biological brains? What is the outlook for neuroscientists
who seek to characterise computations or neural codes, or who wish to
understand perception, attention, memory, and executive functions? In this
Perspective, our goal is to offer a roadmap for systems neuroscience research
in the age of deep learning. We discuss the conceptual and methodological
challenges of comparing behaviour, learning dynamics, and neural representation
in artificial and biological systems. We highlight new research questions that
have emerged for neuroscience as a direct consequence of recent advances in
machine learning.