
    Action selection in the rhythmic brain: The role of the basal ganglia and tremor.

    Low-frequency oscillatory activity has been the target of extensive research in both cortical structures and the basal ganglia (BG), owing to numerous reports of associations with brain disorders as well as with normal brain function. In addition, a wealth of evidence and theoretical work indicates that the BG may be the locus where conflicts between prospective actions are resolved. While a number of computational models of the BG investigate these phenomena, they tend to focus on intrinsic oscillatory mechanisms, neglecting evidence that points to the cortex as the origin of this oscillatory behaviour. In this thesis, we construct a detailed neural model of the complete BG circuit based on fine-tuned spiking neurons, with both electrical and chemical synapses as well as short-term plasticity between structures. To do so, we build a complete suite of computational tools for the design, optimization and simulation of spiking neural networks. Our model successfully reproduces firing and oscillatory behaviour found in both the healthy and the Parkinsonian BG, and we use it to make a number of biologically plausible predictions. First, we investigate the influence of various cortical frequency bands on the intrinsic effective connectivity of the BG, as well as the role of the latter in regulating cortical behaviour. We found that effective connectivity indeed changes dramatically across cortical frequency bands and phase offsets, which are able to modulate (or even block) information flow in the three major BG pathways. Our results indicate the existence of a multimodal gating mechanism at the level of the BG that can be entirely controlled by cortical oscillations, and provide evidence for the hypothesis of cortically-entrained but locally-generated subthalamic beta activity. Next, we explore the relationship between the wave properties of entrained cortical inputs, dopamine and the transient effectiveness of the BG when viewed as an action selection device. We found that cortical frequency, phase, dopamine and the examined time scale all have a substantial impact on the ability of our model to select. Our simulations resulted in a canonical profile of selectivity, which we termed selectivity portraits. Taken together, our results suggest that the cortex determines whether action selection will be performed and which strategy will be used, while the role of the BG is to carry out this selection. Some frequency ranges promote the exploitation of actions whose outcomes are known, others promote the exploration of new actions with high uncertainty, and the remaining frequencies simply deactivate selection. Based on this behaviour, we propose a metaphor according to which the basal ganglia can be viewed as the "gearbox" of the cortex. Coalitions of rhythmic cortical areas are able to switch between a repertoire of available BG modes which, in turn, change the course of information flow back to and within the cortex. In the same context, dopamine can be likened to the "control pedals" of action selection that either stop or initiate a decision. Finally, the frequency of the active cortical areas that project to the BG acts as a gear lever that, instead of controlling the type and direction of thrust that the throttle provides to an automobile, dictates the extent to which dopamine can trigger a decision, as well as what type of decision this will be.
    Finally, we identify a selection cycle with a period of around 200 ms, which we use to assess the biological plausibility of the most popular architectures in cognitive science. Using extensions of the BG model, we further propose novel mechanisms that explain (1) the two distinctive dynamical behaviours of neurons in the external globus pallidus, and (2) the generation of resting tremor in Parkinson's disease. Our findings agree well with experimental observations, suggest new insights into the pathophysiology of specific BG disorders, provide new justifications for oscillatory phenomena related to decision making, and reaffirm the role of the BG as the selection centre of the brain.
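
    The notion of cortical entrainment that runs through this abstract can be illustrated with a minimal, self-contained sketch. This is emphatically not the thesis model (which uses fine-tuned spiking neurons with electrical and chemical synapses and short-term plasticity between BG structures); it is a single leaky integrate-and-fire neuron driven by a beta-band modulated current, whose spikes lock to a preferred phase of the drive. All parameter values are illustrative assumptions.

    import numpy as np

    # Minimal sketch (not the thesis model): a leaky integrate-and-fire neuron
    # driven by a beta-band (20 Hz) modulated input current. Spikes cluster at a
    # preferred phase of the drive, i.e. the neuron is entrained by the cortical
    # rhythm. All parameter values below are illustrative assumptions.
    dt, T = 1e-4, 2.0                               # time step and duration (s)
    tau_m = 0.01                                    # membrane time constant (s)
    v_rest, v_thresh, v_reset = -65e-3, -50e-3, -65e-3
    R_m = 1e8                                       # membrane resistance (ohm)
    f_ctx = 20.0                                    # assumed cortical beta frequency (Hz)

    t = np.arange(0.0, T, dt)
    I = 140e-12 + 80e-12 * np.sin(2 * np.pi * f_ctx * t)    # oscillatory drive (A)

    v = np.full_like(t, v_rest)
    spike_phases = []
    for i in range(1, len(t)):
        dv = (-(v[i - 1] - v_rest) + R_m * I[i - 1]) / tau_m
        v[i] = v[i - 1] + dt * dv
        if v[i] >= v_thresh:
            v[i] = v_reset
            spike_phases.append((2 * np.pi * f_ctx * t[i]) % (2 * np.pi))

    # A narrow spread of spike phases indicates phase-locking to the cortical drive.
    print(f"{len(spike_phases)} spikes, phase mean {np.mean(spike_phases):.2f} rad, "
          f"phase s.d. {np.std(spike_phases):.2f} rad")

    In the full model it is this kind of phase-dependent firing, propagated through the three major BG pathways, that underlies the gating and selectivity effects described above.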

    Sample as You Infer: Predictive Coding With Langevin Dynamics

    We present a novel algorithm for parameter learning in generic deep generative models that builds upon the predictive coding (PC) framework of computational neuroscience. Our approach modifies the standard PC algorithm to bring performance on par with, and even exceeding, that obtained from standard variational auto-encoder (VAE) training. By injecting Gaussian noise into the PC inference procedure we re-envision it as overdamped Langevin sampling, which facilitates optimisation with respect to a tight evidence lower bound (ELBO). We improve the resulting encoder-free training method by incorporating an encoder network to provide an amortised warm-start to our Langevin sampling, and test three different objectives for doing so. Finally, to increase robustness to the sampling step size and reduce sensitivity to curvature, we validate a lightweight and easily computable form of preconditioning, inspired by Riemann Manifold Langevin and adaptive optimizers from the SGD literature. We compare against VAEs by training like-for-like generative models with our technique and with standard reparameterisation-trick-based ELBOs. We observe that our method outperforms or matches them across a number of metrics, including sample quality, while converging in a fraction of the number of SGD training iterations.
    Comment: FID values updated to use a fixed 50,000 samples for all experiments; Jeffreys divergence now consistently best performing. DINOv2-based metrics removed due to inconsistent results, and since they are not an industry standard. Multiple beta values tested in Fig 4. Theta LR for VAEs, and beta and inference LR for LPC, now tuned for results. Figure 5B updated; curves now correspond to results in Table
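
    The core move described above, turning predictive-coding inference into overdamped Langevin sampling over the latents by injecting Gaussian noise, can be sketched in a few lines. This is a schematic reconstruction from the abstract, not the authors' code: the toy linear-Gaussian decoder, the step size and the (here trivial) diagonal preconditioner are all assumptions.

    import numpy as np

    # Schematic sketch only: predictive-coding inference on the latents z becomes
    # overdamped Langevin sampling once Gaussian noise is injected into each
    # gradient step. The toy linear-Gaussian decoder and all hyperparameters are
    # illustrative assumptions, not the authors' implementation.
    rng = np.random.default_rng(0)

    D_x, D_z = 8, 2
    W = rng.normal(size=(D_x, D_z))                 # toy decoder weights
    sigma_x, sigma_z = 0.1, 1.0                     # observation and prior scales
    x = W @ rng.normal(size=D_z) + sigma_x * rng.normal(size=D_x)   # one observation

    def grad_log_joint(z):
        """Gradient of log p(x, z) for the toy linear-Gaussian model."""
        pred_err = (x - W @ z) / sigma_x**2         # precision-weighted prediction error
        return W.T @ pred_err - z / sigma_z**2      # likelihood term + Gaussian prior term

    eta = 1e-3                                      # Langevin step size (assumed)
    precond = np.ones(D_z)                          # trivial diagonal preconditioner
    z = np.zeros(D_z)                               # an amortised warm-start could go here

    for _ in range(5000):
        # Overdamped Langevin update: noisy gradient ascent on log p(x, z).
        z = (z + 0.5 * eta * precond * grad_log_joint(z)
             + np.sqrt(eta * precond) * rng.normal(size=D_z))

    print("approximate posterior sample:", z)

    With a decoder network in place of W, the gradient would come from backpropagation, and the diagonal preconditioner would be adapted per dimension in the spirit of the adaptive optimizers the abstract refers to.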

    Evolution of a Complex Predator-Prey Ecosystem on Large-scale Multi-Agent Deep Reinforcement Learning

    Simulation of population dynamics is a central research theme in computational biology, contributing to our understanding of the interactions between predators and prey. Conventional mathematical tools in this area, however, are incapable of accounting for several important attributes of such systems, such as the intelligent and adaptive behavior exhibited by individual agents. This unrealistic setting is often insufficient to simulate properties of population dynamics found in the real world. In this work, we leverage multi-agent deep reinforcement learning to propose a new model of large-scale predator-prey ecosystems. Using different variants of our proposed environment, we show that multi-agent simulations can exhibit key real-world dynamical properties. To obtain this behavior, we first define a mating mechanism such that existing agents reproduce new individuals bound by the conditions of the environment. Furthermore, we incorporate a real-time evolutionary algorithm and show that reinforcement learning enhances the evolution of the agents' physical properties such as speed, attack and resilience against attacks.
    Comment: 9 pages, 13 figures
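
    As a rough illustration of the kind of reproduction step described above (the actual environment, mating conditions and trait set are not specified in this summary and are assumed here), offspring might inherit perturbed averages of their parents' physical properties:

    import random
    from dataclasses import dataclass, field

    # Illustrative sketch only: a toy reproduction step for a predator-prey
    # multi-agent environment. The mating condition, trait set and mutation scale
    # are assumptions and are not taken from the paper.
    @dataclass
    class Agent:
        kind: str                                   # "predator" or "prey"
        health: float = 1.0
        traits: dict = field(default_factory=lambda: {"speed": 1.0, "attack": 1.0,
                                                      "resilience": 1.0})

    def reproduce(parent_a, parent_b, mutation_scale=0.05):
        """One offspring whose traits are a mutated average of its parents' traits."""
        child_traits = {name: 0.5 * (parent_a.traits[name] + parent_b.traits[name])
                        + random.gauss(0.0, mutation_scale)
                        for name in parent_a.traits}
        return Agent(kind=parent_a.kind, traits=child_traits)

    def mating_step(agents, health_threshold=0.8):
        """Pair up sufficiently healthy same-species agents and add their offspring."""
        eligible = [a for a in agents if a.health >= health_threshold]
        random.shuffle(eligible)
        offspring = [reproduce(a, b) for a, b in zip(eligible[::2], eligible[1::2])
                     if a.kind == b.kind]
        return agents + offspring

    # Example: one mating step on a small mixed population.
    population = [Agent("prey") for _ in range(6)] + [Agent("predator") for _ in range(4)]
    print(len(mating_step(population)), "agents after one mating step")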

    When in Doubt, Think Slow: Iterative Reasoning with Latent Imagination

    In an unfamiliar setting, a model-based reinforcement learning agent can be limited by the accuracy of its world model. In this work, we present a novel, training-free approach to improving the performance of such agents separately from planning and learning. We do so by applying iterative inference at decision time to fine-tune the inferred agent states based on the coherence of future state representations. Our approach achieves a consistent improvement in both reconstruction accuracy and task performance when applied to visual 3D navigation tasks. We go on to show that considering more future states further improves the performance of the agent in partially observable environments, but not in a fully observable one. Finally, we demonstrate that agents that have received less training prior to evaluation benefit most from our approach.
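
    A toy sketch of decision-time iterative inference, under the assumption that "coherence" is measured as agreement between latents predicted by the dynamics model and latents inferred from a short window of subsequent observations; the linear world model, the pseudo-inverse "encoder" and the step sizes are illustrative, not the architecture used in the paper.

    import numpy as np

    # Hedged sketch of decision-time iterative inference: refine the current latent
    # state so that latents predicted by the dynamics model agree ("cohere") with
    # latents inferred from a short window of subsequent observations. The linear
    # world model, the pseudo-inverse "encoder" and the step size are illustrative
    # assumptions, not the architecture used in the paper.
    rng = np.random.default_rng(1)
    D_z, D_x, horizon = 4, 6, 3

    A = 0.9 * np.eye(D_z) + 0.05 * rng.normal(size=(D_z, D_z))  # toy latent dynamics
    C = rng.normal(size=(D_x, D_z))                             # toy decoder
    E = np.linalg.pinv(C)                                       # toy "encoder"

    # A short ground-truth rollout and its (noisy) observations.
    z_true = rng.normal(size=D_z)
    obs, s = [], z_true
    for _ in range(horizon):
        s = A @ s
        obs.append(C @ s + 0.01 * rng.normal(size=D_x))

    def incoherence_grad(z):
        """Gradient of the mean squared disagreement between latents predicted from z
        and latents inferred directly from each observation in the window."""
        grad, s, A_k = np.zeros(D_z), z, np.eye(D_z)
        for x in obs:
            A_k = A @ A_k                           # accumulated dynamics A^(k+1)
            s = A @ s                               # latent predicted k+1 steps ahead
            grad += 2.0 * A_k.T @ (s - E @ x)
        return grad / len(obs)

    # Refine a noisy initial state estimate at decision time (no weights are updated).
    z0 = z_true + 0.5 * rng.normal(size=D_z)
    z = z0.copy()
    for _ in range(200):
        z = z - 0.1 * incoherence_grad(z)

    print("state error before refinement:", np.linalg.norm(z0 - z_true))
    print("state error after refinement: ", np.linalg.norm(z - z_true))

    Considering more future states corresponds to lengthening the horizon of this window, which, as reported above, helps under partial observability.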

    Perceptual content, not physiological signals, determines perceived duration when viewing dynamic, natural scenes

    The neural basis of time perception remains unknown. A prominent account is the pacemaker-accumulator model, wherein regular ticks of some physiological or neural pacemaker are read out as time. Putative candidates for the pacemaker have been suggested in physiological processes (the heartbeat) or in dopaminergic midbrain neurons, whose activity has been associated with spontaneous blinking. However, such proposals have difficulty accounting for observations that time perception varies systematically with perceptual content. We examined physiological influences on human duration estimates for naturalistic videos of between 1 and 64 seconds, using cardiac and eye recordings. Duration estimates were biased by the amount of change in scene content. Contrary to previous claims, heart rate and blinking were not related to duration estimates. Our results support a recent proposal that tracking change in perceptual classification networks provides a basis for human time perception, and suggest that previous assertions of the importance of physiological factors should be tempered.
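
    The "tracking change in perceptual classification networks" proposal referenced above can be caricatured in a few lines. The random-walk "features" stand in for per-frame network activations, and the adaptive threshold and calibration constant are illustrative assumptions; the point is only that more change in scene content yields longer duration estimates, with no physiological pacemaker involved.

    import numpy as np

    # Hedged sketch of the "accumulate perceptual change" idea: duration is
    # estimated from how often network activations change by more than an
    # adaptive threshold, not from a physiological pacemaker. The random-walk
    # features, threshold dynamics and calibration constant are assumptions.
    rng = np.random.default_rng(2)

    def simulated_features(n_frames, change_rate):
        """Random-walk stand-in for per-frame network activations of a video."""
        steps = change_rate * rng.normal(size=(n_frames, 128))
        return np.cumsum(steps, axis=0)

    def estimate_duration(features, seconds_per_unit=0.05,
                          thresh_init=2.0, decay=0.98, reset=4.0):
        """Accumulate salient changes between successive frames into a duration estimate."""
        accumulated, thresh = 0, thresh_init
        for prev, curr in zip(features[:-1], features[1:]):
            change = np.linalg.norm(curr - prev)
            if change > thresh:
                accumulated += 1           # a salient change: one "unit" of subjective time
                thresh = reset             # raise the threshold after each salient event
            else:
                thresh *= decay            # otherwise let the threshold decay
        return seconds_per_unit * accumulated

    # More scene change yields a longer duration estimate for the same number of
    # frames, mirroring the content bias reported above.
    for rate in (0.1, 0.3, 0.5):
        print(rate, estimate_duration(simulated_features(n_frames=1500, change_rate=rate)))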

    Translating a Typing-Based Adaptive Learning Model to Speech-Based L2 Vocabulary Learning

    Memorising vocabulary is an important aspect of formal foreign language learning. Advances in cognitive psychology have led to the development of adaptive learning systems that make vocabulary learning more efficient. These computer-based systems measure learning performance in real time to create optimal study strategies for individual learners. While such adaptive learning systems have been successfully applied to written word learning, they have thus far seen little application in spoken word learning. Here we present a system for adaptive, speech-based word learning. We show that it is possible to improve the efficiency of speech-based learning systems by applying a modified adaptive model that was originally developed for typing-based word learning. This finding contributes to a better understanding of the memory processes involved in speech-based word learning. Furthermore, our work provides a basis for the development of language learning applications that use real-time pronunciation assessment software to score the accuracy of the learner’s pronunciations. Speech-based learning applications are educationally relevant because they focus on what may be the most important aspect of language learning: to practice speech.

    Benefits of Adaptive Learning Transfer From Typing-Based Learning to Speech-Based Learning

    Memorising vocabulary is an important aspect of formal foreign-language learning. Advances in cognitive psychology have led to the development of adaptive learning systems that make vocabulary learning more efficient. One way these computer-based systems optimize learning is by measuring learning performance in real time to create optimal repetition schedules for individual learners. While such adaptive learning systems have been successfully applied to word learning using keyboard-based input, they have thus far seen little application in word learning where spoken instead of typed input is used. Here we present a framework for speech-based word learning using an adaptive model that was developed for and tested with typing-based word learning. We show that typing- and speech-based learning result in similar behavioral patterns that can be used to reliably estimate individual memory processes. We extend earlier findings demonstrating that a response-time-based adaptive learning approach outperforms an accuracy-based, Leitner flashcard approach in learning efficiency (demonstrated by higher average accuracy and lower response times after a learning session). In short, we show that the benefits of adaptive learning transfer from typing-based learning to speech-based learning. Our work provides a basis for the development of language learning applications that use real-time pronunciation assessment software to score the accuracy of the learner’s pronunciations. We discuss the implications of our approach for the development of educationally relevant, adaptive speech-based learning applications.
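
    A hedged sketch of the kind of response-time driven scheduler discussed in the two entries above, in the spirit of activation-based adaptive learning models (the exact model and constants used in these studies are not reproduced here; the activation formula, update rule and thresholds below are assumptions). Slow or incorrect responses raise an item's estimated rate of forgetting, so it is repeated sooner; a Leitner baseline would instead move items between fixed boxes based on accuracy alone.

    import math

    # Hedged sketch of a response-time driven adaptive scheduler. Memory activation
    # from past encounters at times t_j with an item-specific decay d is assumed to be
    #     A(t) = ln( sum_j (t - t_j)^(-d) )
    # and d is nudged up after slow or incorrect responses, down after fast correct ones.
    class Item:
        def __init__(self, word):
            self.word = word
            self.encounters = []       # timestamps of past presentations (s)
            self.decay = 0.3           # assumed starting rate of forgetting

        def activation(self, now):
            if not self.encounters:
                return float("-inf")
            return math.log(sum((now - t) ** (-self.decay) for t in self.encounters))

        def record(self, now, correct, rt, expected_rt=1.5, step=0.02):
            """Nudge the estimated decay using accuracy and response time."""
            self.encounters.append(now)
            if not correct or rt > expected_rt:
                self.decay += step     # forgetting faster than expected
            else:
                self.decay -= step     # retention better than expected

    def next_item(items, now, threshold=-0.8):
        """Present the seen item closest to being forgotten, else introduce a new one."""
        due = [i for i in items if i.encounters and i.activation(now) < threshold]
        if due:
            return min(due, key=lambda i: i.activation(now))
        unseen = [i for i in items if not i.encounters]
        return unseen[0] if unseen else min(items, key=lambda i: i.activation(now))

    # Minimal usage: a short simulated session with hypothetical responses. In a
    # speech-based system, "correct" and "rt" would come from real-time
    # pronunciation assessment and the spoken response time.
    items = [Item(w) for w in ["huis", "fiets", "kaas"]]
    now = 0.0
    for trial in range(6):
        item = next_item(items, now)
        correct, rt = True, 1.2
        item.record(now, correct, rt)
        now += 10.0                    # 10 s between trials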

    Correction: The role of cortical oscillations in a spiking neural network model of the basal ganglia.

    [This corrects the article DOI: 10.1371/journal.pone.0189109.]

    Neuron equations and synaptic input with dopamine.
