85 research outputs found

    Robust learning algorithms for spiking and rate-based neural networks

    Inspired by the remarkable properties of the human brain, the fields of machine learning, computational neuroscience and neuromorphic engineering have made significant synergistic progress over the last decade. Powerful neural network models rooted in machine learning have been proposed both as models for neuroscience and for applications in neuromorphic engineering. However, robustness is often neglected in these models. Both biological and engineered substrates exhibit diverse imperfections that degrade the performance of computational models or even prohibit their implementation. This thesis describes three projects aimed at implementing robust learning with local plasticity rules in neural networks. First, we demonstrate the advantages of neuromorphic computation in a pilot study on a prototype chip, quantifying the speed and energy consumption of the system relative to a software simulation and showing how on-chip learning contributes to the robustness of learning. Second, we present an implementation of spike-based Bayesian inference on accelerated neuromorphic hardware; the model copes, via learning, with the disruptive effects of the imperfect substrate and benefits from the acceleration. Finally, we present a robust model of deep reinforcement learning using local learning rules, showing how backpropagation combined with neuromodulation could be implemented in a biologically plausible framework. The results contribute to the pursuit of robust and powerful learning networks for biological and neuromorphic substrates.
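    The combination of local plasticity with neuromodulation mentioned in this abstract is commonly formalised as a "three-factor" learning rule: each synapse updates using only its presynaptic activity, its postsynaptic activity, and a globally broadcast reward signal. The sketch below is a generic illustration of that idea under assumed names and parameters, not the thesis's actual model.

```python
def three_factor_update(w, pre, post, reward, baseline=0.0, lr=0.1):
    """Local three-factor plasticity: each synapse w[i][j] changes using
    only its presynaptic activity pre[j], its postsynaptic activity
    post[i], and a global neuromodulatory signal (reward - baseline).
    No non-local gradient information is required."""
    m = reward - baseline  # global neuromodulator, broadcast to all synapses
    return [
        [w[i][j] + lr * m * post[i] * pre[j] for j in range(len(pre))]
        for i in range(len(post))
    ]

# One update: only the synapse with active pre and post neurons moves.
w = [[0.0, 0.5]]
w = three_factor_update(w, pre=[1.0, 0.0], post=[1.0], reward=1.0)
```

    Because the reward factor is global, the rule stays local at every synapse, which is what makes it attractive for imperfect neuromorphic substrates.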

    The Impact of Striatal Neuropeptides and Topography on Action Sequence Selection

    Many common behaviours are sequences of several actions. As action sequences are learned, their activation often becomes habitual, allowing smooth, rapid, and semi-automatic execution; learning and performing action sequences is central to normal motor function. The striatum is the primary input nucleus of the basal ganglia and receives glutamatergic cortical afferents. These afferents innervate localised populations of medium spiny neurons (MSNs) and may encode 'action requests'. Striatal interactions ensure that only non-conflicting, high-salience requests are selected, but the mechanisms enabling clean, rapid switching between sequential actions are poorly understood. Substance P (SP) and enkephalin are neuropeptides co-released with GABA by MSNs preferentially expressing D1 or D2 dopamine receptors, respectively. SP facilitates subsequent glutamatergic inputs to target MSNs, while enkephalin has an inhibitory effect. We construct models of these glutamatergic effects and integrate them into a basal ganglia model to demonstrate that diffuse neuropeptide connectivity enhances action selection. For action sequences with an ordinal structure, patterning SP connectivity to reflect this ordering enhances the selection of correctly ordered actions and suppresses disordered selection. We also show that selectively pruning SP connections allows context-sensitive inhibition of specific undesirable requests that otherwise interfere with action group selection. We then construct a striatal microcircuit model with physical topography and show that inputs to this model generate oscillations in MSN spiking. Input salience and active neuronal density have distinguishable effects on oscillation amplitude and frequency, but the presence of oscillations has little effect on the mean MSN firing rate or on action selection. Our model suggests that neuropeptide interactions enhance the contrast between selected and rejected action requests, and that patterned SP connectivity enhances the selection of ordered sequences. It further suggests that striatal topography does not directly impact action selection, but that evoked oscillations may represent an additional form of population coding that could bind together semantically related MSN groups.
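    The core selection mechanism described here, where SP facilitates and enkephalin suppresses glutamatergic input before a winner is chosen, can be caricatured as a thresholded winner-take-all over facilitated saliences. The toy sketch below is only an abstraction of that contrast-enhancement idea; the function name, gain constants, and connectivity matrices are all hypothetical and much simpler than the authors' basal ganglia model.

```python
def select_action(salience, sp, enk, k_sp=0.3, k_enk=0.3, theta=0.5):
    """Toy winner-take-all over MSN population 'action requests'.
    sp[i][j] > 0 means substance P from population j facilitates the
    glutamatergic input to population i; enk[i][j] > 0 means enkephalin
    from j suppresses it. Returns the winning request's index, or None
    if no request crosses the selection threshold."""
    n = len(salience)
    effective = []
    for i in range(n):
        fac = sum(sp[i][j] * salience[j] for j in range(n))
        inh = sum(enk[i][j] * salience[j] for j in range(n))
        effective.append(salience[i] * (1.0 + k_sp * fac - k_enk * inh))
    winner = max(range(n), key=lambda i: effective[i])
    return winner if effective[winner] >= theta else None

# Population 1 releases SP onto population 0, boosting its request,
# but population 1's own salience still wins here.
zeros = [[0.0] * 3 for _ in range(3)]
sp = [[0.0, 1.0, 0.0], [0.0] * 3, [0.0] * 3]
choice = select_action([0.6, 0.8, 0.2], sp, zeros)
```

    Patterning the `sp` matrix along an action sequence (population k facilitating population k+1) is one way to see how ordered SP connectivity could bias selection toward correctly ordered actions.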

    Unveiling the frontiers of deep learning: innovations shaping diverse domains

    Deep learning (DL) enables the development of computer models that are capable of learning, visualizing, optimizing, refining, and predicting data. In recent years, DL has been applied in a range of fields, including audio-visual data processing, agriculture, transportation prediction, natural language, biomedicine, disaster management, bioinformatics, drug design, genomics, face recognition, and ecology. To assess the current state of deep learning, it is necessary to investigate its latest developments and applications across these disciplines, yet the literature lacks a survey that covers all potential sectors. This paper therefore extensively investigates the potential applications of deep learning across all major fields of study, along with the associated benefits and challenges. As the literature shows, DL's accuracy in prediction and analysis makes it a powerful computational tool, and its ability to learn representations directly from raw data lets it operate without hand-engineered features. That flexibility comes at a cost: deep learning requires massive amounts of data for effective analysis and processing. To handle the challenge of compiling huge volumes of medical, scientific, healthcare, and environmental data for use in deep learning, gated architectures such as LSTMs and GRUs can be utilized. For multimodal learning, the network needs neurons shared across all tasks together with neurons specialized for particular tasks.
    Comment: 64 pages, 3 figures, 3 tables
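    The gated architectures the abstract recommends for long data streams work by letting learned gates decide how much of a fixed-size running state to keep at each step. A minimal single-unit GRU step illustrates the recurrence; the parameter values below are hypothetical (untrained) and chosen only to show the mechanics.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(h, x, p):
    """One step of a single-unit GRU. The update gate z decides how much
    of the old state survives; the reset gate r decides how much of it
    feeds the candidate state. This lets the cell summarise arbitrarily
    long input streams in a fixed-size state."""
    z = sigmoid(p["wz"] * x + p["uz"] * h + p["bz"])   # update gate
    r = sigmoid(p["wr"] * x + p["ur"] * h + p["br"])   # reset gate
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h) + p["bh"])
    return (1.0 - z) * h + z * h_cand

# Hypothetical, untrained parameters, just to exercise the recurrence.
params = {"wz": 1.0, "uz": 0.5, "bz": 0.0,
          "wr": 1.0, "ur": 0.5, "br": 0.0,
          "wh": 1.0, "uh": 0.5, "bh": 0.0}
h = 0.0
for x in [0.2, -0.1, 0.4, 0.0]:
    h = gru_step(h, x, params)
```

    Because the new state is a gated mixture of the old state and a tanh-bounded candidate, the state stays bounded no matter how long the sequence is, which is what makes these cells practical for the large datasets the paper discusses.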

    Temporal integration of loudness as a function of level


    The role of prefrontal cortex and basal ganglia in model-based and model-free reinforcement learning

    Contemporary reinforcement learning (RL) theory suggests that choices can be evaluated either by the model-free (MF) strategy of learning their past worth or by the model-based (MB) strategy of predicting their likely consequences from learned transitions between decision states and outcomes. Statistical and computational considerations argue that these strategies should ideally be combined. This thesis aimed to investigate the neural implementation of these two RL strategies and the mechanisms of their interaction. Two non-human primates performed a two-stage decision task designed to elicit and discriminate the use of both MF and MB RL, while single-neuron activity was recorded from the prefrontal cortex (frontal pole, FP; anterior cingulate cortex, ACC; dorsolateral prefrontal cortex) and striatum (caudate and putamen). Logistic regression analysis revealed that the structure of the task (of MB relevance) and the reward history (of both MF and MB relevance) significantly influenced choice. A trial-by-trial computational analysis also confirmed that choices were made according to a weighted combination of MF and MB RL, with the influence of the latter approaching 90%. Furthermore, the valuations of both learning methods also influenced response vigour and pupil response. Neural correlates of key elements of MF and MB learning were observed across all brain areas, but functional segregation was also in evidence. Neurons in ACC encoded features of both MF and MB learning, suggesting a possible role in the arbitration between the two strategies. Striatal activity was consistent with a role in value updating, encoding reward prediction errors. Finally, novel neurophysiological evidence was found in favour of a role for the FP in counterfactual processing. In conclusion, this thesis provides insight into the neural implementation of MF and MB RL computations and their effects on diverse aspects of behaviour. It supports the parallel operation and integration of the two approaches, while revealing unexpected intricacies.
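    The weighted MF/MB combination the trial-by-trial analysis estimated is conventionally modelled as a softmax policy over mixed action values. The sketch below shows that standard form; the function name and the inverse-temperature value are illustrative, with the mixture weight set to 0.9 to echo the ~90% model-based influence reported above.

```python
import math

def combined_choice_probs(q_mf, q_mb, w_mb=0.9, beta=3.0):
    """Softmax policy over a weighted mixture of model-free (q_mf) and
    model-based (q_mb) action values. w_mb is the model-based weight;
    beta is the inverse temperature controlling choice determinism."""
    q = [w_mb * mb + (1.0 - w_mb) * mf for mf, mb in zip(q_mf, q_mb)]
    exps = [math.exp(beta * v) for v in q]
    z = sum(exps)
    return [e / z for e in exps]

# With a 0.9 model-based weight, the MB-preferred action dominates even
# when the MF values point the other way.
probs = combined_choice_probs(q_mf=[1.0, 0.0], q_mb=[0.0, 1.0])
```

    Fitting `w_mb` and `beta` per subject on a trial-by-trial basis is how analyses of this kind quantify the relative influence of the two strategies.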
