8 research outputs found

    Dynamics Model Abstraction Scheme Using Radial Basis Functions

    This paper presents a control model for object manipulation. Properties of objects and environmental conditions influence motor control and learning. System dynamics depend on an unobserved external context, for example, the work load of a robot manipulator. The dynamics of a robot arm change as it manipulates objects with different physical properties, for example, the mass, shape, or mass distribution. We address active sensing strategies to acquire object dynamical models with a radial basis function (RBF) neural network. Experiments are done using a real robot arm, and trajectory data are gathered during various trials of manipulating different objects. Biped robots do not have high-force joint servos, and the control system can hardly compensate for all the inertia variation of the adjacent joints and the disturbance torque in dynamic gait control. In order to achieve smoother control and arrive at more reliable sensorimotor complexes, we evaluate and compare a sparse velocity-driven control scheme versus a dense position-driven one.
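    A minimal sketch of the kind of RBF dynamics approximator described above, assuming Gaussian basis functions over the arm state and a regularized least-squares fit of the read-out weights; the state layout, centers, and target function are illustrative placeholders rather than the paper's actual setup:

```python
import numpy as np

def rbf_features(X, centers, width):
    """Gaussian radial basis activations for each state sample."""
    # X: (n_samples, n_dims), centers: (n_centers, n_dims)
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))       # toy arm states (position, velocity)
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]         # stand-in dynamics target (e.g., torque)

centers = rng.uniform(-1, 1, size=(25, 2))  # fixed RBF centers over the state space
Phi = rbf_features(X, centers, width=0.3)

# Fit linear read-out weights by regularized least squares
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(25), Phi.T @ y)

pred = Phi @ w                              # abstracted dynamics model
print("mean fit error:", np.abs(pred - y).mean())
```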

    Fast convergence of learning requires plasticity between inferior olive and deep cerebellar nuclei in a manipulation task: a closed-loop robotic simulation

    The cerebellum is known to play a critical role in learning relevant patterns of activity for adaptive motor control, but the underlying network mechanisms are only partly understood. The classical long-term synaptic plasticity between parallel fibers (PFs) and Purkinje cells (PCs), which is driven by the inferior olive (IO), can only account for limited aspects of learning. Recently, the role of additional forms of plasticity in the granular layer, molecular layer and deep cerebellar nuclei (DCN) has been considered. In particular, learning at DCN synapses allows for generalization, but convergence to a stable state requires hundreds of repetitions. In this paper we have explored the putative role of the IO-DCN connection by endowing it with adaptable weights and exploring its implications in a closed-loop robotic manipulation task. Our results show that IO-DCN plasticity accelerates convergence of learning by up to two orders of magnitude without conflicting with the generalization properties conferred by DCN plasticity. Thus, this model suggests that multiple distributed learning mechanisms provide a key for explaining the complex properties of procedural learning and opens up new experimental questions for synaptic plasticity in the cerebellar network. This work was supported by grants from the European Union to Egidio D'Angelo and Eduardo Ros (CEREBNET FP7-ITN238686, REALNET FP7-ICT270434), by a grant from the Italian Ministry of Health to Egidio D'Angelo (RF-2009-1475845), and by a grant from the Spanish Regional Government to Niceto R. Luque (PYR-2014-16). We thank G. Ferrari and M. Rossin for their technical support.
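    A toy sketch of the core idea, a fast error-driven IO-DCN pathway learning alongside a slower DCN plasticity so that convergence accelerates without removing the slow pathway; the scalar weights, learning rates, and update rule are illustrative assumptions, not the paper's spiking model:

```python
# Two plastic pathways drive a toy motor output: a slow one standing in
# for classical DCN plasticity and a fast one for the IO-DCN connection.
# All quantities are illustrative scalars, not the paper's equations.
target = 0.8                    # desired motor output
w_dcn, w_io = 0.0, 0.0          # plastic weights
eta_slow, eta_fast = 0.01, 0.2  # assumed learning rates

for step in range(1000):
    out = w_dcn + w_io          # combined drive
    error = target - out        # the IO is assumed to broadcast this error
    w_dcn += eta_slow * error   # slow pathway: gradual consolidation
    w_io += eta_fast * error    # fast IO-DCN pathway: quick convergence
    if abs(error) < 1e-3:
        print(f"converged after {step + 1} steps")
        break
```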

    Spiking Neural Network With Distributed Plasticity Reproduces Cerebellar Learning in Eye Blink Conditioning Paradigms

    In this study, we defined a realistic cerebellar model through the use of artificial spiking neural networks, testing it in computational simulations that reproduce associative motor tasks in multiple sessions of acquisition and extinction. Methods: Using evolutionary algorithms, we tuned the cerebellar microcircuit to find the near-optimal plasticity mechanism parameters that best reproduced human-like behavior in eye blink classical conditioning, one of the most extensively studied paradigms related to the cerebellum. We used two models: one with only the cortical plasticity and another including two additional plasticity sites at the nuclear level. Results: First, both spiking cerebellar models were able to reproduce real human behavior well, in terms of both "timing" and "amplitude", expressing rapid acquisition, stable late acquisition, rapid extinction, and faster reacquisition of an associative motor task. Even though the model with only the cortical plasticity site showed good learning capabilities, the model with distributed plasticity produced faster and more stable acquisition of conditioned responses in the reacquisition phase. This behavior is explained by the effect of the nuclear plasticities, which have slow dynamics and can express memory consolidation and saving. Conclusions: We showed how the spiking dynamics of multiple interactive neural mechanisms implicitly drive multiple essential components of complex learning processes. Significance: This study presents a very advanced computational model, developed jointly by biomedical engineers, computer scientists, and neuroscientists. Given its realistic features, the proposed model can provide confirmations of and suggestions about neurophysiological and pathological hypotheses, and can be used in challenging clinical applications.
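    A minimal sketch of the evolutionary tuning loop outlined in the Methods: candidate plasticity-parameter sets are scored by a fitness function, and the best ones are kept and mutated. The parameter names, toy fitness, and population settings are placeholders; in the study, the fitness would come from running the spiking model on eye blink conditioning:

```python
import numpy as np

rng = np.random.default_rng(2)

def fitness(params):
    """Placeholder: how closely simulated behavior matches a target.
    The real study would simulate the cerebellar microcircuit here."""
    ltp_rate, ltd_rate = params
    return -((ltp_rate - 0.3) ** 2 + (ltd_rate - 0.1) ** 2)  # toy optimum

pop = rng.uniform(0.0, 1.0, size=(20, 2))        # candidate (LTP, LTD) rates
for generation in range(50):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-5:]]         # keep the 5 best candidates
    # Refill the population with mutated copies of the elite
    children = elite[rng.integers(0, 5, size=15)]
    children = np.clip(children + rng.normal(0.0, 0.05, children.shape), 0.0, 1.0)
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(p) for p in pop])]
print("near-optimal plasticity parameters:", best)
```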

    Event- and Time-Driven Techniques Using Parallel CPU-GPU Co-processing for Spiking Neural Networks

    Modeling and simulating the neural structures which make up our central nervous system is instrumental for deciphering the computational neural cues beneath. Higher levels of biological plausibility usually impose higher levels of complexity in mathematical modeling, from neural to behavioral levels. This paper focuses on overcoming the simulation problems (accuracy and performance) derived from using higher levels of mathematical complexity at a neural level. This study proposes different techniques for simulating neural models that hold incremental levels of mathematical complexity: leaky integrate-and-fire (LIF), adaptive exponential integrate-and-fire (AdEx), and Hodgkin-Huxley (HH) neural models (ranging from low to high neural complexity). The studied techniques are classified into two main families depending on how the neural-model dynamic evaluation is computed: the event-driven or the time-driven families. Whilst event-driven techniques pre-compile and store the neural dynamics within look-up tables, time-driven techniques compute the neural dynamics iteratively during the simulation time. We propose two modifications for the event-driven family: a look-up table recombination to better cope with the incremental neural complexity, together with better handling of synchronous input activity. Regarding the time-driven family, we propose a modification in computing the neural dynamics: the bi-fixed-step integration method. This method automatically adjusts the simulation step size to better cope with the stiffness of the neural model dynamics running on CPU platforms. One version of this method is also implemented for hybrid CPU-GPU platforms. Finally, we analyze how the performance and accuracy of these modifications evolve with increasing levels of neural complexity. We also demonstrate how the proposed modifications, which constitute the main contribution of this study, systematically outperform the traditional event- and time-driven techniques under increasing levels of neural complexity. The Supplementary Material for this article can be found online at: http://journal.frontiersin.org/article/10.3389/fninf.2017.00007/full#supplementary-material. This study was supported by a European Union grant to NR (658479-Spike Control), the Spanish National Grant NEUROPACT (TIN2013-47069-P), and by a Spanish National Grant PhD scholarship (AP2012-0906). We gratefully acknowledge the support of NVIDIA Corporation with the donation of two Titan GPUs for current EDLUT development.
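    A sketch of the bi-fixed-step idea as described here: the integrator alternates between a long fixed step during slow subthreshold dynamics and a short fixed step where the dynamics are stiffer, near and during spikes. The LIF equation, forward-Euler update, and switching criterion are simplified assumptions for illustration, not EDLUT's actual implementation:

```python
# Toy LIF neuron integrated with two fixed step sizes.
TAU, V_REST, V_TH, V_RESET = 20.0, -70.0, -50.0, -65.0  # ms, mV
DT_LONG, DT_SHORT = 0.5, 0.05                           # ms (assumed step sizes)

def step_lif(v, i_ext, dt):
    """One forward-Euler step of the LIF membrane equation."""
    return v + dt * ((V_REST - v) / TAU + i_ext)

v, t, spikes = V_REST, 0.0, []
while t < 100.0:
    # Bi-fixed-step rule: shrink the step when the neuron nears threshold
    dt = DT_SHORT if v > V_TH - 5.0 else DT_LONG
    v = step_lif(v, 1.2, dt)  # constant input drive (mV/ms, illustrative)
    t += dt
    if v >= V_TH:
        spikes.append(t)
        v = V_RESET

if spikes:
    print(f"{len(spikes)} spikes in 100 ms, first at t = {spikes[0]:.2f} ms")
```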

    Integrated plasticity at inhibitory and excitatory synapses in the cerebellar circuit

    The way long-term potentiation (LTP) and depression (LTD) are integrated within the different synapses of brain neuronal circuits is poorly understood. In order to progress beyond the identification of specific molecular mechanisms, a system in which multiple forms of plasticity can be correlated with large-scale neural processing is required. In this paper we take as an example the cerebellar network, in which extensive investigations have revealed LTP and LTD at several excitatory and inhibitory synapses. Cerebellar LTP and LTD occur in all three main cerebellar subcircuits (granular layer, molecular layer, deep cerebellar nuclei) and correspondingly regulate the function of their three main neurons: granule cells (GrCs), Purkinje cells (PCs) and deep cerebellar nuclear (DCN) cells. All these neurons, in addition to being excited, are reached by feed-forward and feed-back inhibitory connections, in which LTP and LTD may operate either synergistically or homeostatically in order to control information flow through the circuit. Although the investigation of individual synaptic plasticities in vitro is essential to prove their existence and mechanisms, it is insufficient to generate a coherent view of their impact on network functioning in vivo. Recent computational models and cell-specific genetic mutations in mice are shedding light on how plasticity at multiple excitatory and inhibitory synapses might regulate neuronal activities in the cerebellar circuit and contribute to learning, memory and behavioral control. This work was supported by European Union grants to ED [CEREBNET FP7-ITN238686, REALNET FP7-ICT270434, Human Brain Project (HBP-604102)] and by a Centro Fermi grant [13(14)] to LM.
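    As a toy illustration of the synergistic-versus-homeostatic distinction drawn above, the sketch below drives one unit through a plastic excitatory and a plastic inhibitory synapse; whether the inhibitory change amplifies or counteracts excitatory LTP decides whether the output rate runs away or stabilizes. The update rules and constants are illustrative assumptions, not a model from the review:

```python
def run(mode, steps=200, eta=0.05):
    """Toy unit with plastic excitatory (w_e) and inhibitory (w_i) inputs.
    'synergistic': inhibitory LTD reinforces the excitatory change.
    'homeostatic': inhibitory LTP grows to hold the rate near a set point."""
    w_e, w_i, rate = 1.0, 1.0, 0.0
    for _ in range(steps):
        rate = max(0.0, w_e - w_i)           # toy output rate
        w_e += eta                           # ongoing excitatory LTP (assumed)
        if mode == "synergistic":
            w_i = max(0.0, w_i - 0.5 * eta)  # inhibitory LTD: same direction
        else:
            w_i += 2.0 * eta * (rate - 0.5)  # inhibitory LTP: pulls rate back
    return rate

print("synergistic final rate:", run("synergistic"))  # keeps growing
print("homeostatic final rate:", run("homeostatic"))  # settles near 1.0
```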

    A Metric for Evaluating Neural Input Representation in Supervised Learning Networks

    Supervised learning has long been attributed to several feed-forward neural circuits within the brain, with particular attention being paid to the cerebellar granular layer. The focus of this study is to evaluate the input activity representation of these feed-forward neural networks. The activity of cerebellar granule cells is conveyed by parallel fibers and translated into Purkinje cell activity, which constitutes the sole output of the cerebellar cortex. The learning process at this parallel-fiber-to-Purkinje-cell connection makes each Purkinje cell sensitive to a set of specific cerebellar states, which are roughly determined by the granule-cell activity during a certain time window. A Purkinje cell becomes sensitive to each neural input state and, consequently, the network operates as a function able to generate a desired output for each provided input by means of supervised learning. However, not every set of Purkinje cell responses can be assigned to a given set of input states, due to the network's own limitations (inherent to its neurobiological substrate); that is, not all input-output mappings can be learned. A key limiting factor is the representation of the input states through granule-cell activity. The quality of this representation (e.g., in terms of heterogeneity) determines the capacity of the network to learn a varied set of outputs. Assessing the quality of this representation is useful when developing and studying models of these networks, in order to identify those neuron or network characteristics that enhance it. In this study we present an algorithm for quantitatively evaluating the level of compatibility/interference amongst a set of given cerebellar states according to their representation (granule-cell activation patterns), without the need for actually conducting simulations and network training. The algorithm input consists of a real-number matrix that codifies the activity level of every considered granule cell in each state. The capability of this representation to generate a varied set of outputs is evaluated geometrically, resulting in a real number that assesses the goodness of the representation.
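    A hedged sketch of the kind of geometric evaluation this abstract outlines: given a states-by-cells activity matrix, score how separable (non-interfering) the state representations are. Mean pairwise cosine similarity is used here as an illustrative stand-in for the paper's actual geometric measure, and the function and matrix names are placeholders:

```python
import numpy as np

def representation_score(A):
    """A: (n_states, n_cells) matrix of granule-cell activity levels.
    Returns a scalar in [0, 1]; higher means the state vectors point in
    more distinct directions, so a varied set of outputs is easier to learn.
    (Cosine similarity is an assumption, not the paper's metric.)"""
    norms = np.linalg.norm(A, axis=1, keepdims=True)
    U = A / np.clip(norms, 1e-12, None)     # unit-length state vectors
    sim = U @ U.T                           # pairwise cosine similarities
    n = A.shape[0]
    off_diag = sim[~np.eye(n, dtype=bool)]  # ignore self-similarity
    return 1.0 - off_diag.mean()            # 1.0 = mutually orthogonal states

rng = np.random.default_rng(3)
sparse_states = (rng.random((10, 100)) > 0.9).astype(float)  # sparse codes
dense_states = rng.random((10, 100))                         # dense codes
print("sparse representation:", representation_score(sparse_states))
print("dense representation:", representation_score(dense_states))
```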