
    Synchronization and Redundancy: Implications for Robustness of Neural Learning and Decision Making

    Learning and decision making in the brain are key processes critical to survival, yet they are implemented by non-ideal biological building blocks which can impose significant error. We explore quantitatively how the brain might cope with this inherent source of error by taking advantage of two ubiquitous mechanisms, redundancy and synchronization. In particular, we consider a neural process whose goal is to learn a decision function by implementing a nonlinear gradient dynamics. The dynamics, however, are assumed to be corrupted by perturbations modeling the error which might be incurred due to limitations of the biology, intrinsic neuronal noise, and imperfect measurements. We show that error, and the associated uncertainty surrounding a learned solution, can be controlled in large part by trading off synchronization strength among multiple redundant neural systems against the noise amplitude. The impact of the coupling between such redundant systems is quantified by the spectrum of the network Laplacian, and we discuss the role of network topology in synchronization and in reducing the effect of noise. A range of situations in which the mechanisms we model arise in brain science are discussed, and we draw attention to experimental evidence suggesting that cortical circuits capable of implementing the computations of interest here can be found on several scales. Finally, simulations comparing theoretical bounds to the relevant empirical quantities show that the theoretical estimates we derive can be tight. (Preprint, accepted for publication in Neural Computation.)
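
    The trade-off described in this abstract lends itself to a short simulation. The sketch below is an illustration, not the authors' code: it couples several noisy gradient-descent systems through a graph Laplacian, with the toy objective, the gain names (gamma, sigma), and all parameter values being assumptions.

```python
import numpy as np

# Hedged illustration: N redundant noisy gradient-flow systems,
# diffusively coupled through a graph Laplacian L. The quadratic
# objective and all parameters are assumptions for demonstration.

def grad_f(x):
    # toy objective f(x) = 0.5 * ||x||^2, so grad f(x) = x
    return x

def simulate(num_systems=10, dim=2, gamma=5.0, sigma=0.5,
             dt=1e-3, steps=20000, seed=0):
    rng = np.random.default_rng(seed)
    # all-to-all coupling: Laplacian of the complete graph is N*I - J
    L = num_systems * np.eye(num_systems) - np.ones((num_systems, num_systems))
    X = rng.standard_normal((num_systems, dim))  # one row per redundant system
    for _ in range(steps):
        drift = -grad_f(X) - gamma * (L @ X)     # gradient flow + diffusive coupling
        noise = sigma * np.sqrt(dt) * rng.standard_normal(X.shape)
        X = X + dt * drift + noise               # Euler-Maruyama step
    return X

X = simulate()
print("spread across systems:", X.std(axis=0))   # shrinks as gamma grows
print("consensus estimate:   ", X.mean(axis=0))  # averaged (redundant) solution
```

    Increasing gamma, or choosing a topology with a larger Laplacian spectral gap, tightens the agreement among the redundant copies, while the averaged state tracks the noiseless gradient flow.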

    Computation in Dynamically Bounded Asymmetric Systems

    Previous explanations of computations performed by recurrent networks have focused on symmetrically connected saturating neurons and their convergence toward attractors. Here we analyze the behavior of asymmetrically connected networks of linear threshold neurons, whose positive response is unbounded. We show that, for a wide range of parameters, this asymmetry brings interesting and computationally useful dynamical properties. When driven by input, the network explores potential solutions through highly unstable 'expansion' dynamics. This expansion is steered and constrained by negative divergence of the dynamics, which ensures that the dimensionality of the solution space continues to shrink until an acceptable solution manifold is reached. The system then contracts stably on this manifold towards its final solution trajectory. The unstable positive feedback and cross inhibition that underlie expansion and divergence are common motifs in molecular and neuronal networks. We therefore propose that very simple organizational constraints combining these motifs can lead to spontaneous computation, and so to the spontaneous modification of entropy that is characteristic of living systems.
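
    As a rough picture of these dynamics, the hedged sketch below simulates an asymmetrically connected network of rectified linear threshold units. The weight statistics, gains, and thresholds are my own illustrative assumptions, chosen only to show the active set being pruned as the trajectory evolves.

```python
import numpy as np

# Minimal sketch (an illustration, not the paper's code) of an
# asymmetrically connected network of linear threshold units: the
# rectification [.]+ has unbounded positive response, W is not symmetric.

rng = np.random.default_rng(1)
n = 50
W = rng.standard_normal((n, n)) / np.sqrt(n)  # asymmetric recurrent weights
np.fill_diagonal(W, 1.1)                      # self-excitation (positive feedback)
W[W < 0] *= 3.0                               # strengthen cross inhibition
b = rng.uniform(0.0, 0.5, n)                  # constant input drive

x = np.zeros(n)
dt, tau = 0.01, 1.0
for t in range(4001):
    x += (dt / tau) * (-x + np.maximum(W @ x + b, 0.0))  # tau dx/dt = -x + [Wx+b]+
    if t % 1000 == 0:
        # track the active set: in the regime the abstract describes, unstable
        # expansion is steered onto a shrinking set of active units
        print(f"t={t*dt:5.1f}  active units: {(x > 1e-6).sum():3d}  "
              f"|x|={np.linalg.norm(x):.2f}")
```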

    The application of parameter sensitivity analysis methods to inverse simulation models

    Knowledge of the sensitivity of inverse solutions to variation of model parameters can be very useful in making engineering design decisions. This paper describes how parameter sensitivity analysis can be carried out for inverse simulations generated through approximate transfer-function inversion methods and also through the use of feedback principles. Emphasis is placed on the use of sensitivity models, and the paper includes examples and a case study involving a model of an underwater vehicle. It is shown that the use of sensitivity models can provide physical understanding of inverse simulation solutions that is not directly available from parameter sensitivity methods based on parameter perturbations and response differencing.
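
    For contrast with the sensitivity-model approach the paper advocates, the sketch below implements the perturbation-and-differencing baseline it mentions, on a toy first-order model; the model, parameter names, and step sizes are illustrative assumptions, not drawn from the paper.

```python
import numpy as np

# Hedged sketch of parameter sensitivity by perturbation and response
# differencing. The first-order lag model tau*dy/dt + y = k*u and all
# names/values are assumptions for illustration.

def simulate(k, tau, u=1.0, dt=0.01, T=5.0):
    """Step response of tau*dy/dt + y = k*u via forward Euler."""
    steps = int(T / dt)
    y = np.zeros(steps)
    for i in range(1, steps):
        y[i] = y[i - 1] + dt * (k * u - y[i - 1]) / tau
    return y

def sensitivity(param, base=dict(k=2.0, tau=0.5), eps=1e-4):
    """Central-difference sensitivity dy/dparam along the trajectory."""
    hi, lo = dict(base), dict(base)
    hi[param] += eps
    lo[param] -= eps
    return (simulate(**hi) - simulate(**lo)) / (2 * eps)

print("peak |dy/dk|  :", np.abs(sensitivity("k")).max())
print("peak |dy/dtau|:", np.abs(sensitivity("tau")).max())
```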

    On the simulation of nonlinear bidimensional spiking neuron models

    Bidimensional spiking models currently attract a lot of attention for their simplicity and their ability to reproduce various spiking patterns of cortical neurons, and they are widely used for large network simulations. These models describe the dynamics of the membrane potential by a nonlinear differential equation that blows up in finite time, coupled to a second equation for adaptation. Spikes are emitted when the membrane potential blows up or reaches a cutoff value. Precise simulation of the spike times and of the adaptation variable is critical because it governs the spike pattern produced, and it is hard to compute accurately because of the exploding nature of the system at the spike times. We thoroughly study the precision of fixed time-step integration schemes for this type of model and demonstrate that these methods produce systematic errors, unbounded as the cutoff value is increased, in the evaluation of the two crucial quantities: the spike time and the value of the adaptation variable at this time. Precise evaluation of these quantities therefore involves very small time steps and long simulation times. In order to achieve a fixed absolute precision in a reasonable computational time, we propose here a new algorithm to simulate these systems based on a variable-integration-step method that integrates either the original ordinary differential equation or the equation of the orbits in the phase plane, and we compare this algorithm with the fixed time-step Euler scheme and other, more accurate simulation algorithms.
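
    A hedged sketch of the variable-step idea, using SciPy's event-locating adaptive integrator on an illustrative quadratic adaptive model of this class; all parameter values are assumptions, and this is a stand-in for the paper's algorithm rather than a reproduction of it.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hedged sketch: variable-step simulation of a bidimensional spiking model
# (quadratic potential dynamics plus slow adaptation). Event detection
# locates the cutoff crossing precisely instead of a fixed Euler step.

a, b, I = 0.02, 0.2, 10.0           # adaptation/input parameters (assumed)
v_reset, w_jump, cutoff = -1.0, 0.5, 30.0

def rhs(t, y):
    v, w = y
    return [v * v - w + I,          # membrane potential blows up in finite time
            a * (b * v - w)]        # slow adaptation variable

def spike(t, y):
    return y[0] - cutoff            # zero-crossing == spike emission
spike.terminal = True               # stop integration at the spike
spike.direction = 1                 # upward crossing only

t, y, spikes = 0.0, [0.0, 0.0], []
while t < 20.0 and len(spikes) < 50:
    sol = solve_ivp(rhs, (t, 20.0), y, events=spike, rtol=1e-8, atol=1e-10)
    if sol.t_events[0].size:        # spike found: record it, then reset
        t = sol.t_events[0][0]
        w_at_spike = sol.y_events[0][0][1]
        spikes.append(t)
        y = [v_reset, w_at_spike + w_jump]
    else:
        break
print("spike times:", np.round(spikes, 4))
```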

    Global synchronization for delayed complex networks with randomly occurring nonlinearities and multiple stochastic disturbances

    This paper is concerned with the synchronization problem for a new class of continuous-time delayed complex networks with randomly occurring nonlinearities, interval time-varying delays, unbounded distributed delays, and multiple stochastic disturbances. The randomly occurring nonlinearities and multiple stochastic disturbances are investigated here in order to reflect more realistic dynamical behaviors of complex networks affected by a noisy environment. By utilizing a new matrix functional built on the idea of partitioning the lower bound h1 of the time-varying delay, we employ stochastic analysis techniques and the properties of the Kronecker product to establish delay-dependent criteria that ensure globally asymptotically mean-square synchronization of the addressed stochastic delayed complex networks. The sufficient conditions obtained take the form of linear matrix inequalities (LMIs), which can be readily solved using standard numerical software. A numerical example shows the applicability of the proposed results.
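
    The practical payoff of LMI-form criteria is that feasibility can be checked with off-the-shelf numerical software. The toy sketch below tests a small Lyapunov-type LMI with cvxpy; the matrices are illustrative assumptions, and the condition is a simple stand-in for the paper's delay-dependent criteria, not a reproduction of them.

```python
import cvxpy as cp
import numpy as np

# Hedged sketch: check feasibility of a Lyapunov-type LMI
#   P > 0,  A'P + PA < -Q
# as a stand-in for delay-dependent synchronization conditions.

A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])          # illustrative stable system matrix
Q = np.eye(2)

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> 1e-6 * np.eye(2),          # P positive definite
               A.T @ P + P @ A << -Q]          # Lyapunov inequality
prob = cp.Problem(cp.Minimize(0), constraints) # pure feasibility problem
prob.solve()
print("LMI feasible:", prob.status == cp.OPTIMAL)
print("P =\n", P.value)
```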

    Vibration Control of a Nonlinear Beam System


    Dynamical Movement Primitives: Learning Attractor Models for Motor Behaviors

    Nonlinear dynamical systems have been used in many disciplines to model complex behaviors, including biological motor control, robotics, perception, economics, traffic prediction, and neuroscience. While the unexpected emergent behavior of nonlinear systems is often the focus of investigations, it is of equal importance to create goal-directed behavior (e.g., stable locomotion from a system of coupled oscillators under perceptual guidance). Modeling goal-directed behavior with nonlinear systems is, however, rather difficult due to the parameter sensitivity of these systems, their complex phase transitions in response to subtle parameter changes, and the difficulty of analyzing and predicting their long-term behavior; intuition and time-consuming parameter tuning play a major role. This letter presents and reviews dynamical movement primitives, a line of research for modeling attractor behaviors of autonomous nonlinear dynamical systems with the help of statistical learning techniques. The essence of the approach is to start with a simple dynamical system, such as a set of linear differential equations, and transform it into a weakly nonlinear system with prescribed attractor dynamics by means of a learnable autonomous forcing term.
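
    A minimal sketch of a discrete-movement primitive in this spirit appears below; the gain values, basis construction, and function names are common defaults and my own assumptions, not taken verbatim from the letter.

```python
import numpy as np

# Hedged DMP sketch: a damped spring-like transformation system driven by
# a phase-dependent forcing term built from radial basis functions. All
# gains and the basis layout are assumed defaults for illustration.

def dmp_rollout(weights, centers, widths, g, y0, tau=1.0,
                alpha=25.0, beta=25.0 / 4, alpha_x=8.0, dt=0.001, T=1.0):
    y, z, x = y0, 0.0, 1.0        # position, scaled velocity, phase
    traj = []
    for _ in range(int(T / dt)):
        psi = np.exp(-widths * (x - centers) ** 2)                # RBF basis in phase
        f = (psi @ weights) / (psi.sum() + 1e-10) * x * (g - y0)  # learnable forcing
        z += dt / tau * (alpha * (beta * (g - y) - z) + f)        # transformation system
        y += dt / tau * z
        x += dt / tau * (-alpha_x * x)                            # canonical system
        traj.append(y)
    return np.array(traj)

centers = np.exp(-8.0 * np.linspace(0, 1, 20))    # basis centers spaced in phase
widths = 1.0 / (np.diff(centers, append=1e-3) ** 2)
traj = dmp_rollout(np.zeros(20), centers, widths, g=1.0, y0=0.0)
print("converges to goal:", abs(traj[-1] - 1.0) < 1e-2)  # point attractor at g
```

    With the forcing weights set to zero, the rollout reduces to a critically damped spring-damper and the trajectory settles on the goal; learned weights shape the transient while the point attractor at g still guarantees convergence.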