    How to Learn and Generalize From Three Minutes of Data: Physics-Constrained and Uncertainty-Aware Neural Stochastic Differential Equations

    We present a framework and algorithms to learn controlled dynamics models using neural stochastic differential equations (SDEs) -- SDEs whose drift and diffusion terms are both parametrized by neural networks. We construct the drift term to leverage a priori physics knowledge as inductive bias, and we design the diffusion term to represent a distance-aware estimate of the uncertainty in the learned model's predictions -- it matches the system's underlying stochasticity when evaluated on states near those from the training dataset, and it predicts highly stochastic dynamics when evaluated on states beyond the training regime. The proposed neural SDEs can be evaluated quickly enough for use in model predictive control algorithms, or they can be used as simulators for model-based reinforcement learning. Furthermore, they make accurate predictions over long time horizons, even when trained on small datasets that cover limited regions of the state space. We demonstrate these capabilities through experiments on simulated robotic systems, as well as by using them to model and control a hexacopter's flight dynamics: A neural SDE trained using only three minutes of manually collected flight data results in a model-based control policy that accurately tracks aggressive trajectories that push the hexacopter's velocity and Euler angles to nearly double the maximum values observed in the training dataset.
    Comment: Final submission to CoRL 202
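    A minimal, illustrative sketch (not the authors' code) of the structure this abstract describes: a drift built from a physics prior plus a learned residual, and a diffusion term that inflates the predicted noise with distance from the training data. The network sizes, the physics prior, and the distance heuristic are all assumptions for demonstration.

        # Toy neural SDE with physics-informed drift and distance-aware diffusion,
        # integrated by Euler-Maruyama. Names and hyperparameters are hypothetical.
        import torch
        import torch.nn as nn

        class NeuralSDE(nn.Module):
            def __init__(self, state_dim, ctrl_dim, train_states):
                super().__init__()
                # Learned residual correction on top of an a priori physics model.
                self.drift_nn = nn.Sequential(
                    nn.Linear(state_dim + ctrl_dim, 64), nn.Tanh(),
                    nn.Linear(64, state_dim))
                # Base noise scale matched to the system's on-distribution stochasticity.
                self.log_sigma = nn.Parameter(torch.zeros(state_dim))
                # Training states anchor the distance-aware uncertainty estimate.
                self.register_buffer("train_states", train_states)

            def physics_prior(self, x, u):
                return -x  # placeholder: any known rigid-body/actuator model goes here

            def drift(self, x, u):
                return self.physics_prior(x, u) + self.drift_nn(torch.cat([x, u], dim=-1))

            def diffusion(self, x):
                # Predicted noise grows with distance to the nearest training state,
                # so rollouts beyond the training regime read as highly stochastic.
                d = torch.cdist(x, self.train_states).min(dim=-1).values
                return torch.exp(self.log_sigma) * (1.0 + d.unsqueeze(-1))

            def step(self, x, u, dt):
                # One Euler-Maruyama step: x' = x + f(x, u) dt + g(x) dW.
                dw = torch.randn_like(x) * dt ** 0.5
                return x + self.drift(x, u) * dt + self.diffusion(x) * dw

    In an MPC or model-based RL loop, step would be called repeatedly to roll out candidate trajectories, with the inflated off-distribution diffusion discouraging the planner from regions the model knows nothing about.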

    State estimation for coupled uncertain stochastic networks with missing measurements and time-varying delays: The discrete-time case

    This paper is concerned with the problem of state estimation for a class of discrete-time coupled uncertain stochastic complex networks with missing measurements and time-varying delays. The parameter uncertainties are assumed to be norm-bounded and enter both the network state and the network output. The stochastic Brownian motions affect not only the coupling term of the network but also the overall network dynamics. Nonlinear terms satisfying the usual Lipschitz conditions appear in both the state and measurement equations. Through available output measurements described by a binary switching sequence that obeys a conditional probability distribution, we aim to design a state estimator for the network states such that, for all admissible parameter uncertainties and time-varying delays, the dynamics of the estimation error are guaranteed to be globally exponentially stable in the mean square. By employing the Lyapunov functional method combined with stochastic analysis, several delay-dependent criteria are established that ensure the existence of the desired estimator gains; the explicit expression of these gains is then characterized in terms of the solution to certain linear matrix inequalities (LMIs). Two numerical examples illustrate the effectiveness of the proposed estimator design schemes.
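    The following toy sketch (not the paper's derivation) shows the estimator structure the abstract implies: a Luenberger-type update in which measurement arrival follows a Bernoulli switching sequence. The system matrices and gain are placeholders; in the paper the gain is obtained by solving the delay-dependent LMIs.

        # Hypothetical discrete-time estimator with missing measurements.
        import numpy as np

        rng = np.random.default_rng(0)
        n, m = 4, 2
        A = 0.9 * np.eye(n)                      # assumed nominal network dynamics
        C = rng.standard_normal((m, n))          # assumed output map
        K = 0.1 * rng.standard_normal((n, m))    # placeholder gain (from LMIs in the paper)
        p_arrive = 0.8                           # conditional probability that y_k arrives

        x, x_hat = rng.standard_normal(n), np.zeros(n)
        for k in range(200):
            x = A @ x + 0.01 * rng.standard_normal(n)   # stochastic network dynamics
            gamma = rng.random() < p_arrive             # binary switching sequence
            # The innovation term is applied only when a measurement actually arrives.
            if gamma:
                x_hat = A @ x_hat + K @ (C @ x - C @ x_hat)
            else:
                x_hat = A @ x_hat
        print("final estimation error:", np.linalg.norm(x - x_hat))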

    Spiking Neural Networks for Inference and Learning: A Memristor-based Design Perspective

    On metrics of density and power efficiency, neuromorphic technologies have the potential to surpass mainstream computing technologies in tasks where real-time functionality, adaptability, and autonomy are essential. While algorithmic advances in neuromorphic computing are proceeding successfully, the potential of memristors to improve neuromorphic computing has not yet borne fruit, primarily because they are often used as drop-in replacements for conventional memory. However, interdisciplinary approaches anchored in machine learning theory suggest that multifactor plasticity rules, which match neural and synaptic dynamics to device capabilities, can take better advantage of memristor dynamics and stochasticity. Furthermore, such plasticity rules generally achieve much higher performance than classical Spike-Timing-Dependent Plasticity (STDP) rules. This chapter reviews recent developments in learning with spiking neural network models and their possible implementation in memristor-based hardware.
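    As a concrete illustration of the contrast the chapter draws, here is a toy comparison (assumed constants, not device-calibrated values) between a classical pair-based STDP update and a three-factor rule in which a global modulatory signal gates the weight change, one common form of multifactor plasticity:

        # Pair-based STDP vs. a gated three-factor update (illustrative only).
        import numpy as np

        tau, a_plus, a_minus = 20.0, 0.01, 0.012   # ms; assumed learning rates

        def stdp_dw(dt):
            # dt = t_post - t_pre: potentiate causal pairs, depress anti-causal ones.
            return a_plus * np.exp(-dt / tau) if dt > 0 else -a_minus * np.exp(dt / tau)

        def three_factor_dw(dt, modulator):
            # Same pairwise eligibility, but the update is gated by a third factor
            # (e.g. reward or error); on memristive hardware such a gate could also
            # serve as a probabilistic write-enable that exploits device stochasticity.
            return modulator * stdp_dw(dt)

        print(stdp_dw(5.0), three_factor_dw(5.0, modulator=0.3))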

    Channel noise effects on neural synchronization

    Synchronization in neural networks is strongly tied to the implementation of cognitive processes, but abnormal neuronal synchronization has been linked to a number of brain disorders such as epilepsy and schizophrenia. Here we examine the effects of channel noise on the synchronization of small Hodgkin-Huxley neuronal networks. The principal feature of a Hodgkin-Huxley neuron is the existence of protein channels that transition between open and closed states with voltage-dependent rate constants. The Hodgkin-Huxley model assumes infinitely many channels, so fluctuations in the number of open channels do not affect the voltage. Real neurons, however, have finitely many channels, which leads to fluctuations in the membrane voltage and modifies spike timing, which may in turn produce large changes in the degree of synchronization. We demonstrate that under mild conditions, neurons in the network reach a steady-state synchronization level that depends only on the number of neurons in the network; channel noise affects only the time it takes to reach this level.
    Comment: 7 figures
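    A minimal sketch of the finite-channel effect described above, under a simple two-state channel assumption with fixed rates (in the full Hodgkin-Huxley model the rates are voltage dependent): with N channels the open fraction fluctuates with amplitude roughly 1/sqrt(N), vanishing in the infinite-channel limit.

        # Binomial channel-number fluctuations for a two-state ion channel.
        import numpy as np

        rng = np.random.default_rng(1)
        alpha, beta, dt = 0.1, 0.125, 0.01   # opening/closing rates (1/ms), step (ms)

        def open_fraction_std(n_channels, steps=5000):
            n_open = n_channels // 2
            trace = np.empty(steps)
            for t in range(steps):
                # Each closed channel opens with prob alpha*dt; each open channel
                # closes with prob beta*dt.
                opened = rng.binomial(n_channels - n_open, alpha * dt)
                closed = rng.binomial(n_open, beta * dt)
                n_open += opened - closed
                trace[t] = n_open / n_channels
            return trace.std()

        for N in (100, 1000, 10000):
            print(N, open_fraction_std(N))   # fluctuations shrink as N grows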

    Deterministic networks for probabilistic computing

    Neural-network models of high-level brain functions such as memory recall and reasoning often rely on the presence of stochasticity. Most of these models assume that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. However, both in vivo and in silico, the number of noise sources is limited by space and bandwidth constraints, so neurons in large networks usually need to share noise sources. Here, we show that the resulting shared-noise correlations can significantly impair the performance of stochastic network models. We demonstrate that this problem can be overcome by using deterministic recurrent neural networks as sources of uncorrelated noise, exploiting the decorrelating effect of inhibitory feedback. Consequently, even a single recurrent network of a few hundred neurons can serve as a natural noise source for large ensembles of functional networks, each comprising thousands of units. We successfully apply the proposed framework to a diverse set of binary-unit networks with different dimensionalities and entropies, as well as to a network reproducing handwritten digits with distinct predefined frequencies. Finally, we show that the same design transfers to functional networks of spiking neurons.
    Comment: 22 pages, 11 figures
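    A toy sketch of the core idea under stated assumptions: a deterministic chaotic rate network is used here as a stand-in (the paper itself uses inhibition-dominated spiking/binary networks, whose inhibitory feedback does the decorrelating). Its fluctuations show weak pairwise correlations, so individual units can be tapped as effectively private noise sources by downstream functional networks.

        # Deterministic recurrent network as a pseudo-noise source (illustrative).
        import numpy as np

        rng = np.random.default_rng(2)
        N, g, dt, steps = 300, 1.8, 0.1, 3000
        # Gain g > 1 puts the deterministic rate dynamics in the chaotic regime.
        W = g * rng.standard_normal((N, N)) / np.sqrt(N)

        x = rng.standard_normal(N)
        acts = np.empty((steps, N))
        for t in range(steps):
            x = x + dt * (-x + W @ np.tanh(x))   # no stochastic input anywhere
            acts[t] = np.tanh(x)

        # Small off-diagonal correlations: units look like independent noise sources.
        corr = np.corrcoef(acts[500:].T)         # discard the initial transient
        off = corr[~np.eye(N, dtype=bool)]
        print("mean |pairwise correlation|:", np.abs(off).mean())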