
    Incremental construction of LSTM recurrent neural network

    Long Short-Term Memory (LSTM) is a recurrent neural network that uses structures called memory blocks to allow the network to remember significant events far back in the input sequence, in order to solve long-time-lag tasks where other RNN approaches fail. Throughout this work we have performed experiments using LSTM networks extended with growing abilities, which we call GLSTM. Four methods of training growing LSTM networks have been compared. These methods include cascade and fully connected hidden layers, as well as two different levels of freezing previous weights in the cascade case. GLSTM has been applied to a forecasting problem in a biomedical domain, where the input/output behavior of five controllers of the Central Nervous System has to be modelled. We have compared growing LSTM results against other neural network approaches and against our previous work applying conventional LSTM to the task at hand.
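    As a rough illustration of the cascade-with-freezing variant described above, the following PyTorch sketch grows an LSTM by appending a new block that sees the raw input plus the previous block's output, after freezing the previously trained weights. All names and sizes are hypothetical assumptions; this is not the authors' GLSTM implementation.

```python
# Hypothetical sketch of a cascade "growing" LSTM with weight freezing
# (illustrative only; not the authors' GLSTM code).
import torch
import torch.nn as nn

class GrowingLSTM(nn.Module):
    def __init__(self, n_inputs, n_outputs, hidden_size=8):
        super().__init__()
        self.n_inputs, self.hidden_size = n_inputs, hidden_size
        self.blocks = nn.ModuleList(
            [nn.LSTM(n_inputs, hidden_size, batch_first=True)])
        self.readout = nn.Linear(hidden_size, n_outputs)

    def grow(self, freeze_previous=True):
        """Append a cascade block fed by the raw input plus the last block's output."""
        if freeze_previous:
            for p in self.parameters():            # freeze everything trained so far
                p.requires_grad_(False)
        self.blocks.append(nn.LSTM(self.n_inputs + self.hidden_size,
                                   self.hidden_size, batch_first=True))
        self.readout = nn.Linear(self.hidden_size, self.readout.out_features)

    def forward(self, x):                           # x: (batch, time, n_inputs)
        h, _ = self.blocks[0](x)
        for block in self.blocks[1:]:
            h, _ = block(torch.cat([x, h], dim=-1))
        return self.readout(h)
```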

    Machine-learning nonstationary noise out of gravitational-wave detectors

    Signal extraction out of background noise is a common challenge in high-precision physics experiments, where the measurement output is often a continuous data stream. To improve the signal-to-noise ratio of the detection, witness sensors are often used to independently measure background noises and subtract them from the main signal. If the noise coupling is linear and stationary, optimal techniques already exist and are routinely implemented in many experiments. However, when the noise coupling is nonstationary, linear techniques often fail or are suboptimal. Inspired by the properties of the background noise in gravitational-wave detectors, this work develops a novel algorithm to efficiently characterize and remove nonstationary noise couplings, provided there exist witnesses of the noise source and of the modulation. In this work, the algorithm is described in its most general formulation, and its efficiency is demonstrated with examples from the data of the Advanced LIGO gravitational-wave observatory, where we could obtain an improvement of the detector's gravitational-wave reach without introducing any bias in the source parameter estimation.
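    The core idea can be illustrated with a small NumPy sketch: if one witness channel records the noise and another tracks the slow modulation of its coupling, regressing the target on the product of the two (the bilinear term) removes the nonstationary contribution. All signals below are synthetic, and the single least-squares regression is a deliberately simplified stand-in for the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(100_000) / 1024.0                        # ~98 s at 1024 Hz
signal = 1e-2 * np.sin(2 * np.pi * 35.0 * t)           # stand-in astrophysical signal
noise = rng.standard_normal(t.size)                    # noise seen by a witness sensor
modulation = 1.0 + 0.5 * np.sin(2 * np.pi * 0.05 * t)  # slow drift of the coupling
target = signal + 0.3 * modulation * noise             # detector output

# Regress the target on the static and the modulated (bilinear) witness terms.
X = np.column_stack([noise, modulation * noise])
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)
cleaned = target - X @ coeffs

print("residual RMS before:", np.std(target - signal))
print("residual RMS after: ", np.std(cleaned - signal))
```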

    Active disturbance cancellation in nonlinear dynamical systems using neural networks

    A proposal for the use of a time-delay CMAC neural network for disturbance cancellation in nonlinear dynamical systems is presented. Appropriate modifications to the CMAC training algorithm are derived which allow convergent adaptation for a variety of secondary signal paths. Analytical bounds on the maximum learning gain are presented which guarantee convergence of the algorithm and provide insight into the necessary reduction in learning gain as a function of the system parameters. The effectiveness of the algorithm is evaluated through mathematical analysis, simulation studies, and experimental application of the technique on an acoustic duct laboratory model.
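    A toy sketch of the general idea follows: a CMAC-like tile coder driven by a tapped delay line of the reference signal produces a cancelling signal, and the weights of the active tiles are adapted from the residual error with a small, normalized learning gain. The tiling, hashing, and gain values are illustrative assumptions, not the modified training algorithm or the learning-gain bounds derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
N_TILES, N_WEIGHTS, GAIN = 32, 256, 0.05
proj = rng.standard_normal((N_TILES, 2))        # random tilings over a 2-tap input
w = np.zeros(N_WEIGHTS)

def active_tiles(x):
    """Hash each tile's quantized projection of x to a weight index."""
    return np.floor(4.0 * proj @ x).astype(int) % N_WEIGHTS

t = np.arange(20_000) / 1000.0
disturbance = np.sin(2 * np.pi * 3.0 * t) + 0.1 * rng.standard_normal(t.size)
errs = []
for k in range(2, t.size):
    x = disturbance[k - 2:k][::-1]              # reference tapped delay line
    idx = active_tiles(x)
    control = w[idx].sum()                      # CMAC output = sum of active weights
    error = disturbance[k] + control            # residual at the error sensor
    w[idx] -= GAIN * error / idx.size           # normalized adaptive update
    errs.append(error)

print("mean |error|, first vs last 1000 steps:",
      np.mean(np.abs(errs[:1000])), np.mean(np.abs(errs[-1000:])))
```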

    Active Noise Feedback Control Using a Neural Network


    Recursive backpropagation algorithm applied to a globally recurrent neural network

    In general, recursive neural networks can yield a smaller structure than purely feedforward neural networks, in the same way that infinite impulse response (IIR) filters can replace longer finite impulse response (FIR) filters. This thesis presents a new adaptive algorithm that trains recursive neural networks. The algorithm is based on least mean square (LMS) algorithms designed for other adaptive architectures, and it overcomes several of the limitations of current recursive neural network algorithms, such as epoch training and the requirement for large amounts of memory storage. To demonstrate the new algorithm, adaptive architectures constructed with a recursive neural network and trained with the new algorithm are applied to four adaptive systems, and the results are compared to adaptive systems constructed with other adaptive filters. In these examples, the new algorithm shows the ability to perform linear and nonlinear transformations and, in some cases, significantly outperforms the other adaptive filters. The thesis also discusses possible avenues for future exploration of adaptive systems constructed of recursive neural networks.
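    A minimal sketch of the flavour of such an algorithm is given below: a small output-feedback network is adapted sample by sample with an instantaneous LMS-style gradient that ignores the feedback path in the chain rule, so no epoch buffering or large memory is required. The network, target system, and step size are assumptions for illustration, not the thesis's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
N_HIDDEN, MU = 8, 0.01
W_in = 0.1 * rng.standard_normal((N_HIDDEN, 2))   # inputs: [x(t), y(t-1)]
w_out = np.zeros(N_HIDDEN)

# A nonlinear recursive target system to identify online.
x = rng.standard_normal(5000)
d = np.zeros_like(x)
for k in range(1, x.size):
    d[k] = 0.6 * d[k - 1] + np.tanh(x[k])

y_prev, errs = 0.0, []
for k in range(x.size):
    u = np.array([x[k], y_prev])                  # current input plus fed-back output
    z = np.tanh(W_in @ u)
    y = w_out @ z
    e = d[k] - y
    # Instantaneous-gradient (LMS-style) updates; the feedback path is ignored.
    w_out += MU * e * z
    W_in += MU * e * np.outer(w_out * (1 - z**2), u)
    y_prev = y
    errs.append(e)

print("MSE, first vs last 500 samples:",
      np.mean(np.square(errs[:500])), np.mean(np.square(errs[-500:])))
```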

    Homeostatic plasticity and external input shape neural network dynamics

    In vitro and in vivo spiking activity clearly differ. Whereas networks in vitro develop strong bursts separated by periods of very little spiking activity, in vivo cortical networks show continuous activity. This is puzzling considering that both networks presumably share similar single-neuron dynamics and plasticity rules. We propose that the defining difference between in vitro and in vivo dynamics is the strength of external input. In vitro, networks are virtually isolated, whereas in vivo every brain area receives continuous input. We analyze a model of spiking neurons in which the input strength, mediated by spike-rate homeostasis, determines the characteristics of the dynamical state. In more detail, our analytical and numerical results on various network topologies show consistently that under increasing input, homeostatic plasticity generates distinct dynamic states, from bursting, to close-to-critical, reverberating, and irregular states. This implies that the dynamic state of a neural network is not fixed but can readily adapt to the input strength. Indeed, our results match experimental spike recordings in vitro and in vivo: the in vitro bursting behavior is consistent with a state generated by very low network input (< 0.1%), whereas in vivo activity suggests that on the order of 1% of recorded spikes are input-driven, resulting in reverberating dynamics. Importantly, this predicts that one can abolish the ubiquitous bursts of in vitro preparations, and instead impose dynamics comparable to in vivo activity, by exposing the system to weak long-term stimulation, thereby opening new paths to establish an in vivo-like assay in vitro for basic as well as neurological studies.
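    The qualitative mechanism can be sketched with a simple stochastic branching model in which homeostatic regulation adjusts the recurrent gain toward a target rate: weak external input then drives the gain close to criticality (large, bursty fluctuations), while strong input keeps it low (irregular, input-driven activity). This is a minimal illustration of the mechanism under assumed parameter values, not the paper's spiking-network model.

```python
import numpy as np

def simulate(h, target_rate=1.0, steps=100_000, eta=1e-4, seed=3):
    """Branching network with homeostatic regulation of the recurrent gain m."""
    rng = np.random.default_rng(seed)
    m, a, activity = 0.5, 0, []
    for _ in range(steps):
        a = rng.poisson(m * a + h)            # recurrent drive + external input h
        m = min(max(m + eta * (target_rate - a), 0.0), 2.0)
        activity.append(a)
    act = np.array(activity[steps // 2:])     # discard the transient
    return m, act

for h in (0.001, 0.1, 1.0):
    m, act = simulate(h)
    print(f"input h={h:<6} gain m={m:.3f}  rate={act.mean():.2f}  "
          f"Fano factor={act.var() / act.mean():.1f}")
```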

    Recurrent cerebellar architecture solves the motor-error problem

    Current views of cerebellar function have been heavily influenced by the models of Marr and Albus, who suggested that the climbing fibre input to the cerebellum acts as a teaching signal for motor learning. It is commonly assumed that this teaching signal must be motor error (the difference between the actual and the correct motor command), but this approach requires complex neural structures to estimate unobservable motor error from its observed sensory consequences. We have proposed elsewhere a recurrent decorrelation control architecture in which Marr-Albus models learn without requiring motor error. Here, we prove convergence for this architecture and demonstrate important advantages for the modular control of systems with multiple degrees of freedom. These results are illustrated by modelling adaptive plant compensation for the three-dimensional vestibulo-ocular reflex. This provides a functional role for recurrent cerebellar connectivity, which may be a generic anatomical feature of projections between regions of cerebral and cerebellar cortex.
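    A one-dimensional sketch of the decorrelation rule in a recurrent loop is shown below: a single cerebellar weight scales an efference copy of the motor command, which is fed back into the command, and the weight adapts to decorrelate the sensory error (retinal slip) from that copy, so motor error is never computed. The scalar plant, gains, and within-sample loop solution are illustrative assumptions, not the paper's three-dimensional VOR model.

```python
import numpy as np

rng = np.random.default_rng(4)
plant_gain = 0.6        # unknown eye-plant gain the loop must compensate
w, beta = 0.0, 0.05     # cerebellar weight and learning rate

for k in range(5000):
    head = rng.standard_normal()          # head-velocity sample
    brainstem = -head                     # fixed feedforward VOR command
    command = brainstem / (1.0 - w)       # recurrent loop: command = brainstem + w*command
    eye = plant_gain * command            # simplified static plant
    slip = head + eye                     # sensory error (retinal slip)
    w -= beta * slip * command            # decorrelation learning rule

print("learned cerebellar weight:", round(w, 3), " ideal:", 1 - plant_gain)
```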