
    Feedback control by online learning an inverse model

    A model, predictor, or error estimator is often used by a feedback controller to control a plant. Creating such a model is difficult when the plant exhibits nonlinear behavior. In this paper, a novel online learning control framework is proposed that requires no explicit knowledge of the plant. The framework uses two learning modules: one creates an inverse model, and the other actually controls the plant. Except for their inputs, they are identical. The inverse model learns from the exploration performed by the not-yet-fully-trained controller, while the actual controller is based on the model learned so far. The proposed framework allows fast online learning of an accurate controller, and the controller can be applied to a broad range of tasks with different dynamic characteristics. We validate this claim by applying our control framework to several control tasks: 1) the heating tank problem (slow nonlinear dynamics); 2) flight pitch control (slow linear dynamics); and 3) the balancing problem of a double inverted pendulum (fast linear and nonlinear dynamics). The results of these experiments show that fast learning and accurate control can be achieved. Furthermore, a comparison is made with some classical control approaches, and observations concerning convergence and stability are made.
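As a rough sketch of the idea (not the paper's actual algorithm or benchmark tasks), the toy example below learns an inverse model of a hypothetical first-order linear plant online; the identical module, fed the reference value in place of the observed next state, acts as the controller, and its early exploration noise supplies the training data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical plant y' = a*y + b*u; the controller never sees a or b.
a, b = 0.9, 0.5
def plant(y, u):
    return a * y + b * u

# Inverse model: u_hat = w . [y, y_next, 1], learned online (normalized LMS).
# The controller is the identical module fed [y, y_ref, 1] instead.
w = np.zeros(3)
lr, y, y_ref = 0.5, 0.0, 1.0
for t in range(5000):
    u = w @ np.array([y, y_ref, 1.0]) + rng.normal(0.0, 0.5)    # explore
    y_next = plant(y, u)
    x = np.array([y, y_next, 1.0])             # observed transition
    w += lr * (u - w @ x) * x / (1.0 + x @ x)  # learn u from (y, y_next)
    y = y_next

# evaluate the learned controller without exploration noise
y_final = 0.0
for _ in range(50):
    y_final = plant(y_final, w @ np.array([y_final, y_ref, 1.0]))
```

After training, the noiseless closed loop drives the output to the reference, illustrating how the inverse model and controller co-train.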

    Memristors for the Curious Outsiders

    We present both an overview and a perspective of recent experimental advances and proposed new approaches to performing computation using memristors. A memristor is a two-terminal passive component whose dynamic resistance depends on an internal parameter. We provide a brief historical introduction, as well as an overview of the physical mechanisms that lead to memristive behavior. This review is meant to guide nonpractitioners in the field of memristive circuits and their connection to machine learning and neural computation.
    Comment: Perspective paper for MDPI Technologies; 43 pages.
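To make "dynamic resistance depending on an internal parameter" concrete, here is a simulation of the idealized HP-style memristor, a common textbook model; the parameter values are illustrative, not taken from the paper:

```python
import numpy as np

# Idealized HP-style memristor: resistance interpolates between R_on and R_off
# according to an internal state x in [0, 1] driven by the current.
# (Illustrative textbook parameters, not values from the paper.)
R_on, R_off = 100.0, 16e3          # ohms
mu_v, D = 1e-14, 1e-8              # dopant mobility (m^2 V^-1 s^-1), thickness (m)
k = mu_v * R_on / D**2             # state-update coefficient

def simulate(v_of_t, dt=1e-4, steps=20000, x0=0.1):
    x, xs = x0, []
    for n in range(steps):
        R = R_on * x + R_off * (1.0 - x)   # resistance set by internal state
        i = v_of_t(n * dt) / R             # Ohm's law at the instantaneous R
        x = float(np.clip(x + k * i * dt, 0.0, 1.0))
        xs.append(x)
    return np.array(xs)

# One period of a 0.5 Hz sine sweeps the state up and back down; the
# state-dependent resistance is what produces the pinched hysteresis loop.
xs = simulate(lambda t: np.sin(2 * np.pi * 0.5 * t))
```

The resistance traced against voltage over one period would show the pinched hysteresis loop that is the memristor's signature.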

    Back-engineering of spiking neural networks parameters

    We consider the deterministic evolution of a time-discretized spiking network of neurons with delayed connection weights, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods for calculating the proper parameters to exactly reproduce a given spike train generated by a hidden (unknown) neural network. This standard problem is known to be NP-hard when delays must be calculated. We propose here a reformulation, now expressed as a linear-programming (LP) problem, which admits an efficient solution. This allows us to "back-engineer" a neural network, i.e., to find out which parameters (here, connection weights), given a set of initial conditions, allow us to simulate the network spike dynamics. More precisely, we make explicit the fact that back-engineering a spike train is a linear (L) problem if the membrane potentials are observed and an LP problem if only spike times are observed, for a gIF model. Numerical robustness is discussed. We also explain how it is the use of a generalized IF neuron model, instead of a leaky IF model, that allows us to derive this algorithm. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a "Hebbian" rule. Going a step further, this paradigm generalizes easily to the design of input-output spike train transformations. This means that we have a practical method to "program" a spiking network, i.e., to find a set of parameters allowing us to exactly reproduce the network output for a given input. Numerical verifications and illustrations are provided.
    Comment: 30 pages, 17 figures, submitted.
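The core construction can be illustrated on a heavily simplified, hypothetical version of the problem: a single discrete-time threshold neuron, no delays, where each observed time step yields one linear inequality on the weights. In place of a full LP solver, the sketch below enforces the inequalities with a perceptron-style relaxation, which is also local and Hebbian-like in the sense the abstract mentions; near-threshold time steps are dropped, echoing the numerical-robustness concern:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical toy instance: a neuron spikes at time t iff V(t) = w . s(t) >= theta,
# with input spike patterns s observed and output spikes z generated by hidden
# weights w_true that we try to recover.
n_in, T, theta, margin = 5, 200, 1.0, 0.2
s = rng.integers(0, 2, size=(T, n_in)).astype(float)
w_true = rng.uniform(-1.0, 1.0, n_in)
V = s @ w_true
keep = np.abs(V - theta) > margin      # drop near-threshold steps (robustness)
z = (V >= theta).astype(int)

# Each kept time step gives one linear inequality on w:
#   spike time:   w . s(t) >= theta + margin/2
#   silent time:  w . s(t) <= theta - margin/2
# A perceptron-style relaxation enforces them; the update is local and
# Hebbian-like, mirroring the paper's remark about its L/LP adjustment.
w, lr = np.zeros(n_in), 0.02
for _ in range(5000):
    violated = False
    for t in np.flatnonzero(keep):
        v = w @ s[t]
        if z[t] == 1 and v < theta + margin / 2:
            w += lr * s[t]; violated = True
        elif z[t] == 0 and v > theta - margin / 2:
            w -= lr * s[t]; violated = True
    if not violated:
        break

# the recovered weights reproduce the observed spike train on kept steps
match = bool(np.array_equal((s[keep] @ w >= theta).astype(int), z[keep]))
```

Because the hidden weights satisfy every enforced inequality with slack, the relaxation is guaranteed to terminate, and any feasible point exactly reproduces the kept spikes; the paper's contribution is the corresponding L/LP formulation for the full gIF model with delays.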

    Theory and Practice of Computing with Excitable Dynamics

    Reservoir computing (RC) is a promising paradigm for time series processing. In this paradigm, the desired output is computed by combining measurements of an excitable system that responds to time-dependent exogenous stimuli. The excitable system is called a reservoir, and measurements of its state are combined using a readout layer to produce a target output. The power of RC is attributed to an emergent short-term memory in dynamical systems and has been analyzed mathematically for both linear and nonlinear dynamical systems. The theory of RC treats only the macroscopic properties of the reservoir, without reference to the underlying medium it is made of. As a result, RC is particularly attractive for building computational devices using emerging technologies whose structure is not exactly controllable, such as self-assembled nanoscale circuits. However, RC has lacked a formal framework for performance analysis and prediction that goes beyond memory properties. To provide such a framework, a mathematical theory of memory and information processing in ordered and disordered linear dynamical systems is developed here, analyzing the optimal readout layer for a given task. The focus of the theory is a standard model of RC, the echo state network (ESN). An ESN consists of a fixed recurrent neural network that is driven by an external signal; the dynamics of the network are then combined linearly with readout weights to produce the desired output. The readout weights are calculated using linear regression. By analyzing the regression equations, the readout weights can be computed using only the statistical properties of the reservoir dynamics, the input signal, and the desired output; that is, they can be calculated from a priori knowledge of the desired function to be computed and the weight matrix of the reservoir. This formulation explicitly depends on the input weights, the reservoir weights, and the statistics of the target function.
This formulation is used to bound the expected error of the system for a given target function. The effects of input-output correlation and of complex network structure in the reservoir on the computational performance of the system are mathematically characterized. Far from the chaotic regime, ordered linear networks exhibit a homogeneous decay of memory across different dimensions, which keeps the input history coherent. As disorder is introduced into the structure of the network, memory decay becomes inhomogeneous along different dimensions, causing decoherence in the input history and degradation in task-solving performance. Close to the chaotic regime, ordered systems lose temporal information in the input history and are therefore unable to solve tasks; however, introducing disorder, and therefore heterogeneous memory decay, preserves the temporal information of the input history and recovers task-solving performance. Thus, for systems at the edge of chaos, disordered structure may enhance temporal information processing. Although the current framework applies only to linear systems, in principle it can be used to describe the properties of physical reservoir computing, e.g., photonic RC using short-coherence-length light.
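A minimal ESN in the spirit described above, with a linear least-squares readout trained on a simple delayed-recall task (sizes and parameters are illustrative, not the thesis's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal echo state network: fixed random recurrent reservoir, linear readout
# trained by least squares.  Sizes and parameters are illustrative.
N, T, washout = 100, 2000, 100
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius 0.9

u = rng.uniform(-0.5, 0.5, T)        # driving input signal
target = np.roll(u, 3)               # task: recall the input from 3 steps back

X = np.zeros((T, N))
x = np.zeros(N)
for t in range(T):
    x = np.tanh(W @ x + W_in * u[t])   # reservoir update
    X[t] = x

# readout weights from ordinary least squares on post-washout states
W_out, *_ = np.linalg.lstsq(X[washout:], target[washout:], rcond=None)
rmse = float(np.sqrt(np.mean((X[washout:] @ W_out - target[washout:]) ** 2)))
```

Only the readout is trained; the reservoir stays fixed, which is exactly why the theory above can characterize performance through the statistics of the reservoir dynamics, the input, and the target.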

    Design of Robust Memristor-Based Neuromorphic Circuits and Systems with Online Learning

    Computing systems capable of performing human-like cognitive tasks have been an area of active research in the recent past. Due to the bottleneck faced by the traditionally adopted von Neumann computing architecture, however, the bio-inspired, neural-network-style computing paradigm has seen a spike in research interest. Physical implementations of this paradigm are known as neuromorphic systems. In recent years, memristor-based neuromorphic systems have gained increased attention from the research community because of the advantages offered by memristors, such as their nanoscale size, nonvolatile nature, and power-efficient programming capability. However, these devices also suffer from a variety of non-ideal behaviors, such as switching-speed and threshold asymmetry and limited resolution and endurance, that can have a detrimental impact on the operation of systems employing them. This work aims to develop device-aware circuits that are robust in the face of such non-ideal properties. A bi-memristor synapse is first presented whose spike-timing-dependent plasticity (STDP) behavior can be precisely controlled on-chip and is hence shown to be robust. A mixed-mode neuron is then introduced that can be used in conjunction with a range of memristors without needing to be custom-designed for each. These circuits are then used together to construct a memristive-crossbar-based system with supervised STDP learning that performs a pattern-recognition application. Learning in the crossbar system is shown to be robust to device-level issues, owing to the robustness of the proposed circuits. Lastly, the proposed circuits are applied to build a liquid state machine based reservoir computing system.
The reservoir used here is a spiking recurrent neural network generated by an evolutionary optimization algorithm, and the readout layer is built with the crossbar system presented earlier, using STDP-based online learning. A generalized framework for the hardware implementation of this system is proposed, and the liquid state machine is shown to be robust against device-level switching issues that would otherwise have impacted learning in the readout layer. This demonstrates that the proposed circuits, together with their learning techniques, can be used to build robust memristor-based neuromorphic systems with online learning.
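The pair-based STDP rule that such a synapse circuit realizes can be sketched as follows; this is the generic textbook form with illustrative constants, while the paper's contribution is making the hardware curve precisely controllable, not this equation:

```python
import numpy as np

# Generic pair-based STDP curve with illustrative constants (not the paper's
# exact circuit characteristic).
A_plus, A_minus, tau = 0.05, 0.05, 20.0   # amplitudes, time constant (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:    # pre before post: potentiation
        return A_plus * np.exp(-dt / tau)
    return -A_minus * np.exp(dt / tau)    # post before pre: depression

def apply_dw(w, dw, w_min=0.0, w_max=1.0):
    """Clip to a bounded range, mimicking a device's finite conductance window."""
    return float(np.clip(w + dw, w_min, w_max))
```

The clipping in `apply_dw` is a crude stand-in for the limited resolution and conductance range of real memristive devices, the kind of non-ideality the proposed circuits are designed to tolerate.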

    Theoretical simulations of dynamical systems for advanced reservoir computing applications

    There are computational problems that are simply too complex to be handled by traditional CMOS technologies, due to practical engineering limitations related either to the fundamental physical behavior of devices at small scales or to energy consumption. The field of unconventional computation has emerged as a response to these challenges. To date, unconventional computation encompasses a plethora of computing frameworks, such as neuromorphic computing, molecular computing, reaction-diffusion computing, and quantum computing, and is ever increasing in scope. This thesis is oriented towards developing sensing applications in the unconventional computing context, an initiative further extended towards novel machine learning applications. The possibility of building intelligent dynamical systems that collect information and analyze it in real time has been investigated theoretically. The basic idea is to expose a dynamical system, over time, to the environment one wishes to analyze. The system operates as an environment-sensitive reservoir computer. Since the state of the reservoir depends on the environment, the information one wishes to retrieve about the environment becomes encoded in the state of the system. The key idea exploited in the thesis is that if the state of the reservoir is highly correlated with the state of the environment, then the information about the environment can be inferred with a modest engineering overhead. A typical dynamical system is assumed to be a network of environment-sensitive elements. Each element can be something simple, but taken together, the elements acquire a collective intelligence that can be harvested. These ideas have been examined theoretically (and verified experimentally) by simulating various networks of environment-sensitive elements: the memristor, the capacitor, the constant-phase element, and the organic field-effect transistor.
The simulations were done in the context of ion sensing, which is an extremely complex, many-body, multi-scale modeling problem.
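A hypothetical sketch of the sensing scheme described above: each element's state relaxes toward a value set by an unknown environment parameter c (think of an ion concentration), and a linear readout trained on known environments infers c from the collective state. All dynamics and parameters here are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented toy model: each of 20 elements relaxes toward a value set by an
# unknown environment parameter c, with its own gain and time constant.
n_elem = 20
gains = rng.uniform(0.5, 2.0, n_elem)
taus = rng.uniform(5.0, 50.0, n_elem)

def settle(c, steps=200, dt=0.5):
    x = np.zeros(n_elem)
    for _ in range(steps):
        x += dt / taus * (np.tanh(gains * c) - x)   # element i senses c
    return x

# calibration: expose the network to known environments, fit a linear readout
cs = rng.uniform(0.0, 1.0, 200)
X = np.array([np.append(settle(c), 1.0) for c in cs])   # states + bias term
w, *_ = np.linalg.lstsq(X, cs, rcond=None)

# inference on an unseen environment
c_hat = float(np.append(settle(0.37), 1.0) @ w)
```

Because the settled state is strongly correlated with c, a simple linear readout suffices to recover the environment parameter, which is the "modest engineering overhead" the thesis alludes to.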