Neural Networks: Training and Application to Nonlinear System Identification and Control
This dissertation investigates training neural networks for system identification and classification. The research makes two main contributions:

1. Reducing the number of hidden-layer nodes using a feedforward component. This work reduces the number of hidden-layer nodes and the training time of neural networks, making them better suited to online identification and control applications, by adding a parallel feedforward component. Implementing the feedforward component with a wavelet neural network or an echo state network provides good models for nonlinear systems. The wavelet neural network with the feedforward component, combined with a model predictive controller, can reliably identify and control a seismically isolated structure during an earthquake; the network model provides the predictions used by the model predictive controller. Simulations of a 5-story seismically isolated structure with conventional lead-rubber bearings showed significant reductions in all response amplitudes for both near-field (pulse) and far-field ground motions, including reduced deformations along with a corresponding reduction in acceleration response. The controller effectively regulated the apparent stiffness at the isolation level. The approach is also applied to the online identification and control of an unmanned vehicle. Lyapunov theory is used to prove the stability of the wavelet neural network and the model predictive controller.

2. Training neural networks using trajectory-based optimization approaches. Training a neural network is a nonlinear, non-convex optimization problem for determining the network weights. Traditional training algorithms can be inefficient and can become trapped in local minima. Two global optimization approaches are adapted to train neural networks and avoid the local-minima problem, and Lyapunov theory is used to prove the stability of the proposed methodology and its convergence in the presence of measurement errors. The first approach transforms the constraint-satisfaction problem into an unconstrained optimization. The constraints define a quotient gradient system (QGS) whose stable equilibrium points are local minima of the unconstrained optimization; the QGS is integrated to locate local minima, and the local minimum with the best generalization performance is chosen as the optimal solution. The second approach uses the QGS together with a projected gradient system (PGS), a nonlinear dynamical system defined from the optimization problem that searches the components of the feasible region for solutions. Lyapunov theory is used to prove the stability of the PGS and the QGS, including their stability in the presence of measurement noise.
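A minimal sketch of the quotient gradient system idea described in the second contribution, under illustrative assumptions: the toy constraints c(w) = 0 (fitting a one-node network to two data points), the finite-difference Jacobian, the step sizes, and the initial points below are invented for illustration and are not the dissertation's actual formulation. The QGS w' = -J(w)^T c(w) is integrated from several starting points and the best local minimum of ||c(w)||^2 is kept.

    # Hedged sketch of integrating a quotient gradient system (QGS) to find
    # local minima of a constraint-satisfaction problem ||c(w)||^2 -> min.
    # Toy constraints, step size, and initial points are illustrative assumptions.
    import numpy as np

    def c(w):
        # Example constraints c(w) = 0: fit y = tanh(w0*x + w1) to two points.
        x = np.array([0.5, -1.0])
        y = np.array([0.3, -0.6])
        return np.tanh(w[0] * x + w[1]) - y

    def jacobian(w, eps=1e-6):
        # Central finite-difference Jacobian of c at w.
        J = np.zeros((len(c(w)), len(w)))
        for j in range(len(w)):
            dw = np.zeros(len(w))
            dw[j] = eps
            J[:, j] = (c(w + dw) - c(w - dw)) / (2 * eps)
        return J

    def integrate_qgs(w0, dt=0.05, steps=5000):
        # Euler integration of the QGS  w' = -J(w)^T c(w); trajectories settle
        # at stable equilibria, i.e. local minima of ||c(w)||^2.
        w = np.array(w0, dtype=float)
        for _ in range(steps):
            w -= dt * jacobian(w).T @ c(w)
        return w

    # Integrate from several initial points and keep the best local minimum
    # (the dissertation selects by generalization performance instead).
    starts = ([1.0, 0.0], [-1.0, 0.5], [0.2, -0.3])
    best = min((integrate_qgs(w0) for w0 in starts),
               key=lambda w: np.linalg.norm(c(w)))
    print(best, np.linalg.norm(c(best)))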
Implicit regularization and momentum algorithms in nonlinear adaptive control and prediction
Stable concurrent learning and control of dynamical systems is the subject of
adaptive control. Despite being an established field with many practical
applications and a rich theory, much of the development in adaptive control for
nonlinear systems revolves around a few key algorithms. By exploiting strong
connections between classical adaptive nonlinear control techniques and recent
progress in optimization and machine learning, we show that there exists
considerable untapped potential in algorithm development for both adaptive
nonlinear control and adaptive dynamics prediction. We first introduce
first-order adaptation laws inspired by natural gradient descent and mirror
descent. We prove that when there are multiple dynamics consistent with the
data, these non-Euclidean adaptation laws implicitly regularize the learned
model. Local geometry imposed during learning thus may be used to select
parameter vectors - out of the many that will achieve perfect tracking or
prediction - for desired properties such as sparsity. We apply this result to
regularized dynamics predictor and observer design, and as concrete examples
consider Hamiltonian systems, Lagrangian systems, and recurrent neural
networks. We subsequently develop a variational formalism based on the Bregman
Lagrangian to define adaptation laws with momentum applicable to linearly
parameterized systems and to nonlinearly parameterized systems satisfying
monotonicity or convexity requirements. We show that the Euler-Lagrange
equations for the Bregman Lagrangian lead to natural gradient and mirror
descent-like adaptation laws with momentum, and we recover their first-order
analogues in the infinite friction limit. We illustrate our analyses with
simulations demonstrating our theoretical results.
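As a hedged illustration of the first-order non-Euclidean adaptation laws discussed above (not the paper's exact formulation): for a scalar plant x' = a.Y(x) + u tracked against a reference x_d, adaptation can be run on a dual variable z = grad psi(a_hat), so the choice of mirror map shapes which of the many error-cancelling parameter vectors is selected. The regressor Y, the exponential mirror map (which assumes nonnegative parameters), the reference trajectory, and all gains below are illustrative assumptions.

    # Hedged sketch of a mirror-descent-like adaptation law for a scalar
    # plant  x' = a.Y(x) + u  with tracking error e = x - x_d.
    # Regressor, mirror map, reference, and gains are illustrative assumptions.
    import numpy as np

    def Y(x):
        # Assumed regressor features (not taken from the paper).
        return np.array([x, x**2, np.sin(x), 1.0])

    a_true = np.array([0.0, 0.5, 0.0, 0.2])   # sparse "unknown" dynamics
    k, gamma, dt = 2.0, 5.0, 1e-3

    x = 0.0
    z = np.full(4, -3.0)                      # dual variable z = grad psi(a_hat)
    for step in range(200_000):
        t = step * dt
        xd, xd_dot = np.sin(t), np.cos(t)     # reference trajectory
        a_hat = np.exp(z)                     # primal estimate via the mirror map
        e = x - xd
        u = xd_dot - k * e - a_hat @ Y(x)     # certainty-equivalent tracking control
        x += dt * (a_true @ Y(x) + u)         # simulate the plant (Euler step)
        z += dt * gamma * e * Y(x)            # adaptation runs in the dual space

    # Tracking error converges; parameter convergence additionally needs
    # persistent excitation, and the mirror map biases a_hat toward sparsity.
    print("final |e|:", abs(x - np.sin(200_000 * dt)), "a_hat:", np.round(np.exp(z), 3))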
Reservoir Computing: computation with dynamical systems
In the field of machine learning, systems are studied that can learn from examples. Within this field, recurrent neural networks form an important subclass. These networks are abstract models of how parts of the brain operate. They are capable of solving very complex temporal problems, but they are generally very difficult to train. Recently, a number of related methods have been proposed that eliminate this training problem. These methods are collectively referred to as Reservoir Computing. Reservoir Computing combines the impressive computational power of recurrent neural networks with a simple training method. Moreover, these training methods turn out not to be limited to neural networks, but can be applied to generic dynamical systems. Why these systems work well, and which properties determine their performance, is however not yet clear.
This dissertation investigates the dynamical properties of generic Reservoir Computing systems. It is shown experimentally that the Reservoir Computing idea is also applicable to non-neural networks of dynamical nodes. Furthermore, a measure is proposed that can be used to quantify the dynamical regime of a reservoir. Finally, an adaptation rule is introduced that, for a wide range of reservoir types, can tune the reservoir dynamics to the desired dynamical regime. The techniques described in this dissertation are demonstrated on several academic and engineering applications.
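A minimal sketch of the Reservoir Computing principle summarized above, under illustrative assumptions: a fixed random reservoir (never trained) is driven by the input, and only a linear readout is fit, here by ridge regression on a toy one-step-ahead sine-prediction task. The reservoir size, spectral radius, washout length, and regularization are assumed values, not those used in the dissertation.

    # Hedged sketch of an echo state network: a fixed random recurrent
    # reservoir plus a trained linear readout (ridge regression).
    # Sizes, scalings, and the toy prediction task are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)
    n_res, n_in = 200, 1

    # Fixed random reservoir, rescaled to spectral radius 0.9
    # (a common heuristic for obtaining the echo state property).
    W = rng.standard_normal((n_res, n_res))
    W *= 0.9 / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))

    # Toy task: one-step-ahead prediction of a sine wave.
    u = np.sin(0.2 * np.arange(2000))[:, None]
    target = u[1:]                        # predict u(t+1) from the state at time t

    # Run the reservoir; this recurrent part is never trained.
    states = np.zeros((len(u) - 1, n_res))
    x = np.zeros(n_res)
    for t in range(len(u) - 1):
        x = np.tanh(W @ x + W_in @ u[t])
        states[t] = x

    # Train only the linear readout with ridge regression (after a washout).
    washout, lam = 100, 1e-6
    S, T = states[washout:], target[washout:]
    W_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ T)
    print("readout MSE:", np.mean((S @ W_out - T) ** 2))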