Implicit regularization and momentum algorithms in nonlinear adaptive control and prediction
Stable concurrent learning and control of dynamical systems is the subject of
adaptive control. Despite being an established field with many practical
applications and a rich theory, much of the development in adaptive control for
nonlinear systems revolves around a few key algorithms. By exploiting strong
connections between classical adaptive nonlinear control techniques and recent
progress in optimization and machine learning, we show that there exists
considerable untapped potential in algorithm development for both adaptive
nonlinear control and adaptive dynamics prediction. We first introduce
first-order adaptation laws inspired by natural gradient descent and mirror
descent. We prove that when there are multiple dynamics consistent with the
data, these non-Euclidean adaptation laws implicitly regularize the learned
model. The local geometry imposed during learning may thus be used to select,
from the many parameter vectors that achieve perfect tracking or prediction,
those with desired properties such as sparsity. We apply this result to
regularized dynamics predictor and observer design, and as concrete examples
consider Hamiltonian systems, Lagrangian systems, and recurrent neural
networks. We subsequently develop a variational formalism based on the Bregman
Lagrangian to define adaptation laws with momentum applicable to linearly
parameterized systems and to nonlinearly parameterized systems satisfying
monotonicity or convexity requirements. We show that the Euler-Lagrange
equations for the Bregman Lagrangian lead to natural gradient and mirror
descent-like adaptation laws with momentum, and we recover their first-order
analogues in the infinite friction limit. We illustrate our analyses with
simulations demonstrating our theoretical results.
Comment: 37 pages, 3 figures. Final version, accepted for publication in Neural Computation.
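The implicit-regularization phenomenon can be illustrated outside the control setting with a toy interpolation problem: on an underdetermined least-squares problem, plain gradient descent selects the minimum-Euclidean-norm interpolant, while mirror descent with an entropy potential (exponentiated gradient) is biased toward sparse solutions. A minimal NumPy sketch of this general principle — not the paper's adaptation laws, and with illustrative sizes and step sizes:

```python
import numpy as np

# Implicit regularization of non-Euclidean updates, in a toy interpolation
# setting: many parameter vectors fit the data, and the update geometry
# decides which one is selected. All dimensions and step sizes are
# illustrative choices, not taken from the paper.

rng = np.random.default_rng(0)
A = rng.standard_normal((10, 50))    # 10 equations, 50 unknowns: many interpolants
x_true = np.zeros(50)
x_true[:3] = 1.0                     # sparse, nonnegative ground truth
y = A @ x_true

# Euclidean gradient descent from zero stays in the row space of A and
# therefore converges to the minimum-norm interpolant A^+ y.
x_gd = np.zeros(50)
for _ in range(20000):
    x_gd -= 1e-2 * A.T @ (A @ x_gd - y)

# Mirror descent with the (unnormalized) entropy potential: multiplicative
# updates keep x positive, and the induced geometry favors sparse solutions.
x_md = np.full(50, 1e-3)
for _ in range(100000):
    x_md *= np.exp(-2e-3 * A.T @ (A @ x_md - y))

# Both fit the data, but the selected parameter vectors differ:
# x_gd spreads mass over many coordinates, x_md concentrates it on a few.
```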
Nonlinear Model-Based Control for Neuromuscular Electrical Stimulation
Neuromuscular electrical stimulation (NMES) is a technology in which skeletal muscles are externally stimulated by electrodes to help restore functionality to human limbs affected by motor neuron disorders. This dissertation is concerned with model-based feedback control of the NMES quadriceps muscle group-knee joint dynamics. A class of nonlinear controllers is presented based on various levels of model structure and uncertainty. The two main control techniques used throughout this work are backstepping control and Lyapunov stability theory.
In the first control strategy, we design a model-based nonlinear control law for the system with exactly known passive mechanical dynamics that ensures asymptotic tracking. This first design serves as a stepping stone for the subsequent strategies, in which we consider that uncertainties exist. In the next four control strategies, techniques for adaptive control of nonlinearly parameterized systems are applied to handle the unknown physical constant parameters that appear nonlinearly in the model. By exploiting the Lipschitzian nature or the concavity/convexity of the nonlinearly parameterized functions in the model, we design two adaptive controllers and two robust adaptive controllers that ensure practical tracking.
The next set of controllers is based on an NMES model that includes the uncertain muscle contractile mechanics. In this case, neural network-based controllers are designed to deal with this uncertainty. We consider voltage inputs both without and with saturation; for the latter, a Nussbaum gain is applied to handle the input saturation.
The last two control strategies are based on a more refined NMES model that accounts for the muscle activation dynamics. The main challenge here is that the activation state is unmeasurable. The first design uses a model-based observer that directly estimates the unmeasured state for a certain activation model. The second introduces a nonlinear filter with an adaptive control law to handle parametric uncertainty in the activation dynamics. Both the observer- and filter-based partial-state feedback controllers ensure asymptotic tracking.
Throughout this dissertation, the performance of the proposed control schemes is illustrated via computer simulations.
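The certainty-equivalence structure underlying these designs can be sketched in the simpler linearly parameterized setting (the dissertation handles parameters entering nonlinearly, which is harder). The toy plant, gains, and reference below are illustrative choices, not taken from the text:

```python
import numpy as np

# Textbook Lyapunov-based adaptive tracking for a scalar plant
#   xdot = theta * f(x) + u
# with an unknown constant parameter theta. The control cancels the
# *estimated* nonlinearity, and a Lyapunov-based update law adjusts the
# estimate online. All numbers are illustrative.

theta_true = 2.0               # unknown to the controller
f = lambda x: x * abs(x)       # known regressor nonlinearity
dt, T = 1e-3, 20.0
k, gamma = 5.0, 10.0           # feedback gain and adaptation gain
x, theta_hat = 1.0, 0.0

for i in range(int(T / dt)):
    t = i * dt
    xd, xd_dot = np.sin(t), np.cos(t)        # reference trajectory
    e = x - xd
    u = -theta_hat * f(x) + xd_dot - k * e   # cancel the estimate, track xd
    theta_hat += dt * gamma * f(x) * e       # update from V = e^2/2 + err^2/(2*gamma)
    x += dt * (theta_true * f(x) + u)        # forward-Euler plant step

# Closed-loop error dynamics: edot = -k*e + (theta_true - theta_hat)*f(x),
# so V is non-increasing and the tracking error converges.
```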
Reduction of dimension for nonlinear dynamical systems
We consider reduction of dimension for nonlinear dynamical systems. We
demonstrate that in some cases, one can reduce a nonlinear system of equations
into a single equation for one of the state variables, and this can be useful
for computing the solution when using a variety of analytical approaches. In
the case where this reduction is possible, we employ differential elimination
to obtain the reduced system. While analytical, the approach is algorithmic,
and is implemented in symbolic software such as MAPLE or SageMath.
In other cases, the reduction cannot be performed strictly in terms of
differential operators, and one obtains integro-differential operators, which
may still be useful. In either case, one can use the reduced equation to both
approximate solutions for the state variables and perform chaos diagnostics
more efficiently than could be done for the original higher-dimensional system,
as well as to construct Lyapunov functions which help in the large-time study
of the state variables. A number of chaotic and hyperchaotic dynamical systems
are used as examples in order to motivate the approach.
Comment: 16 pages, no figures.
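In the simplest case, where one equation can be solved for a state variable outright, the elimination can be carried out directly in a computer algebra system. A minimal sketch in SymPy (chosen here purely for illustration; the paper works with MAPLE and SageMath):

```python
import sympy as sp

# Hand-worked differential elimination on a simple two-state linear system
#   x' = y,   y' = -x - y
# Solving the first equation for y and substituting into the second
# collapses the system to a single second-order ODE in x alone.

t = sp.symbols('t')
x = sp.Function('x')(t)
y = sp.Function('y')(t)

eq1 = sp.Eq(x.diff(t), y)             # x' = y
eq2 = sp.Eq(y.diff(t), -x - y)        # y' = -x - y

y_expr = sp.solve(eq1, y)[0]          # y = x'
reduced = eq2.subs(y, y_expr).doit()  # x'' = -x - x'

# 'reduced' is one equation for the single state variable x; y is then
# recovered from x by differentiation, as in the paper's reductions.
```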
Distributed online estimation of biophysical neural networks
In this work, we propose a distributed adaptive observer for a class of
networked systems inspired by biophysical conductance-based neural network
models. Neural systems learn by adjusting intrinsic and synaptic weights in a
distributed fashion, with neuronal membrane voltages carrying information from
neighbouring neurons in the network. Using contraction analysis, we show that
this learning principle can be used to design an adaptive observer based on a
decentralized learning rule that greatly reduces the number of observer states
required for consistent exponential convergence of parameter estimates. This
novel design is relevant for biological, biomedical, and neuromorphic
applications.
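The core mechanism — output-error injection plus a Lyapunov-based update of an intrinsic parameter — can be sketched in a scalar caricature: estimating an unknown leak conductance from a measured membrane voltage. This is a toy single-neuron analogue, not the paper's distributed network design; all gains and the input are illustrative:

```python
import numpy as np

# Adaptive observer for a scalar "membrane" vdot = -g*v + I(t) with
# unknown constant conductance g, driven by the measured voltage v.
# The update law comes from V = e^2/2 + (g - g_hat)^2 / (2*gamma),
# which gives Vdot = -k*e^2 along trajectories.

g_true = 0.8
dt, T = 1e-3, 50.0
k, gamma = 2.0, 5.0            # output-injection and adaptation gains
v, v_hat, g_hat = 0.0, 0.0, 0.0

for i in range(int(T / dt)):
    I = 1.0 + 0.5 * np.sin(i * dt)          # persistently varying input current
    e = v - v_hat                           # measured output error
    v_hat += dt * (-g_hat * v + I + k * e)  # observer copies the model structure
    g_hat += dt * (-gamma * v * e)          # Lyapunov-based adaptation law
    v += dt * (-g_true * v + I)             # true membrane dynamics

# With a persistently exciting input, both estimates converge:
# v_hat -> v and g_hat -> g_true.
```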
Transverse exponential stability and applications
We investigate how the following properties are related to each other:
(i) a manifold is "transversally" exponentially stable; (ii) the
"transverse" linearization along any solution in the manifold is
exponentially stable; (iii) there exists a field of positive definite
quadratic forms whose restrictions to the directions transversal to the
manifold are decreasing along the flow. We illustrate their relevance
with the study of exponential
incremental stability. Finally, we apply these results to two control design
problems, nonlinear observer design and synchronization. In particular, we
provide necessary and sufficient conditions for the design of nonlinear
observers and synchronizers with an exponential convergence property.
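Property (ii) can be illustrated numerically with synchronization of two diffusively coupled Lorenz systems: the transverse linearization of the error e = x1 - x2 along the manifold x1 = x2 is approximately edot = (J(t) - 2k*I)e, so a coupling gain k exceeding half the matrix measure of the Jacobian on the attractor renders the manifold transversally exponentially stable. A sketch with a deliberately conservative gain (the gain and system are illustrative, not taken from the paper):

```python
import numpy as np

# Two diffusively coupled Lorenz systems: for large enough coupling k,
# the synchronization manifold x1 = x2 is transversally exponentially
# stable, and the transverse error collapses to zero.

def lorenz(s):
    x, y, z = s
    return np.array([10.0 * (y - x), x * (28.0 - z) - y, x * y - 8.0 / 3.0 * z])

k, dt = 50.0, 1e-3             # conservative coupling gain, Euler step
x1 = np.array([1.0, 1.0, 20.0])
x2 = x1 + 0.01                 # start slightly off the sync manifold

for _ in range(20000):         # simulate 20 seconds
    c = k * (x2 - x1)          # diffusive coupling on every state
    x1, x2 = x1 + dt * (lorenz(x1) + c), x2 + dt * (lorenz(x2) - c)

# The transverse error ||x1 - x2|| decays exponentially even though each
# individual trajectory remains chaotic.
```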
k-Contraction in a Generalized Lurie System
We derive a sufficient condition for k-contraction in a generalized Lurie
system, that is, the feedback connection of a nonlinear dynamical system and a
memoryless nonlinear function. For k = 1, this reduces to a sufficient
condition for standard contraction. For k = 2, this condition implies that
every bounded solution of the closed-loop system converges to an equilibrium,
which is not necessarily unique. We demonstrate the theoretical results by
analyzing k-contraction in a biochemical control circuit with nonlinear
dissipation terms.
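A sufficient condition for standard contraction of a Lurie system xdot = A x + B phi(C x) + u(t) can be checked numerically with matrix measures: if phi = tanh has slope in [0, 1], negativity of mu_2(A + s*B*C) at the endpoint slopes s = 0 and s = 1 suffices, since mu_2 is convex and its argument is affine in s. A toy sketch (the system matrices are illustrative, not from the paper):

```python
import numpy as np

# Matrix-measure certificate for contraction of a Lurie system, then a
# simulation showing the defining property: trajectories from different
# initial conditions, driven by the same input, converge to each other.

A = np.array([[-3.0, 1.0], [0.0, -2.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[0.0, 1.0]])

def mu2(M):
    """Matrix measure induced by the 2-norm: largest eigenvalue of sym(M)."""
    return np.linalg.eigvalsh((M + M.T) / 2).max()

# Endpoint slopes of tanh are 0 and 1; both measures negative => contraction.
assert mu2(A) < 0 and mu2(A + B @ C) < 0

def step(x, t, dt=1e-3):
    u = np.array([np.sin(t), 0.0])                     # common exogenous input
    return x + dt * (A @ x + (B @ np.tanh(C @ x)).ravel() + u)

xa, xb = np.array([2.0, -1.0]), np.array([-2.0, 1.5])  # different initial states
for i in range(10000):                                 # 10 seconds
    t = i * 1e-3
    xa, xb = step(xa, t), step(xb, t)

# ||xa - xb|| has shrunk by roughly exp(-rate * T), where the rate is
# bounded by the worst-case matrix measure above.
```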
Convex Optimization-based Controller Design for Stochastic Nonlinear Systems using Contraction Analysis
This paper presents an optimal feedback tracking controller for a class of Itô stochastic nonlinear systems, the design of which involves recasting a nonlinear system equation into a convex combination of multiple non-unique State-Dependent Coefficient (SDC) models. Its feedback gain and controller parameters are found by solving a convex optimization problem that minimizes an upper bound on the steady-state tracking error. Multiple SDC parametrizations are utilized to provide design flexibility, to mitigate the effects of stochastic noise, and to ensure that the system is controllable. Incremental stability of this controller is studied using stochastic contraction analysis, and it is proven that the controlled trajectory converges exponentially to the desired trajectory with a non-vanishing error due to the linear matrix inequality state-dependent algebraic Riccati equation constraint. A discrete-time version of stochastic contraction analysis with respect to a state- and time-dependent metric is also presented. A simulation demonstrates the superiority of the proposed optimal feedback controller over a known exponentially stabilizing nonlinear controller and a PID controller.
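The non-uniqueness of SDC models that this design exploits is easy to see on a toy vector field: any smooth f with f(0) = 0 admits infinitely many factorizations f(x) = A(x)x, and any convex combination of valid factorizations is again valid. A small sketch (hypothetical example, not the paper's system):

```python
import numpy as np

# Non-uniqueness of State-Dependent Coefficient (SDC) factorizations:
# for f(x) = (x2, -x1 + x1*x2), two different A(x) both satisfy
# f(x) = A(x) x, and so does every convex combination of them. This is
# the design freedom used to mitigate noise and preserve controllability.

def f(x):
    return np.array([x[1], -x[0] + x[0] * x[1]])

def A1(x):                      # place the nonlinearity in the (2,1) entry
    return np.array([[0.0, 1.0], [-1.0 + x[1], 0.0]])

def A2(x):                      # place it in the (2,2) entry instead
    return np.array([[0.0, 1.0], [-1.0, x[0]]])

rng = np.random.default_rng(1)
for _ in range(5):
    x = rng.standard_normal(2)
    lam = rng.uniform()         # convex combination weight
    A = lam * A1(x) + (1 - lam) * A2(x)
    assert np.allclose(A1(x) @ x, f(x))
    assert np.allclose(A2(x) @ x, f(x))
    assert np.allclose(A @ x, f(x))   # still a valid SDC model
```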