Data-driven computation of invariant sets of discrete time-invariant black-box systems
We consider the problem of computing the maximal invariant set of
discrete-time black-box nonlinear systems without analytic dynamical models.
Under the assumption that the system is asymptotically stable, the maximal
invariant set coincides with the domain of attraction. A data-driven framework
relying on the observation of trajectories is proposed to compute
almost-invariant sets, which are invariant everywhere except on a small
subset. Based on these observations, scenario optimization problems are
formulated and solved. We show that probabilistic invariance guarantees on the
almost-invariant sets can be established. To get explicit expressions of such
sets, a set identification procedure is designed with a verification step that
provides inner and outer approximations in a probabilistic sense. The proposed
data-driven framework is illustrated by several numerical examples.
Comment: A shorter version with the title "Scenario-based set invariance
verification for black-box nonlinear systems" is published in the IEEE
Control Systems Letters (L-CSS).
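As a generic illustration of the scenario idea (not the authors' algorithm), the sketch below samples one-step scenarios from a hypothetical black-box system and shrinks a quadratic sublevel set until no sampled scenario leaves it. The dynamics in `step`, the quadratic template V(x) = ||x||², and all numeric choices are assumptions made for the sketch:

```python
import numpy as np

def step(x):
    # Hypothetical black-box dynamics: a stable linear core plus a quadratic
    # term that breaks invariance far from the origin. In the data-driven
    # setting this map is only accessible through simulation.
    A = np.array([[0.8, 0.2], [-0.1, 0.7]])
    return A @ x + 0.25 * np.array([x[0] ** 2, 0.0])

def scenario_level(n_samples=2000, box=2.0, seed=0):
    """Largest level c (over the sampled scenarios) such that every sampled
    state in {x : ||x||^2 <= c} stays in that set after one step."""
    rng = np.random.default_rng(seed)
    xs = rng.uniform(-box, box, size=(n_samples, 2))
    v = np.einsum("ij,ij->i", xs, xs)          # V(x) = ||x||^2
    fx = np.array([step(x) for x in xs])
    vn = np.einsum("ij,ij->i", fx, fx)         # V(f(x))
    c = v.max()
    while True:
        bad = (v <= c) & (vn > c)              # scenarios that leave the set
        if not bad.any():
            return c
        c = v[bad].min() * (1 - 1e-9)          # exclude the worst scenario

c = scenario_level()
```

Scenario theory then bounds, with high confidence, the measure of unseen states that could still escape the returned sublevel set, which is what makes it only *almost* invariant.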
Robust Adaptive Control Barrier Functions: An Adaptive & Data-Driven Approach to Safety (Extended Version)
A new framework is developed for control of constrained nonlinear systems
with structured parametric uncertainties. Forward invariance of a safe set is
achieved through online parameter adaptation and data-driven model estimation.
The new adaptive data-driven safety paradigm is merged with a recent adaptive
control algorithm for systems nominally contracting in closed-loop. This
unification is more general than other safety controllers as closed-loop
contraction does not require the system to be invertible or in a particular form.
Additionally, the approach is less expensive than nonlinear model predictive
control as it does not require a full desired trajectory, but rather only a
desired terminal state. The approach is illustrated on the pitch dynamics of an
aircraft with uncertain nonlinear aerodynamics.
Comment: Added aCBF non-Lipschitz example and discussion on approach
implementation.
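The paper's adaptive, data-driven machinery is involved, but the building block such frameworks extend, a control barrier function (CBF) safety filter, has a closed-form solution when there is a single scalar constraint. The sketch below is a standard (non-adaptive) CBF-QP filter under assumed dynamics (a planar single integrator) and an assumed barrier h(x) = 1 - ||x||², not the paper's aCBF construction:

```python
import numpy as np

def cbf_filter(x, u_des, alpha=1.0):
    """Minimal CBF-QP safety filter for a single integrator xdot = u,
    keeping the state in the unit disk via h(x) = 1 - ||x||^2.
    Solves min ||u - u_des||^2 s.t. dh/dt >= -alpha*h in closed form."""
    h = 1.0 - x @ x
    grad_h = -2.0 * x                   # dh/dt = grad_h . u (drift f = 0 here)
    a, b = grad_h, -alpha * h           # constraint: a . u >= b
    if a @ u_des >= b or not a.any():
        return u_des                    # nominal input is already safe
    return u_des + (b - a @ u_des) / (a @ a) * a  # minimal correction

x = np.array([0.9, 0.0])
u_safe = cbf_filter(x, np.array([1.0, 0.0]))  # nominal input pushes outward
```

The filter leaves safe inputs untouched and otherwise projects the nominal input onto the constraint boundary, which is the "minimally invasive" behavior CBF controllers are valued for.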
Machine Learning for Fluid Mechanics
The field of fluid mechanics is rapidly advancing, driven by unprecedented
volumes of data from field measurements, experiments and large-scale
simulations at multiple spatiotemporal scales. Machine learning offers a wealth
of techniques to extract information from data that could be translated into
knowledge about the underlying fluid mechanics. Moreover, machine learning
algorithms can augment domain knowledge and automate tasks related to flow
control and optimization. This article presents an overview of the history,
current developments, and emerging opportunities of machine learning for fluid
mechanics. It outlines fundamental machine learning methodologies and discusses
their uses for understanding, modeling, optimizing, and controlling fluid
flows. The strengths and limitations of these methods are addressed from the
perspective of scientific inquiry that considers data as an inherent part of
modeling, experimentation, and simulation. Machine learning provides a powerful
information processing framework that can enrich, and possibly even transform,
current lines of fluid mechanics research and industrial applications.
Comment: To appear in the Annual Reviews of Fluid Mechanics, 202
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory are central themes in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now-standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one stop shop toward the understanding of the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of $\ell_2$-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
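For the ℓ1 prior, the forward-backward scheme mentioned in (iii) specializes to the classical iterative soft-thresholding algorithm (ISTA). A minimal self-contained sketch on a synthetic sparse-recovery problem (problem sizes and the λ value are arbitrary choices for illustration):

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1 -- the 'backward' step."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def forward_backward(A, y, lam, n_iter=500):
    """ISTA: minimize 0.5*||Ax - y||^2 + lam*||x||_1 by alternating a
    gradient ('forward') step on the smooth data-fit term with a prox
    ('backward') step on the nonsmooth l1 regularizer."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1/L, L = Lipschitz const of grad
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * A.T @ (A @ x - y), step * lam)
    return x

# Synthetic compressed-sensing instance: 3-sparse signal, 40 measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
y = A @ x_true
x_hat = forward_backward(A, y, lam=0.1)
```

With the step size set to 1/L, each iteration is guaranteed not to increase the composite objective, which is the convergence property the review analyzes in far greater generality.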