Probabilistic Linear Solvers: A Unifying View
Several recent works have developed a new, probabilistic interpretation for
numerical algorithms solving linear systems in which the solution is inferred
in a Bayesian framework, either directly or by inferring the unknown action of
the matrix inverse. These approaches have typically focused on replicating the
behavior of the conjugate gradient method as a prototypical iterative method.
In this work, surprisingly general conditions for the equivalence of these disparate
methods are presented. We also describe connections between probabilistic
linear solvers and projection methods for linear systems, providing a
probabilistic interpretation of a far more general class of iterative methods.
In particular, this provides such an interpretation of the generalised minimum
residual method. A probabilistic view of preconditioning is also introduced.
These developments unify the literature on probabilistic linear solvers, and
provide foundational connections to the literature on iterative solvers for
linear systems.
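The conjugate gradient method that these probabilistic solvers aim to replicate can be sketched in a few lines. The following is a plain, self-contained implementation of that prototypical iterative method, not of any probabilistic solver from the paper; probabilistic linear solvers are constructed so that, under suitable priors, their posterior means reproduce iterates like these.

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=50):
    """Plain conjugate gradient for a symmetric positive definite A
    (given as a list of rows), starting from x = 0."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x, with x = 0 initially
    p = r[:]                      # first search direction
    rs_old = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs_old / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs_old) * p[i] for i in range(n)]
        rs_old = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]      # symmetric positive definite
b = [1.0, 2.0]
x = conjugate_gradient(A, b)      # exact solution is [1/11, 7/11]
```

In exact arithmetic CG terminates in at most n iterations; for this 2x2 system it converges in two.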
Probabilistic Methods for Model Validation
This dissertation develops a probabilistic method for validation and verification (V&V) of uncertain nonlinear systems. The existing systems-and-control literature on model and controller V&V either deals with linear systems with norm-bounded uncertainties, or considers nonlinear systems in set-based and moment-based frameworks. These existing methods deal with model invalidation or falsification, rather than assessing the quality of a model with respect to measured data. In this dissertation, an axiomatic framework for model validation is proposed in a probabilistically relaxed sense that,
instead of simply invalidating a model, seeks to quantify the "degree of validation".
To develop this framework, novel algorithms for uncertainty propagation have been proposed for both deterministic and stochastic nonlinear systems in continuous time. For the deterministic flow, we compute the time-varying joint probability density functions over the state space by solving the Liouville equation via the method of characteristics. For the stochastic flow, we propose an approximation algorithm that combines the method-of-characteristics solution of the Liouville equation with the Karhunen-Loève expansion of the process noise, thus enabling an indirect solution of the
Fokker-Planck equation, governing the evolution of joint probability density functions. The efficacy of these algorithms is demonstrated for risk assessment in Mars entry-descent-landing, and for nonlinear estimation. Next, the V&V problem is formulated in terms of Monge-Kantorovich optimal transport, naturally giving rise to a metric, called the Wasserstein metric, on the space of probability densities. It is shown that the resulting computation leads to solving a linear program at each time of measurement availability, and computational complexity results for the same are derived. Probabilistic guarantees, in the average and worst-case sense, are given for the validation oracle resulting from the proposed method. The framework is demonstrated for nonlinear robustness verification of F-16 flight controllers, subject to probabilistic uncertainties.
Frequency domain interpretations for the proposed framework are derived for
linear systems, and its connections with existing nonlinear model validation methods
are pointed out. In particular, we show that the asymptotic Wasserstein gap between
two single-output linear time invariant systems excited by Gaussian white noise,
is the difference between their average gains, up to a scaling by the strength of
the input noise. A geometric interpretation of this result allows us to propose an
intrinsic normalization of the Wasserstein gap, which in turn allows us to compare it
with classical systems-theoretic metrics like the ν-gap. Next, it is shown that the optimal
transport map can be used to automatically refine the model. This model refinement
formulation leads to solving a non-smooth convex optimization problem. Examples
are given to demonstrate how proximal operator splitting based computation enables
numerically solving the same. This method is applied for finite-time feedback control
of probability density functions, and for data-driven modeling of dynamical systems.
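The Wasserstein gap used above admits a simple closed form in one special case: for two univariate Gaussian distributions, the 2-Wasserstein distance is the Euclidean distance between their (mean, standard deviation) pairs. The sketch below illustrates only this textbook formula; the numbers are illustrative and are not taken from the dissertation.

```python
import math

def wasserstein2_gaussian(m1, s1, m2, s2):
    """Closed-form 2-Wasserstein distance between the univariate Gaussians
    N(m1, s1^2) and N(m2, s2^2): sqrt((m1 - m2)^2 + (s1 - s2)^2)."""
    return math.sqrt((m1 - m2) ** 2 + (s1 - s2) ** 2)

# Two zero-mean stationary output distributions whose spreads differ,
# e.g. outputs of two systems driven by the same white noise (toy numbers):
d = wasserstein2_gaussian(0.0, 1.0, 0.0, 3.0)
```

For equal means the distance reduces to the difference of standard deviations, which is the kind of "gain gap" behavior the abstract describes for LTI systems under Gaussian white noise.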
Probabilistic Numerics and Uncertainty in Computations
We deliver a call to arms for probabilistic numerical methods: algorithms for
numerical tasks, including linear algebra, integration, optimization and
solving differential equations, that return uncertainties in their
calculations. Such uncertainties, arising from the loss of precision induced by
numerical calculation with limited time or hardware, are important for much
contemporary science and industry. Within applications such as climate science
and astrophysics, the need to make decisions on the basis of computations with
large and complex data has led to a renewed focus on the management of
numerical uncertainty. We describe how several seminal classic numerical
methods can be interpreted naturally as probabilistic inference. We then show
that the probabilistic view suggests new algorithms that can flexibly be
adapted to suit application specifics, while delivering improved empirical
performance. We provide concrete illustrations of the benefits of probabilistic
numeric algorithms on real scientific problems from astrometry and astronomical
imaging, while highlighting open problems with these new algorithms. Finally,
we describe how probabilistic numerical methods provide a coherent framework
for identifying the uncertainty in calculations performed with a combination of
numerical algorithms (e.g. both numerical optimisers and differential equation
solvers), potentially allowing the diagnosis (and control) of error sources in
computations.
Comment: Author Generated Postprint. 17 pages, 4 Figures, 1 Table.
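The simplest example of a numerical method that returns an uncertainty alongside its answer is Monte Carlo integration, where the standard error of the sample mean quantifies the loss of precision due to finite computation. This is a minimal stdlib-only sketch in that spirit, not an algorithm from the paper.

```python
import math
import random

def mc_integrate(f, a, b, n=100_000, seed=0):
    """Monte Carlo estimate of the integral of f on [a, b], returned together
    with a standard-error estimate of the numerical uncertainty."""
    rng = random.Random(seed)
    vals = [f(a + (b - a) * rng.random()) for _ in range(n)]
    mean = sum(vals) / n
    var = sum((v - mean) ** 2 for v in vals) / (n - 1)
    estimate = (b - a) * mean
    stderr = (b - a) * math.sqrt(var / n)   # shrinks like 1/sqrt(n)
    return estimate, stderr

est, err = mc_integrate(lambda x: x * x, 0.0, 1.0)   # true value is 1/3
```

A downstream computation can propagate `err` instead of silently treating `est` as exact, which is the core idea the abstract advocates.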
A probabilistic interpretation of set-membership filtering: application to polynomial systems through polytopic bounding
Set-membership estimation is usually formulated in the context of set-valued
calculus and no probabilistic calculations are necessary. In this paper, we
show that set-membership estimation can be equivalently formulated in the
probabilistic setting by employing sets of probability measures. Inference in
set-membership estimation is thus carried out by computing expectations with
respect to the updated set of probability measures P as in the probabilistic
case. In particular, it is shown that inference can be performed by solving a
particular semi-infinite linear programming problem, which is a special case of
the truncated moment problem in which only the zero-th order moment is known
(i.e., the support). By writing the dual of the above semi-infinite linear
programming problem, it is shown that, if the nonlinearities in the measurement
and process equations are polynomial and if the bounding sets for initial
state, process and measurement noises are described by polynomial inequalities,
then an approximation of this semi-infinite linear programming problem can
efficiently be obtained by using the theory of sum-of-squares polynomial
optimization. We then derive a smart greedy procedure to compute a polytopic
outer-approximation of the true membership-set, by computing the minimum-volume
polytope that outer-bounds the set that includes all the means computed with
respect to P.
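The "only the zero-th moment (support) is known" setting has a simple finite-dimensional shadow: the extreme points of the set of probability measures on a support set are Dirac masses, so bounds on an expectation reduce to optimizing the integrand over the support. For a linear integrand and a polytope given by its vertices, a vertex scan suffices. This is a toy illustration of that principle only; it does not reproduce the paper's semi-infinite LP or sum-of-squares machinery.

```python
def expectation_bounds_linear(c, vertices):
    """Bounds on E[c . x] over ALL probability measures supported on
    conv(vertices). Extreme measures are Dirac masses, and a linear
    function on a polytope attains its extremes at vertices, so
    evaluating c . v at each vertex v gives exact bounds."""
    values = [sum(ci * vi for ci, vi in zip(c, v)) for v in vertices]
    return min(values), max(values)

# Unit square as the support set, objective c . x = x1 + 2*x2:
lo, hi = expectation_bounds_linear([1.0, 2.0],
                                   [(0.0, 0.0), (1.0, 0.0),
                                    (0.0, 1.0), (1.0, 1.0)])
```

Any measure supported on the square yields an expectation in [lo, hi], and both endpoints are attained by Dirac masses at the corresponding vertices.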
Probabilistic Interpretation of Linear Solvers
This manuscript proposes a probabilistic framework for algorithms that
iteratively solve unconstrained linear problems with a positive definite
system matrix. The goal is to replace the point estimates returned by existing
methods with a Gaussian posterior belief over the elements of the matrix
inverse, which can be used to estimate errors. Recent probabilistic interpretations
of the secant family of quasi-Newton optimization algorithms are extended.
Combined with properties of the conjugate gradient algorithm, this leads to
uncertainty-calibrated methods with very limited cost overhead over conjugate
gradients, a self-contained novel interpretation of the quasi-Newton and
conjugate gradient algorithms, and a foundation for new nonlinear optimization
methods.
Comment: final version, in press at SIAM J Optimization.
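The secant family of quasi-Newton methods mentioned above maintains an inverse-Hessian estimate H that is updated to satisfy the secant equation H_new y = s after observing a step s and a gradient difference y. The sketch below shows one standard member of that family (the BFGS inverse update) in pure Python; the paper's contribution, deriving such updates as posterior means of a Gaussian inference problem, is not reproduced here.

```python
def bfgs_inverse_update(H, s, y):
    """BFGS update of an inverse-Hessian estimate H (list of rows) so that
    the secant equation H_new @ y = s holds exactly:
      H_new = V H V^T + rho * s s^T,  V = I - rho * s y^T,  rho = 1/(s . y)."""
    n = len(s)
    rho = 1.0 / sum(si * yi for si, yi in zip(s, y))
    V = [[(1.0 if i == j else 0.0) - rho * s[i] * y[j]
          for j in range(n)] for i in range(n)]
    VH = [[sum(V[i][k] * H[k][j] for k in range(n))
           for j in range(n)] for i in range(n)]
    VHVt = [[sum(VH[i][k] * V[j][k] for k in range(n))
             for j in range(n)] for i in range(n)]
    return [[VHVt[i][j] + rho * s[i] * s[j] for j in range(n)]
            for i in range(n)]

H0 = [[1.0, 0.0], [0.0, 1.0]]   # initial inverse estimate (identity)
s = [1.0, 0.5]                  # step taken
y = [2.0, 1.5]                  # observed change in gradient
H1 = bfgs_inverse_update(H0, s, y)
check = [sum(H1[i][j] * y[j] for j in range(2)) for i in range(2)]
```

The secant equation holds by construction: `check` equals `s` up to floating-point rounding.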
Abstraction-Based Data-Driven Control
Our world is undergoing a paradigm shift in technology policy, often referred to as the Cyber-Physical Revolution or Industry 4.0.
Nowadays, Cyber-Physical Systems are ubiquitous in modern control engineering, including automobiles, aircraft, building control systems, chemical plants, transportation systems, and so on. The interactions of the physical processes with the machines that control them are becoming increasingly complex, and in a growing number of situations either the model of the system is unavailable, or it is too difficult to describe accurately. Therefore, embedded computers need to "learn" the optimal way to control the systems by the mere observation of data.
A promising approach to controlling these complex systems is to discretize the different variables, thus transforming the model into a
combinatorial problem on a finite-state automaton, which is called an abstraction of the real system.
Until now, this approach, often referred to as "abstraction-based control" or "symbolic control", has not been proved useful beyond small academic examples.
In this project I aim to show the potential of this approach by implementing a novel data-driven approach based on a probabilistic interpretation of the discretization error.
I have developed a toolbox (github.com/davidedl-ucl/master-thesis) implementing this kind of control, with the aim of integrating it into the Dionysos software (github.com/dionysos-dev).
With this software, I succeeded in efficiently solving problems for nonlinear control systems, such as path planning for an autonomous vehicle and a cart-pole balancing problem.
The long-term objective of this project is to improve the methods implemented in my current software by employing a variable discretization of the state space and to consider complex specifications such as LTL formulas.
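The core step of abstraction-based control, partitioning the state space into cells and recording which cells the dynamics can reach from each cell, can be shown in a toy 1-D setting. The map, domain, and cell count below are hypothetical, and this sketch is unrelated to the Dionysos implementation; it only illustrates how a continuous system is over-approximated by a finite-state transition relation.

```python
def abstract_1d(f, lo, hi, n_cells):
    """Finite-state abstraction of a monotone map f on [lo, hi]:
    state i is the cell [lo + i*h, lo + (i+1)*h], with a transition
    i -> j whenever the image of cell i overlaps cell j. Because f is
    monotone, the image of a cell is the interval [f(left), f(right)],
    so the reachable cells form a contiguous range."""
    h = (hi - lo) / n_cells
    transitions = {}
    for i in range(n_cells):
        a, b = lo + i * h, lo + (i + 1) * h
        fa, fb = sorted((f(a), f(b)))
        # indices of cells overlapping [fa, fb], clipped to the domain
        j_min = max(0, min(n_cells - 1, int((fa - lo) / h)))
        j_max = max(0, min(n_cells - 1, int((fb - lo) / h)))
        transitions[i] = list(range(j_min, j_max + 1))
    return transitions

# Hypothetical contracting dynamics x -> 0.5*x + 0.1 on [0, 1], 4 cells:
T = abstract_1d(lambda x: 0.5 * x + 0.1, 0.0, 1.0, 4)
```

Any controller synthesized on the finite automaton `T` is sound for the continuous system because the abstraction over-approximates reachability; finer cells (or the variable discretization mentioned above) reduce its conservatism.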