14,970 research outputs found
Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems
Development of robust dynamical systems and networks, such as autonomous
aircraft systems capable of accomplishing complex missions, faces challenges
due to dynamically evolving uncertainties: model uncertainty, the necessity to
operate in hostile, cluttered urban environments, and the distributed and
dynamic nature of communication and computation resources. Model-based robust
design is difficult because of the complexity of the hybrid dynamic models,
which include continuous vehicle dynamics and discrete models of computation
and communication, and because of the size of the problem. We overview recent
advances in methodology and tools to model, analyze, and design robust
autonomous aerospace systems operating in uncertain environments, with emphasis
on efficient uncertainty quantification and robust design, using case studies
of missions that include model-based target tracking and search, and trajectory
planning in uncertain urban environments. To show that the methodology applies
to uncertain dynamical systems in general, we also present examples of applying
the new methods to efficient uncertainty quantification of energy usage in
buildings and to stability assessment of interconnected power networks.
Combining Homotopy Methods and Numerical Optimal Control to Solve Motion Planning Problems
This paper presents a systematic approach for computing local solutions to
motion planning problems in non-convex environments using numerical optimal
control techniques. It extends the range of use of state-of-the-art numerical
optimal control tools to problem classes where these tools have previously not
been applicable. Today these problems are typically solved using motion
planners based on randomized or graph search. The general principle is to
define a homotopy that perturbs, or preferably relaxes, the original problem to
an easily solved problem. By combining a Sequential Quadratic Programming (SQP)
method with a homotopy approach that gradually transforms the problem from a
relaxed one to the original one, practically relevant locally optimal solutions
to the motion planning problem can be computed. The approach is demonstrated in
motion planning problems in challenging 2D and 3D environments, where the
presented method significantly outperforms a state-of-the-art open-source
optimizing sampling-based planner commonly used as a benchmark.
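The homotopy idea described above can be illustrated with a deliberately small sketch, not the paper's formulation: a toy 2D problem with one circular obstacle, where a parameter t scales the obstacle from nothing (the relaxed, easily solved problem) back to full size (the original problem), warm-starting an SQP solver at each step. All names and values here are made up for illustration; scipy's SLSQP stands in for the SQP method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance (illustrative, not from the paper): plan N free 2D waypoints
# from start to goal around one circular obstacle. The homotopy parameter t
# scales the obstacle radius, so t=0 is the relaxed (obstacle-free) problem
# and t=1 is the original one.
start, goal = np.array([0.0, 0.0]), np.array([4.0, 0.0])
center, radius = np.array([2.0, 0.1]), 1.0
N = 8  # number of free interior waypoints

def objective(z):
    pts = np.vstack([start, z.reshape(N, 2), goal])
    return np.sum(np.diff(pts, axis=0) ** 2)  # sum of squared segment lengths

def solve(t, z0):
    # Clearance constraint on each waypoint, tightened as t grows
    cons = {"type": "ineq",
            "fun": lambda z: np.linalg.norm(z.reshape(N, 2) - center, axis=1)
                             - t * radius}
    # SLSQP is scipy's SQP method; it is warm-started from z0
    return minimize(objective, z0, method="SLSQP", constraints=cons).x

# Straight-line initial guess: feasible for t=0, infeasible for t=1
z = np.linspace(start, goal, N + 2)[1:-1].ravel()
for t in np.linspace(0.0, 1.0, 6):  # gradually un-relax the problem
    z = solve(t, z)

path = np.vstack([start, z.reshape(N, 2), goal])
clearance = np.linalg.norm(path - center, axis=1).min()
print(f"min waypoint clearance: {clearance:.2f} (radius {radius})")
```

In this sketch only the waypoints are constrained; a practical implementation would also constrain the segments and the vehicle dynamics, and would handle homotopy steps where the warm-started solve fails.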
Information-Theoretic Stochastic Optimal Control via Incremental Sampling-based Algorithms
This paper considers optimal control of dynamical systems which are
represented by nonlinear stochastic differential equations. It is well-known
that the optimal control policy for this problem can be obtained as a function
of a value function that satisfies a nonlinear partial differential equation,
namely, the Hamilton-Jacobi-Bellman equation. This nonlinear PDE must be solved
backwards in time, and this computation is intractable for large scale systems.
Under certain assumptions, and after applying a logarithmic transformation, an
alternative characterization of the optimal policy can be given in terms of a
path integral. Path Integral (PI) based control methods have recently been
shown to provide elegant solutions to a broad class of stochastic optimal
control problems. One of the implementation challenges with this formalism is
the computation of the expectation of a cost functional over the trajectories
of the unforced dynamics. Computing such expectation over trajectories that are
sampled uniformly may induce numerical instabilities due to the exponentiation
of the cost. Therefore, sampling of low-cost trajectories is essential for the
practical implementation of PI-based methods. In this paper, we use incremental
sampling-based algorithms to sample useful trajectories from the unforced
system dynamics, and make a novel connection between Rapidly-exploring Random
Trees (RRTs) and information-theoretic stochastic optimal control. We show the
results from the numerical implementation of the proposed approach to several
examples.
Comment: 18 pages
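The numerical-instability point made above can be seen in a minimal Monte Carlo sketch, assuming a toy 1D system dx = u dt + sigma dW with quadratic running state cost and the standard PI temperature lambda = sigma^2; every name and parameter here is illustrative, not from the paper.

```python
import numpy as np

# Toy path-integral estimate (illustrative assumptions throughout)
rng = np.random.default_rng(0)
sigma, dt, T, M, q = 1.0, 0.01, 100, 5000, 5.0
lam = sigma ** 2  # temperature in the exponentiated-cost transform

# Sample M trajectories of the *unforced* dynamics and accumulate path costs
x = np.full(M, 1.0)          # common initial state
S = np.zeros(M)
for _ in range(T):
    x = x + sigma * np.sqrt(dt) * rng.standard_normal(M)
    S += q * x ** 2 * dt     # running state cost

# Value estimate via the exponentiated cost (the path-integral expectation)
w = np.exp(-S / lam)
value = -lam * np.log(w.mean())

# Effective sample size: with uniform (unforced) sampling, a few low-cost
# trajectories dominate the exponentiated weights -- the instability the
# abstract points to, and the motivation for sampling low-cost trajectories.
ess = w.sum() ** 2 / (w ** 2).sum()
print(f"value ~ {value:.2f}, effective samples {ess:.0f} of {M}")
```

The effective sample size falling well below M is exactly why biasing the sampler toward low-cost trajectories (here, via RRT-style incremental sampling) matters in practice.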
Semidefinite Relaxations for Stochastic Optimal Control Policies
Recent results in the study of the Hamilton Jacobi Bellman (HJB) equation
have led to the discovery of a formulation of the value function as a linear
Partial Differential Equation (PDE) for stochastic nonlinear systems with a
mild constraint on their disturbances. This has yielded promising directions
for research in the planning and control of nonlinear systems. This work
proposes a new method for obtaining approximate solutions to these linear
stochastic optimal control (SOC) problems. A candidate polynomial with variable
coefficients is proposed as the solution to the SOC problem. A Sum of Squares
(SOS) relaxation is then applied to the partial differential constraints, leading
to a hierarchy of semidefinite relaxations with improving sub-optimality gap.
The resulting approximate solutions are shown to be guaranteed over- and
under-approximations of the optimal value function.
Comment: Preprint. Accepted to the American Control Conference (ACC) 2014 in
Portland, Oregon. 7 pages, colo
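The linear-PDE structure and the two-sided bounds described above can be sketched in a generic standard form, under the usual assumption tying the control penalty to the noise covariance; the symbols below are the conventional ones for this setting, not taken from the paper. With the logarithmic transformation $\Psi(x) = \exp(-V(x)/\lambda)$, the stationary HJB equation for the value function $V$ becomes linear in the desirability $\Psi$:
\[
\frac{q(x)}{\lambda}\,\Psi(x) \;=\; f(x)^{\top}\nabla\Psi(x) \;+\; \tfrac{1}{2}\operatorname{tr}\!\big(\Sigma(x)\,\nabla^{2}\Psi(x)\big).
\]
Substituting a candidate polynomial $\Psi_c(x;c)$ with unknown coefficients $c$, the pointwise sign constraint on the PDE residual,
\[
f^{\top}\nabla\Psi_c + \tfrac{1}{2}\operatorname{tr}\!\big(\Sigma\,\nabla^{2}\Psi_c\big) - \frac{q}{\lambda}\,\Psi_c \;\ge\; 0 \quad \text{for all } x,
\]
is intractable in general; requiring the residual polynomial to be a sum of squares instead yields a semidefinite program. Raising the degree bound gives the hierarchy of relaxations with improving sub-optimality gap, and imposing the residual inequality in each direction yields the guaranteed over- and under-approximations of the value function.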