Investigation on energetic optimization problems of stochastic thermodynamics with iterative dynamic programming
The energetic optimization problem, e.g., searching for the optimal switching
protocol of certain system parameters to minimize the input work, has been
extensively studied in stochastic thermodynamics. In the current work, we study
this problem numerically with iterative dynamic programming. The model systems
under investigation are toy actuators consisting of spring-linked beads, with
a loading force imposed on the two end beads. For the simplest case, i.e., a
one-spring actuator driven by tuning the stiffness of the spring, we compare
the optimal control protocol of the stiffness for both the overdamped and the
underdamped situations, and discuss how inertial effects alter the
irreversibility of the driven process and thus modify the optimal protocol.
Then, we study the systems with multiple degrees of freedom by constructing
oligomer actuators, in which the harmonic interaction between the two end
beads is tuned externally. With the same rated output work, actuators of
different constructions demand different minimal input work, reflecting the
influence of the internal degrees of freedom on the performance of the
actuators.

Comment: 14 pages, 7 figures, Communications in Computational Physics, in press
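As a toy illustration of the kind of computation the abstract describes, the sketch below minimizes the mean input work for a single overdamped bead in a harmonic trap whose stiffness is ramped from k0 to k1 in finite time. The variance dynamics, the work functional, and the crude coordinate-descent loop standing in for full iterative dynamic programming are all illustrative assumptions, not the paper's actual model or algorithm.

```python
import numpy as np

# Toy sketch (hypothetical model, not the paper's code): mean input work
# for one overdamped bead in a harmonic trap whose stiffness k(t) is
# ramped from k0 to k1 in finite time T.  In units kT = gamma = 1 the
# position variance s = <x^2> obeys ds/dt = 2 - 2*k*s, and the mean
# input work is W = integral of (1/2) s dk.

def mean_work(k, dt, s0):
    """Euler-integrate the variance ODE and accumulate the mean work."""
    s, w = s0, 0.0
    for i in range(len(k) - 1):
        w += 0.5 * s * (k[i + 1] - k[i])   # work from tuning the stiffness
        s += dt * (2.0 - 2.0 * k[i] * s)   # Euler step of ds/dt = 2 - 2 k s
    return w

k0, k1, T, n = 1.0, 2.0, 2.0, 100
dt, s0 = T / n, 1.0 / k0                # start in equilibrium, s = 1/k0
k_lin = np.linspace(k0, k1, n + 1)      # naive linear ramp

# Crude coordinate descent over interior protocol values, a stand-in for
# full iterative dynamic programming.
k_opt = k_lin.copy()
for sweep in range(20):
    for i in range(1, n):
        for dk in (-0.02, 0.02):
            trial = k_opt.copy()
            trial[i] += dk
            if mean_work(trial, dt, s0) < mean_work(k_opt, dt, s0):
                k_opt = trial

w_lin, w_opt = mean_work(k_lin, dt, s0), mean_work(k_opt, dt, s0)
```

Since the descent only ever accepts improvements starting from the linear ramp, the optimized protocol can only lower the input work relative to the naive one.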
Dynamic consistency for Stochastic Optimal Control problems
For a sequence of dynamic optimization problems, we aim at discussing a
notion of consistency over time. This notion can be informally introduced as
follows. At the very first time step, the decision maker formulates an
optimization problem that yields optimal decision rules for all the forthcoming
time steps; at the next time step, he is able to
formulate a new optimization problem starting at that time, which yields a new
sequence of optimal decision rules. This process can be continued until the
final time is reached. A family of optimization problems formulated in this way
is said to be time consistent if the optimal strategies obtained when solving
the original problem remain optimal for all subsequent problems. The notion of
time consistency, well-known in the field of Economics, has been recently
introduced in the context of risk measures, notably by Artzner et al. (2007)
and studied in the Stochastic Programming framework by Shapiro (2009) and for
Markov Decision Processes (MDP) by Ruszczynski (2009). We here link this notion
with the concept of "state variable" in MDP, and show that a significant class
of dynamic optimization problems are dynamically consistent, provided that an
adequate state variable is chosen.
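The notion can be demonstrated on a toy finite-horizon MDP (all data below is hypothetical, not from the paper): when the MDP state is used as the state variable of the optimization, the policy computed once at time 0 by backward induction remains optimal for every tail subproblem.

```python
import random

# Toy illustration of time consistency: in a finite-horizon MDP, the
# optimal policy computed at time 0 by backward induction also solves
# every subproblem started at a later time step.

random.seed(0)
S, A, T = [0, 1], [0, 1], 3
r = {(t, s, a): random.random() for t in range(T) for s in S for a in A}
nxt = {(s, a): (s + a) % 2 for s in S for a in A}  # deterministic transitions

def backward_induction(t0):
    """Optimal policy for the subproblem starting at time t0."""
    V = {s: 0.0 for s in S}
    pol = {}
    for t in reversed(range(t0, T)):
        newV = {}
        for s in S:
            q = {a: r[(t, s, a)] + V[nxt[(s, a)]] for a in A}
            pol[(t, s)] = max(q, key=q.get)
            newV[s] = q[pol[(t, s)]]
        V = newV
    return pol

pol0 = backward_induction(0)
# Time consistency: the tail of the time-0 policy solves every subproblem.
consistent = all(
    backward_induction(t0) == {k: v for k, v in pol0.items() if k[0] >= t0}
    for t0 in range(T)
)
```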
Stabilizing Stochastic Predictive Control under Bernoulli Dropouts
This article presents tractable and recursively feasible optimization-based
controllers for stochastic linear systems with bounded controls. The stochastic
noise in the plant is assumed to be additive, zero-mean, and fourth-moment
bounded, and the control values are transmitted over an erasure channel. Three
different transmission protocols are proposed, each with different requirements
on the storage and computational facilities available at the actuator. We optimize
a suitable stochastic cost function accounting for the effects of both the
stochastic noise and the packet dropouts over affine saturated disturbance
feedback policies. The proposed controllers ensure mean square boundedness of
the states in closed-loop for all positive values of control bounds and any
non-zero probability of successful transmission over a noisy control channel.
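The mean-square boundedness claim can be checked empirically on a hypothetical scalar example (a marginally stable plant with saturated feedback and i.i.d. Bernoulli dropouts; this is an illustration of the setting, not one of the paper's three protocols):

```python
import random

# Hypothetical scalar plant x+ = a*x + sat(u) + w with bounded control
# |u| <= umax.  The control packet is dropped i.i.d. with probability
# 1 - p, and the actuator applies zero input on a drop.

random.seed(1)
a, umax, p, N = 1.0, 1.0, 0.7, 20000
x, acc = 0.0, 0.0
for k in range(N):
    u = max(-umax, min(umax, -0.5 * x))          # saturated feedback policy
    applied = u if random.random() < p else 0.0  # Bernoulli dropout
    w = random.gauss(0.0, 0.1)                   # zero-mean additive noise
    x = a * x + applied + w
    acc += x * x
mean_square = acc / N                            # empirical second moment
```

Despite the marginally stable open loop (a = 1) and the dropped packets, the empirical second moment of the state stays bounded as long as the transmission probability p is non-zero.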
Stochastic Nonlinear Model Predictive Control with Efficient Sample Approximation of Chance Constraints
This paper presents a stochastic model predictive control approach for
nonlinear systems subject to time-invariant probabilistic uncertainties in
model parameters and initial conditions. The stochastic optimal control problem
entails a cost function in terms of expected values and higher moments of the
states, and chance constraints that ensure probabilistic constraint
satisfaction. The generalized polynomial chaos framework is used to propagate
the time-invariant stochastic uncertainties through the nonlinear system
dynamics, and to efficiently sample from the probability densities of the
states to approximate the satisfaction probability of the chance constraints.
To increase computational efficiency by avoiding excessive sampling, a
statistical analysis is proposed to systematically determine a priori the least
conservative constraint tightening required at a given sample size to guarantee
a desired feasibility probability of the sample-approximated chance constraint
optimization problem. In addition, a method is presented for sample-based
approximation of the analytic gradients of the chance constraints, which
increases the optimization efficiency significantly. The proposed stochastic
nonlinear model predictive control approach is applicable to a broad class of
nonlinear systems with the sufficient condition that each term is analytic with
respect to the states, and separable with respect to the inputs, states and
parameters. The closed-loop performance of the proposed approach is evaluated
using the Williams-Otto reactor with seven states and ten uncertain parameters
and initial conditions. The results demonstrate the efficiency of the approach
for real-time stochastic model predictive control and its capability to
systematically account for probabilistic uncertainties, in contrast to a
nonlinear model predictive control approach.

Comment: Submitted to Journal of Process Control
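The sample-approximation idea can be sketched on a hypothetical scalar chance constraint. The tightening below uses a standard one-sided Hoeffding bound, chosen here as an assumption for illustration; the paper's statistical analysis may differ.

```python
import math
import random

# Sample-based check of a chance constraint  P[g(x, theta) <= 0] >= beta
# with an a priori tightening: if the empirical satisfaction frequency
# exceeds beta + eps, where eps = sqrt(ln(1/delta) / (2 N)) (Hoeffding),
# the true constraint holds with confidence at least 1 - delta.

random.seed(2)
beta, delta, N = 0.90, 1e-3, 5000
eps = math.sqrt(math.log(1.0 / delta) / (2.0 * N))

def g(x, theta):
    # hypothetical constraint: the uncertain parameter shifts the bound
    return x + theta - 3.0

x = 0.0                                               # candidate decision
samples = [random.gauss(0.0, 0.5) for _ in range(N)]  # theta ~ N(0, 0.25)
p_hat = sum(g(x, th) <= 0.0 for th in samples) / N    # empirical frequency
certified = p_hat >= beta + eps                       # tightened test
```

The larger the sample size N, the smaller the required tightening eps, which is the trade-off the abstract's a priori analysis quantifies.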
Motion Planning of Uncertain Ordinary Differential Equation Systems
This work presents a novel motion planning framework, rooted in nonlinear programming theory, that treats uncertain fully actuated and under-actuated dynamical systems described by ordinary differential equations. Uncertainty in multibody dynamical systems comes from various sources, such as system parameters, initial conditions, sensor and actuator noise, and external forcing. Treatment of uncertainty in design is of paramount practical importance because all real-life systems are affected by it, and poor robustness and suboptimal performance result if it is not accounted for in a given design. In this work, uncertainties are modeled using Generalized Polynomial Chaos and are quantified using a least-squares collocation method. The computational efficiency of this approach enables the inclusion of uncertainty statistics in the nonlinear programming optimization process. As such, the proposed framework allows the user to pose and answer new design questions related to uncertain dynamical systems.
Specifically, the new framework is explained in the context of forward, inverse, and hybrid dynamics formulations. The forward dynamics formulation, applicable to both fully actuated and under-actuated systems, prescribes deterministic actuator inputs which yield uncertain state trajectories. The inverse dynamics formulation is the dual of the forward dynamics formulation and is applicable only to fully actuated systems; deterministic state trajectories are prescribed and yield uncertain actuator inputs. The inverse dynamics formulation is more computationally efficient, as it requires only algebraic evaluations and completely avoids numerical integration. Finally, the hybrid dynamics formulation is applicable to under-actuated systems, where it leverages the benefits of inverse dynamics for actuated joints and forward dynamics for unactuated joints; it prescribes actuated state and unactuated input trajectories which yield uncertain unactuated states and actuated inputs.
The benefits of the ability to quantify uncertainty when planning the motion of multibody dynamic systems are illustrated through several case studies. The resulting designs determine optimal motion plans, subject to deterministic and statistical constraints, for all possible systems within the probability space.
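The least-squares collocation step can be sketched on a hypothetical scalar ODE (not the paper's multibody systems): an uncertain decay rate is propagated through dx/dt = -theta*x and a degree-2 Hermite polynomial chaos expansion of x(T) is fitted by least squares over sampled collocation points.

```python
import math
import random

# Sketch of least-squares PCE collocation for an uncertain scalar ODE:
# dx/dt = -theta * x, x(0) = 1, with theta = 1 + 0.1*xi and xi ~ N(0,1).
# The exact solution x(T) = exp(-theta*T) serves as the "simulation".

random.seed(3)
T, M = 1.0, 1000
xis = [random.gauss(0.0, 1.0) for _ in range(M)]
ys = [math.exp(-(1.0 + 0.1 * xi) * T) for xi in xis]

def hermite(xi):
    """Probabilists' Hermite basis up to degree 2."""
    return [1.0, xi, xi * xi - 1.0]

# Least-squares fit: solve the 3x3 normal equations (Phi^T Phi) c = Phi^T y.
Phi = [hermite(xi) for xi in xis]
AtA = [[sum(row[i] * row[j] for row in Phi) for j in range(3)]
       for i in range(3)]
Aty = [sum(row[i] * y for row, y in zip(Phi, ys)) for i in range(3)]

# Gauss-Jordan elimination (tiny, well-conditioned system)
for i in range(3):
    piv = AtA[i][i]
    AtA[i] = [v / piv for v in AtA[i]]
    Aty[i] /= piv
    for j in range(3):
        if j != i:
            f = AtA[j][i]
            AtA[j] = [vj - f * vi for vj, vi in zip(AtA[j], AtA[i])]
            Aty[j] -= f * Aty[i]

mean_pce = Aty[0]  # the zeroth coefficient estimates E[x(T)]
# Closed form for comparison: E[exp(-0.1*T*xi)] = exp(0.005*T^2)
exact_mean = math.exp(-T + 0.005 * T * T)
```

Once the coefficients are available, moments of the uncertain trajectory (here, the mean of x(T)) come from cheap algebra on the expansion rather than repeated simulation, which is what makes embedding the statistics inside a nonlinear program tractable.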