Discrete time optimal control with frequency constraints for non-smooth systems
We present a Pontryagin maximum principle for discrete time optimal control
problems with (a) pointwise constraints on the control actions and the states,
(b) frequency constraints on the control and the state trajectories, and (c)
nonsmooth dynamical systems. Pointwise constraints on the states and the
control actions represent desired and/or physical limitations on the states and
the control values; such constraints are important and are widely present in
the optimal control literature. Constraints of the type (b), while less
standard in the literature, effectively serve the purpose of describing
important spectral properties of inertial actuators and systems. The
conjunction of constraints of the type (a) and (b) is a relatively new
phenomenon in optimal control but is important for the synthesis of control
trajectories with a high degree of fidelity. The maximum principle established
here provides first-order necessary conditions for optimality that serve as a
starting point for synthesizing, in a computationally tractable fashion,
highly accurate control trajectories for a large class of constrained motion
planning problems. Moreover, the ability to handle a reasonably large class of
nonsmooth dynamical systems that arise in practice ensures broad applicability
of our theory, and we include several illustrations of our results on standard
problems.
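For orientation, the classical unconstrained discrete-time first-order necessary conditions, which results of this kind extend with pointwise, frequency, and nonsmoothness features, can be sketched as follows; the symbols f_t, c_t, H_t here are generic placeholders, not the paper's own notation:

```latex
% Discrete-time system x_{t+1} = f_t(x_t, u_t),
% cost \sum_{t=0}^{N-1} c_t(x_t, u_t), Hamiltonian:
H_t(\xi, x, u) \;=\; \langle \xi,\, f_t(x, u) \rangle \;-\; c_t(x, u)
% Adjoint (costate) recursion along an optimal pair (x_t^*, u_t^*):
\xi_{t-1} \;=\; \frac{\partial H_t}{\partial x}\bigl(\xi_t, x_t^*, u_t^*\bigr)
% Hamiltonian maximization over the admissible control set U:
u_t^* \;\in\; \operatorname*{arg\,max}_{u \in U} \; H_t\bigl(\xi_t, x_t^*, u\bigr)
```

Under the additional pointwise and frequency constraints the abstract describes, the maximization step is restricted accordingly and the adjoint equation acquires multiplier terms; the display above only fixes the baseline structure.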
A discrete-time Pontryagin maximum principle under rate constraints
Limited bandwidth and saturation in actuators are practical concerns in
control systems. Mathematically, these limitations manifest as constraints
imposed on the control actions, their rates of change, and, more generally,
the global behavior of their paths. While actuator saturation has been studied
extensively, little attention has been devoted to actuators with limited
bandwidth. Frequency constraints on state-action trajectories have been
incorporated before, but rate constraints on the control at the design stage
have not been studied extensively in the discrete-time regime. This article
contributes toward filling this lacuna. In particular, we establish a new
discrete-time Pontryagin maximum principle with rate constraints being imposed
on the control trajectories, and derive first-order necessary conditions for
optimality. A brief discussion on the existence of optimal control is included,
and numerical examples are provided to illustrate the results.
Quantum Control Landscapes
Numerous lines of experimental, numerical and analytical evidence indicate
that it is surprisingly easy to locate optimal controls steering quantum
dynamical systems to desired objectives. This has enabled the control of
complex quantum systems despite the expense of solving the Schrodinger equation
in simulations and the complicating effects of environmental decoherence in the
laboratory. Recent work indicates that this simplicity originates in universal
properties of the solution sets to quantum control problems that are
fundamentally different from their classical counterparts. Here, we review
studies that aim to systematically characterize these properties, enabling the
classification of quantum control mechanisms and the design of globally
efficient quantum control algorithms.
Comment: 45 pages, 15 figures; International Reviews in Physical Chemistry, Vol. 26, Iss. 4, pp. 671-735 (2007)
Optimal control of a flywheel-based automotive kinetic energy recovery system
This thesis addresses the control issues surrounding flywheel-based Kinetic Energy Recovery Systems (KERS) for use in automotive vehicle applications. Particular emphasis is placed on optimal control of a KERS using a Continuously Variable Transmission (CVT) for volume car production, and a wholly simulation-based approach is adopted. Following consideration of the general control issues surrounding KERS operation, a simplified system model is adopted, and the scope for use of optimal control theory is explored. Both Pontryagin's Maximum Principle and Dynamic Programming methods are examined, and the need for numerical implementation established. With Dynamic Programming seen as the most likely route to practical implementation for realistic nonlinear models, the thesis explores several new strategies for numerical implementation of Dynamic Programming, capable of being applied to KERS control problems of varying degrees of complexity. The best form of numerical implementation identified (in terms of accuracy and efficiency) is then used to establish, via simulation, the benefits of optimal KERS control in comparison with a more conventional non-optimal strategy, showing clear benefits of using optimal control.
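The numerical Dynamic Programming route the thesis pursues can be sketched as backward induction over discretized state and control grids. The scalar dynamics, cost, and grid sizes below are illustrative stand-ins of my own choosing, not the thesis's KERS/CVT models:

```python
import numpy as np

def dp_solve(x_grid, u_grid, f, cost, terminal_cost, N):
    """Backward-induction DP on a state grid.

    Returns value tables V[t, i] and greedy policy tables U[t, i],
    using linear interpolation of the next-stage value between grid points.
    """
    nx = len(x_grid)
    V = np.empty((N + 1, nx))
    U = np.empty((N, nx))
    V[N] = [terminal_cost(x) for x in x_grid]
    for t in range(N - 1, -1, -1):          # sweep backwards in time
        for i, x in enumerate(x_grid):
            best, best_u = np.inf, u_grid[0]
            for u in u_grid:
                # keep the successor state inside the grid
                x_next = np.clip(f(x, u), x_grid[0], x_grid[-1])
                # interpolate the next-stage value on the grid
                v_next = np.interp(x_next, x_grid, V[t + 1])
                q = cost(x, u) + v_next
                if q < best:
                    best, best_u = q, u
            V[t, i], U[t, i] = best, best_u
    return V, U

# Toy example: drive the state toward 0 with a quadratic effort penalty.
x_grid = np.linspace(-1.0, 1.0, 41)
u_grid = np.linspace(-0.5, 0.5, 11)
V, U = dp_solve(x_grid, u_grid,
                f=lambda x, u: x + u,
                cost=lambda x, u: x**2 + 0.1 * u**2,
                terminal_cost=lambda x: x**2,
                N=20)
```

The accuracy/efficiency trade-off the thesis investigates shows up here directly: finer grids and interpolation schemes improve accuracy, while the triple loop over time, states, and controls sets the computational cost.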