    Computer Architectures to Close the Loop in Real-time Optimization

    © 2015 IEEE. Many modern control, automation, signal processing and machine learning applications rely on solving a sequence of optimization problems, which are updated with measurements of a real system that evolves in time. The solution of each of these optimization problems is then used to make a decision, which may be followed by changing some parameters of the physical system, thereby resulting in a feedback loop between the computing and the physical system. Real-time optimization is not the same as fast optimization, because the computation is affected by an uncertain system that evolves in time. The suitability of a design should therefore not be judged from the optimality of a single optimization problem, but from the evolution of the entire cyber-physical system. The algorithms and hardware used for solving a single optimization problem in the office might therefore be far from ideal when solving a sequence of real-time optimization problems. Instead of there being a single, optimal design, one has to trade off a number of objectives, including performance, robustness, energy usage, size and cost. We therefore provide here a tutorial introduction to some of the questions and implementation issues that arise in real-time optimization applications. We concentrate on some of the decisions that have to be made when designing the computing architecture and algorithm, and argue that the choice of one informs the other.
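The feedback loop described in the abstract can be sketched minimally: each measurement of the (uncertain) plant updates the data of an optimization problem, and the solution is applied back to the plant. This is an illustrative toy, not the paper's method; `solve_qp`, the linear plant, and all dimensions are assumptions.

```python
import numpy as np

def solve_qp(H, g):
    """Toy stand-in for a real-time QP solver: minimize 0.5*u'Hu + g'u."""
    return -np.linalg.solve(H, g)

def plant_step(x, u, A, B, rng):
    """Uncertain physical system: linear dynamics plus process noise."""
    return A @ x + B @ u + 0.01 * rng.standard_normal(x.shape)

# Closed loop: measure -> update problem data -> solve -> actuate.
rng = np.random.default_rng(0)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
H = np.eye(1)
x = np.array([1.0, 0.0])
for _ in range(50):
    g = (B.T @ x).reshape(1)         # problem data depends on the measured state
    u = solve_qp(H, g)               # solve this instance of the optimization
    x = plant_step(x, u, A, B, rng)  # apply the decision; the system evolves
```

Because each solve sits inside the loop, worst-case solve time and robustness to stale data matter more than the optimality of any single iterate, which is the trade-off the tutorial explores.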

    Local, hierarchic, and iterative reconstructors for adaptive optics

    Adaptive optics systems for future large optical telescopes may require thousands of sensors and actuators. Optimal reconstruction of phase errors using relative measurements requires feedback from every sensor to each actuator, so the computation for n actuators scales as n^2. The optimal local reconstructor is investigated, wherein each actuator command depends only on sensor information in a neighboring region. The resulting performance degradation on global modes is quantified analytically, and two approaches are considered for recovering "global" performance. Combining local and global estimators in a two-layer hierarchic architecture yields computation scaling as n^4/3; extending this approach to multiple layers yields linear scaling. An alternative approach that maintains a local structure is to allow actuator commands to depend on both local sensors and prior local estimates. This iterative approach is equivalent to a temporal low-pass filter on global information and gives a scaling of n^3/2. The algorithms are simulated using data from the Palomar Observatory adaptive optics system. The analysis is general enough to also be applicable to active optics or other systems with many sensors and actuators.
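The cost argument behind the local reconstructor can be illustrated with a toy 1-D sketch: if each of n actuator commands uses only sensors within a window of half-width w, the work is O(n·w) rather than the O(n^2) of a dense global reconstructor. The geometry, gain, and local-average estimator below are illustrative assumptions, not the paper's reconstructor.

```python
import numpy as np

def local_reconstruct(slopes, gain, w):
    """Each actuator command depends only on sensors within +/- w positions."""
    n = len(slopes)
    cmds = np.zeros(n)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)
        cmds[i] = gain * slopes[lo:hi].mean()  # toy local estimator
    return cmds

slopes = np.sin(np.linspace(0, np.pi, 64))     # synthetic sensor measurements
cmds = local_reconstruct(slopes, gain=0.5, w=3)
```

Truncating the neighborhood is exactly what degrades the estimate of global (low-order) modes, which is why the paper layers a coarse global estimator on top or iterates on prior local estimates.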

    Differential Dynamic Programming for time-delayed systems

    Trajectory optimization considers the problem of deciding how to control a dynamical system so that it moves along a trajectory minimizing some cost function. Differential Dynamic Programming (DDP) is an optimal control method which uses a second-order approximation of the problem to find the control. It is fast enough to allow real-time control and has been shown to work well for trajectory optimization in robotic systems. Here we extend classic DDP to systems with multiple time delays in the state. Being able to find optimal trajectories for time-delayed systems with DDP opens up the possibility of using richer models for system identification and control, including recurrent neural networks with multiple timesteps in the state. We demonstrate the algorithm on a two-tank continuous stirred tank reactor. We also demonstrate the algorithm on a recurrent neural network trained to model an inverted pendulum with position information only. Comment: 7 pages, 6 figures; 2016 IEEE 55th Conference on Decision and Control (CDC)
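A standard way to handle state delays with Markovian optimal-control machinery such as DDP is state augmentation: stack the current and past states so the delayed dynamics become a first-order system. The sketch below shows this trick for a single delay; the function names, the scalar example system, and its coefficients are assumptions for illustration, not the paper's formulation (which extends DDP itself to multiple delays).

```python
import numpy as np

def augment(f, d):
    """Turn x_{t+1} = f(x_t, x_{t-d}, u_t) into z_{t+1} = F(z_t, u_t),
    where z_t = [x_t, x_{t-1}, ..., x_{t-d}] is the augmented state."""
    def F(z, u):
        x_t, x_delayed = z[0], z[-1]
        x_next = f(x_t, x_delayed, u)
        return np.concatenate(([x_next], z[:-1]))  # shift the history window
    return F

# Example: scalar system with a one-step delay in the state.
f = lambda x, x_del, u: 0.9 * x + 0.1 * x_del + u
F = augment(f, d=1)
z = np.array([1.0, 0.5])   # [x_0, x_{-1}]
z = F(z, u=0.0)            # -> [0.95, 1.0]
```

The price of augmentation is that the state dimension grows with the delay, which is one motivation for extending DDP to handle the delays directly.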