
    Linearly Solvable Stochastic Control Lyapunov Functions

    This paper presents a new method for synthesizing stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. The technique relies on a transformation of the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation into a linear partial differential equation for a class of problems with a particular constraint on the stochastic forcing. This linear partial differential equation can then be relaxed to a linear differential inclusion, allowing relaxed solutions to be generated using sum-of-squares programming. The resulting relaxed solutions are in fact viscosity super/subsolutions and, by the maximum principle, are pointwise upper and lower bounds on the underlying value function, even for coarse polynomial approximations. Furthermore, the pointwise upper bound is shown to be a stochastic control Lyapunov function, yielding a method for generating nonlinear controllers whose cost is pointwise bounded relative to that of the optimal controller. These approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems. Finally, the paper develops a priori bounds on trajectory suboptimality when using these approximate value functions, and demonstrates that these methods, and bounds, apply to a more general class of nonlinear systems that do not obey the constraint on the stochastic forcing. Simulated examples illustrate the methodology. (Published in the SIAM Journal on Control and Optimization.)
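
    For context, the transformation in question is the standard logarithmic (Cole-Hopf-type) change of variables used in linearly solvable optimal control; the following is a minimal sketch in which the dynamics, cost, and notation are illustrative assumptions rather than the paper's exact setup. For dynamics dx = (f(x) + G(x)u) dt + B(x) dw and cost rate q(x) + (1/2) u^T R u, the stochastic HJB equation reads

        0 = \min_u \Big[\, q + \tfrac{1}{2} u^\top R u + (f + Gu)^\top \nabla V + \tfrac{1}{2}\,\mathrm{tr}\big(B B^\top \nabla^2 V\big) \,\Big],

    which is nonlinear in V because the minimizing control u^* = -R^{-1} G^\top \nabla V enters quadratically. Under the compatibility condition \lambda\, G R^{-1} G^\top = B B^\top (one way to read the "particular constraint on the stochastic forcing"), the substitution V(x) = -\lambda \log \Psi(x) cancels the quadratic terms and leaves a linear PDE in the desirability \Psi:

        \frac{q(x)}{\lambda}\,\Psi = f^\top \nabla \Psi + \tfrac{1}{2}\,\mathrm{tr}\big(B B^\top \nabla^2 \Psi\big).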

    Understanding robust control theory via stick balancing

    Robust control theory studies the effect of noise, disturbances, and other uncertainty on system performance. Despite growing recognition across science and engineering that robustness and efficiency tradeoffs dominate the evolution and design of complex systems, the use of robust control theory remains limited, partly because the mathematics involved is relatively inaccessible to nonexperts, and the important concepts have been inexplicable without a fairly rich mathematics background. This paper aims to begin changing that by presenting the most essential concepts in robust control using human stick balancing, a simple case study that is popular in the sensorimotor control literature and extremely familiar to engineers. With minimal and familiar models and mathematics, we can explore the impact of unstable poles and zeros, delays, and noise, which can then be easily verified with simple experiments using a standard extensible pointer. Despite its simplicity, this case study has extremes of robustness and fragility that are initially counter-intuitive but for which simple mathematics and experiments are clear and compelling. The theory used here has been well known for many decades, and the cart-pendulum example is a standard in undergraduate controls courses, yet a careful reconsideration of both leads to striking new insights that we argue are of great pedagogical value.
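
    The delay/instability interaction mentioned above can be made concrete with a few lines of simulation. The sketch below is a minimal illustration, not the paper's model or code: it applies delayed proportional-derivative feedback to a linearized point-mass stick (unstable pole near sqrt(g/l)) and reports how far the stick swings as the sensing delay grows; the gains, length, and delay values are assumptions chosen only for illustration.

        # Minimal sketch: delayed PD feedback on a linearized balanced stick.
        # Model, gains, and delay values are illustrative assumptions, not the paper's.
        import numpy as np

        g, l = 9.81, 1.0          # gravity (m/s^2), stick length (m); unstable pole ~ sqrt(g/l)
        kp, kd = 20.0, 5.0        # hand-tuned PD gains on angle and angular velocity
        dt = 1e-3                 # Euler integration step (s)

        def max_angle(delay, t_final=10.0, theta0=0.01):
            """Integrate theta_ddot = (g/l)*theta + u with u based on delayed measurements."""
            n = int(t_final / dt)
            d = max(1, int(delay / dt))           # delay expressed in integration steps
            theta, omega = np.zeros(n), np.zeros(n)
            theta[0] = theta0
            for k in range(n - 1):
                j = max(0, k - d)                 # index of the delayed measurement
                u = -kp * theta[j] - kd * omega[j]
                omega[k + 1] = omega[k] + dt * ((g / l) * theta[k] + u)
                theta[k + 1] = theta[k] + dt * omega[k]
                if abs(theta[k + 1]) > np.pi / 2: # the stick has fallen over
                    return np.pi / 2
            return np.abs(theta).max()

        for delay in (0.05, 0.10, 0.20, 0.40):
            print(f"sensing delay {delay:.2f} s -> max |theta| = {max_angle(delay):.4f} rad")

    Running the loop with progressively larger delays shows the qualitative story the paper tells with simple robust control: a short delay barely matters, while a delay comparable to the stick's unstable time constant makes the balancing loop increasingly fragile and, for simple PD feedback, eventually unstable.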

    Optimal Controller Synthesis for Nonlinear Systems

    Optimal controller synthesis is a challenging problem to solve. However, in many applications such as robotics, nonlinearity is unavoidable. Apart from optimality, correctness of system behavior with respect to specifications such as stability and obstacle avoidance is vital for engineering applications. Many existing techniques consider either the optimality or the correctness of system behavior; rarely does a tool consider both. Furthermore, most existing optimal controller synthesis techniques are not scalable, because they either require ad-hoc design or suffer from the curse of dimensionality. This thesis aims to close these gaps by proposing optimal controller synthesis techniques for two classes of nonlinear systems: linearly solvable nonlinear systems and hybrid nonlinear systems. Linearly solvable systems have associated Hamilton-Jacobi-Bellman (HJB) equations that can be transformed from the original nonlinear partial differential equation (PDE) into a linear PDE through a logarithmic transformation. The first part of this thesis presents two methods to synthesize optimal controllers for linearly solvable nonlinear systems. The first technique uses a hierarchy of sums-of-squares programs to compute a sequence of suboptimal controllers with non-increasing suboptimality for first exit and finite horizon problems. This technique is the first systematic approach to provide stability and suboptimal performance guarantees for stochastic nonlinear systems in one framework. The second technique uses the low-rank tensor decomposition framework to solve the linear HJB equation for first exit, finite horizon, and infinite horizon problems. This technique scales linearly with dimension, alleviating the curse of dimensionality and enabling us to solve the linear HJB equation for a twelve-dimensional quadcopter model on a personal laptop. A new algorithm is proposed for a key step in the controller synthesis algorithm to resolve the ill-conditioning issue that arises in the original algorithm. A MATLAB toolbox that implements the algorithms is developed, and the performance of these algorithms is illustrated by a few engineering examples.

    Apart from stability, many applications require more complex specifications such as obstacle avoidance, reachability, and surveillance. The second part of the thesis describes methods to synthesize optimal controllers for hybrid nonlinear systems with quantitative objectives (i.e., minimizing cost) and qualitative objectives (i.e., satisfying specifications). This thesis focuses on two types of qualitative objectives: regular objectives and ω-regular objectives. Regular objectives capture bounded-time behavior such as reachability, and ω-regular objectives capture long-term behavior such as surveillance. For both types of objectives, an abstraction-refinement procedure that preserves the cost is developed. A two-player game is solved on the product of the abstract system and the given objectives to synthesize the suboptimal controller for the hybrid nonlinear system. By refining the abstract system, the algorithms are guaranteed to converge to the optimal cost and return the optimal controller if the original system is robust with respect to the initial states and the optimal controller inputs. The proposed technique is the first abstraction-refinement-based technique to combine both quantitative and qualitative objectives in one framework. A Python implementation of the algorithms is developed, and a few engineering examples are presented to illustrate the performance of these algorithms.
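
    One note on the scaling claim in the first part of the abstract above: the linear growth with dimension comes from storing the HJB solution in a separated (low-rank tensor) format rather than on a full grid. The representation and storage count below are a generic property of such formats, written with assumed notation rather than the thesis's:

        \Psi(x_1, \dots, x_d) \;\approx\; \sum_{r=1}^{R} \prod_{i=1}^{d} \psi_i^{(r)}(x_i).

    Discretizing each univariate factor \psi_i^{(r)} on an n-point grid costs O(dRn) numbers, versus O(n^d) for the full tensor of grid values, so for a fixed or slowly growing rank R the memory and per-operation cost of manipulating \Psi grow linearly with the dimension d; this is what makes a twelve-dimensional quadcopter model tractable on a laptop.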

    Suboptimal stabilizing controllers for linearly solvable systems

    This paper presents a novel method to synthesize stochastic control Lyapunov functions for a class of nonlinear stochastic control systems. In this work, the classical nonlinear Hamilton-Jacobi-Bellman partial differential equation is transformed into a linear partial differential equation for a class of systems with a particular constraint on the stochastic disturbance. It is shown that this linear partial differential equation can be relaxed to a linear differential inclusion, allowing approximating polynomial solutions to be generated using sum-of-squares programming. The resulting solutions are shown to be stochastic control Lyapunov functions with a number of compelling properties; in particular, a priori bounds on trajectory suboptimality are established for these approximate value functions. The result is a technique whereby approximate solutions may be computed with non-increasing error via a hierarchy of semidefinite optimization problems.
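
    The sum-of-squares step above ultimately reduces to semidefinite programming: a polynomial is certified as a sum of squares by exhibiting a positive semidefinite Gram matrix. The toy example below shows that reduction in isolation; the polynomial, the monomial basis, and the use of cvxpy are illustrative assumptions, not the paper's implementation.

        # Toy sum-of-squares certificate via semidefinite programming (illustrative only).
        # Certify p(x) = x^4 - 3x^2 + 5 >= 0 by finding Q >= 0 with p(x) = z(x)^T Q z(x),
        # where z(x) = [1, x, x^2].  Requires cvxpy with an SDP-capable solver (e.g. SCS).
        import cvxpy as cp

        Q = cp.Variable((3, 3), PSD=True)   # Gram matrix, constrained to be positive semidefinite

        # Match coefficients of z(x)^T Q z(x)
        #   = Q00 + 2*Q01*x + (2*Q02 + Q11)*x^2 + 2*Q12*x^3 + Q22*x^4
        # against p(x) = 5 + 0*x - 3*x^2 + 0*x^3 + 1*x^4:
        constraints = [
            Q[0, 0] == 5,
            2 * Q[0, 1] == 0,
            2 * Q[0, 2] + Q[1, 1] == -3,
            2 * Q[1, 2] == 0,
            Q[2, 2] == 1,
        ]

        problem = cp.Problem(cp.Minimize(0), constraints)
        problem.solve()
        print("status:", problem.status)    # "optimal" here means a feasible Gram matrix was found
        print("Gram matrix:\n", Q.value)

    Roughly speaking, the work above applies the same mechanism at scale: the relaxed linear-PDE (differential inclusion) conditions become polynomial inequality constraints, each certified by a Gram-matrix semidefinite constraint, and increasing the polynomial degree yields the hierarchy of semidefinite programs with non-increasing error.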

    Design Guidelines For Sequestration Feedback Networks

    Integral control is commonly used in mechanical and electrical systems to ensure perfect adaptation. A proposed design of integral control for synthetic biological systems employs the sequestration of two biochemical controller species. The unbound amount of controller species captures the integral of the error between the current and the desired state of the system. However, implementing integral control inside bacterial cells using sequestration feedback has been challenging because the controller molecules are degraded and diluted. Furthermore, integral control can only be achieved under stability conditions that not all sequestration feedback networks fulfill. In this work, we give guidelines for ensuring stability and good performance (small steady-state error) in sequestration feedback networks. Our guidelines provide simple tuning options for obtaining a flexible and practical biological implementation of sequestration feedback control. Using tools and metrics from control theory, we pave the way for the systematic design of synthetic biological systems.
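
    The claim that the unbound controller species capture the integral of the error can be seen in two lines for an idealized sequestration controller with no degradation or dilution of the controller molecules (the notation and rate symbols below are assumptions chosen for illustration):

        \dot z_1 = \mu - \eta\, z_1 z_2, \qquad
        \dot z_2 = \theta\, y - \eta\, z_1 z_2
        \quad\Longrightarrow\quad
        \frac{d}{dt}\,(z_1 - z_2) = \mu - \theta\, y .

    The sequestration terms cancel, so the difference of the free controller species integrates the error between the output y and the set point \mu/\theta; any steady state therefore has y = \mu/\theta, which is the perfect adaptation property. Adding degradation and dilution terms -\gamma z_1 and -\gamma z_2 breaks this exact cancellation, which is the implementation difficulty, and a source of steady-state error, that the guidelines above address.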

    Hard Limits And Performance Tradeoffs In A Class Of Sequestration Feedback Systems

    Feedback regulation is pervasive in biology at both the organismal and cellular level. In this article, we explore the properties of a particular biomolecular feedback mechanism implemented using the sequestration binding of two molecules. Our work develops an analytic framework for understanding the hard limits, performance tradeoffs, and architectural properties of this simple model of biological feedback control. Using tools from control theory, we show that there are simple parametric relationships that determine both the stability and the performance of these systems in terms of speed, robustness, steady-state error, and leakiness. These findings yield a holistic understanding of the behavior of sequestration feedback and contribute to a more general theory of biological control systems.
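
    As a quick numerical companion to the statement about parametric relationships, the sketch below builds a generic four-state sequestration feedback loop (a controller pair z1, z2 driving a two-species plant x1, x2), linearizes it at its steady state, and checks the Jacobian eigenvalues. The model structure and every parameter value are illustrative assumptions, not the article's; sweeping the rates shows how stability hinges on combinations of parameters rather than on any single one.

        # Local stability check for an illustrative sequestration feedback loop.
        # States: controller species z1, z2 (sequestration rate eta); plant species x1, x2.
        #   dz1/dt = mu       - eta*z1*z2
        #   dz2/dt = theta*x2 - eta*z1*z2
        #   dx1/dt = k*z1     - gam*x1
        #   dx2/dt = k*x1     - gam*x2
        # Model structure and parameters are illustrative, not taken from the article.
        import numpy as np

        mu, theta, eta, k, gam = 1.0, 1.0, 100.0, 2.0, 1.0

        # Steady state: mu = theta*x2 pins the output; the rest follows by back-substitution.
        x2 = mu / theta
        x1 = gam * x2 / k
        z1 = gam * x1 / k
        z2 = mu / (eta * z1)

        # Jacobian of the vector field evaluated at (z1, z2, x1, x2).
        J = np.array([
            [-eta * z2, -eta * z1,  0.0,   0.0  ],
            [-eta * z2, -eta * z1,  0.0,   theta],
            [ k,         0.0,      -gam,   0.0  ],
            [ 0.0,       0.0,       k,    -gam  ],
        ])

        eigvals = np.linalg.eigvals(J)
        print("eigenvalues:", np.round(eigvals, 3))
        print("locally stable:", bool(np.all(eigvals.real < 0.0)))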

    Resilience in Large Scale Distributed Systems

    Distributed systems are composed of multiple subsystems that interact in two distinct ways: (1) physical interactions and (2) cyber interactions, i.e., the sensors, actuators, and computers controlling these subsystems and the network over which they communicate. A broad class of cyber-physical systems (CPS), such as the smart grid, platoons of autonomous vehicles, and the sensorimotor system, is described by such interactions. This paper surveys recent progress in developing a coherent mathematical framework that describes the rich CPS “design space” of fundamental limits and tradeoffs between efficiency, robustness, adaptation, verification, and scalability. Whereas most research treats at most one of these issues, we attempt a holistic approach in examining these metrics. In particular, we argue that a control architecture that emphasizes scalability leads to improvements in robustness, adaptation, and verification, while having only minor effects on efficiency; that is, through the choice of a new architecture, we believe we can bring a system closer to the true fundamental hard limits of this complex design space.
