
    Stability and stabilizability of discrete-time dual switching systems with application to sampled-data systems

    In this paper, stability and stabilizability of discrete-time dual switching linear systems are investigated. The switched systems under consideration have two switching variables. One of them is stochastic, described by an underlying Markov chain; the other can be regarded either as a deterministic disturbance or as a control input, leading to stability or stabilizability problems, respectively. For the considered class of systems, sufficient conditions for mean square stability (with or without control gain synthesis) and mean square stabilizability are provided in terms of matrix inequalities. When the stochastic switching is driven by an independent, identically distributed sequence, we establish simpler conditions without additional conservatism. Then, it is shown how the proposed framework can be used to study aperiodic sampled-data systems with stochastic computation times. The results are illustrated on examples borrowed from the literature.
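The abstract's notion of mean square stability under Markov-driven switching can be made concrete with a standard second-moment test for Markov jump linear systems (it is not the paper's matrix-inequality conditions, just the classical necessary-and-sufficient spectral-radius check for the purely stochastic case). The helper name `ms_spectral_radius` and the example matrices below are illustrative assumptions:

```python
import numpy as np

def ms_spectral_radius(A_list, P):
    """Mean square stability test for x_{k+1} = A_{theta_k} x_k,
    where theta_k is a Markov chain with transition matrix P
    (P[i, j] = Pr(theta_{k+1} = j | theta_k = i)).

    Builds the second-moment operator with blocks
    Lambda[j, i] = P[i, j] * kron(A_i, A_i); the system is mean
    square stable iff the spectral radius of Lambda is < 1."""
    N = len(A_list)
    n2 = A_list[0].shape[0] ** 2
    Lam = np.zeros((N * n2, N * n2))
    for i, Ai in enumerate(A_list):
        K = np.kron(Ai, Ai)
        for j in range(N):
            Lam[j * n2:(j + 1) * n2, i * n2:(i + 1) * n2] = P[i, j] * K
    return max(abs(np.linalg.eigvals(Lam)))

# Toy example: one contracting and one expanding mode; switching
# that spends most time in the stable mode keeps E[|x_k|^2] decaying.
A = [np.array([[0.5]]), np.array([[1.2]])]
P = np.array([[0.9, 0.1], [0.9, 0.1]])   # rows equal: i.i.d.-like switching
print(ms_spectral_radius(A, P))          # ~0.369 < 1: mean square stable
```

Making the unstable mode stronger (e.g. gain 2.0 visited half the time) pushes the spectral radius above 1, so the same routine also certifies instability of the averaged second moment.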

    Autonomous system control in unknown operating conditions

    Autonomous systems have become an interconnected part of everyday life with the recent increases in computational power available for both onboard computers and offline data processing. The race by car manufacturers for level 5 (full) autonomy in self-driving cars is well underway, and new flying taxi service startups are emerging every week, attracting billions in investments. Two main research communities, Optimal Control and Reinforcement Learning, stand out in the field of autonomous systems, each with a vastly different perspective on the control problem. Controllers from the optimal control community are based on models and can be rigorously analyzed to ensure the stability of the system is maintained under certain operating conditions. Learning-based control strategies are often referred to as model-free and typically involve training a neural network to generate the required control actions through direct interactions with the system. This greatly reduces the design effort required to control complex systems. One common problem both learning- and model-based control solutions face is the dependency on a priori knowledge about the system and operating conditions, such as possible internal component failures and external environmental disturbances. It is not possible to consider, at design time, every operating scenario an autonomous system can encounter in the real world. Models and simulators are approximations of reality and can only be created for known operating conditions. Autonomous system control in unknown operating conditions, where no a priori knowledge exists, is still an open problem for both communities, and no control methods currently exist for such situations.
Multiple model adaptive control is a modular control framework that divides the control problem into supervisory and low-level control, which allows for the combination of existing learning- and model-based control methods to overcome the disadvantages of using only one of these. The contributions of this thesis consist of five novel supervisory control architectures, which have been empirically shown to improve a system's robustness to unknown operating conditions, and a novel low-level controller tuning algorithm that can reduce the number of required controllers compared to traditional tuning approaches. The presented methods apply to any autonomous system that can be controlled using model-based controllers and can be integrated alongside existing fault-tolerant control systems to improve robustness to unknown operating conditions. This impacts autonomous system designers by providing novel control mechanisms to improve a system's robustness to unknown operating conditions.
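The supervisory/low-level split described above can be sketched with a classical multiple-model switching loop: a bank of candidate plant models, each paired with a controller, and a supervisor that activates whichever model best predicts the plant's response. This is a generic textbook scheme, not one of the thesis's five architectures; the scalar plant, the candidate set, and the deadbeat control law are all illustrative assumptions:

```python
import numpy as np

def simulate_mmac(a_true, models, x0=1.0, steps=30):
    """Multiple-model adaptive control on the scalar plant
    x_{k+1} = a_true * x_k + u_k, with a_true unknown to the controller.

    Each candidate model a_i has the deadbeat low-level law u = -a_i * x.
    The supervisor accumulates each model's one-step prediction error
    and switches to the controller of the best-fitting model."""
    x = x0
    err = np.zeros(len(models))        # cumulative prediction errors
    active = 0                         # index of the active controller
    traj = [x]
    for _ in range(steps):
        u = -models[active] * x        # low-level law of the active model
        x_next = a_true * x + u        # true (unknown) plant response
        for i, a_i in enumerate(models):
            err[i] += abs(x_next - (a_i * x + u))   # model-fit monitoring
        active = int(np.argmin(err))   # supervisory switching decision
        x = x_next
        traj.append(x)
    return np.array(traj), active

# The wrong controller is active at first; after one step the supervisor
# identifies the better model and the deadbeat law drives x to zero.
traj, active = simulate_mmac(a_true=1.5, models=[0.5, 1.5])
print(traj[:4], active)    # state settles at 0 with model index 1 active
```

The design choice mirrored here is the framework's modularity: the supervisor only consumes prediction errors, so the candidate low-level controllers could be model-based or learned without changing the switching logic.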