5,702 research outputs found

    Stabilization of hybrid systems by intermittent feedback controls based on discrete-time observations with a time delay

    This paper mainly investigates the stabilization of hybrid stochastic differential equations (SDEs) via periodically intermittent feedback controls based on discrete-time state observations with a time delay. First, using M-matrix theory and an intermittent control strategy, we establish sufficient conditions for the stability of hybrid SDEs. Then, we prove intermittent stabilization for a given unstable nonlinear hybrid SDE by a comparison theorem. Two numerical examples are discussed to support our theoretical results.
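The control scheme summarized above can be made concrete in simulation. The sketch below is a minimal Euler–Maruyama illustration, not the paper's method: it simulates a scalar two-mode hybrid SDE dx = (a[r]x + u)dt + b[r]x dW, where the feedback u is computed from the most recent discrete-time observation that is at least `delay` old, and is switched on only during the first `on_frac` fraction of each control period. All coefficients, rates, and gains are illustrative assumptions.

```python
import numpy as np

def simulate(T=10.0, dt=1e-3, tau=0.1, delay=0.02, period=1.0,
             on_frac=0.7, k=4.0, seed=1):
    """Euler-Maruyama sketch of intermittent delayed-observation feedback.
    All parameters are illustrative assumptions, not values from the paper."""
    rng = np.random.default_rng(seed)
    a = [1.0, 0.5]           # unstable drift coefficient per Markov mode
    b = [0.3, 0.4]           # diffusion coefficient per mode
    q = [2.0, 3.0]           # leaving rate of each state of the Markov chain
    n = int(round(T / dt))
    x, r = 1.0, 0
    obs_t, obs_x = [0.0], [x]   # discrete-time state observations
    ptr = 0                      # index of the delayed observation in use
    path = [x]
    for i in range(n):
        t = i * dt
        # take a new observation every tau time units
        if t >= obs_t[-1] + tau - 1e-12:
            obs_t.append(t)
            obs_x.append(x)
        # advance to the most recent observation at least `delay` old
        while ptr + 1 < len(obs_t) and obs_t[ptr + 1] <= t - delay:
            ptr += 1
        # intermittent control: active on the first on_frac of each period
        u = -k * obs_x[ptr] if (t % period) < on_frac * period else 0.0
        dW = rng.normal(0.0, np.sqrt(dt))
        x = x + (a[r] * x + u) * dt + b[r] * x * dW
        # approximate Markov-chain mode switch over one step
        if rng.random() < q[r] * dt:
            r = 1 - r
        path.append(x)
    return np.array(path)
```

With the control off (`on_frac=0.0`) the drift coefficients above make the state grow on average; the intermittent delayed feedback is what pulls the sample path back toward zero.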

    Robust stabilization of hybrid uncertain stochastic systems by discrete-time feedback control

    This paper aims to stabilize hybrid stochastic differential equations (SDEs) with norm-bounded uncertainties by feedback controls based on discrete-time observations of both state and mode. The control appears only in the drift part (the deterministic part) of the SDE, and the controlled system is robustly exponentially stable in mean square. Our stabilization criteria are stated in terms of linear matrix inequalities (LMIs), whence the feedback controls can be designed easily in practice. An example is given to illustrate the effectiveness of our results.
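The mean-square stability notion used above admits a compact linear-algebra test in the unperturbed linear case. As a hedged sketch (a standard vectorization argument, not the paper's LMI machinery): for dx = A x dt + G x dW, the second moment P(t) = E[x xᵀ] satisfies dP/dt = A P + P Aᵀ + G P Gᵀ, so mean-square exponential stability is equivalent to the matrix I⊗A + A⊗I + G⊗G being Hurwitz. The matrices A, G, and gain K below are illustrative assumptions.

```python
import numpy as np

def ms_stable(A, G):
    """Mean-square stability test for the linear SDE dx = A x dt + G x dW.

    P(t) = E[x(t) x(t)^T] obeys dP/dt = A P + P A^T + G P G^T, which
    vectorizes (via vec(M P N) = (N^T kron M) vec P) to
        d vec(P)/dt = (I kron A + A kron I + G kron G) vec(P).
    Stability holds iff that matrix is Hurwitz."""
    I = np.eye(A.shape[0])
    L = np.kron(I, A) + np.kron(A, I) + np.kron(G, G)
    return bool(np.max(np.linalg.eigvals(L).real) < 0)

# Unstable open loop; drift-only feedback u = Kx with K = -3I (illustrative).
A = np.array([[1.0, 0.0], [0.0, 0.5]])
G = 0.3 * np.eye(2)
K = -3.0 * np.eye(2)
print(ms_stable(A, G), ms_stable(A + K, G))  # -> False True
```

The feedback enters only through the drift matrix A + K, mirroring the control structure described in the abstract, while the diffusion term G is left untouched.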

    Asymptotic stabilization of continuous-time periodic stochastic systems by feedback control based on periodic discrete-time observations

    In 2013, Mao initiated the study of stabilization of continuous-time hybrid stochastic differential equations (SDEs) by feedback control based on discrete-time state observations. In recent years, this study has been further developed using a constant observation interval. However, time-varying observation frequencies have not been discussed for this study. Particularly for non-autonomous periodic systems, it is more sensible, in terms of control efficiency, to exploit the time-varying property and observe the system at periodic time-varying frequencies. This paper introduces a periodic observation interval sequence and investigates how to stabilize a periodic SDE by feedback control based on periodic observations, in the sense that the controlled system achieves Lp-stability for p > 1, almost sure asymptotic stability, and pth moment asymptotic stability for p ≥ 2. This paper uses the Lyapunov method and inequalities to derive the theory. We also verify the existence of the observation interval sequence and explain how to calculate it. Finally, an illustrative example is given after a useful corollary. By considering the time-varying property of the system, we reduce the observation frequency dramatically and hence reduce the observational cost for control.
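A periodic observation interval sequence can be made concrete with a small helper. The sketch below is an assumption about the construction, not the paper's algorithm: it cycles through a finite interval sequence (whose entries sum to the observation period) and returns the resulting discrete observation times up to a horizon.

```python
def observation_times(intervals, horizon):
    """Observation times generated by a periodic interval sequence.

    The sequence `intervals` repeats with period sum(intervals); e.g.
    intervals [1.0, 2.0] observe at 0, 1, 3, 4, 6, ... (illustrative sketch).
    """
    times, t, i = [0.0], 0.0, 0
    while t < horizon:
        t += intervals[i % len(intervals)]
        i += 1
        if t <= horizon:
            times.append(t)
    return times
```

A time-varying sequence like `[1.0, 2.0]` uses three observations per period of length 3, versus six for a constant interval of 0.5, which is the observational-cost saving the abstract refers to.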

    Exponential Stabilisation of Continuous-time Periodic Stochastic Systems by Feedback Control Based on Periodic Discrete-time Observations

    Since Mao in 2013 discretised the system observations in the stabilisation problem of hybrid SDEs (stochastic differential equations with Markovian switching) by feedback control, the study of this topic using a constant observation frequency has been further developed. However, time-varying observation frequencies have not been considered. In particular, a more observationally efficient approach is to exploit the time-varying property of the system and observe a periodic SDE system at periodic time-varying frequencies. This study investigates how to stabilise a periodic hybrid SDE by a periodic feedback control based on periodic discrete-time observations. It provides sufficient conditions under which the controlled system achieves pth moment exponential stability for p > 1 and almost sure exponential stability. Lyapunov's method and inequalities are the main tools for derivation and analysis. The existence of observation interval sequences is verified and one way of calculating them is provided. Finally, an example is given for illustration. These new techniques not only reduce observational cost by dramatically reducing the observation frequency but also offer flexibility in system observation settings, allowing readers to set observation frequencies according to their needs to some extent.

    Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain

    This work is devoted to the almost sure stabilization of adaptive control systems that involve an unknown Markov chain. The control system displays continuous dynamics represented by differential equations and discrete events given by a hidden Markov chain. In contrast to previous work on stabilization of adaptive controlled systems with a hidden Markov chain, where average criteria were considered, this work focuses on the almost sure, or sample path, stabilization of the underlying processes. Under simple conditions, it is shown that as long as the feedback controls have linear growth in the continuous component, the resulting process is regular. Moreover, by appropriate choice of Lyapunov functions, it is shown that the adaptive system is stabilizable almost surely. As a by-product, it is also established that the controlled process is positive recurrent.

    On stabilization of bilinear uncertain time-delay stochastic systems with Markovian jumping parameters

    Copyright [2002] IEEE. This material is posted here with permission of the IEEE. Such permission of the IEEE does not in any way imply IEEE endorsement of any of Brunel University's products or services. Internal or personal use of this material is permitted. However, permission to reprint/republish this material for advertising or promotional purposes or for creating new collective works for resale or redistribution must be obtained from the IEEE by writing to [email protected]. By choosing to view this document, you agree to all provisions of the copyright laws protecting it.
    In this paper, we investigate the stochastic stabilization problem for a class of bilinear continuous time-delay uncertain systems with Markovian jumping parameters. Specifically, the stochastic bilinear jump system under study involves unknown state time-delay, parameter uncertainties, and unknown nonlinear deterministic disturbances. The jumping parameters considered here form a continuous-time discrete-state homogeneous Markov process. The whole system may be regarded as a stochastic bilinear hybrid system that includes both time-evolving and event-driven mechanisms. Our attention is focused on the design of a robust state-feedback controller such that, for all admissible uncertainties as well as nonlinear disturbances, the closed-loop system is stochastically exponentially stable in the mean square, independent of the time delay. Sufficient conditions are established to guarantee the existence of the desired robust controllers, which are given in terms of the solutions to a set of either linear matrix inequalities (LMIs) or coupled quadratic matrix inequalities. The developed theory is illustrated by numerical simulation.

    Stabilisation of stochastic differential equations with Markovian switching by feedback control based on discrete-time state observation with a time delay

    Feedback control based on discrete-time state observation for stochastic differential equations with Markovian switching was initiated by Mao (2013). In practice, various effects can cause a time delay in the control function. Therefore, the time delay is taken into account for the discrete-time state observation in this letter, and the mean-square exponential stability of the controlled system is investigated. This letter is a continuation of the research in Mao (2013).