
    Stabilisation of hybrid stochastic differential equations by delay feedback control

    This paper is concerned with the exponential mean-square stabilisation of hybrid stochastic differential equations (also known as stochastic differential equations with Markovian switching) by delay feedback controls. Although stabilisation by non-delay feedback controls for such equations has been discussed by several authors, there is so far little on stabilisation by delay feedback controls, and our aim here is mainly to close this gap. To make our theory more understandable, as well as to avoid complicated notation, we restrict our underlying hybrid stochastic differential equations to a relatively simple form. However, our theory can certainly be developed to cope with much more general equations without any difficulty.

    Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain

    This work is devoted to the almost sure stabilization of adaptive control systems that involve an unknown Markov chain. The control system displays continuous dynamics represented by differential equations and discrete events given by a hidden Markov chain. Unlike previous work on the stabilization of adaptive controlled systems with a hidden Markov chain, where average criteria were considered, this work focuses on the almost sure (sample path) stabilization of the underlying processes. Under simple conditions, it is shown that as long as the feedback controls have linear growth in the continuous component, the resulting process is regular. Moreover, by an appropriate choice of Lyapunov functions, it is shown that the adaptive system is stabilizable almost surely. As a by-product, it is also established that the controlled process is positive recurrent.

    Almost sure exponential stabilization by discrete-time stochastic feedback control

    Given an unstable linear scalar differential equation ẋ(t) = αx(t) (α > 0), we will show that the discrete-time stochastic feedback control σx([t/τ]τ)dB(t) can stabilize it. That is, we will show that the stochastically controlled system dx(t) = αx(t)dt + σx([t/τ]τ)dB(t) is almost surely exponentially stable when σ² > 2α and τ > 0 is sufficiently small, where B(t) is a Brownian motion and [t/τ] is the integer part of t/τ. We will also discuss the nonlinear stabilization problem by a discrete-time stochastic feedback control. The reason why we consider discrete-time stochastic feedback control is that the state of the given system is in fact observed only at discrete times, say 0, τ, 2τ, …, where τ > 0 is the duration between two consecutive observations. Accordingly, the stochastic feedback control should be designed based on these discrete-time observations; that is, it should be of the form σx([t/τ]τ)dB(t). From the point of view of control cost, it is cheaper if one only needs to observe the state less frequently. It is therefore useful to bound τ from below: the larger the bound, the better.
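The scalar example above can be simulated with a plain Euler-Maruyama loop. The sketch below is illustrative only: the parameter values (α = 1, σ = 3, τ = 0.01) and variable names are assumptions, not taken from the paper; they satisfy the stated stabilisation condition σ² > 2α with τ small.

```python
import numpy as np

# Euler-Maruyama simulation of dx(t) = alpha*x(t)dt + sigma*x([t/tau]tau)dB(t).
# Stabilising regime: sigma^2 > 2*alpha and tau sufficiently small.
rng = np.random.default_rng(0)
alpha, sigma, tau = 1.0, 3.0, 0.01     # sigma^2 = 9 > 2*alpha = 2
dt, T = 1e-4, 10.0
steps_per_obs = int(tau / dt)          # EM steps between two observations
x = 1.0
x_obs = x                              # last observed state x([t/tau]tau)
for k in range(int(T / dt)):
    if k % steps_per_obs == 0:
        x_obs = x                      # observe the state every tau time units
    dB = rng.normal(0.0, np.sqrt(dt))  # Brownian increment over one step
    x += alpha * x * dt + sigma * x_obs * dB
print(abs(x))                          # decays toward 0 on typical sample paths
```

Intuitively, the unstable drift αx is overwhelmed by the Itô correction −σ²/2 in the exponent of the solution, which is what makes the sample paths decay almost surely when σ² > 2α.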

    Feedback control of quantum state reduction

    Feedback control of quantum mechanical systems must take into account the probabilistic nature of quantum measurement. We formulate quantum feedback control as a problem of stochastic nonlinear control by considering separately a quantum filtering problem and a state feedback control problem for the filter. We explore the use of stochastic Lyapunov techniques for the design of feedback controllers for quantum spin systems and demonstrate the possibility of stabilizing one outcome of a quantum measurement with unit probability.

    Path-wise control of stochastic systems: overcoming the curse of causality

    In this thesis we address the topic of path-wise control of stochastic systems defined by stochastic differential equations. By path-wise control we mean that the controller's decisions are not intended to regulate the moments of the state or the output (or a function of them), as is customary in stochastic control. Instead, we aim at designing a controller that achieves a desired, specific trajectory of the state (or the output) itself, for all possible realisations of the noise affecting the system. We show that path-wise control is cursed by insuperable causality issues: in order to perfectly attain a predefined trajectory for each realisation of the noise, the controller needs access to measurements of the noise itself, which is not possible in practice. Therefore, we approach path-wise control in two steps. First, we design idealistic controllers, which achieve exact regulation by employing a feedback of the noise. Although unrealistic, these designs are preliminary to the second step, i.e. the construction of practical controllers, which estimate the noise from measurements of available quantities (state or output) and use such estimates to perform approximate path-wise control in a hybrid way. We show that the performance of the practical controllers can recover that of the idealistic ones in a limiting behaviour. In this framework we address two classical control problems. First, we consider output regulation of linear stochastic systems. We show that the idealistic controllers achieve a zero steady-state tracking error, while the practical controllers allow for a nonzero steady-state error, which, however, can be made arbitrarily small by tuning a design parameter. Second, we consider the control of stochastic systems defined by nonlinear, control-affine stochastic differential equations.
In this case, we show that the idealistic controllers achieve exact feedback linearisation and output tracking, while the practical controllers achieve state (and output) trajectories that can be made close to the idealistic ones by tuning a design parameter, thus obtaining approximate feedback linearisation and tracking.

    Properties Of Nonlinear Randomly Switching Dynamic Systems: Mean-Field Models And Feedback Controls For Stabilization

    This dissertation concerns the properties of nonlinear dynamic systems hybrid with Markov switching. It contains two parts. The first part focuses on mean-field models with state-dependent regime switching, and the second part focuses on system regularization and stabilization using feedback control. Throughout this dissertation, Markov switching processes are used to describe the randomness caused by discrete events, such as sudden environment changes or other uncertainty. In Chapter 2, the mean-field models we study are formulated as nonlinear stochastic differential equations hybrid with state-dependent regime switching. They originate from the phase transition problem in statistical physics. The mean-field term is used to describe the complex interactions between the many bodies in the system and acts as a mean-reverting effect. We study the basic properties of such models, including regularity, non-negativity, finite moments, existence of moment generating functions, continuity of sample paths, positive recurrence, and long-time behavior. We also prove that when the switching process switches much more frequently, the two-time-scale limit exists. In Chapters 3 and 4, we consider feedback control for the stabilization of nonlinear dynamic systems. Chapter 3 focuses on nonlinear deterministic systems with switching. Many nonlinear systems explode in finite time. We found that Brownian motion noise can be used as a feedback control to stabilize such systems: one nonlinear feedback noise term suppresses the explosion, and another linear feedback noise term then stabilizes the system to the equilibrium point 0. Since it is almost impossible to obtain closed-form solutions, a discrete-time approximation algorithm is constructed. The interpolated sequence of the discrete-time algorithm is proved to converge to the switching diffusion process, and the regularity and stability results of the approximating sequence are then derived.
In Chapter 4, we study nonlinear stochastic systems with switching. Using similar methods, we prove that well-designed noise-type feedback controls can also regularize and stabilize nonlinear switching diffusions. Examples are used to demonstrate the results.
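A hedged numerical sketch of the two-noise idea described above, with illustrative coefficients that are not taken from the dissertation: the deterministic equation ẋ = x³ with x(0) = 1 explodes at t = 1/2, while adding a superlinear noise term to suppress the explosion and a linear one to drive the state toward 0 yields a well-behaved path. The tamed Euler step used here is a common choice for superlinear drifts, not necessarily the dissertation's algorithm.

```python
import numpy as np

# Two-term noise feedback for the explosive ODE xdot = x^3 (with x(0) = 1 the
# ODE blows up at t = 1/2).  Controlled SDE (illustrative coefficients):
#   dx = x^3 dt + sigma*x^2 dB1 + rho*x dB2
# By Ito's formula, d log|x| = (1 - sigma^2/2)x^2 dt - (rho^2/2)dt + noise,
# so sigma^2 > 2 suppresses the explosion and rho*x dB2 then yields a
# negative Lyapunov exponent.  A tamed Euler step keeps the drift bounded.
rng = np.random.default_rng(2)
sigma, rho = 2.0, 2.0
dt, T = 1e-4, 10.0
x = 1.0
for _ in range(int(T / dt)):
    drift = x**3 * dt / (1.0 + dt * abs(x) ** 3)   # tamed drift increment
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)
    x += drift + sigma * x**2 * dB1 + rho * x * dB2
print(abs(x))   # remains finite and small instead of exploding
```

The split mirrors the abstract: the σx² term dominates the cubic drift for large |x| (suppressing explosion), while the ρx term supplies the negative exponent near the equilibrium.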

    Stability equivalence between the stochastic differential delay equations driven by G-Brownian motion and the Euler-Maruyama method

    Consider a stochastic differential delay equation driven by G-Brownian motion (G-SDDE) dx(t) = f(x(t), x(t − τ))dt + g(x(t), x(t − τ))dB(t) + h(x(t), x(t − τ))d⟨B⟩(t). Under the global Lipschitz condition for the G-SDDE, we show that the G-SDDE is exponentially stable in mean square if and only if, for sufficiently small step size, the Euler-Maruyama (EM) method is exponentially stable in mean square. Thus, we can carry out careful numerical simulations to investigate the exponential stability of the underlying G-SDDE in practice, in the absence of an appropriate Lyapunov function. A numerical example is provided to illustrate our results.
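The EM discretisation of a delay equation can be sketched as follows. This is a hedged illustration for a classical Brownian SDDE with h ≡ 0 (so the d⟨B⟩ term of the G-setting does not appear) and illustrative linear coefficients chosen to be mean-square stable; none of the names or values come from the paper.

```python
import numpy as np

# Euler-Maruyama for the scalar SDDE
#   dx(t) = f(x(t), x(t - tau))dt + g(x(t), x(t - tau))dB(t)
# with illustrative choices f(x, y) = -2x + 0.5y and g(x, y) = 0.2y
# (the -2x term dominates the delayed terms, giving mean-square stability).
rng = np.random.default_rng(1)
tau, dt, T = 1.0, 0.01, 20.0
lag = int(tau / dt)             # number of grid points spanning one delay
n = int(T / dt)
x = np.empty(n + lag + 1)
x[: lag + 1] = 1.0              # constant initial segment on [-tau, 0]
for k in range(lag, lag + n):
    xk, xd = x[k], x[k - lag]   # current and delayed states
    dB = rng.normal(0.0, np.sqrt(dt))
    x[k + 1] = xk + (-2.0 * xk + 0.5 * xd) * dt + 0.2 * xd * dB
print(abs(x[-1]))               # close to 0, consistent with stability
```

Per the paper's equivalence result, observing the EM iterates decay for a small step size is numerical evidence for the exponential mean-square stability of the underlying equation, which is useful when no Lyapunov function is available.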