2,521 research outputs found

    Stabilizing Randomly Switched Systems

    Full text link
    This article is concerned with stability analysis and stabilization of randomly switched systems under a class of switching signals. The switching signal is modeled as a jump stochastic (not necessarily Markovian) process independent of the system state; it selects, at each instant of time, the active subsystem from a family of systems. Sufficient conditions for stochastic stability (almost sure, in the mean, and in probability) of the switched system are established when the subsystems do not possess control inputs, and not every subsystem is required to be stable. These conditions are then employed to design stabilizing feedback controllers when the subsystems are affine in control. The analysis is carried out with the aid of multiple Lyapunov-like functions, and the analysis results, together with universal formulae for feedback stabilization of nonlinear systems, constitute our primary tools for control design.
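    A minimal simulation sketch of this setting (not the paper's construction): a two-mode randomly switched linear system whose switching signal is an i.i.d. jump process independent of the state, with one unstable subsystem in the family. The matrices, switching probabilities, and holding time below are invented for illustration only.

```python
# Sketch only: randomly switched linear system x' = A_sigma(t) x, where sigma
# is a jump process independent of the state. Mode 1 is unstable on its own,
# but the mix still decays for this (made-up) parameter choice.
import numpy as np

rng = np.random.default_rng(0)
A = [np.array([[-2.0, 0.0], [0.0, -1.5]]),   # stable subsystem
     np.array([[0.3, 1.0], [-1.0, 0.3]])]    # unstable subsystem
probs = [0.8, 0.2]        # subsystem drawn at each jump, i.i.d.
hold = 0.1                # mean holding time between jumps (exponential)

def simulate(x0, T=50.0, dt=1e-3):
    x, t = np.array(x0, float), 0.0
    mode = rng.choice(2, p=probs)
    next_jump = rng.exponential(hold)
    while t < T:
        if t >= next_jump:                    # switch: redraw the active subsystem
            mode = rng.choice(2, p=probs)
            next_jump = t + rng.exponential(hold)
        x = x + dt * A[mode] @ x              # Euler step of the active subsystem
        t += dt
    return x

x_final = simulate([1.0, 1.0])
print("final |x| ~", np.linalg.norm(x_final))  # decays for these parameters
```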

    On the gap between deterministic and probabilistic joint spectral radii for discrete-time linear systems

    Get PDF
    Given a discrete-time linear switched system $\Sigma(\mathcal A)$ associated with a finite set $\mathcal A$ of matrices, we consider the measures of its asymptotic behavior given by, on the one hand, its deterministic joint spectral radius $\rho_{\mathrm d}(\mathcal A)$ and, on the other hand, its probabilistic joint spectral radii $\rho_{\mathrm p}(\nu,P,\mathcal A)$ for Markov random switching signals with transition matrix $P$ and a corresponding invariant probability $\nu$. Note that $\rho_{\mathrm d}(\mathcal A)$ is larger than or equal to $\rho_{\mathrm p}(\nu,P,\mathcal A)$ for every pair $(\nu,P)$. In this paper, we investigate the cases of equality of $\rho_{\mathrm d}(\mathcal A)$ with either a single $\rho_{\mathrm p}(\nu,P,\mathcal A)$ or with the supremum of $\rho_{\mathrm p}(\nu,P,\mathcal A)$ over $(\nu,P)$, and we aim at characterizing the sets $\mathcal A$ for which such equalities may occur.
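    As a rough numerical illustration of the two quantities (conventions for the probabilistic radius vary; below it is estimated as the $k$-th root of the expected norm of length-$k$ random products), the sketch compares a brute-force estimate of $\rho_{\mathrm d}(\mathcal A)$ with a Monte Carlo estimate of $\rho_{\mathrm p}(\nu,P,\mathcal A)$ for an invented pair of matrices and an invented Markov transition matrix.

```python
# Toy sketch: finite-horizon estimates of the deterministic and probabilistic
# joint spectral radii for a set of two 2x2 matrices. Not the paper's method;
# matrices, P, and nu are illustrative only.
import itertools
import numpy as np

rng = np.random.default_rng(1)
A = [np.array([[1.0, 1.0], [0.0, 1.0]]),
     np.array([[1.0, 0.0], [1.0, 1.0]])]
P = np.array([[0.9, 0.1],        # transition matrix of the switching signal
              [0.1, 0.9]])
nu = np.array([0.5, 0.5])        # its invariant probability

k = 12
# Deterministic radius: maximize the normalized norm over all length-k products.
rho_d = max(np.linalg.norm(np.linalg.multi_dot([A[i] for i in w]), 2)
            for w in itertools.product(range(2), repeat=k)) ** (1.0 / k)

# Probabilistic radius: average the product norm over Markov sample paths.
def markov_product_norm():
    i = rng.choice(2, p=nu)
    M = A[i]
    for _ in range(k - 1):
        i = rng.choice(2, p=P[i])
        M = A[i] @ M
    return np.linalg.norm(M, 2)

rho_p = np.mean([markov_product_norm() for _ in range(2000)]) ** (1.0 / k)
print(f"rho_d ~ {rho_d:.3f}, rho_p ~ {rho_p:.3f}")   # rho_d >= rho_p, as expected
```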

    Mean square stabilization of discrete-time switching Markov jump linear systems

    Get PDF
    This paper considers a special class of hybrid systems called switching Markov jump linear systems. The system transitions are governed by two rules: one is a Markov chain and the other is a deterministic rule. Furthermore, the transition probability of the Markov chain is not only piecewise constant but also orchestrated by a deterministic switching rule. The mean square stability of such systems is studied when the deterministic switching is subject to two different dwell time conditions: having a lower bound, and having both lower and upper bounds. The main contributions of this paper are two stability theorems for the systems under study. A numerical example is provided to demonstrate the theoretical results.
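    For context, the sketch below implements the classical spectral-radius test for mean square stability of a plain discrete-time MJLS $x_{k+1} = A_{\theta_k} x_k$, without the additional deterministic switching layer and dwell time constraints studied in the paper; the matrices and transition probabilities are illustrative only.

```python
# Sketch of the standard mean-square stability test for a plain discrete-time
# MJLS: the system is MS-stable iff the spectral radius of
# (P^T kron I) * blkdiag(A_i kron A_i) is strictly below one.
# All numbers here are made up for illustration.
import numpy as np
from scipy.linalg import block_diag

A = [np.array([[0.5, 0.2], [0.0, 0.4]]),
     np.array([[1.1, 0.0], [0.3, 0.5]])]    # mode 1 alone would be unstable
P = np.array([[0.6, 0.4],
              [0.7, 0.3]])                   # p_ij = Prob(theta_{k+1}=j | theta_k=i)

n = A[0].shape[0]
Lam = np.kron(P.T, np.eye(n * n)) @ block_diag(*[np.kron(Ai, Ai) for Ai in A])
rho = max(abs(np.linalg.eigvals(Lam)))
# Frequent visits to the contracting mode make this pair MS-stable.
print(f"spectral radius = {rho:.3f} ->", "MS-stable" if rho < 1 else "not MS-stable")
```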

    Almost sure stability of discrete-time Markov Jump Linear Systems

    Get PDF
    This paper deals with transient analysis and almost sure stability of discrete-time Markov jump linear systems (MJLS). Expressions for the expected sojourn time and activation number of any mode, and for the expected number of switches between any two modes, of a discrete-time MJLS are presented first. A result on the transient behavior of discrete-time MJLS is then given. Finally, a new deterministically testable condition for the exponential almost sure stability of discrete-time MJLS is proposed.
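    A rough sample-path sketch of the quantities involved (not the paper's deterministically testable condition): it estimates the top Lyapunov exponent $(1/k)\log\|x_k\|$ of a discrete-time MJLS along a simulated switching path, where a negative estimate is consistent with exponential almost sure stability, and it also tallies the empirical mean sojourn time of each mode. All parameters are invented.

```python
# Monte Carlo sketch: Lyapunov exponent estimate and empirical sojourn times
# for a two-mode discrete-time MJLS with made-up matrices and transition matrix.
import numpy as np

rng = np.random.default_rng(2)
A = [np.array([[0.6, 0.3], [0.0, 0.8]]),
     np.array([[1.05, 0.0], [0.2, 0.9]])]   # mode 1 alone is (mildly) unstable
P = np.array([[0.7, 0.3],
              [0.6, 0.4]])

def lyapunov_and_sojourn(k=20000):
    x = np.array([1.0, 1.0])
    mode, log_norm, run = 0, 0.0, 1
    sojourns = {0: [], 1: []}
    for _ in range(k):
        x = A[mode] @ x
        log_norm += np.log(np.linalg.norm(x))   # accumulate log growth per step
        x /= np.linalg.norm(x)                  # renormalize to avoid overflow
        new_mode = rng.choice(2, p=P[mode])
        if new_mode == mode:
            run += 1
        else:
            sojourns[mode].append(run)          # record a completed sojourn
            run = 1
        mode = new_mode
    return log_norm / k, {m: np.mean(s) for m, s in sojourns.items()}

lam, mean_sojourn = lyapunov_and_sojourn()
print(f"Lyapunov exponent ~ {lam:.4f}, mean sojourn times ~ {mean_sojourn}")
```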

    Almost Sure Stabilization for Adaptive Controls of Regime-switching LQ Systems with A Hidden Markov Chain

    Full text link
    This work is devoted to the almost sure stabilization of adaptive control systems that involve an unknown Markov chain. The control system displays continuous dynamics represented by differential equations and discrete events given by a hidden Markov chain. Different from previous work on stabilization of adaptively controlled systems with a hidden Markov chain, where average criteria were considered, this work focuses on the almost sure (sample path) stabilization of the underlying processes. Under simple conditions, it is shown that as long as the feedback controls have linear growth in the continuous component, the resulting process is regular. Moreover, by an appropriate choice of Lyapunov functions, it is shown that the adaptive system is stabilizable almost surely. As a by-product, it is also established that the controlled process is positive recurrent.

    On stability of randomly switched nonlinear systems

    Full text link
    This article is concerned with stability analysis and stabilization of randomly switched nonlinear systems. These systems may be regarded as piecewise deterministic stochastic systems: the discrete switches are triggered by a stochastic process which is independent of the state of the system, and between two consecutive switching instants the dynamics are deterministic. Our results provide sufficient conditions for almost sure global asymptotic stability using Lyapunov-based methods when individual subsystems are stable and a certain "slow switching" condition holds. This slow switching condition takes the form of an asymptotic upper bound on the probability mass function of the number of switches that occur between the initial and current time instants. This condition is shown to hold for switching signals coming from the states of finite-dimensional continuous-time Markov chains; our results therefore hold for Markov jump systems in particular. For systems with control inputs we provide explicit control schemes for feedback stabilization using the universal formula for stabilization of nonlinear systems.
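    A minimal sketch of the feedback construction, assuming the universal formula meant here is Sontag's formula for single-input control-affine systems: for $\dot x = f(x) + g(x)u$ with a control-Lyapunov function $V$, the formula yields a stabilizing feedback from $a(x) = \nabla V\cdot f$ and $b(x) = \nabla V\cdot g$. The scalar subsystem, $g$, and $V(x) = x^2$ below are invented for illustration, not taken from the paper.

```python
# Sketch of Sontag's universal formula applied to a made-up scalar
# control-affine subsystem xdot = x^3 + u with CLF V(x) = x^2.
import numpy as np

def f(x): return x**3           # drift of the active subsystem (illustrative)
def g(x): return 1.0            # input vector field (illustrative)
def gradV(x): return 2.0 * x    # gradient of V(x) = x^2

def sontag(x):
    a = gradV(x) * f(x)         # a(x) = <gradV, f>
    b = gradV(x) * g(x)         # b(x) = <gradV, g>
    if abs(b) < 1e-12:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b   # Sontag's formula

# Closed-loop Euler simulation: V decreases along trajectories,
# since Vdot = a + b*u = -sqrt(a^2 + b^4) < 0 away from the origin.
x, dt = 1.5, 1e-3
for _ in range(5000):
    x += dt * (f(x) + g(x) * sontag(x))
print("final state ~", round(x, 4))          # driven toward the origin
```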