290 research outputs found

    Asymptotic Expansions for Stationary Distributions of Perturbed Semi-Markov Processes

    New algorithms for computing asymptotic expansions for stationary distributions of nonlinearly perturbed semi-Markov processes are presented. The algorithms are based on special techniques of sequential phase space reduction, which can be applied to processes with asymptotically coupled and uncoupled finite phase spaces. (Comment: 83 pages.)
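    The paper above treats nonlinearly perturbed semi-Markov processes via sequential phase space reduction; that algorithm is not reproduced here. As a much simpler, hedged illustration of what an asymptotic expansion of a stationary distribution looks like, the sketch below computes a first-order expansion pi(eps) ≈ pi0 + eps*pi1 for a linearly perturbed discrete-time Markov chain P(eps) = P0 + eps*P1. The matrices P0 and P1 and the helper `stationary` are illustrative assumptions, not the paper's method.

```python
import numpy as np

def stationary(P):
    """Stationary row vector pi with pi P = pi, pi 1 = 1 (irreducible P assumed)."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Unperturbed chain P0 and a perturbation direction P1 (rows of P1 sum to 0),
# so that P(eps) = P0 + eps*P1 stays a stochastic matrix for small eps. (Assumed data.)
P0 = np.array([[0.9, 0.1, 0.0],
               [0.2, 0.7, 0.1],
               [0.0, 0.3, 0.7]])
P1 = np.array([[-0.05, 0.05,  0.0],
               [ 0.0, -0.1,   0.1],
               [ 0.1,  0.0,  -0.1]])

pi0 = stationary(P0)
ones = np.ones((3, 1))
# Fundamental matrix Z = (I - P0 + 1*pi0)^{-1}; it satisfies Z (I - P0) = I - 1*pi0.
Z = np.linalg.inv(np.eye(3) - P0 + ones @ pi0[None, :])
# First-order coefficient: pi1 solves pi1 (I - P0) = pi0 P1 with pi1 1 = 0.
pi1 = pi0 @ P1 @ Z

eps = 0.01
print("first-order expansion:", pi0 + eps * pi1)
print("exact stationary dist:", stationary(P0 + eps * P1))   # agree up to O(eps^2)
```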

    Properties Of Nonlinear Randomly Switching Dynamic Systems: Mean-Field Models And Feedback Controls For Stabilization

    This dissertation concerns the properties of nonlinear dynamic systems hybrid with Markov switching. It contains two parts: the first focuses on mean-field models with state-dependent regime switching, and the second on system regularization and stabilization using feedback control. Throughout this dissertation, Markov switching processes are used to describe the randomness caused by discrete events, such as sudden environmental changes or other uncertainties. In Chapter 2, the mean-field models we study are formulated as nonlinear stochastic differential equations hybrid with state-dependent regime switching. The problem originates from phase transition problems in statistical physics. The mean-field term describes the complex interactions between the many bodies in the system and acts as a mean-reverting effect. We study the basic properties of such models, including regularity, non-negativity, finite moments, existence of moment generating functions, continuity of sample paths, positive recurrence, and long-time behavior. We also prove that when the switching process changes much more frequently than the continuous dynamics, a two-time-scale limit exists. In Chapters 3 and 4, we consider feedback controls for the stabilization of nonlinear dynamic systems. Chapter 3 focuses on nonlinear deterministic systems with switching. Many such nonlinear systems explode in finite time. We show that Brownian motion noise can be used as feedback control to stabilize them: one nonlinear feedback noise term suppresses the explosion, and another linear feedback noise term stabilizes the system to the equilibrium point 0. Since closed-form solutions are rarely available, a discrete-time approximation algorithm is constructed. The interpolated sequence of the discrete-time algorithm is proved to converge to the switching diffusion process, and regularity and stability results for the approximating sequence are then derived. In Chapter 4, we study nonlinear stochastic systems with switching. Using similar methods, we prove that well-designed noise-type feedback controls can also regularize and stabilize nonlinear switching diffusions. Examples are used to demonstrate the results.
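    The dissertation's discrete-time approximation scheme is not reproduced here. The following is a minimal Euler–Maruyama sketch of a scalar regime-switching system with the two kinds of noise feedback the abstract describes (a nonlinear term and a linear term); the drifts, noise gains, switching generator, and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two regimes with superlinear drifts (each would explode in finite time without noise).
drifts = {0: lambda x: x**2, 1: lambda x: 1.5 * x**2}     # assumed example drifts
Q = np.array([[-1.0,  1.0],                               # generator of the switching
              [ 2.0, -2.0]])                              # process alpha(t) (assumed)
sigma, rho = 2.0, 1.0      # nonlinear and linear noise-feedback gains (assumed)

dt, T = 1e-4, 5.0
n = int(T / dt)
x, alpha = 1.0, 0

path = np.empty(n)
for k in range(n):
    # Markov switching: leave state alpha with probability ~ -Q[alpha, alpha] * dt.
    if rng.random() < -Q[alpha, alpha] * dt:
        alpha = 1 - alpha
    dB1, dB2 = rng.normal(0.0, np.sqrt(dt), size=2)
    # Nonlinear noise feedback (suppresses explosion in the continuous-time theory)
    # plus linear noise feedback (drives the state toward 0 in that theory).
    x = x + drifts[alpha](x) * dt + sigma * x * abs(x) * dB1 + rho * x * dB2
    path[k] = x

print("x(T) =", path[-1], " max|x| along the path =", np.max(np.abs(path)))
```

    Note that plain Euler–Maruyama can itself misbehave on superlinear coefficients; tamed or truncated variants are often preferred in practice, and the parameters above are purely illustrative.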

    Sequences Of Random Matrices Modulated By A Discrete-Time Markov Chain

    In this dissertation, we consider a number of matrix-valued random sequences that are modulated by a discrete-time Markov chain having a finite state space. Assuming that the state space of the Markov chain is large, our main effort in this work is devoted to reducing the complexity. To achieve this goal, our formulation uses time-scale separation of the Markov chain. The state space of the Markov chain is split into subspaces. Next, the states of the Markov chain in each subspace are aggregated into a "super" state. Then we normalize the matrix-valued sequences that are modulated by the two-time-scale Markov chain. Under simple conditions, we derive a scaling limit of the centered and scaled sequence by using a martingale averaging approach. The limit is considered through a functional. It is shown that the scaled and interpolated sequence converges weakly to a switching diffusion. Towards the end of the work, we also indicate how we may handle matrix-valued processes directly. Certain tail probability estimates are obtained.
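    As a hedged toy illustration of the two-time-scale structure and state aggregation described above (not the dissertation's formulation), the sketch below simulates a 4-state modulating chain whose transition matrix is a block-diagonal fast part plus an order-eps slow part, lumps each 2-state block into a "super" state, and forms the centered, sqrt(n)-scaled sum of the modulated matrices. All matrices and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def stationary(P):
    """Stationary row vector of an irreducible stochastic matrix P."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Two-time-scale transition matrix: fast transitions inside each 2-state block,
# slow (order-eps) transitions between the blocks (structure assumed for illustration).
P_fast = np.block([[np.array([[0.6, 0.4], [0.3, 0.7]]), np.zeros((2, 2))],
                   [np.zeros((2, 2)), np.array([[0.5, 0.5], [0.2, 0.8]])]])
Q = np.array([[-1.0,  0.0,  1.0,  0.0],
              [ 0.0, -1.0,  0.0,  1.0],
              [ 1.0,  0.0, -1.0,  0.0],
              [ 0.0,  1.0,  0.0, -1.0]])
eps = 0.01
P = P_fast + eps * Q

# One 2x2 matrix attached to each of the 4 states of the modulating chain (assumed data).
A = rng.normal(size=(4, 2, 2))
nu = stationary(P)                      # stationary law of the modulating chain
Abar = np.tensordot(nu, A, axes=1)      # mean matrix under the stationary law

n = 50_000
alpha, S = 0, np.zeros((2, 2))
agg = np.zeros(2)                       # occupancy of the two aggregated "super" states
for _ in range(n):
    S += A[alpha]
    agg[alpha // 2] += 1
    alpha = rng.choice(4, p=P[alpha])

# Centered and scaled sum at the terminal time; a functional limit of this kind of
# quantity is what the dissertation studies.
print("super-state occupancies:", agg / n)
print("(S - n*Abar)/sqrt(n) =\n", (S - n * Abar) / np.sqrt(n))
```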

    Reaction Networks and Population Dynamics

    Reaction systems and population dynamics constitute two highly developed areas of research that build on well-defined model classes, both in terms of dynamical systems and stochastic processes. Despite a significant core of common structures, the two fields have largely led separate lives. The workshop brought the communities together and emphasised concepts, methods, and results that have, so far, appeared in one area but are potentially useful in the other as well.

    Twentieth conference on stochastic processes and their applications


    Switching Diffusion Systems With Past-Dependent Switching Having A Countable State Space

    Emerging and existing applications in wireless communications, queueing networks, biological models, financial engineering, and social networks demand the mathematical modeling and analysis of hybrid models in which continuous dynamics and discrete events coexist. Assuming that the systems evolve in continuous time, and stemming from stochastic-differential-equation-based models and random discrete events, switching diffusions come into being. In such systems, continuous states and discrete events (discrete states) coexist and interact. A switching diffusion is a two-component process (X(t), α(t)): a continuous component and a discrete component taking values in a discrete set (a set consisting of isolated points). When the discrete component takes a value i (i.e., α(t) = i), the continuous component X(t) evolves according to the diffusion process whose drift and diffusion coefficients depend on i. Until very recently, in most of the literature it was assumed that α(t) takes values in a finite set and that the switching rates of α(t) are either independent of X(t) or depend only on its current state. To treat more realistic models and to broaden the applicability, this dissertation undertakes the task of investigating the dynamics of (X(t), α(t)) in a much more general setting in which α(t) has a countable state space and its switching intensities depend on the history of the continuous component X(t). We systematically establish important properties of this system: well-posedness, the Markov-Feller property, and the recurrence and ergodicity of the associated function-valued process. We also study several types of stability for the system.
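    The setting above, switching intensities that depend on the past of the continuous component and a countable discrete state space, can be mimicked in a crude simulation. The sketch below is only an illustration under assumed functional forms: an Euler step for X(t), plus birth-death moves for α(t) on {0, 1, 2, ...} whose rates depend on the average of X over the last r time units.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

dt, T, r = 1e-3, 10.0, 0.5           # step size, horizon, memory window (assumed)
nwin = int(r / dt)

def drift(x, i):                     # regime-dependent drift (assumed form)
    return -(1.0 + i) * x + 1.0

def diffusion(x, i):                 # regime-dependent diffusion coefficient (assumed form)
    return 0.3 * (1.0 + 0.1 * i)

def jump_rates(i, avg):
    """Past-dependent intensities on the countable set {0, 1, 2, ...}:
    birth-death moves whose rates depend on the running average of X over
    the last r units of time (illustrative choice)."""
    up = 1.0 + avg**2                # rate of i -> i + 1
    down = float(i)                  # rate of i -> i - 1 (0 is reflecting)
    return up, down

x, i = 0.0, 0
hist = deque([x] * nwin, maxlen=nwin)
for _ in range(int(T / dt)):
    avg = sum(hist) / nwin           # path functional driving the switching
    up, down = jump_rates(i, avg)
    u = rng.random()
    if u < up * dt:
        i += 1
    elif u < (up + down) * dt:
        i -= 1
    x += drift(x, i) * dt + diffusion(x, i) * np.sqrt(dt) * rng.normal()
    hist.append(x)

print("X(T) =", x, " alpha(T) =", i)
```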

    Control of singularly perturbed hybrid stochastic systems

    In this paper, we study a class of optimal stochastic control problems involving two different time scales. The fast mode of the system is represented by deterministic state equations, whereas the slow mode corresponds to a jump disturbance process. Under a fundamental “ergodicity” property for a class of “infinitesimal control systems” associated with the fast mode, we show that there exists a limit problem which provides a good approximation to the optimal control of the perturbed system. Both the finite-horizon and infinite-horizon discounted cases are considered. We show how an approximate optimal control law can be constructed from the solution of the limit control problem. In the particular case where the infinitesimal control systems possess the so-called turnpike property, i.e., are characterized by the existence of global attractors, the limit control problem can be given an interpretation related to a decomposition approach.
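    The construction above rests on an ergodicity property of the infinitesimal control systems attached to the fast mode. The sketch below is not the paper's construction but a rough numerical illustration of the averaging idea: for each frozen value of the jump disturbance, the fast dynamics are integrated to (near) their attractor under a constant control and a long-run average cost is recorded, the kind of quantity that enters a limit (averaged) control problem. All functions and parameters are assumptions.

```python
import numpy as np

def fast_dynamics(x, u, w):
    """Fast-mode state equation dx/ds = f(x, u, w); w is the frozen jump-disturbance
    level and u a constant control (illustrative form)."""
    return -x**3 + u + w

def running_cost(x, u):
    return x**2 + 0.1 * u**2          # assumed cost integrand

def long_run_average_cost(u, w, ds=1e-3, S=50.0, burn_in=10.0):
    """Integrate the fast ODE with u and w frozen; return the time-average cost after
    a burn-in. This relies on the fast system settling on its attractor, the kind of
    ergodicity/turnpike-type behaviour the paper assumes."""
    x, acc, count = 0.0, 0.0, 0
    for k in range(int(S / ds)):
        x += fast_dynamics(x, u, w) * ds
        if k * ds >= burn_in:
            acc += running_cost(x, u)
            count += 1
    return acc / count

# Tabulate the averaged cost over a small control grid for each disturbance level;
# a limit problem would optimize over u for each w (sketch only).
for w in (0.0, 1.0):
    costs = {u: long_run_average_cost(u, w) for u in (-1.0, 0.0, 1.0)}
    best_u = min(costs, key=costs.get)
    print(f"w = {w}: averaged costs {costs}, minimizing control u = {best_u}")
```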

    Markov and Semi-markov Chains, Processes, Systems and Emerging Related Fields

    This book covers a broad range of research results in the field of Markov and semi-Markov chains, processes, systems, and related emerging fields. The authors of the included research papers are well-known researchers in their fields. The book presents the state of the art and ideas for further research for theorists, and it also provides directly applicable results for practitioners in diverse areas.