Asymptotic Expansions for Stationary Distributions of Perturbed Semi-Markov Processes
New algorithms for computing asymptotic expansions for stationary distributions of nonlinearly perturbed semi-Markov processes are presented. The algorithms are based on special techniques of sequential phase-space reduction, which can be applied to processes with asymptotically coupled and uncoupled finite phase spaces.
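As a minimal illustration of the idea of sequential phase-space reduction (not the paper's perturbation algorithms themselves), the classical Grassmann-Taksar-Heyman (GTH) method computes a stationary distribution by censoring out one state at a time and then back-substituting; the Python sketch below assumes an irreducible finite chain.

```python
import numpy as np

def gth_stationary(P):
    """Stationary distribution of an irreducible finite Markov chain via
    Grassmann-Taksar-Heyman (GTH) state reduction: states are censored out
    one at a time (a sequential phase-space reduction), then the vector is
    recovered by back-substitution.  No subtractions occur, so the method
    is numerically stable."""
    A = np.array(P, dtype=float)
    n = A.shape[0]
    # Forward pass: eliminate states n-1, n-2, ..., 1.
    for k in range(n - 1, 0, -1):
        s = A[k, :k].sum()           # mass leaving state k toward kept states
        A[:k, k] /= s                # entry probabilities into k, renormalized
        A[:k, :k] += np.outer(A[:k, k], A[k, :k])  # reroute paths through k
    # Back-substitution: rebuild the stationary vector from state 0 upward.
    pi = np.zeros(n)
    pi[0] = 1.0
    for k in range(1, n):
        pi[k] = pi[:k] @ A[:k, k]
    return pi / pi.sum()

# Example: a two-state chain with stationary distribution (2/3, 1/3).
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
print(gth_stationary(P))
```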
Properties Of Nonlinear Randomly Switching Dynamic Systems: Mean-Field Models And Feedback Controls For Stabilization
This dissertation concerns the properties of nonlinear dynamic systems hybrid with Markov switching. It contains two parts. The first part focuses on mean-field models with state-dependent regime switching, and the second part focuses on system regularization and stabilization using feedback controls. Throughout this dissertation, Markov switching processes are used to describe the randomness caused by discrete events, such as sudden environmental changes or other uncertainties.
In Chapter 2, the mean-field models we studied are formulated as nonlinear stochastic differential equations hybrid with state-dependent regime switching. They originate from phase transition problems in statistical physics. The mean-field term describes the complex interactions among the many bodies in the system and acts as a mean-reverting effect. We studied the basic properties of such models, including regularity, non-negativity, finite moments, existence of moment generating functions, continuity of sample paths, positive recurrence, and long-time behavior. We also proved that when the switching process changes much more frequently than the continuous dynamics, a two-time-scale limit exists.
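A minimal sketch of the kind of model Chapter 2 studies, with hypothetical two-regime coefficients and switching rates (the chapter's actual assumptions are more delicate): an Euler-Maruyama particle approximation of a mean-reverting mean-field SDE with state-dependent regime switching.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-regime coefficients: mean-reversion speed and noise level.
a     = {0: 1.0, 1: 3.0}
sigma = {0: 0.5, 1: 0.2}

def q01(xbar):             # illustrative state-dependent switching rates:
    return 1.0 + xbar**2   # regime 0 -> 1 switches faster away from 0

def q10(xbar):
    return 2.0

N, T, dt = 500, 5.0, 1e-3
X = rng.normal(size=N)     # particle approximation of the mean-field law
lam = 0                    # current regime of the switching process
for _ in range(int(T / dt)):
    xbar = X.mean()        # empirical mean-field term
    # Euler-Maruyama step: drift reverts each particle toward the mean.
    X += -a[lam] * (X - xbar) * dt \
         + sigma[lam] * np.sqrt(dt) * rng.normal(size=N)
    # State-dependent regime switch, to first order in dt.
    rate = q01(xbar) if lam == 0 else q10(xbar)
    if rng.random() < rate * dt:
        lam = 1 - lam
print(f"final regime {lam}, mean {X.mean():.3f}, variance {X.var():.3f}")
```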
In Chapter 3 and Chapter 4, we consider feedback controls for the stabilization of nonlinear dynamic systems. Chapter 3 focuses on nonlinear deterministic systems with switching. Many nonlinear systems explode in finite time. We found that Brownian motion noise can be used as a feedback control to stabilize such systems: one nonlinear feedback noise term suppresses the explosion, and another linear feedback noise term stabilizes the system to the equilibrium point 0. Since closed-form solutions are almost impossible to obtain, a discrete-time approximation algorithm is constructed. The interpolated sequence of the discrete-time algorithm is proved to converge to the switching diffusion process, and the regularity and stability results of the approximating sequence are then derived. In Chapter 4, we study nonlinear stochastic systems with switching. Using similar methods, we prove that well-designed noise-type feedback controls can also regularize and stabilize nonlinear switching diffusions. Examples are used to demonstrate the results.
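A rough sketch of the stabilization-by-noise idea under assumed gains s1, beta, s2 (the dissertation gives the precise conditions on the gains and a properly truncated discrete scheme): the ODE dx/dt = x^3 explodes in finite time, while a nonlinear noise feedback term suppresses the explosion and a linear one drives the state toward 0.

```python
import numpy as np

rng = np.random.default_rng(1)

def f(x):
    return x**3              # superlinear drift: the ODE dx/dt = x^3 explodes

# Hypothetical feedback gains: the nonlinear noise term suppresses the
# explosion, the linear noise term stabilizes the equilibrium point 0.
s1, beta, s2 = 2.0, 1.5, 3.0

dt, nsteps = 1e-4, 200_000
x = 2.0
for _ in range(nsteps):
    dW1, dW2 = np.sqrt(dt) * rng.normal(size=2)
    x += f(x) * dt + s1 * abs(x)**beta * x * dW1 + s2 * x * dW2
    x = float(np.clip(x, -1e3, 1e3))  # crude guard; the dissertation uses a
                                      # properly truncated discrete scheme
print(f"x(T) = {x:.3e}")              # typically decays toward 0
```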
Sequences Of Random Matrices Modulated By A Discrete-Time Markov Chain
In this dissertation, we consider a number of matrix-valued random sequences that are modulated by a discrete-time Markov chain having a finite state space. Assuming that the state space of the Markov chain is large, our main effort in this work is devoted to reducing the complexity. To achieve this goal, our formulation uses time-scale separation of the Markov chain. The state space of the Markov chain is split into subspaces. Next, the states of the Markov chain in each subspace are aggregated into a "super" state. Then we normalize the matrix-valued sequences that are modulated by the two-time-scale Markov chain. Under simple conditions, we derive a scaling limit of the centered and scaled sequence by using a martingale averaging approach. The limit is considered through a functional. It is shown that the scaled and interpolated sequence converges weakly to a switching diffusion. Toward the end of the work, we also indicate how matrix-valued processes may be handled directly. Certain tail probability estimates are obtained.
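A small sketch of the aggregation step, assuming a nearly decomposable transition matrix with known blocks (the dissertation's normalization and limit results go well beyond this): each subspace is lumped into a "super" state, with rows weighted by the subspace's internal stationary distribution.

```python
import numpy as np

def stationary(P, iters=5_000):
    """Stationary row vector of a small irreducible aperiodic stochastic
    matrix, by power iteration."""
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(iters):
        pi = pi @ P
    return pi / pi.sum()

def aggregate(P, blocks):
    """Lump each block of states (a subspace of the chain) into one 'super'
    state.  Rows within a block are weighted by the block's internal
    stationary distribution, as in two-time-scale aggregation of a nearly
    decomposable chain."""
    m = len(blocks)
    Pbar = np.zeros((m, m))
    for i, bi in enumerate(blocks):
        sub = P[np.ix_(bi, bi)]
        sub = sub / sub.sum(axis=1, keepdims=True)  # within-block dynamics
        w = stationary(sub)
        for j, bj in enumerate(blocks):
            Pbar[i, j] = w @ P[np.ix_(bi, bj)].sum(axis=1)
    return Pbar / Pbar.sum(axis=1, keepdims=True)   # safeguard renormalization

# Nearly decomposable chain: two weakly linked subspaces {0,1} and {2,3}.
P = np.array([[0.690, 0.300, 0.005, 0.005],
              [0.400, 0.590, 0.005, 0.005],
              [0.005, 0.005, 0.790, 0.200],
              [0.005, 0.005, 0.300, 0.690]])
print(aggregate(P, blocks=[[0, 1], [2, 3]]))  # 2x2 "super" chain
```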
Reaction Networks and Population Dynamics
Reaction systems and population dynamics constitute two highly developed areas of research that build on well-defined model classes, in terms of both dynamical systems and stochastic processes. Despite a significant core of common structures, the two fields have largely led separate lives. The workshop brought the communities together and emphasised concepts, methods and results that have, so far, appeared in one area but are potentially useful in the other as well.
Switching Diffusion Systems With Past-Dependent Switching Having A Countable State Space
Emerging and existing applications in wireless communications, queueing networks, biological models, financial engineering, and social networks demand the mathematical modeling and analysis of hybrid models in which continuous dynamics and discrete events coexist. Assuming that the systems evolve in continuous time, switching diffusions arise from stochastic-differential-equation-based models together with random discrete events. In such systems, continuous states and discrete events (discrete states) coexist and interact. A switching diffusion is a two-component process (X(t), α(t)): a continuous component X(t) and a discrete component α(t) taking values in a discrete set (a set consisting of isolated points). When the discrete component takes a value i (i.e., α(t) = i), the continuous component evolves according to a diffusion process whose drift and diffusion coefficients depend on i. Until very recently, in most of the literature, α(t) was assumed to be a process taking values in a finite set, and the switching rates of α(t) were assumed to be either independent of X(t) or dependent only on its current state. To be able to treat more realistic models and to broaden the applicability, this dissertation undertakes the task of investigating the dynamics of (X(t), α(t)) in a much more general setting, in which α(t) has a countable state space and its switching intensities depend on the history of the continuous component X(t). We systematically established important properties of this system: well-posedness, the Markov-Feller property, and the recurrence and ergodicity of the associated function-valued process. We have also studied several types of stability for the system.
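A toy simulation consistent with this setting, with hypothetical coefficients and a history functional given by a moving average of X (the dissertation treats general past-dependent intensities): the regime k lives in a countable set, and its birth rate depends on the recent path of the continuous component.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(2)

dt, T, r = 1e-3, 10.0, 1.0            # r: length of the memory window
hist = deque([0.0] * int(r / dt))     # sliding window of past X values

# Hypothetical coefficients indexed by a countable regime k = 0, 1, 2, ...
def drift(x, k):
    return -(k + 1) * x

def diffusion(x, k):
    return 1.0 / (k + 1)

x, k = 1.0, 0
for _ in range(int(T / dt)):
    x += drift(x, k) * dt + diffusion(x, k) * np.sqrt(dt) * rng.normal()
    hist.append(x)
    hist.popleft()
    m = sum(hist) * dt / r            # past-dependent functional of X
    up, down = 1.0 + m**2, float(k)   # history-dependent birth-death intensities
    u = rng.random()
    if u < up * dt:
        k += 1                        # the regime ranges over a countable set
    elif u < (up + down) * dt:
        k -= 1
print(f"X(T) = {x:.3f}, regime {k}")
```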
Control of singularly perturbed hybrid stochastic systems
In this paper, we study a class of optimal stochastic control problems involving two different time scales. The fast mode of the system is represented by deterministic state equations, whereas the slow mode of the system corresponds to a jump disturbance process. Under a fundamental “ergodicity” property for a class of “infinitesimal control systems” associated with the fast mode, we show that there exists a limit problem which provides a good approximation to the optimal control of the perturbed system. Both the finite-horizon and infinite-horizon discounted cases are considered. We show how an approximate optimal control law can be constructed from the solution of the limit control problem. In the particular case where the infinitesimal control systems possess the so-called turnpike property, i.e., are characterized by the existence of global attractors, the limit control problem can be given an interpretation related to a decomposition approach.
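A toy instance of the limit-problem idea under assumed dynamics (not the paper's general framework): the fast state z tracks the control u toward its turnpike z* = u, so the limit control simply steers that attractor to each mode's target, and it performs nearly optimally in the perturbed system.

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative two-mode setup: dz/dt = (u - z)/eps, so for each fixed u the
# "infinitesimal control system" has the global attractor (turnpike) z* = u.
# The slow jump disturbance theta flips between modes 0 and 1 at rate q, and
# the running cost penalizes the distance from a mode-dependent target.
eps, q, T, dt = 0.01, 0.5, 20.0, 1e-4
target = {0: 1.0, 1: -1.0}

def run(control):
    z, theta, cost = 0.0, 0, 0.0
    for _ in range(int(T / dt)):
        u = control(theta)
        z += (u - z) / eps * dt                 # fast deterministic mode
        if rng.random() < q * dt:               # slow jump disturbance
            theta = 1 - theta
        cost += (z - target[theta]) ** 2 * dt   # running cost
    return cost / T

# Limit-problem control: steer the turnpike z* = u to the mode's target.
print("limit control:", run(lambda th: target[th]))
print("naive control:", run(lambda th: 0.0))
```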
Markov and Semi-Markov Chains, Processes, Systems and Emerging Related Fields
This book covers a broad range of research results in the field of Markov and semi-Markov chains, processes, systems and related emerging fields. The authors of the included research papers are well-known researchers in their field. The book presents the state of the art and ideas for further research for theorists in these fields, while also providing straightforwardly applicable results for practitioners in diverse areas.
Exploring Probability Measures with Markov Processes
In many domains where mathematical modelling is applied, a deterministic description of the system at hand is insufficient, and so it is useful to model systems as being in some way stochastic. This is often achieved by modelling the state of the system as being drawn from a probability measure, which is usually given algebraically, i.e. as a formula. While this representation can be useful for deriving certain characteristics of the system, it is by now well-appreciated that many questions about stochastic systems are best answered by looking at samples from the associated probability measure. In this thesis, we seek to develop and analyse efficient techniques for generating samples from a given probability measure, with a focus on algorithms which simulate a Markov process with the desired invariant measure.
The first work presented in this thesis considers the use of Piecewise-Deterministic Markov Processes (PDMPs) for generating samples. In contrast to the usual approaches, PDMPs i) are defined as continuous-time processes, and ii) are typically non-reversible with respect to their invariant measure. These distinctions pose computational and theoretical challenges for the design, analysis, and implementation of PDMP-based samplers. The key contribution of this work is to develop a transparent characterisation of how one can construct a PDMP (within the class of trajectorially-reversible processes) which admits the desired invariant measure, and to offer actionable recommendations on how these processes should be designed in practice.
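A canonical concrete example of such a process (the thesis's characterisation covers a much broader class) is the one-dimensional Zig-Zag sampler; for a standard Gaussian target, event times can be drawn exactly by inverting the integrated switching rate.

```python
import numpy as np

rng = np.random.default_rng(4)

# One-dimensional Zig-Zag process targeting the standard Gaussian,
# U(x) = x^2/2.  Along a trajectory x + v*s (v = +/-1) the switching rate
# is max(0, v*(x + v*s)) = max(0, a + s) with a = v*x, so the first event
# time solves an explicit quadratic and needs no thinning.
def event_time(a, E):
    if a <= 0:                        # rate is zero until s = -a
        return -a + np.sqrt(2.0 * E)
    return -a + np.sqrt(a * a + 2.0 * E)

x, v = 0.0, 1.0
t_total, m1, m2 = 0.0, 0.0, 0.0
for _ in range(20_000):
    tau = event_time(v * x, rng.exponential())
    # exact time-integrals of x and x^2 along the linear segment
    m1 += x * tau + v * tau**2 / 2.0
    m2 += x**2 * tau + x * v * tau**2 + tau**3 / 3.0
    t_total += tau
    x += v * tau                      # move to the switching point
    v = -v                            # flip the velocity
print(f"mean {m1 / t_total:.3f}, variance {m2 / t_total - (m1 / t_total)**2:.3f}")
```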
The second work presented in this thesis considers the task of sampling from a probability measure on a discrete space. While work in recent years has made it possible to apply sampling algorithms to probability measures with differentiable densities on continuous spaces in a reasonably generic way, samplers on discrete spaces are still largely derived on a case-by-case basis. The contention of this work is that this is not necessary, and that one can in fact define quite generally-applicable algorithms which can sample efficiently from discrete probability measures. The contributions are then to propose a small collection of algorithms for this task, and to verify their efficiency empirically. Building on the previous chapter's work, our samplers are again defined in continuous time and non-reversible, both of which offer noticeable benefits in efficiency.
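To fix ideas, here is a minimal continuous-time jump sampler on a discrete space, using the locally-balanced rates sqrt(pi(y)/pi(x)); this simple version is reversible, whereas the thesis's samplers are non-reversible refinements of the same ingredients.

```python
import numpy as np

rng = np.random.default_rng(5)

# Unnormalized target on the grid {0, ..., L}: a discretized Gaussian.
L, mu, s = 50, 20.0, 5.0
logpi = -((np.arange(L + 1) - mu) ** 2) / (2.0 * s * s)

def rate(x, y):
    # locally balanced jump rate sqrt(pi(y)/pi(x)); it satisfies detailed
    # balance, pi(x) rate(x, y) = pi(y) rate(y, x)
    return np.exp(0.5 * (logpi[y] - logpi[x]))

x, t, occupancy = 25, 0.0, np.zeros(L + 1)
for _ in range(100_000):
    nbrs = [y for y in (x - 1, x + 1) if 0 <= y <= L]
    rates = np.array([rate(x, y) for y in nbrs])
    hold = rng.exponential(1.0 / rates.sum())   # Gillespie holding time
    occupancy[x] += hold                        # time-weighted estimator
    t += hold
    x = nbrs[rng.choice(len(nbrs), p=rates / rates.sum())]
print(f"estimated mean {(np.arange(L + 1) * occupancy).sum() / t:.2f} (target {mu})")
```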
The third work presented in this thesis concerns a theoretical study of a particular class of Markov chain-based sampling algorithms which make use of parallel computing resources. The Markov chains produced by these algorithms are mathematically equivalent to a standard Metropolis-Hastings chain, but their real-time convergence properties are affected nontrivially by the application of parallelism. The contribution of this work is to analyse the convergence behaviour of these chains, and to use the ‘optimal scaling’ framework (as developed by Roberts, Rosenthal, and others) to make recommendations concerning the tuning of such algorithms in practice.
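A generic illustration of the optimal-scaling tuning principle invoked here, assuming a plain random-walk Metropolis chain on a Gaussian target (the parallel algorithm analysed in the thesis is not reproduced): the proposal scale is adapted toward the 0.234 acceptance rate suggested by the theory.

```python
import numpy as np

rng = np.random.default_rng(6)

d, n = 50, 20_000
logpi = lambda z: -0.5 * z @ z       # standard Gaussian target in d dimensions
x, scale, acc = np.zeros(d), 1.0, 0
for i in range(1, n + 1):
    y = x + scale * rng.normal(size=d)
    accepted = np.log(rng.random()) < logpi(y) - logpi(x)
    if accepted:
        x = y
        acc += 1
    # Robbins-Monro adaptation of the proposal scale toward 23.4% acceptance
    scale *= np.exp((accepted - 0.234) * i ** -0.6)
print(f"acceptance {acc / n:.3f}, scale {scale:.3f}, "
      f"theory ~ 2.38/sqrt(d) = {2.38 / np.sqrt(d):.3f}")
```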
The introductory chapters provide a general overview of the task of generating samples from a probability measure, with particular focus on methods involving Markov processes. There is also an interlude on the relative benefits of i) continuous-time and ii) non-reversible Markov processes for sampling, which is intended to provide additional context for the reading of the first two works. PhD studentship paid for by the Cantab Capital Institute for the Mathematics of Information.