643 research outputs found

    Optimal control of risk process in a regime-switching environment

    This paper is concerned with cost optimization of an insurance company. The surplus of the insurance company is modeled by a controlled regime-switching diffusion, where the regime-switching mechanism provides the fluctuations of the random environment. The goal is to find an optimal control that minimizes the total cost up to a stochastic exit time. A sufficient condition for the continuity of the value function is obtained that is weaker than that of (Fleming and Soner 2006, Section V.2). Further, the value function is shown to be a viscosity solution of a Hamilton-Jacobi-Bellman equation.
    Keywords: regime-switching diffusion, continuity of the value function, exit time control, viscosity solution.
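    For orientation, a worked equation of the generic type involved may help: the value function of an exit-time cost-minimization problem for a one-dimensional regime-switching diffusion formally satisfies a coupled system of Hamilton-Jacobi-Bellman equations. The controlled drift b, diffusion coefficient sigma, running cost f, exit cost g, open set O, and generator Q = (q_{ij}) below are placeholders, not the exact data of the paper:

        \min_{u \in U} \Big[ b(x,i,u)\, v_x(x,i) + \tfrac{1}{2}\, \sigma^2(x,i,u)\, v_{xx}(x,i)
            + \sum_{j \ne i} q_{ij} \big( v(x,j) - v(x,i) \big) + f(x,i,u) \Big] = 0,
        \qquad x \in O,\ i = 1,\dots,m,

    with the boundary condition v(x,i) = g(x,i) for x \in \partial O, the stochastic exit time being the first time the surplus leaves O.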

    On the regularity of American options with regime-switching uncertainty

    We study the regularity of the stochastic representation of the solution of a class of initial-boundary value problems related to a regime-switching diffusion. This representation is related to the value function of a finite-horizon optimal stopping problem, such as the price of an American-style option in finance. We show continuity and smoothness of the value function using coupling and time-change techniques. As an application, we find the minimal payoff scenario for the holder of an American-style option in the presence of regime-switching uncertainty, under the assumption that the transition rates are known to lie within level-dependent compact sets.
    (22 pages; to appear in Stochastic Processes and their Applications.)
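    As a point of reference, such a value function can be written in the following generic form for a regime-switching pair (X(t), alpha(t)); the payoff g, discount rate r, and horizon T are placeholders, and the paper's precise setting may differ:

        v(t, x, i) = \sup_{t \le \tau \le T} \mathbb{E}\Big[ e^{-r(\tau - t)}\, g\big(X(\tau)\big) \,\Big|\, X(t) = x,\ \alpha(t) = i \Big],

    where the supremum runs over stopping times of the filtration generated by (X, alpha); for an American-style put one would take g(x) = (K - x)^+.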

    Mixture dynamics and regime switching diffusions with application to option pricing

    In this paper we present a class of regime-switching diffusion models described by a pair (X(t), Y(t)) ∈ R^n × S, S = {1, 2, ..., N}, where Y(t) is a Markov chain, for which the marginal probability of the diffusive component X(t) is a given mixture. Our main motivation is to extend to a multivariate setting the class of mixture models proposed by Brigo and Mercurio in a series of papers. Furthermore, a simple algorithm is available for simulating paths through a thinning mechanism. The application to option pricing is considered by proposing a mixture version of the Margrabe option formula and of the Heston stochastic volatility formula for a plain vanilla option.
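    As a minimal one-dimensional illustration of the mixture-pricing idea (not the Margrabe or Heston mixture formulas derived in the paper): if the risk-neutral marginal of the underlying at maturity is a lognormal mixture, a European call price is the corresponding mixture of Black-Scholes prices. The weights and volatilities below are made-up placeholders.

        from math import log, sqrt, exp, erf

        def norm_cdf(x):
            # Standard normal CDF via the error function.
            return 0.5 * (1.0 + erf(x / sqrt(2.0)))

        def bs_call(s0, k, r, sigma, t):
            # Black-Scholes call price for one lognormal component.
            d1 = (log(s0 / k) + (r + 0.5 * sigma**2) * t) / (sigma * sqrt(t))
            d2 = d1 - sigma * sqrt(t)
            return s0 * norm_cdf(d1) - k * exp(-r * t) * norm_cdf(d2)

        def mixture_call(s0, k, r, t, weights, sigmas):
            # Mixture price: convex combination of component Black-Scholes prices.
            return sum(w * bs_call(s0, k, r, sig, t) for w, sig in zip(weights, sigmas))

        # Placeholder regime weights and volatilities (illustrative only).
        print(mixture_call(s0=100.0, k=100.0, r=0.02, t=1.0,
                           weights=[0.6, 0.4], sigmas=[0.15, 0.35]))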

    Hybrid Stochastic Systems: Numerical Methods, Limit Results, And Controls

    This dissertation is concerned with so-called stochastic hybrid systems, which are characterized by the coexistence of continuous dynamics and discrete events and their interactions. Such systems have drawn much attention in recent years, mainly because they better reflect reality in a wide range of applications in networked systems, communication systems, economic systems, cyber-physical systems, and biological and ecological systems, among others. Our main interest centers on one class of such hybrid systems known as switching diffusions. In such a system, in addition to the driving force of a Brownian motion, as in a stochastic system represented by a stochastic differential equation (SDE), there is an additional continuous-time switching process that models environmental changes due to random events.
    In the first part, we develop numerical schemes for stochastic differential equations with Markovian switching (switching SDEs). By utilizing a special form of Itô's formula for switching SDEs and the special structure of the jumps of the switching component, we derive a new scheme for simulating switching SDEs in the spirit of Milstein's scheme for SDEs without switching. We also develop a new approach to establishing the convergence of the proposed algorithm that incorporates martingale methods, quadratic variations, and Markovian stopping times. Detailed and delicate analysis is carried out. Under suitable conditions, which are natural extensions of the classical ones, the convergence of the algorithm is established, and the rate of convergence is ascertained.
    The second part is concerned with a limit theorem for general stochastic differential equations with Markovian regime switching. We consider a sequence of stochastic regime-switching systems in which the discrete switching processes are independent of the state of the systems, and the continuous-state components are governed by stochastic differential equations whose driving processes are continuous increasing processes and square-integrable martingales. We establish the convergence of the sequence of systems to the system described by a state-independent regime-switching diffusion process when the two driving processes converge, in a suitable sense, to the usual time process and a Brownian motion.
    The third part is concerned with controlled hybrid systems that are good approximations to controlled switching diffusion processes. In lieu of a Brownian motion noise, we use a wide-band noise formulation, which facilitates the treatment of non-Markovian models. A wide-band noise is one whose spectrum has a sufficiently wide bandwidth; we work with a basic stationary mixing-type process. On top of this wide-band noise, we allow the system to be subject to random discrete-event influence. The discrete-event process is a continuous-time Markov chain with a finite state space; although the state space is finite, we assume that it is rather large and that the Markov chain is irreducible. Using a two-time-scale formulation, assuming that the Markov chain is also subject to fast variations, and applying weak convergence and singular perturbation test-function methods, we first prove that, under nearly optimal and equilibrium controls, the state and the corresponding costs of the original systems converge to those of controlled diffusion systems. Using the limit controlled dynamic system as a guide, we construct controls for the original problem and show that the controls so constructed are nearly optimal and nearly in equilibrium.
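    The following sketch illustrates the type of discretization involved: an Euler-type scheme for a scalar SDE with Markovian switching, not the Milstein-type scheme developed in the dissertation. The generator Q and the regime-dependent coefficients mu and sigma are placeholders.

        import numpy as np
        from scipy.linalg import expm

        def simulate_switching_sde(x0, i0, Q, mu, sigma, T, n, rng):
            """Euler-Maruyama path of dX = mu[a] X dt + sigma[a] X dW,
            where a is a continuous-time Markov chain with generator Q."""
            h = T / n
            P = expm(Q * h)                      # one-step transition matrix of the chain
            x, i = x0, i0
            xs = [x0]
            for _ in range(n):
                x = x + mu[i] * x * h + sigma[i] * x * np.sqrt(h) * rng.standard_normal()
                i = rng.choice(len(mu), p=P[i])  # advance the switching component
                xs.append(x)
            return np.array(xs)

        rng = np.random.default_rng(0)
        Q = np.array([[-1.0, 1.0], [2.0, -2.0]])  # placeholder generator
        path = simulate_switching_sde(x0=1.0, i0=0, Q=Q,
                                      mu=[0.05, -0.02], sigma=[0.2, 0.4],
                                      T=1.0, n=1000, rng=rng)
        print(path[-1])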

    Stability And Controls For Stochastic Dynamic Systems

    This dissertation focuses on stability analysis and optimal controls for stochastic dynamic systems. It encompasses two parts. One part gives an in-depth study of the stability of linear jump diffusions, linear Markovian jump diffusions, multi-dimensional jump diffusions, and regime-switching jump diffusions, together with the associated numerical solutions. The other part concerns controls for stochastic dynamic systems; specifically, we concentrate on mean-variance types of control under different formulations. We obtain nearly optimal mean-variance controls under both two-time-scale and hidden-Markov-chain formulations, and convergence is established in each case.
    In Chapter 2, the stability of a benchmark linear scalar jump diffusion is studied first. We present conditions for exponential p-stability and almost-sure exponential stability of the SDE and of its numerical solutions. Note that, owing to the Poisson processes involved, the asymptotic expansions used in the usual treatment of diffusion processes no longer work. Unlike existing treatments of Euler-Maruyama methods for solutions of stochastic differential equations, our work employs techniques from stochastic approximation. A similar analysis is carried out for Markovian jump diffusions and multi-dimensional jump diffusions. In addition, we give a thorough study of regime-switching jump diffusions, covering asymptotic stability in the large and exponential p-stability. The connection between almost-sure exponential stability and exponential p-stability is exploited, necessary conditions for exponential p-stability are derived, and criteria for asymptotic stability in distribution are provided.
    In Chapter 3, we work on the classical mean-variance problem in which a switching process (say, a market regime) is embedded. We first use a two-time-scale formulation to treat the underlying systems, represented by a small parameter. As the small parameter goes to 0, we obtain a limit problem. Using the limit problem as a guide, we construct controls for the original problem and show that the control so constructed is nearly optimal.
    In Chapter 4, we revisit the mean-variance control problem in which the switching process is a hidden Markov chain. Instead of having full knowledge of the switching process, we assume that only a noisy observation corrupted by white noise is available, and we focus on minimizing the variance subject to a fixed terminal expectation. Using the Wonham filter, we first convert the partially observable system into a completely observable one. Because closed-form solutions are virtually impossible to obtain, our main effort is devoted to designing a numerical algorithm, whose convergence is established.
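    As a small illustration of the kind of stability statement involved (a sketch with made-up parameters, not the dissertation's algorithm): for a scalar linear jump diffusion dX = a X dt + b X dW + c X dN, with N a Poisson process of rate lam and c > -1, the almost-sure exponential growth rate is a - b^2/2 + lam*log(1 + c), and a Monte Carlo estimate from an Euler-type scheme should roughly reproduce it for small step sizes.

        import numpy as np

        def growth_rate_estimate(a, b, c, lam, T, n, n_paths, rng):
            """Estimate (1/T) log|X_T| for dX = aX dt + bX dW + cX dN via an Euler-type scheme."""
            h = T / n
            x = np.ones(n_paths)
            for _ in range(n):
                dw = np.sqrt(h) * rng.standard_normal(n_paths)
                dn = rng.poisson(lam * h, n_paths)
                x = x + a * x * h + b * x * dw + c * x * dn
            return np.mean(np.log(np.abs(x)) / T)

        rng = np.random.default_rng(1)
        a, b, c, lam = -0.5, 0.3, 0.2, 1.0
        est = growth_rate_estimate(a, b, c, lam, T=10.0, n=10_000, n_paths=200, rng=rng)
        print(est, a - 0.5 * b**2 + lam * np.log(1.0 + c))  # estimate vs. closed-form rate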