
    The stability of conditional Markov processes and Markov chains in random environments

    We consider a discrete time hidden Markov model where the signal is a stationary Markov chain. When conditioned on the observations, the signal is a Markov chain in a random environment under the conditional measure. It is shown that this conditional signal is weakly ergodic when the signal is ergodic and the observations are nondegenerate. This permits a delicate exchange of the intersection and supremum of $\sigma$-fields, which is key for the stability of the nonlinear filter and partially resolves a long-standing gap in the proof of a result of Kunita [J. Multivariate Anal. 1 (1971) 365--393]. A similar result is obtained also in the continuous time setting. The proofs are based on an ergodic theorem for Markov chains in random environments in a general state space. (Published in the Annals of Probability, http://dx.doi.org/10.1214/08-AOP448, by the Institute of Mathematical Statistics, http://www.imstat.org/aop/.)
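    As a concrete finite-state illustration of the filtering setting (the paper's results are for general state spaces), the sketch below runs the standard nonlinear filter recursion from two different priors and checks that the conditional distributions merge, which is the stability phenomenon at issue; the transition matrix, observation model, and noise level are illustrative assumptions.

```python
import numpy as np

# Minimal finite-state sketch of the nonlinear filter recursion for a
# discrete-time hidden Markov model (illustration only).  The signal X is an
# ergodic Markov chain with transition matrix P; observations are
# Y_t = X_t + Gaussian noise, so the observation density g(y | x) is
# nondegenerate.

rng = np.random.default_rng(0)

P = np.array([[0.9, 0.1],
              [0.2, 0.8]])           # ergodic signal transition matrix (assumption)
states = np.array([0.0, 1.0])        # signal values
sigma = 0.5                          # observation noise std (nondegeneracy)

def g(y, x):
    """Observation density of Y_t given X_t = x (up to a constant)."""
    return np.exp(-0.5 * ((y - x) / sigma) ** 2)

def filter_step(pi, y):
    """One step of the nonlinear filter: predict with P, correct with g."""
    pred = pi @ P                    # prediction step
    post = pred * g(y, states)       # Bayes correction
    return post / post.sum()

# Simulate a signal/observation path.
T = 200
x = 0
ys = []
for _ in range(T):
    x = rng.choice(2, p=P[x])
    ys.append(states[x] + sigma * rng.standard_normal())

# Run two filters from different priors; under ergodicity and nondegenerate
# observations the filters forget their initial conditions (stability).
pi_a = np.array([0.99, 0.01])
pi_b = np.array([0.01, 0.99])
for y in ys:
    pi_a = filter_step(pi_a, y)
    pi_b = filter_step(pi_b, y)

print("total-variation gap after %d steps: %.2e" % (T, np.abs(pi_a - pi_b).sum()))
```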

    Maximization of the portfolio growth rate under fixed and proportional transaction costs

    This paper considers a discrete-time Markovian model of asset prices with economic factors and transaction costs consisting of proportional and fixed terms. The existence of optimal strategies maximizing the average growth rate of the portfolio is proved under both complete and partial observation of the process modelling the economic factors. The proof is based on a modification of the vanishing discount approach. The main difficulty is the discontinuity of the controlled transition operator of the underlying Markov process.
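    A hedged sketch of the quantity being optimized: the code below estimates the long-run average log growth rate of wealth under one fixed band-rebalancing rule with proportional and fixed transaction costs. The return model, cost rates, target weight, and band are illustrative assumptions, and no optimization over strategies is performed.

```python
import numpy as np

# Estimate the average (log) growth rate of a portfolio under a fixed
# band-rebalancing rule, paying proportional and fixed transaction costs.
# All numbers are illustrative assumptions, not values from the paper.

rng = np.random.default_rng(1)

T = 100_000
mu, vol = 0.0004, 0.01                 # per-period log-return drift/vol of the risky asset
prop_cost, fixed_cost = 1e-3, 1e-4     # proportional rate and fixed fee (fractions of wealth)
target_w, band = 0.6, 0.05             # rebalance only when the weight leaves the band

w = target_w
log_growth = 0.0
for _ in range(T):
    r = np.exp(mu + vol * rng.standard_normal()) - 1.0   # risky-asset simple return
    growth = 1.0 + w * r                                  # riskless rate taken as 0
    w = w * (1.0 + r) / growth                            # weight drifts with the market
    if abs(w - target_w) > band:
        traded = abs(w - target_w)                        # fraction of wealth traded back
        growth *= 1.0 - prop_cost * traded - fixed_cost   # pay proportional + fixed costs
        w = target_w
    log_growth += np.log(growth)

print("estimated average growth rate per period:", log_growth / T)
```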

    Discrete-time controlled Markov processes with average cost criterion: a survey

    This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
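    One of the standard methods covered by such surveys is relative value iteration for the average-cost criterion. The sketch below applies it to a small made-up finite MDP (transition matrices and cost vectors are assumptions) to show how the optimal average cost and a stationary optimal policy are computed.

```python
import numpy as np

# Relative value iteration for a tiny average-cost MDP (illustrative data).
# P[a] is the transition matrix under action a, c[a] the one-step cost vector.

P = [np.array([[0.7, 0.3],
               [0.4, 0.6]]),
     np.array([[0.2, 0.8],
               [0.9, 0.1]])]
c = [np.array([1.0, 3.0]),
     np.array([2.0, 0.5])]

n_states, n_actions = 2, 2
h = np.zeros(n_states)               # relative value function
ref = 0                              # reference state used for normalization

for _ in range(1000):
    q = np.array([c[a] + P[a] @ h for a in range(n_actions)])  # Q-values per action
    h_new = q.min(axis=0)
    gain = h_new[ref]                # estimate of the optimal average cost
    h = h_new - gain                 # normalize so h[ref] = 0

policy = q.argmin(axis=0)            # minimizing action in each state

print("optimal average cost (approx):", gain)
print("optimal stationary policy:", policy)
```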

    Geometric ergodicity in a weighted Sobolev space

    For a discrete-time Markov chain $\{X(t)\}$ evolving on $\Re^\ell$ with transition kernel $P$, natural, general conditions are developed under which the following are established: 1. The transition kernel $P$ has a purely discrete spectrum, when viewed as a linear operator on a weighted Sobolev space $L_\infty^{v,1}$ of functions with norm $\|f\|_{v,1} = \sup_{x \in \Re^\ell} \frac{1}{v(x)} \max \{|f(x)|, |\partial_1 f(x)|, \ldots, |\partial_\ell f(x)|\}$, where $v\colon \Re^\ell \to [1,\infty)$ is a Lyapunov function and $\partial_i := \partial/\partial x_i$. 2. The Markov chain is geometrically ergodic in $L_\infty^{v,1}$: there is a unique invariant probability measure $\pi$ and constants $B<\infty$ and $\delta>0$ such that, for each $f\in L_\infty^{v,1}$, any initial condition $X(0)=x$, and all $t\geq 0$, $|\mathrm{E}_x[f(X(t))] - \pi(f)| \le Be^{-\delta t}v(x)$ and $\|\nabla \mathrm{E}_x[f(X(t))]\|_2 \le Be^{-\delta t} v(x)$, where $\pi(f)=\int f\,d\pi$. 3. For any function $f\in L_\infty^{v,1}$ there is a function $h\in L_\infty^{v,1}$ solving Poisson's equation $h - Ph = f - \pi(f)$. Part of the analysis is based on an operator-theoretic treatment of the sensitivity process that appears in the theory of Lyapunov exponents.
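    As a numerical illustration (not the paper's operator-theoretic machinery), the sketch below checks geometric convergence of $\mathrm{E}_x[f(X(t))]$ to $\pi(f)$ for a simple scalar linear chain; the chain, the test function $f(x)=x^2$, and all constants are assumptions chosen so the moment recursion is available in closed form.

```python
# Illustrative sketch: for the scalar linear chain X(t+1) = a X(t) + W(t)
# with |a| < 1 and standard normal noise, v(x) = 1 + x^2 is a Lyapunov
# function and the chain is geometrically ergodic.  We check the geometric
# decay of |E_x[f(X(t))] - pi(f)| for f(x) = x^2, using the exact moment
# recursion m_{t+1} = a^2 m_t + 1 and pi(f) = 1 / (1 - a^2).

a = 0.8          # contraction parameter (assumption)
x0 = 5.0         # initial condition
pi_f = 1.0 / (1.0 - a * a)

m = x0 ** 2
for t in range(1, 21):
    m = a * a * m + 1.0              # exact value of E_x[X(t)^2]
    err = abs(m - pi_f)
    # err equals (a^2)^t * |x0^2 - pi(f)|, i.e. B e^{-delta t} v(x0)
    # with delta = -2 log a and a suitable constant B.
    if t % 5 == 0:
        print(f"t={t:2d}  |E_x[X(t)^2] - pi(f)| = {err:.3e}")
```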

    Large deviations for some fast stochastic volatility models by viscosity methods

    We consider the short time behaviour of stochastic systems affected by a stochastic volatility evolving at a faster time scale. We study the asymptotics of a logarithmic functional of the process by methods of the theory of homogenisation and singular perturbations for fully nonlinear PDEs. We point out three regimes depending on how fast the volatility oscillates relative to the horizon length. We prove a large deviation principle for each regime and apply it to the asymptotics of option prices near maturity.
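    To illustrate the two-time-scale setting (this is a simulation sketch, not the viscosity-solution analysis), the code below drives a log-price by a volatility factor that mean-reverts on a scale eps and compares horizons where eps is larger than, comparable to, and smaller than the maturity; the coefficients and the volatility function are assumptions.

```python
import numpy as np

# Euler simulation of a log-price X driven by a fast mean-reverting volatility
# factor Y (all coefficients are illustrative).  The three regimes correspond
# to how the time scale eps of Y compares with the short horizon T.

rng = np.random.default_rng(3)

def simulate(T, eps, n_steps=2_000):
    dt = T / n_steps
    x, y = 0.0, 0.0
    for _ in range(n_steps):
        dw, dz = rng.standard_normal(2) * np.sqrt(dt)
        sigma = np.exp(y)                               # volatility as a function of the factor
        x += -0.5 * sigma ** 2 * dt + sigma * dw        # log-price under zero drift
        y += -(y / eps) * dt + np.sqrt(2.0 / eps) * dz  # fast OU factor, stationary variance 1
    return x

T = 0.01                                  # short horizon (near maturity)
for eps in (10 * T, T, 0.1 * T):          # slower than, comparable to, faster than T
    xs = [simulate(T, eps) for _ in range(200)]
    print(f"eps/T = {eps / T:5.1f}   std of log-price: {np.std(xs):.4f}")
```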

    Optimal sequential vector quantization of Markov sources

    V.S. Borkar, Sanjoy K. Mitter, Sekhar Tatikonda. Includes bibliographical references (p. 30-31). Supported by U.S. Army grant PAAL03-92-G-0115, a Homi Bhabha Fellowship, and the Center for Intelligent Control Systems.

    Stock Market Volatility and Learning

    We study a standard consumption-based asset pricing model with rational investors who entertain subjective prior beliefs about price behavior. Optimal behavior then dictates that investors learn about price behavior from past price observations. We show that this imparts momentum and mean reversion into the equilibrium behavior of the price-dividend ratio, similar to what can be observed in the data. Estimating the model on U.S. stock price data using the method of simulated moments, we show that it can quantitatively account for the observed stock price volatility, the persistence of the price-dividend ratio, and the predictability of long-horizon returns. For reasonable degrees of risk aversion, the model also passes a formal statistical test for the overall goodness of fit, provided one excludes the equity premium from the set of moments to be matched.
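    A stylized sketch of the learning-feedback mechanism described above, with made-up functional forms and parameters rather than the paper's estimated model: beliefs about price growth are updated by constant-gain learning and feed back into the price-dividend ratio, producing positively autocorrelated (momentum-like) movements that later revert toward the mean.

```python
import numpy as np

# Constant-gain learning about price growth feeding back into the
# price-dividend (PD) ratio.  Functional forms and parameters are
# illustrative assumptions, not the paper's estimated model.

rng = np.random.default_rng(4)

beta, gain, cap = 0.95, 0.02, 1.04   # discount-type parameter, learning gain, belief cap
T = 5_000

b = 1.0                               # believed gross price growth
pd_prev = 1.0 / (1.0 - beta * b)
pd_path = np.empty(T)
for t in range(T):
    pd = 1.0 / (1.0 - beta * min(b, cap))        # PD ratio increasing in the belief
    dg = np.exp(0.02 * rng.standard_normal())    # i.i.d. gross dividend growth
    price_growth = (pd / pd_prev) * dg           # realized gross price growth
    b = b + gain * (price_growth - b)            # constant-gain belief update
    pd_prev = pd
    pd_path[t] = pd

# Momentum shows up as positive lag-1 autocorrelation of PD changes,
# while the PD ratio fluctuates persistently around its mean.
dpd = np.diff(pd_path)
print("lag-1 autocorrelation of PD changes:", np.corrcoef(dpd[:-1], dpd[1:])[0, 1])
print("PD ratio: mean %.2f, std %.2f" % (pd_path.mean(), pd_path.std()))
```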