
    Fast MCMC sampling for Markov jump processes and extensions

    Markov jump processes (or continuous-time Markov chains) are a simple and important class of continuous-time dynamical systems. In this paper, we tackle the problem of simulating from the posterior distribution over paths in these models, given partial and noisy observations. Our approach is an auxiliary variable Gibbs sampler based on the idea of uniformization. It sets up a Markov chain over paths by alternately sampling a finite set of virtual jump times given the current path, and then sampling a new path given the set of extant and virtual jump times using a standard hidden Markov model forward filtering-backward sampling algorithm. Our method is exact and involves no approximations such as time-discretization. We demonstrate how our sampler extends naturally to MJP-based models such as Markov-modulated Poisson processes and continuous-time Bayesian networks, and show significant computational benefits over state-of-the-art MCMC samplers for these models. Comment: Accepted at the Journal of Machine Learning Research (JMLR).
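As a rough illustration of uniformization (not the paper's full auxiliary-variable Gibbs sampler, which additionally conditions on observations), the sketch below forward-simulates an MJP path by thinning a dominating Poisson grid of candidate jump times with the subordinated transition matrix B = I + Q/Ω. The generator Q and the choice Ω = 1.5 × (max exit rate) are illustrative assumptions.

```python
import numpy as np

def sample_mjp_uniformization(Q, x0, T, rng):
    """Forward-simulate a Markov jump process with generator Q on [0, T]
    via uniformization: candidate jumps arrive at a dominating rate Omega,
    and real transitions are selected with B = I + Q / Omega."""
    n = Q.shape[0]
    Omega = 1.5 * np.max(-np.diag(Q))    # dominating rate, > every exit rate
    B = np.eye(n) + Q / Omega            # subordinated discrete-time chain
    times, states = [0.0], [x0]
    t = 0.0
    while True:
        t += rng.exponential(1.0 / Omega)    # next candidate (virtual) jump
        if t > T:
            break
        nxt = rng.choice(n, p=B[states[-1]])
        if nxt != states[-1]:                # keep only real transitions
            times.append(t)
            states.append(nxt)
    return np.array(times), np.array(states)

rng = np.random.default_rng(0)
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
times, states = sample_mjp_uniformization(Q, 0, 10.0, rng)
```

The self-transitions discarded in the loop are exactly the "virtual jumps" the abstract refers to; the Gibbs sampler resurrects them as auxiliary variables.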

    A BSDE-based approach for the optimal reinsurance problem under partial information

    We investigate the optimal reinsurance problem under the criterion of maximizing the expected utility of terminal wealth when the insurance company has restricted information on the loss process. We propose a risk model in which the claim arrival intensity and the claim size distribution are affected by an unobservable environmental stochastic factor. Using filtering techniques (with marked point process observations), we reduce the original problem to an equivalent stochastic control problem under full information. Since the classical Hamilton-Jacobi-Bellman approach does not apply, owing to the infinite dimensionality of the filter, we take an alternative approach based on Backward Stochastic Differential Equations (BSDEs). Specifically, we characterize the value process and the optimal reinsurance strategy in terms of the unique solution to a BSDE driven by a marked point process. Comment: 30 pages, 3 figures.

    Uncoupled Analysis of Stochastic Reaction Networks in Fluctuating Environments

    The dynamics of stochastic reaction networks within cells are inevitably modulated by factors extrinsic to the network, such as fluctuations in ribosome copy numbers for a gene regulatory network. While several recent studies demonstrate the importance of accounting for such extrinsic components, the resulting models are typically hard to analyze. In this work we develop a general mathematical framework that allows one to uncouple the network from its dynamic environment by incorporating only the environment's effect on the network into a new model. More technically, we show how such fluctuating extrinsic components (e.g., chemical species) can be marginalized out in order to obtain this decoupled model. We derive the corresponding process and master equations and show how stochastic simulations can be performed. Using several case studies, we demonstrate the significance of the approach. For instance, we formulate and solve a marginal master equation describing protein translation and degradation in a fluctuating environment. Comment: 7 pages, 4 figures, Appendix attached as SI.pdf, under submission.
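For intuition, the coupled (unmarginalized) system from the translation/degradation case study can be simulated with a standard Gillespie algorithm, as in the minimal sketch below; the two-state environment and all rate constants are illustrative assumptions, not values from the paper. Marginalization would remove the explicit `env` variable from the state.

```python
import numpy as np

def ssa_translation(T, rng, k_on=1.0, k_off=1.0, k_tl=10.0, k_deg=0.5):
    """Gillespie simulation of protein translation and degradation coupled
    to a two-state fluctuating environment (e.g. ribosome availability).
    Translation fires at rate k_tl only while the environment is 'on'."""
    t, env, protein = 0.0, 1, 0
    traj = [(t, protein)]
    while t < T:
        rates = np.array([
            k_on if env == 0 else 0.0,   # environment switches on
            k_off if env == 1 else 0.0,  # environment switches off
            k_tl if env == 1 else 0.0,   # translation: protein += 1
            k_deg * protein,             # degradation: protein -= 1
        ])
        total = rates.sum()
        t += rng.exponential(1.0 / total)    # time to next reaction
        r = rng.choice(4, p=rates / total)   # which reaction fires
        if r == 0:
            env = 1
        elif r == 1:
            env = 0
        elif r == 2:
            protein += 1
        else:
            protein -= 1
        traj.append((t, protein))
    return traj

rng = np.random.default_rng(1)
traj = ssa_translation(20.0, rng)
```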

    Explicit computations for some Markov modulated counting processes

    In this paper we present elementary computations for some Markov modulated counting processes, also called counting processes with regime switching. Regime switching has become an increasingly popular concept in many branches of science. In finance, for instance, the background process can be identified with the `state of the economy', to which asset prices react, or with the varying default rate of an obligor. The key feature of the counting processes in this paper is that their intensity processes are functions of a finite-state Markov chain. Processes of this kind can be used to model default events of companies. Many quantities of interest, such as conditional characteristic functions, can be derived from conditional probabilities, which can, in principle, be computed analytically. We also study limit results for models with rapid switching, obtained by inflating the intensity matrix of the Markov chain by a factor tending to infinity. The paper is largely expository in nature, with a didactic flavor.
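A minimal sketch of such a process: a Markov-modulated Poisson process whose event intensity is lam[X_t] for a finite-state background chain X_t with generator Q. The generator and intensity values below are illustrative assumptions.

```python
import numpy as np

def simulate_mmpp(Q, lam, x0, T, rng):
    """Simulate a Markov-modulated Poisson process on [0, T]: events occur
    with intensity lam[X_t], where X_t is a CTMC with generator Q.
    Assumes every lam[i] is strictly positive."""
    n = Q.shape[0]
    t, x, events = 0.0, x0, []
    while t < T:
        exit_rate = -Q[x, x]
        tau = rng.exponential(1.0 / exit_rate) if exit_rate > 0 else np.inf
        end = min(t + tau, T)
        s = t
        while True:                      # homogeneous Poisson within regime
            s += rng.exponential(1.0 / lam[x])
            if s >= end:
                break
            events.append(s)
        t += tau
        if t < T:                        # background chain switches regime
            p = np.maximum(Q[x], 0.0) / exit_rate
            x = rng.choice(n, p=p)
    return np.array(events)

rng = np.random.default_rng(2)
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
events = simulate_mmpp(Q, np.array([2.0, 10.0]), 0, 50.0, rng)
```

Between switches of the background chain the process is an ordinary Poisson process, which is why conditional probabilities given the chain's path are analytically tractable.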

    Filters and smoothers for self-exciting Markov modulated counting processes

    We consider a self-exciting counting process whose parameters depend on a hidden finite-state Markov chain. We derive the optimal filter and smoother for the hidden chain based on observation of the jump process. This filter is in closed form and finite dimensional. We demonstrate its performance both on simulated data and by analysing the `flash crash' of 6th May 2010 in this framework.

    Filtering and forecasting commodity futures prices under an HMM framework

    We propose a model for the evolution of arbitrage-free futures prices under a regime-switching framework. The estimation of model parameters is carried out using hidden Markov filtering algorithms. Comprehensive numerical experiments on real financial market data illustrate the effectiveness of our algorithm. In particular, the model is calibrated with data from heating oil futures, and its forecasting performance as well as its statistical validity is investigated. The proposed model is parsimonious and self-calibrating, and can be very useful in predicting futures prices. © 2013 Elsevier B.V.
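Regime probabilities in such frameworks are typically obtained with the standard discrete-time HMM forward recursion (predict with the transition matrix, correct with the observation likelihood, normalize). A minimal sketch, in which the transition matrix, observation likelihoods, and dimensions are illustrative assumptions rather than the paper's calibrated model:

```python
import numpy as np

def hmm_forward_filter(obs_lik, A, pi0):
    """Standard HMM forward filter: returns the filtered regime
    probabilities p(X_t = i | y_1, ..., y_t) for each time step.

    obs_lik: (T, n) observation likelihoods per state at each step.
    A:       (n, n) regime transition matrix (rows sum to 1).
    pi0:     (n,)   initial regime distribution."""
    T, n = obs_lik.shape
    filt = np.zeros((T, n))
    pred = pi0
    for t in range(T):
        post = pred * obs_lik[t]        # correction step
        filt[t] = post / post.sum()     # normalize to a distribution
        pred = filt[t] @ A              # one-step-ahead prediction
    return filt

rng = np.random.default_rng(3)
A = np.array([[0.95, 0.05], [0.10, 0.90]])
obs_lik = rng.uniform(0.1, 1.0, size=(100, 2))
filt = hmm_forward_filter(obs_lik, A, np.array([0.5, 0.5]))
```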