
    Controlled diffusion processes

    This article gives an overview of developments in controlled diffusion processes, emphasizing key results on the existence of optimal controls and their characterization via dynamic programming for a variety of cost criteria and structural assumptions. The stochastic maximum principle and control under partial observations (equivalently, control of nonlinear filters) are also discussed. Several other related topics are briefly sketched.
    Comment: Published at http://dx.doi.org/10.1214/154957805100000131 in Probability Surveys (http://www.i-journals.org/ps/) by the Institute of Mathematical Statistics (http://www.imstat.org).
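    As a concrete reference point for the dynamic-programming characterization the abstract mentions, the value function of a controlled diffusion dX_t = b(X_t, u_t) dt + \sigma(X_t, u_t) dW_t with infinite-horizon discounted cost typically solves a Hamilton-Jacobi-Bellman (HJB) equation. The notation below is generic, not taken from the article:

        % Discounted cost: V(x) = \inf_u E_x [ \int_0^\infty e^{-\alpha t} c(X_t, u_t) dt ]
        % HJB equation (\alpha > 0 the discount rate, U the control set):
        \alpha V(x) = \min_{u \in U} \Big[ c(x,u) + b(x,u)^{\top} \nabla V(x)
                      + \tfrac{1}{2} \operatorname{tr}\big( \sigma \sigma^{\top}(x,u) \, \nabla^{2} V(x) \big) \Big]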

    Jump-Diffusion Risk-Sensitive Asset Management I: Diffusion Factor Model

    This paper considers a portfolio optimization problem in which asset prices are represented by SDEs driven by Brownian motion and a Poisson random measure, with drifts that are functions of an auxiliary diffusion factor process. The criterion, following earlier work by Bielecki, Pliska, Nagai and others, is risk-sensitive optimization (equivalent to maximizing the expected growth rate subject to a constraint on variance). Using a change-of-measure technique introduced by Kuroda and Nagai, we show that the problem reduces to solving a certain stochastic control problem in the factor process, which has no jumps. The main result of the paper is that the risk-sensitive jump-diffusion problem can be fully characterized in terms of a parabolic Hamilton-Jacobi-Bellman PDE rather than a partial integro-differential equation (PIDE), and that this PDE admits a classical C^{1,2} solution.
    Comment: 33 pages.
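    For orientation, the risk-sensitive criterion used in this line of work (e.g., Bielecki and Pliska) is commonly written as follows, where V_T denotes the portfolio value at time T and \theta > 0 the risk-sensitivity parameter; normalization conventions (the factors of 2) vary between papers:

        J_{\theta} = \liminf_{T \to \infty} \left( -\frac{2}{\theta} \right) \frac{1}{T} \ln \mathbb{E}\!\left[ e^{-\frac{\theta}{2} \ln V_T} \right]
        % A Taylor expansion near \theta = 0 exhibits the mean-variance trade-off
        % behind the equivalence noted in the abstract:
        % J_{\theta} \approx (\text{mean growth rate}) - \tfrac{\theta}{4} (\text{asymptotic variance of growth})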

    Distributional Probabilistic Model Checking

    Probabilistic model checking can provide formal guarantees on the behavior of stochastic models for a wide range of quantitative properties, such as runtime, energy consumption or cost. But decision making is typically with respect to the expected value of these quantities, which can mask important aspects of the full probability distribution, such as the possibility of high-risk, low-probability events or multimodality. We propose a distributional extension of probabilistic model checking, applicable to discrete-time Markov chains (DTMCs) and Markov decision processes (MDPs). We formulate distributional queries, which can reason about a variety of distributional measures, such as variance, value-at-risk or conditional value-at-risk, for the accumulation of reward until a co-safe linear temporal logic formula is satisfied. For DTMCs, we propose a method to compute the full distribution to an arbitrary level of precision, based on a graph analysis and forward analysis of the model. For MDPs, we approximate the optimal policy with respect to expected value or conditional value-at-risk using distributional value iteration. We implement our techniques and investigate their performance and scalability across a range of benchmark models. Experimental results demonstrate that our techniques can be successfully applied to check various distributional properties of large probabilistic models.
    Comment: 20 pages plus a 2-page appendix, 5 figures. Submitted for review. For the associated GitHub repository, see https://github.com/davexparker/prism/tree/ing
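    To make the forward-analysis idea concrete, here is a minimal sketch (in Python, not the paper's PRISM-based implementation) that propagates probability mass over (state, accumulated reward) pairs of a small DTMC until nearly all mass reaches a goal state, then reads off value-at-risk and conditional value-at-risk from the resulting distribution. The chain, rewards, and epsilon bound are illustrative assumptions:

        from collections import defaultdict

        # Toy DTMC: state -> list of (successor, probability); "goal" is absorbing.
        P = {
            "s0": [("s1", 0.7), ("s2", 0.3)],
            "s1": [("goal", 0.9), ("s0", 0.1)],
            "s2": [("goal", 0.5), ("s2", 0.5)],
        }
        # Integer reward accumulated on leaving each state.
        R = {"s0": 1, "s1": 2, "s2": 5}

        def reward_distribution(init="s0", eps=1e-9):
            """Forward analysis: distribution of total reward on reaching goal."""
            mass = defaultdict(float)          # (state, reward so far) -> probability
            mass[(init, 0)] = 1.0
            dist = defaultdict(float)          # total reward -> probability
            while sum(mass.values()) > eps:    # eps bounds the unresolved mass
                nxt = defaultdict(float)
                for (s, r), p in mass.items():
                    for t, q in P[s]:
                        if t == "goal":
                            dist[r + R[s]] += p * q
                        else:
                            nxt[(t, r + R[s])] += p * q
                mass = nxt
            return dict(sorted(dist.items()))

        def var_cvar(dist, alpha=0.9):
            """VaR and (a simple discrete approximation of) CVaR at level alpha."""
            acc, var = 0.0, None
            for r, p in dist.items():          # keys are sorted ascending
                acc += p
                if acc >= alpha:
                    var = r
                    break
            tail = {r: p for r, p in dist.items() if r >= var}
            cvar = sum(r * p for r, p in tail.items()) / sum(tail.values())
            return var, cvar

        print(var_cvar(reward_distribution(), alpha=0.9))

    This covers only the fixed-chain (DTMC) case; for MDPs the paper additionally optimizes over policies using distributional value iteration.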

    On gradual-impulse control of continuous-time Markov decision processes with multiplicative cost

    In this paper, we consider the gradual-impulse control problem for continuous-time Markov decision processes, where system performance is measured by the expectation of the exponential utility of the total cost. We prove, under very general conditions on the system primitives, the existence of a deterministic stationary optimal policy out of a more general class of policies. The policies we consider allow multiple simultaneous impulses, randomized selection of impulses with random effects, relaxed gradual controls, and accumulation of jumps. After characterizing the value function using the optimality equation, we reduce the continuous-time gradual-impulse control problem to an equivalent simple discrete-time Markov decision process, whose action space is the union of the sets of gradual and impulsive actions.
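    Schematically, and with illustrative notation not taken from the paper, the multiplicative (exponential-utility) criterion for a gradual-impulse control policy \pi takes the form

        V(x, \pi) = \mathbb{E}_x^{\pi}\!\left[ \exp\!\Big( \int_0^{\infty} c\big(X_t, a_t\big)\, dt \; + \; \sum_{n \ge 1} c_I\big(X_{\tau_n -}, \xi_n\big) \Big) \right]
        % c: running cost rate under the gradual control a_t
        % c_I: cost of applying impulse \xi_n at impulse epoch \tau_n

    and an optimal policy minimizes V(x, \pi) over the admissible class described above.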