
    Average Continuous Control of Piecewise Deterministic Markov Processes

    This paper deals with the long-run average continuous control problem for piecewise deterministic Markov processes (PDMPs) taking values in a general Borel space, with a compact action space depending on the state variable. The control variable acts on the jump rate and transition measure of the PDMP, and the running and boundary costs are assumed to be positive but not necessarily bounded. Our first main result is an optimality equation for the long-run average cost in terms of a discrete-time optimality equation related to the embedded Markov chain given by the post-jump locations of the PDMP. Our second main result guarantees the existence of a measurable feedback selector for the discrete-time optimality equation by establishing a connection between this equation and an integro-differential equation. Our final main result gives sufficient conditions for the existence of a solution to a discrete-time optimality inequality and of an ordinary optimal feedback control for the long-run average cost, using the so-called vanishing discount approach. Comment: 34 pages.
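
    For orientation, here is a schematic of the vanishing discount argument the abstract refers to, written in generic average-cost notation of our own choosing (not the paper's, which carries extra structure from the PDMP flow): one studies the discounted optimality equation for the embedded chain as the discount factor tends to one.

```latex
% Discounted optimality equation for the embedded chain (schematic, our notation):
\[
  V_\alpha(x) \;=\; \inf_{a\in A(x)} \Big[\, c(x,a) \;+\; \alpha \int_X V_\alpha(y)\, Q(dy \mid x,a) \Big].
\]
% Vanishing discount: fix a reference state x_0 and pass to the limit
\[
  \rho \;=\; \lim_{\alpha \uparrow 1} (1-\alpha)\, V_\alpha(x_0), \qquad
  h(x) \;=\; \liminf_{\alpha \uparrow 1} \big( V_\alpha(x) - V_\alpha(x_0) \big),
\]
% which, under suitable compactness and continuity conditions, yields the
% average-cost optimality inequality whose minimizing selector is optimal:
\[
  \rho + h(x) \;\ge\; \inf_{a\in A(x)} \Big[\, c(x,a) \;+\; \int_X h(y)\, Q(dy \mid x,a) \Big].
\]
```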

    Quality Control for Structural Credit Risk Models

    Over the last four decades, a large number of structural models have been developed to estimate and price credit risk. The focus of this paper is a neglected issue: fundamental shifts in the structural parameters governing default. We propose formal quality control procedures that allow risk managers to monitor such shifts in the structural parameters of credit risk models. The procedures are sequential, and hence apply in real time. The basic ingredients are the key processes used in credit risk analysis, most prominently the Merton distance-to-default process and financial returns. Moreover, while we propose several monitoring processes, we show that one particular process is optimal in terms of minimal detection time for a break in the drift process, and that it relates to the Radon-Nikodym derivative for a change of measure.
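
    A minimal sketch of the kind of sequential monitoring the abstract describes: a CUSUM detector whose increments are per-observation log likelihood ratios (i.e., logs of the Radon-Nikodym derivative of the post-break law with respect to the pre-break law) for a drift break in a Gaussian series, such as daily changes in a distance-to-default process. All names, parameters, and the Gaussian setting are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

def cusum_drift_break(x, mu0, mu1, sigma, threshold):
    """Sequential CUSUM test for a drift break mu0 -> mu1 in i.i.d. Gaussian data.

    Each increment is the per-observation log likelihood ratio, i.e. the log
    Radon-Nikodym derivative of the post-break law w.r.t. the pre-break law.
    Returns the first index at which the statistic crosses `threshold`,
    or None if no alarm is raised.
    """
    s = 0.0
    for t, xt in enumerate(x):
        llr = (mu1 - mu0) / sigma**2 * (xt - 0.5 * (mu0 + mu1))
        s = max(0.0, s + llr)          # reflect at zero (Page's recursion)
        if s >= threshold:
            return t                   # alarm: evidence of a drift break
    return None

# Toy example: drift shifts from 0.0 to -0.3 at t = 250.
rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 250), rng.normal(-0.3, 1.0, 250)])
print(cusum_drift_break(data, mu0=0.0, mu1=-0.3, sigma=1.0, threshold=10.0))
```

    In this classical i.i.d. setting the CUSUM statistic is known to minimize worst-case detection delay at a given false-alarm rate, which is the flavor of optimality the abstract invokes.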

    Optimal Navigation Functions for Nonlinear Stochastic Systems

    This paper presents a new methodology for crafting navigation functions for nonlinear systems with stochastic uncertainty. The method relies on the transformation of the Hamilton-Jacobi-Bellman (HJB) equation into a linear partial differential equation. This approach allows optimality criteria to be incorporated into the navigation function, and it generalizes several existing results on navigation functions. It is shown that the HJB equation and the navigation functions existing in the literature sit at opposite ends of a spectrum of optimization problems, along which tradeoffs may be made in problem complexity. In particular, it is shown that under certain criteria the optimal navigation function is related, through an exponential transform, to Laplace's equation, previously used in the literature. Further, analytical solutions to the HJB equation are available in simplified domains, providing guidance toward optimality for approximation schemes. Examples illustrate the roles that noise and optimality can play in navigation system design. Comment: Accepted to IROS 2014, 8 pages.
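
    To make the Laplace connection concrete, here is a small sketch of our own (grid size, obstacle, and boundary values are illustrative assumptions, not the paper's construction): solve Laplace's equation for a desirability-like function on a grid, then recover a potential through the exponential (log) transform the abstract mentions.

```python
import numpy as np

# Grid world: True = free space, False = obstacle.  Goal cell gets desirability 1.
n = 40
free = np.ones((n, n), dtype=bool)
free[10:30, 18:22] = False            # a wall-like obstacle (illustrative)
goal = (35, 35)

# Solve Laplace's equation for the desirability phi by Jacobi iteration,
# with phi = 1 at the goal and phi ~ 0 on obstacles and the outer boundary.
phi = np.full((n, n), 1e-10)
phi[goal] = 1.0
for _ in range(5000):
    avg = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
                  + np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
    phi = np.where(free, avg, 1e-10)
    phi[goal] = 1.0                   # re-impose boundary conditions
    phi[0, :] = phi[-1, :] = phi[:, 0] = phi[:, -1] = 1e-10

V = -np.log(phi)                      # exponential transform: value-like potential
# Greedy descent on V from any start cell follows the harmonic flow to the goal.
```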

    On gradual-impulse control of continuous-time Markov decision processes with multiplicative cost

    In this paper we consider the gradual-impulse control problem for continuous-time Markov decision processes, where the system performance is measured by the expectation of the exponential utility of the total cost. We prove, under very general conditions on the system primitives, the existence of a deterministic stationary optimal policy within a more general class of policies. The policies we consider allow multiple simultaneous impulses, randomized selection of impulses with random effects, relaxed gradual controls, and accumulation of jumps. After characterizing the value function via the optimality equation, we reduce the continuous-time gradual-impulse control problem to an equivalent, simpler discrete-time Markov decision process whose action space is the union of the sets of gradual and impulsive actions.
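
    As a toy illustration of what such a reduced discrete-time problem looks like (a sketch over our own three-state model, not the paper's construction): a finite-horizon backward recursion for the multiplicative Bellman operator of an exponential-utility cost, where each state's action set mixes "gradual" and "impulse" actions.

```python
import numpy as np

# Toy model: 3 states; each action is a pair (cost c(x,a), transition row P(.|x,a)).
# The action set at each state is the union of gradual and impulsive actions.
actions = {
    0: [(0.2, [0.8, 0.2, 0.0]),   # gradual: cheap, mostly stays put
        (1.0, [0.0, 0.0, 1.0])],  # impulse: pay 1.0, jump straight to state 2
    1: [(0.3, [0.1, 0.7, 0.2]),
        (0.8, [0.0, 0.0, 1.0])],
    2: [(0.0, [0.0, 0.0, 1.0])],  # absorbing, cost-free
}

def risk_sensitive_vi(actions, horizon):
    """Backward recursion for V(x) = min_a exp(c(x,a)) * sum_y P(y|x,a) * V(y).

    This is the multiplicative Bellman operator for the expected exponential
    utility E[exp(total cost)]; log V recovers a certainty-equivalent cost.
    """
    n = len(actions)
    V = np.ones(n)                          # terminal utility exp(0) = 1
    for _ in range(horizon):
        V = np.array([min(np.exp(c) * np.dot(P, V) for c, P in acts)
                      for acts in (actions[x] for x in range(n))])
    return V

V = risk_sensitive_vi(actions, horizon=20)
print(np.log(V))                            # certainty-equivalent total costs
```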