28 research outputs found

    Game-theoretic approach to risk-sensitive benchmarked asset management

    In this article we take a game-theoretic approach to the Risk-Sensitive Benchmarked Asset Management problem (RSBAM) of Davis and Lleo \cite{DL}. In particular, we consider a stochastic differential game between two players: the investor, who has a power utility, and the market, which tries to minimize the investor's expected payoff by modulating a stochastic benchmark that the investor needs to outperform. We obtain an explicit expression for the optimal pair of strategies for both players. Comment: Forthcoming in Risk and Decision Analysis. arXiv admin note: text overlap with arXiv:0905.4740 by other authors.
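
    The abstract does not state the criterion explicitly; purely as orientation, here is a minimal sketch of the kind of zero-sum, power-utility, benchmark-outperformance objective such a game typically involves, in assumed notation ($V^\pi$ the investor's wealth under strategy $\pi$, $Y^\nu$ the benchmark modulated by the market's control $\nu$, $\gamma$ the power-utility parameter); the paper's exact formulation may differ.

```latex
% Hypothetical sketch only -- notation assumed, not taken from the paper.
% The investor (sup over \pi) tries to outperform the benchmark Y^\nu,
% which the market (inf over \nu) modulates adversarially.
\[
  \sup_{\pi}\,\inf_{\nu}\;
  \mathbb{E}\!\left[\frac{1}{\gamma}\left(\frac{V_T^{\pi}}{Y_T^{\nu}}\right)^{\!\gamma}\right],
  \qquad \gamma \in (0,1).
\]
```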

    Optimal co-adapted coupling for the symmetric random walk on the hypercube

    Let X and Y be two simple symmetric continuous-time random walks on the vertices of the n-dimensional hypercube, $\mathbb{Z}_2^n$. We consider the class of co-adapted couplings of these processes, and describe an intuitive coupling which is shown to be the fastest in this class.
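
    For illustration only, a small simulation of one natural co-adapted coupling of two such walks (synchronised flips on coordinates where the walks agree, independent flips where they disagree). This is just a sketch of a member of the co-adapted class, not necessarily the optimal coupling constructed in the paper; the flip-rate convention (rate 1 per coordinate) and all names are assumptions of the example.

```python
import random

def couple_hypercube_walks(n, rng):
    """Simulate two continuous-time random walks on {0,1}^n (each coordinate
    flips at rate 1) under a simple co-adapted coupling; return the coupling time.

    Coupling rule (one valid co-adapted coupling, not claimed to be the paper's
    optimal one):
      * coordinates where X and Y agree flip simultaneously in both chains,
      * coordinates where they disagree flip independently in X and in Y;
        either such flip makes that coordinate agree.
    """
    X = [rng.randint(0, 1) for _ in range(n)]
    Y = [rng.randint(0, 1) for _ in range(n)]
    t = 0.0
    while X != Y:
        disagree = [i for i in range(n) if X[i] != Y[i]]
        agree = [i for i in range(n) if X[i] == Y[i]]
        # Total event rate: 1 per agreed coordinate (joint flip),
        # 2 per disagreed coordinate (one clock for X, one for Y).
        total_rate = len(agree) + 2 * len(disagree)
        t += rng.expovariate(total_rate)
        u = rng.random() * total_rate
        if u < len(agree):                    # joint flip: stays matched there
            i = agree[int(u)]
            X[i] ^= 1
            Y[i] ^= 1
        else:                                 # a disagreeing coordinate flips
            j = int(u - len(agree))           # index into the 2*len(disagree) clocks
            i = disagree[j // 2]
            if j % 2 == 0:
                X[i] ^= 1                     # X flips -> now agrees with Y at i
            else:
                Y[i] ^= 1                     # Y flips -> now agrees with X at i
        # the number of disagreeing coordinates never increases under this rule
    return t

if __name__ == "__main__":
    times = [couple_hypercube_walks(10, random.Random(s)) for s in range(200)]
    print("mean coupling time (n = 10):", sum(times) / len(times))
```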

    Conditioning an additive functional of a Markov chain to stay non-negative. I: Survival for a long time

    Let $(X_t)_{t \ge 0}$ be a continuous-time irreducible Markov chain on a finite state space $E$, let $v$ be a map $v: E \to \mathbb{R} \setminus \{0\}$, and let $(\phi_t)_{t \ge 0}$ be an additive functional defined by $\phi_t = \int_0^t v(X_s)\,ds$. We consider the case in which the process $(\phi_t)_{t \ge 0}$ is oscillating and that in which $(\phi_t)_{t \ge 0}$ has a negative drift. In each of these cases, we condition the process $(X_t, \phi_t)_{t \ge 0}$ on the event that $(\phi_t)_{t \ge 0}$ is non-negative until time $T$ and prove weak convergence of the conditioned process as $T \to \infty$.
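
    As a toy illustration of the objects involved (not taken from the paper): the sketch below simulates a two-state chain and its additive functional, and approximates the law of $(X_T, \phi_T)$ conditioned on $\{\phi_s \ge 0 \text{ for all } s \le T\}$ by crude rejection sampling. The generator, the map $v$, and all names are made up for the example.

```python
import random

# Toy two-state chain: off-diagonal jump rates Q and a sign-changing map v,
# chosen so that phi_t = \int_0^t v(X_s) ds has a negative drift.
Q = {0: {1: 2.0}, 1: {0: 1.0}}
V = {0: 1.0, 1: -1.0}

def simulate(T, rng):
    """Run (X_t, phi_t) on [0, T]; return (X_T, phi_T, min_{t <= T} phi_t)."""
    x, phi, t, min_phi = 0, 0.0, 0.0, 0.0
    while True:
        hold = rng.expovariate(sum(Q[x].values()))
        dt = min(hold, T - t)
        # phi moves linearly with slope v(x) during the holding interval
        min_phi = min(min_phi, phi + min(0.0, V[x] * dt))
        phi += V[x] * dt
        t += dt
        if t >= T:
            return x, phi, min_phi
        targets, rates = zip(*Q[x].items())
        x = rng.choices(targets, weights=rates)[0]   # jump to the next state

def conditioned_samples(T, n_target, rng=random.Random(1)):
    """Rejection sampler for the law of (X_T, phi_T) given phi_s >= 0 on [0, T]."""
    accepted, tried = [], 0
    while len(accepted) < n_target:
        tried += 1
        x, phi, min_phi = simulate(T, rng)
        if min_phi >= 0.0:
            accepted.append((x, phi))
    return accepted, tried

if __name__ == "__main__":
    samples, tried = conditioned_samples(T=5.0, n_target=200)
    print("acceptance rate:", len(samples) / tried)
    print("mean of phi_T under the conditioning:",
          sum(p for _, p in samples) / len(samples))
```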

    Conditioning an additive functional of a Markov chain to stay non-negative. II: Hitting a high level

    Let $(X_t)_{t \ge 0}$ be a continuous-time irreducible Markov chain on a finite state space $E$, let $v: E \to \mathbb{R} \setminus \{0\}$, and let $(\phi_t)_{t \ge 0}$ be defined by $\phi_t = \int_0^t v(X_s)\,ds$. We consider the case in which the process $(\phi_t)_{t \ge 0}$ is oscillating and that in which $(\phi_t)_{t \ge 0}$ has a negative drift. In each of these cases, we condition the process $(X_t, \phi_t)_{t \ge 0}$ on the event that $(\phi_t)_{t \ge 0}$ hits level $y$ before hitting $0$ and prove weak convergence of the conditioned process as $y \to \infty$. In addition, we show the relationship between conditioning the process $(\phi_t)_{t \ge 0}$ with a negative drift to oscillate and conditioning it to stay non-negative for a long time, and the relationship between conditioning $(\phi_t)_{t \ge 0}$ with a negative drift to drift to $+\infty$ and conditioning it to hit large levels before hitting $0$.
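
    In assumed (but standard) notation, with $T_a = \inf\{t > 0 : \phi_t = a\}$ the first time the additive functional reaches level $a$ and $\mathcal{L}(\,\cdot \mid \cdot\,)$ denoting conditional law, the two conditionings studied in parts I and II are, schematically:

```latex
% Schematic only -- notation assumed, not copied from the papers.
\[
  \text{Part I:}\quad
  \mathcal{L}\bigl((X_t,\phi_t)_{t \ge 0} \;\big|\; \phi_s \ge 0 \ \text{for all } s \in [0,T]\bigr)
  \ \text{as } T \to \infty,
\]
\[
  \text{Part II:}\quad
  \mathcal{L}\bigl((X_t,\phi_t)_{t \ge 0} \;\big|\; T_y < T_0\bigr)
  \ \text{as } y \to \infty .
\]
```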

    On the compensator in the Doob-Meyer decomposition of the Snell envelope

    Let $G$ be a semimartingale, and $S$ its Snell envelope. Under the assumption that $G \in \mathcal{H}^1$, we show that the finite-variation part of $S$ is absolutely continuous with respect to the decreasing part of the finite-variation part of $G$. In the Markovian setting, this enables us to identify sufficient conditions for the value function of the optimal stopping problem to belong to the domain of the extended (martingale) generator of the underlying Markov process. We then show that the dual of the optimal stopping problem is a stochastic control problem for a controlled Markov process, and the optimal control is characterised by a function belonging to the domain of the martingale generator. Finally, we give an application to the smooth pasting condition.
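
    For orientation, the standard objects the abstract refers to, written in commonly used (assumed) notation: the Snell envelope $S$ of the payoff process $G$ and the compensator $A$ in its Doob-Meyer decomposition.

```latex
% Standard definitions in assumed notation (not copied from the paper).
\[
  S_t \;=\; \operatorname*{ess\,sup}_{\tau \ge t}\,
            \mathbb{E}\bigl[\,G_\tau \,\big|\, \mathcal{F}_t\,\bigr],
  \qquad
  S \;=\; S_0 + M - A,
\]
% where the ess sup is over stopping times \tau \ge t, M is a local martingale,
% and A is predictable and increasing with A_0 = 0; the result above says,
% roughly, that dA is absolutely continuous with respect to the decreasing part
% of the finite-variation part of G.
```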

    On the policy improvement algorithm in continuous time

    We develop a general approach to the Policy Improvement Algorithm (PIA) for stochastic control problems for continuous-time processes. The main results assume only that the controls lie in a compact metric space and give general sufficient conditions for the PIA to be well-defined and converge in continuous time (i.e. without time discretisation). It emerges that the natural context for the PIA in continuous time is weak stochastic control. We give examples of control problems demonstrating the need for the weak formulation, as well as diffusion-based classes of problems where the PIA in continuous time is applicable.
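
    The PIA is easiest to see in its classical finite-state, discrete-time form, which the continuous-time results above generalise. A minimal NumPy sketch of that familiar analogue follows; the random MDP data and all names are made up, and nothing here is taken from the paper (whose point is precisely to avoid this kind of time discretisation).

```python
import numpy as np

def policy_iteration(P, r, gamma=0.95, max_iter=100):
    """Policy Improvement Algorithm for a finite-state, discrete-time MDP.

    P : array of shape (A, S, S), P[a, s, s'] = transition probability
    r : array of shape (A, S),    r[a, s]     = one-step reward
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) V = r_pi exactly.
        P_pi = P[policy, np.arange(S)]            # (S, S) rows under the current policy
        r_pi = r[policy, np.arange(S)]            # (S,)
        V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: greedy action with respect to the current value.
        Q = r + gamma * P @ V                     # (A, S) action values
        new_policy = Q.argmax(axis=0)
        if np.array_equal(new_policy, policy):    # no improvement -> optimal policy
            break
        policy = new_policy
    return policy, V

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A, S = 3, 5
    P = rng.random((A, S, S))
    P /= P.sum(axis=2, keepdims=True)             # make each transition row stochastic
    r = rng.random((A, S))
    policy, V = policy_iteration(P, r)
    print("greedy policy:", policy)
    print("value function:", np.round(V, 3))
```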