
    Impulse Control in Finance: Numerical Methods and Viscosity Solutions

    The goal of this thesis is to provide efficient and provably convergent numerical methods for solving partial differential equations (PDEs) coming from impulse control problems motivated by finance. Impulses, which are controlled jumps in a stochastic process, are used to model realistic features in financial problems which cannot be captured by ordinary stochastic controls. The dynamic programming equations associated with impulse control problems are Hamilton-Jacobi-Bellman quasi-variational inequalities (HJBQVIs). Other than in certain special cases, the numerical schemes that come from the discretization of HJBQVIs take the form of complicated nonlinear matrix equations, also known as Bellman problems. We prove that a policy iteration algorithm can be used to compute their solutions. In order to do so, we employ the theory of weakly chained diagonally dominant (w.c.d.d.) matrices. As a byproduct of our analysis, we obtain some new results regarding a particular family of Markov decision processes, which can be thought of as impulse control problems on a discrete state space, and regarding the relationship between w.c.d.d. matrices and M-matrices. Since HJBQVIs are nonlocal PDEs, we are unable to directly use the seminal result of Barles and Souganidis (concerning the convergence of monotone, stable, and consistent numerical schemes to the viscosity solution) to prove the convergence of our schemes. We address this issue by extending the work of Barles and Souganidis to nonlocal PDEs in a manner general enough to apply to HJBQVIs. We apply our schemes to compute the solutions of various classical problems from finance concerning optimal control of the exchange rate, optimal consumption with fixed and proportional transaction costs, and guaranteed minimum withdrawal benefits in variable annuities.
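
    As a rough illustration of the kind of Bellman problem described above (a generic sketch, not the thesis's own implementation), the snippet below applies policy iteration to a discrete equation of the form min_p (A_p x - b_p) = 0, where the minimum is taken row by row over a finite set of controls; the matrices A_p and vectors b_p are hypothetical inputs.

        import numpy as np

        def policy_iteration(A_list, b_list, max_iter=100, tol=1e-10):
            # Solve min_p (A_p x - b_p) = 0, the minimum taken row by row over the
            # controls p, by alternating policy improvement and policy evaluation.
            n = A_list[0].shape[0]
            x = np.zeros(n)
            policy = -np.ones(n, dtype=int)
            for _ in range(max_iter):
                # Policy improvement: in each row, pick the control with smallest residual.
                residuals = np.stack([A @ x - b for A, b in zip(A_list, b_list)])
                new_policy = residuals.argmin(axis=0)
                # Policy evaluation: solve the linear system assembled from the chosen rows.
                A_pol = np.array([A_list[p][i] for i, p in enumerate(new_policy)])
                b_pol = np.array([b_list[p][i] for i, p in enumerate(new_policy)])
                x_new = np.linalg.solve(A_pol, b_pol)
                if np.array_equal(new_policy, policy) and np.max(np.abs(x_new - x)) < tol:
                    break
                x, policy = x_new, new_policy
            return x, policy

        # Tiny example with two controls on a three-state problem (made-up data).
        A0, A1 = np.diag([2.0, 2.0, 2.0]), np.diag([2.1, 1.9, 2.2])
        b0, b1 = np.ones(3), np.array([0.5, 2.0, 1.5])
        x_star, chosen = policy_iteration([A0, A1], [b0, b1])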

    A fixed-point policy-iteration-type algorithm for symmetric nonzero-sum stochastic impulse control games

    Nonzero-sum stochastic differential games with impulse controls offer a realistic and far-reaching modelling framework for applications within finance, energy markets, and other areas, but the difficulty in solving such problems has hindered their proliferation. Semi-analytical approaches make strong assumptions pertaining to very particular cases. To the author’s best knowledge, the only numerical method in the literature is the heuristic one we put forward in Aïd et al. (ESAIM Proc Surv 65:27–45, 2019) to solve an underlying system of quasi-variational inequalities. Focusing on symmetric games, this paper presents a simpler, more precise, and more efficient fixed-point policy-iteration-type algorithm which removes the strong dependence on the initial guess and the relaxation scheme of the previous method. A rigorous convergence analysis is undertaken with natural assumptions on the players' strategies, which admit graph-theoretic interpretations in the context of weakly chained diagonally dominant matrices. A novel provably convergent single-player impulse control solver is also provided. The main algorithm is used to compute, with high precision, equilibrium payoffs and Nash equilibria of otherwise very challenging problems, and even some which go beyond the scope of the currently available theory.
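
    The convergence analysis above rests on weakly chained diagonally dominant (w.c.d.d.) matrices. For reference, here is a small sketch (not taken from the paper) of the standard graph-theoretic test: a matrix is w.c.d.d. if it is weakly diagonally dominant and every row can reach some strictly diagonally dominant row through a walk in its sparsity graph.

        import numpy as np
        from collections import deque

        def is_wcdd(A, tol=1e-12):
            # Edge i -> j exists whenever A[i, j] != 0 with i != j; the matrix is
            # w.c.d.d. if every row has a walk to a strictly diagonally dominant row.
            A = np.asarray(A, dtype=float)
            n = A.shape[0]
            diag = np.abs(np.diag(A))
            slack = diag - (np.abs(A).sum(axis=1) - diag)
            if np.any(slack < -tol):
                return False  # not even weakly diagonally dominant
            reachable = slack > tol  # strictly dominant rows trivially qualify
            queue = deque(np.flatnonzero(reachable))
            while queue:  # backward breadth-first search along edges i -> j
                j = int(queue.popleft())
                for i in range(n):
                    if not reachable[i] and i != j and A[i, j] != 0:
                        reachable[i] = True
                        queue.append(i)
            return bool(reachable.all())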

    Optimal market making under partial information and numerical methods for impulse control games with applications

    The topics treated in this thesis are inherently two-fold. The first part considers the problem of a market maker who wants to optimally set bid/ask quotes over a finite time horizon, to maximize her expected utility. The intensities of the orders she receives depend not only on the spreads she quotes, but also on unobservable factors modelled by a hidden Markov chain. This stochastic control problem under partial information is solved by means of stochastic filtering, control, and piecewise-deterministic Markov processes theory. The value function is characterized as the unique continuous viscosity solution of its dynamic programming equation. Afterwards, the analogous full information problem is solved and results are compared numerically through a concrete example. The optimal full information spreads are shown to be biased when the exact market regime is unknown, as the market maker needs to adjust for additional regime uncertainty in terms of P&L sensitivity and observable order flow volatility. The second part deals with numerically solving nonzero-sum stochastic differential games with impulse controls. These offer a realistic and far-reaching modelling framework for applications within finance, energy markets, and other areas, but the difficulty in solving such problems has hindered their proliferation. Semi-analytical approaches make strong assumptions pertaining to very particular cases. To the author's best knowledge, there are no numerical methods available in the literature. A policy-iteration-type solver is proposed to solve an underlying system of quasi-variational inequalities, and it is validated numerically with reassuring results. In particular, it is observed that the algorithm does not enjoy global convergence, and a heuristic methodology is proposed to construct initial guesses. Eventually, the focus is put on games with a symmetric structure, and a substantially improved version of the former algorithm is put forward. A rigorous convergence analysis is undertaken with natural assumptions on the players' strategies, which admit graph-theoretic interpretations in the context of weakly chained diagonally dominant matrices. A provably convergent single-player impulse control solver, often outperforming classical policy iteration, is also provided. The main algorithm is used to compute, with high precision, equilibrium payoffs and Nash equilibria of problems which would otherwise be too challenging, and even some whose results go beyond the scope of all the currently available theory.
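
    As a stylised stand-in for the filtering step in the first part (the thesis works in continuous time with piecewise-deterministic Markov processes; this discrete-time toy only conveys the idea), the snippet below updates a belief about the hidden regime from observed order arrivals, assuming Poisson arrivals with regime-dependent intensity over each step. All names and the observation model are assumptions of the sketch.

        import numpy as np

        def filter_step(pi, P, intensities, dt, arrivals):
            # pi: current belief over regimes; P: one-step regime transition matrix;
            # intensities: order-arrival rate per regime; arrivals: orders seen in the step.
            pred = P.T @ pi                                    # predict through the hidden chain
            rates = np.asarray(intensities, dtype=float) * dt
            likelihood = np.exp(-rates) * rates ** arrivals    # Poisson weight (factorial cancels)
            post = pred * likelihood                           # Bayes update
            return post / post.sum()

        # Example: two regimes, quiet vs. busy order flow (made-up numbers).
        belief = np.array([0.5, 0.5])
        P = np.array([[0.95, 0.05], [0.10, 0.90]])
        belief = filter_step(belief, P, intensities=[1.0, 5.0], dt=0.1, arrivals=2)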

    Error estimates of penalty schemes for quasi-variational inequalities arising from impulse control problems

    This paper proposes penalty schemes for a class of weakly coupled systems of Hamilton-Jacobi-Bellman quasi-variational inequalities (HJBQVIs) arising from stochastic hybrid control problems of regime-switching models with both continuous and impulse controls. We show that the solutions of the penalized equations converge monotonically to those of the HJBQVIs. We further establish that the schemes are half-order accurate for HJBQVIs with Lipschitz coefficients, and first-order accurate for equations with more regular coefficients. Moreover, we construct the action regions and optimal impulse controls based on the error estimates and the penalized solutions. The penalty schemes and convergence results are then extended to HJBQVIs with possibly negative impulse costs. We also demonstrate the convergence of monotone discretizations of the penalized equations, and establish that policy iteration applied to the discrete equation is monotonically convergent with an arbitrary initial guess in an infinite-dimensional setting. Numerical examples for infinite-horizon optimal switching problems are presented to illustrate the effectiveness of the penalty schemes over the conventional direct control scheme.
    Comment: Accepted for publication in SIAM Journal on Control and Optimization.
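
    To give a flavour of the penalty idea in the simplest possible setting (a single obstacle rather than the weakly coupled HJBQVI systems treated in the paper), the sketch below approximates the discrete obstacle problem min(A u - f, u - psi) = 0 by a penalised equation and solves it with a policy-iteration-type loop; A, f, and psi are hypothetical discrete data.

        import numpy as np

        def solve_penalized_obstacle(A, f, psi, eps=1e-6, max_iter=50, tol=1e-10):
            # Approximate min(A u - f, u - psi) = 0 by
            #     A u - f - (1 / eps) * max(psi - u, 0) = 0
            # and solve the penalised equation with a semismooth-Newton-type loop.
            u = np.linalg.solve(A, f)                  # start from the unconstrained solution
            for _ in range(max_iter):
                active = psi - u > 0                   # rows where the penalty is switched on
                A_pen = A + (1.0 / eps) * np.diag(active.astype(float))
                rhs = f + (1.0 / eps) * active * psi
                u_new = np.linalg.solve(A_pen, rhs)
                if np.max(np.abs(u_new - u)) < tol:
                    return u_new
                u = u_new
            return u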

    A penalty scheme for monotone systems with interconnected obstacles: convergence and error estimates

    We present a novel penalty approach for a class of quasi-variational inequalities (QVIs) involving monotone systems and interconnected obstacles. We show that for any given positive switching cost, the solutions of the penalized equations converge monotonically to those of the QVIs. We estimate the penalization errors and are able to deduce that the optimal switching regions are constructed exactly. We further demonstrate that as the switching cost tends to zero, the QVI degenerates into an equation of HJB type, which is approximated by the penalized equation at the same order (up to a log factor) as that for positive switching cost. Numerical experiments on optimal switching problems are presented to illustrate the theoretical results and to demonstrate the effectiveness of the method.
    Comment: Accepted for publication (in this revised form) in SIAM Journal on Numerical Analysis.
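
    Extending the single-obstacle sketch above to interconnected obstacles, the toy iteration below penalises a small optimal switching system and sweeps regime by regime; the matrices A_d, right-hand sides f_d, and the constant switching cost are assumptions of the sketch, not the paper's scheme.

        import numpy as np

        def penalized_switching(A_list, f_list, cost, eps=1e-6, sweeps=200, tol=1e-10):
            # For each regime d, replace
            #     min(A_d u_d - f_d, u_d - max_{e != d} (u_e - cost)) = 0
            # by its penalised counterpart and sweep over regimes until the values settle.
            D = len(A_list)
            u = [np.linalg.solve(A, f) for A, f in zip(A_list, f_list)]
            for _ in range(sweeps):
                change = 0.0
                for d in range(D):
                    obstacle = np.max([u[e] - cost for e in range(D) if e != d], axis=0)
                    active = obstacle - u[d] > 0
                    A_pen = A_list[d] + (1.0 / eps) * np.diag(active.astype(float))
                    rhs = f_list[d] + (1.0 / eps) * active * obstacle
                    u_new = np.linalg.solve(A_pen, rhs)
                    change = max(change, float(np.max(np.abs(u_new - u[d]))))
                    u[d] = u_new
                if change < tol:
                    break
            return u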

    The Cover Time of a (Multiple) Markov Chain with Rational Transition Probabilities is Rational

    The cover time of a Markov chain on a finite state space is the expected time until all states are visited. We show that if the cover time of a discrete-time Markov chain with rational transition probabilities is bounded, then it is a rational number. The result is proved by relating the cover time of the original chain to the hitting time of a set in another, higher-dimensional chain. We also extend this result to the setting where k ≥ 1 independent copies of a Markov chain are run simultaneously on the same state space and the cover time is the expected time until each state has been visited by at least one copy of the chain.
    Comment: 8 pages, 1 figure. The proof of Proposition 8 has been simplified.
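
    The rationality argument can be made concrete for very small chains: lift the chain to states (current state, set of visited states), write down the hitting-time equations for reaching the fully covered states, and solve them with exact rational arithmetic. The brute-force sketch below does exactly that, assuming the chain is irreducible so that the cover time is finite; it illustrates the proof idea rather than the paper's construction.

        from fractions import Fraction
        from itertools import combinations

        def subsets(items):
            items = list(items)
            for r in range(len(items) + 1):
                for c in combinations(items, r):
                    yield frozenset(c)

        def cover_time(P, start):
            # Expected time, as an exact Fraction, until every state has been visited,
            # starting from `start`.  P is a row-stochastic matrix with rational entries.
            n = len(P)
            P = [[Fraction(p) for p in row] for row in P]
            full = frozenset(range(n))
            if frozenset({start}) == full:
                return Fraction(0)
            # Lifted states (current state, visited set) that are not yet fully covered.
            lifted = [(i, S) for S in subsets(range(n)) if S != full for i in S]
            index = {s: k for k, s in enumerate(lifted)}
            m = len(lifted)
            # Hitting-time equations: h(i, S) = 1 + sum_j P[i][j] * h(j, S | {j}),
            # with h = 0 once the visited set is full.
            A = [[Fraction(0)] * m for _ in range(m)]
            b = [Fraction(1)] * m
            for (i, S), k in index.items():
                A[k][k] += 1
                for j in range(n):
                    T = S | {j}
                    if P[i][j] != 0 and T != full:
                        A[k][index[(j, T)]] -= P[i][j]
            h = solve_exact(A, b)
            return h[index[(start, frozenset({start}))]]

        def solve_exact(A, b):
            # Gauss-Jordan elimination over the rationals.
            n = len(b)
            M = [row[:] + [b[i]] for i, row in enumerate(A)]
            for col in range(n):
                piv = next(r for r in range(col, n) if M[r][col] != 0)
                M[col], M[piv] = M[piv], M[col]
                inv = Fraction(1) / M[col][col]
                M[col] = [x * inv for x in M[col]]
                for r in range(n):
                    if r != col and M[r][col] != 0:
                        factor = M[r][col]
                        M[r] = [x - factor * y for x, y in zip(M[r], M[col])]
            return [M[r][n] for r in range(n)]

        # Example: fair two-state chain started in state 0; the cover time is exactly 2.
        P = [[Fraction(1, 2), Fraction(1, 2)], [Fraction(1, 2), Fraction(1, 2)]]
        print(cover_time(P, 0))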