
    Discounting and Patience in Optimal Stopping and Control Problems

    This paper establishes that the optimal stopping time of virtually any optimal stopping problem is increasing in "patience," understood as a particular partial order on discount rate functions. With Markov dynamics, the result holds in a continuation-domain sense even if stopping is combined with an optimal control problem. Under intuitive additional assumptions, we obtain comparative statics on both the optimal control and the optimal stopping time for one-dimensional diffusions. We provide a simple example where, without these assumptions, increased patience can precipitate stopping. We also show that, with optimal stopping and control, a project's expected value is decreasing in the interest rate, generalizing analogous results in a deterministic context. All our results are robust to the presence of a salvage value. As an application, we show that the internal rate of return of any endogenously interrupted project is essentially unique, even if the project also involves a management problem until its interruption. We also apply our results to the theory of optimal growth and capital deepening and to optimal bankruptcy decisions.
    Keywords: capital growth, comparative statics, discounting, internal rate of return, optimal control, optimal stopping, patience, present value, project valuation
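
    The headline comparative static can be illustrated with a small numerical sketch (our own toy example, not taken from the paper): value iteration on a discrete stopping problem with a call-style payoff over a symmetric random walk. Lowering the discount rate r, i.e. increasing patience, enlarges the continuation region and raises the exercise threshold, so stopping happens later. The payoff, grid, and all parameters below are invented for illustration.

```python
import numpy as np

def stopping_threshold(r, K=50, N=100, tol=1e-10):
    """Solve the stopping problem for payoff max(x - K, 0) on states
    x = 0..N with symmetric random-walk dynamics and per-step discount
    factor exp(-r); return the smallest in-the-money state at which
    immediate stopping is optimal (the exercise threshold)."""
    g = np.maximum(np.arange(N + 1) - K, 0.0)
    V = g.copy()
    beta = np.exp(-r)
    while True:
        cont = np.empty_like(V)
        cont[1:-1] = beta * 0.5 * (V[2:] + V[:-2])
        cont[0], cont[-1] = beta * V[1], beta * V[-2]  # reflecting ends
        V_new = np.maximum(g, cont)                    # stop or continue
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    stop = np.where((V <= g + 1e-8) & (g > 0))[0]
    return int(stop.min())
```

    In this example the threshold for r = 0.01 exceeds the one for r = 0.05: the more patient problem continues longer, consistent with the paper's monotonicity result.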

    A nonparametric algorithm for optimal stopping based on robust optimization

    Optimal stopping is a fundamental class of stochastic dynamic optimization problems with numerous applications in finance and operations management. We introduce a new approach for solving computationally-demanding stochastic optimal stopping problems with known probability distributions. The approach uses simulation to construct a robust optimization problem that approximates the stochastic optimal stopping problem to any arbitrary accuracy; we then solve the robust optimization problem to obtain near-optimal Markovian stopping rules for the stochastic optimal stopping problem. In this paper, we focus on designing algorithms for solving the robust optimization problems that approximate the stochastic optimal stopping problems. These robust optimization problems are challenging to solve because they require optimizing over the infinite-dimensional space of all Markovian stopping rules. We overcome this challenge by characterizing the structure of optimal Markovian stopping rules for the robust optimization problems. In particular, we show that optimal Markovian stopping rules for the robust optimization problems have a structure that is surprisingly simple and finite-dimensional. We leverage this structure to develop an exact reformulation of the robust optimization problem as a zero-one bilinear program over totally unimodular constraints. We show that the bilinear program can be solved in polynomial time in special cases, establish computational complexity results for general cases, and develop polynomial-time heuristics by relating the bilinear program to the maximal closure problem from graph theory. Numerical experiments demonstrate that our algorithms for solving the robust optimization problems are practical and can outperform state-of-the-art simulation-based algorithms in the context of widely-studied stochastic optimal stopping problems from high-dimensional option pricing.
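
    A minimal flavor of searching for a stopping rule on simulated paths (a generic toy, not the robust-optimization method or the bilinear-program reformulation described in the abstract) is to restrict attention to a small parametric family of Markovian rules and pick the best member on the sample. The threshold family, payoff, and parameters below are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate paths of a symmetric random walk started at 50.
T, n_paths, r, K = 60, 5000, 0.05, 50.0
steps = rng.choice([-1.0, 1.0], size=(n_paths, T))
paths = 50.0 + np.hstack([np.zeros((n_paths, 1)),
                          np.cumsum(steps, axis=1)])

def value_of_threshold(b):
    """Average discounted payoff max(X_tau - K, 0) under the Markovian
    rule 'stop the first time X_t >= b' (stop at the horizon T if the
    threshold is never reached)."""
    hit = paths >= b
    tau = np.where(hit.any(axis=1), hit.argmax(axis=1), T)
    payoff = np.maximum(paths[np.arange(n_paths), tau] - K, 0.0)
    return float(np.mean(np.exp(-r * tau) * payoff))

# Sample-based search over the one-parameter family of threshold rules.
best_b = max(np.arange(50.0, 61.0), key=value_of_threshold)
```

    Searching over richer rule classes, and doing so with optimality guarantees, is exactly what makes the problem hard; the abstract's point is that for the robust approximations the optimal rules turn out to live in a simple finite-dimensional family.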

    Sequential testing problems for some diffusion processes

    We study the Bayesian problem of sequential testing of two simple hypotheses about the local drift of an observed diffusion process. The optimal stopping time is found as the first time when the a posteriori probability process leaves the region defined by two stochastic boundaries depending on the observation process. It is shown that under some nontrivial relationships on the coefficients of the observed diffusion the problem admits a closed form solution. The method of proof is based on embedding the initial problem into a two-dimensional optimal stopping problem and solving the equivalent free-boundary problem by means of the smooth-fit conditions.
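
    The structure of such a test can be sketched in a few lines (a simplified variant with constant boundaries, unlike the stochastic boundaries in the paper): track the posterior probability of H1 through the likelihood ratio of the observed path and stop when it exits an interval. The hypotheses, noise level, and thresholds below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Observe dX_t = mu dt + sigma dW_t with unknown mu in {mu0, mu1};
# Euler-discretize the path and track the posterior of H1: mu = mu1.
mu0, mu1, sigma, dt, prior = 0.0, 1.0, 1.0, 0.01, 0.5

def sequential_test(true_mu, max_steps=200_000):
    x, t = 0.0, 0.0
    for _ in range(max_steps):
        x += true_mu * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
        # log-likelihood ratio of H1 vs H0 for an observed Brownian path
        llr = (mu1 - mu0) / sigma**2 * (x - 0.5 * (mu1 + mu0) * t)
        post = prior * np.exp(llr) / (prior * np.exp(llr) + 1 - prior)
        if post >= 0.99:
            return 1, t   # accept H1
        if post <= 0.01:
            return 0, t   # accept H0
    return None, t        # no decision within the horizon

decision, decision_time = sequential_test(true_mu=mu1)
```

    The paper's contribution is the harder setting where the observed process is a diffusion whose coefficients make the optimal boundaries themselves stochastic, which is where the two-dimensional embedding and smooth fit come in.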

    A Linear Programming Approach to Sequential Hypothesis Testing

    Under some mild Markov assumptions it is shown that the problem of designing optimal sequential tests for two simple hypotheses can be formulated as a linear program. The result is derived by investigating the Lagrangian dual of the sequential testing problem, which is an unconstrained optimal stopping problem depending on two unknown Lagrangian multipliers. It is shown that the derivative of the optimal cost function with respect to these multipliers coincides with the error probabilities of the corresponding sequential test. This property is used to formulate an optimization problem that is jointly linear in the cost function and the Lagrangian multipliers and can be solved for both with off-the-shelf algorithms. To illustrate the procedure, optimal sequential tests for Gaussian random sequences with different dependency structures are derived, including the Gaussian AR(1) process.
    Comment: 25 pages, 4 figures, accepted for publication in Sequential Analysis.
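
    The general idea of casting an optimal stopping problem as a linear program can be sketched on a toy sequential-testing problem (our own simplified construction, not the paper's method): the minimal expected cost V = min(g, c + PV) is the optimal solution of the LP "maximize sum(V) subject to V <= g and V <= c + PV", which an off-the-shelf solver handles directly. The state space, costs, and transition kernel below are invented for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# State: log-likelihood ratio on a grid; with equal priors, the error
# probability of deciding immediately at LLR s is min of the two
# posterior probabilities, min(1/(1+e^s), 1/(1+e^-s)).
grid = np.arange(-10, 11)
n = len(grid)
g = np.minimum(1.0 / (1.0 + np.exp(grid)),
               1.0 / (1.0 + np.exp(-grid)))   # cost of stopping now
c = 0.01                                      # per-observation cost

P = np.zeros((n, n))
for i in range(1, n - 1):        # toy symmetric random-walk kernel
    P[i, i - 1] = P[i, i + 1] = 0.5
P[0, 0] = P[-1, -1] = 1.0        # absorbing boundary states

# LP: maximize sum(V) s.t. V <= g (stop) and (I - P) V <= c (continue).
A = np.vstack([np.eye(n), np.eye(n) - P])
b = np.concatenate([g, np.full(n, c)])
res = linprog(-np.ones(n), A_ub=A, b_ub=b, bounds=[(None, None)] * n)
V = res.x
continue_region = grid[V < g - 1e-9]   # states where sampling goes on
```

    The resulting continuation region is an interval around LLR zero, so the LP recovers the familiar two-threshold (SPRT-like) structure; the paper's contribution is to make the Lagrangian multipliers, and hence the error probabilities, part of the same linear program.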