
    General Stopping Behaviors of Naive and Non-Committed Sophisticated Agents, with Application to Probability Distortion

    We consider the problem of stopping a diffusion process with a payoff functional that renders the problem time-inconsistent. We study stopping decisions of naive agents who reoptimize continuously in time, as well as equilibrium strategies of sophisticated agents who anticipate but lack control over their future selves' behaviors. When the state process is one-dimensional and the payoff functional satisfies some regularity conditions, we prove that any equilibrium can be obtained as a fixed point of an operator. This operator represents strategic reasoning that takes the future selves' behaviors into account. We then apply the general results to the case when the agents distort probability and the diffusion process is a geometric Brownian motion. The problem is inherently time-inconsistent as the level of distortion of the same event changes over time. We show how the strategic reasoning may turn a naive agent into a sophisticated one. Moreover, we derive stopping strategies of the two types of agents for various parameter specifications of the problem, illustrating rich behaviors beyond the extreme ones such as "never-stopping" or "never-starting".
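
    The fixed-point characterization suggests a simple iterative scheme: guess a stopping set, compute the value of following it, and keep only the states where stopping immediately is no worse than continuing against future selves who use that set. The sketch below illustrates this mechanism on a hypothetical discrete-state toy (a reflected symmetric random walk with a call-style payoff and a one-step discount; all names and parameter values are assumptions, not the paper's diffusion setting). In this toy the problem happens to be time-consistent, so the iteration simply recovers the usual optimal rule, but the operator logic it exercises is the same.

    # Toy fixed-point iteration for an equilibrium-style stopping set; a sketch of
    # the operator idea only, not the paper's construction.
    import numpy as np

    N = 50                    # states 0..N (toy grid)
    beta = 0.98               # one-step discount factor (assumption)
    x = np.arange(N + 1)
    g = np.maximum(x - 20, 0).astype(float)   # toy call-style stopping payoff

    def value_given_region(R):
        """Value of the rule 'stop exactly on R' for a reflected symmetric random walk."""
        v = g.copy()
        for _ in range(5000):
            up = np.r_[v[1:], v[-1]]          # neighbour above (reflect at N)
            dn = np.r_[v[0], v[:-1]]          # neighbour below (reflect at 0)
            cont = beta * 0.5 * (up + dn)     # one-step continuation value
            v_new = np.where(R, g, cont)
            if np.max(np.abs(v_new - v)) < 1e-10:
                break
            v = v_new
        return v

    def operator(R):
        """Theta(R): states where stopping now is no worse than one more step of
        continuation when all future selves stop exactly on R."""
        v = value_given_region(R)
        up = np.r_[v[1:], v[-1]]
        dn = np.r_[v[0], v[:-1]]
        return g >= beta * 0.5 * (up + dn)

    R = g > 0                 # initial guess: stop wherever the payoff is positive
    for _ in range(100):
        R_next = operator(R)
        if np.array_equal(R_next, R):
            break             # fixed point reached
        R = R_next
    print("equilibrium-style stopping states:", np.where(R)[0])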

    Optimal stopping under probability distortion

    We formulate an optimal stopping problem for a geometric Brownian motion where the probability scale is distorted by a general nonlinear function. The problem is inherently time-inconsistent due to the Choquet integration involved. We develop a new approach, based on a reformulation of the problem where one optimally chooses the probability distribution or quantile function of the stopped state. An optimal stopping time can then be recovered from the obtained distribution/quantile function, either in a straightforward way for several important cases or in general via the Skorokhod embedding. This approach enables us to solve the problem in a fairly general manner with different shapes of the payoff and probability distortion functions. We also discuss economic interpretations of the results. In particular, we justify several liquidation strategies widely adopted in stock trading, including those of "buy and hold", "cut loss or take profit", "cut loss and let profit run" and "sell on a percentage of historical high". Comment: Published at http://dx.doi.org/10.1214/11-AAP838 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org).
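
    The distorted criterion is a Choquet integral: the value of a stopping rule is obtained by integrating the nonlinearly distorted tail probability w(P(X_tau > y)) over y, rather than by taking an ordinary expectation, and this is what destroys time consistency. Below is a rough Monte Carlo sketch of evaluating one such rule; the two-sided "cut loss at a, take profit at b" threshold, the power distortion w(p) = p^gamma, the forced horizon and all parameter values are assumptions chosen for illustration, not the paper's quantile-based solution.

    # Monte Carlo sketch of the distorted (Choquet) value of a threshold stopping
    # rule for a geometric Brownian motion; parameters are illustrative assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    mu, sigma, x0 = 0.02, 0.3, 1.0      # GBM drift, volatility, initial value (toy)
    a, b = 0.8, 1.5                     # "cut loss at a, take profit at b"
    dt, horizon = 1 / 252, 10.0         # daily steps, forced stop after 10 years
    n_paths, n_steps = 20000, int(horizon / dt)

    def w(p, gamma=0.65):
        """A simple power probability-distortion function (assumption)."""
        return p ** gamma

    # Simulate the stopped value X_tau under the two-sided threshold rule.
    stopped = np.full(n_paths, x0)
    alive = np.ones(n_paths, dtype=bool)
    x = np.full(n_paths, x0)
    for _ in range(n_steps):
        z = rng.standard_normal(n_paths)
        step = np.exp((mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z)
        x = np.where(alive, x * step, x)
        hit = alive & ((x <= a) | (x >= b))
        stopped[hit] = x[hit]
        alive &= ~hit
    stopped[alive] = x[alive]           # paths that never exit stop at the horizon

    # Empirical Choquet integral: sum of Y_(i) * [w((n-i+1)/n) - w((n-i)/n)]
    # with the stopped values Y sorted in ascending order.
    y = np.sort(stopped)
    n = len(y)
    k = np.arange(1, n + 1)
    weights = w((n - k + 1) / n) - w((n - k) / n)
    print("distorted value of the rule:", np.dot(y, weights))
    print("undistorted expectation    :", stopped.mean())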

    Time-Consistent Mean-Variance Portfolio Selection in Discrete and Continuous Time

    It is well known that mean-variance portfolio selection is a time-inconsistent optimal control problem in the sense that it does not satisfy Bellman's optimality principle, and therefore the usual dynamic programming approach fails. We develop a time-consistent formulation of this problem, which is based on a local notion of optimality called local mean-variance efficiency, in a general semimartingale setting. We start in discrete time, where the formulation is straightforward, and then find the natural extension to continuous time. This complements and generalises the formulation by Basak and Chabakauri (2010) and the corresponding example in Björk and Murgoci (2010), where the treatment and the notion of optimality rely on an underlying Markovian framework. We justify the continuous-time formulation by showing that it coincides with the continuous-time limit of the discrete-time formulation. The proof of this convergence is based on a global description of the locally optimal strategy in terms of the structure condition and the Föllmer-Schweizer decomposition of the mean-variance tradeoff. As a byproduct, this also gives new convergence results for the Föllmer-Schweizer decomposition, i.e. for locally risk-minimising strategies.
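
    In discrete time the game-theoretic reading is concrete: the self acting at each date optimizes its mean-variance criterion taking the strategies of all later selves as given, and an equilibrium is found by backward induction. The toy sketch below works this out for an i.i.d. single-asset market with deterministic dollar-amount strategies and zero interest rate (all names and parameter values are assumptions, and the setting is far simpler than the paper's general semimartingale framework); it recovers the constant myopic amount mu / (gamma * sigma^2) familiar from the Basak and Chabakauri type of solution.

    # Backward-induction sketch of a time-consistent mean-variance rule in a toy
    # i.i.d. market; illustrative assumptions only, not the paper's formulation.
    import numpy as np

    mu, sigma, gamma, T = 0.05, 0.2, 2.0, 10   # excess-return mean/vol, risk aversion, periods

    # The self at time t chooses a dollar amount theta[t] in the risky asset, taking
    # the deterministic amounts of its future selves as given and maximizing
    #   E_t[W_T] - (gamma / 2) * Var_t[W_T].
    theta = np.zeros(T)
    for t in reversed(range(T)):
        # The variance contributed by future selves, (theta[t+1:]**2).sum() * sigma**2,
        # is a constant in theta[t], so the first-order condition reduces to
        #   mu - gamma * sigma^2 * theta[t] = 0.
        theta[t] = mu / (gamma * sigma**2)

    print("equilibrium dollar amounts per period:", theta)
    # With i.i.d. returns the equilibrium amount is the same myopic mu/(gamma*sigma^2)
    # at every date, in contrast with the pre-committed mean-variance strategy.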