Subgame-perfect equilibrium strategies for time-inconsistent recursive stochastic control problems
We study time-inconsistent recursive stochastic control problems. Since for
this class of problems classical optimal controls may fail to exist or to be
relevant in practice, we focus on subgame-perfect equilibrium policies. The
approach followed in our work relies on the stochastic maximum principle: we
adapt the classical spike variation technique to obtain a characterization of
equilibrium strategies in terms of a generalized second-order Hamiltonian
function defined through a pair of backward stochastic differential equations.
The theoretical results are applied in the financial field to finite-horizon
investment-consumption policies with non-exponential discounting.
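The time inconsistency induced by non-exponential discounting can be illustrated in a few lines. The following is a minimal sketch, not taken from the paper: the hyperbolic discount function, reward amounts, and delays are all illustrative assumptions, chosen only to exhibit a preference reversal that an exponential discounter could never display.

```python
def hyperbolic(t, k=1.0):
    """Hyperbolic discount factor 1/(1 + k*t) -- a common non-exponential choice."""
    return 1.0 / (1.0 + k * t)

def value(reward, delay, now):
    """Discounted value at time `now` of `reward` received at time `delay`."""
    return reward * hyperbolic(delay - now)

# Hypothetical rewards: A pays 10 at time 4, B pays 15 at time 6.
# Evaluated at time 0, the later, larger reward B is preferred...
v_a0, v_b0 = value(10, 4, 0), value(15, 6, 0)
# ...but re-evaluated at time 3, the sooner reward A wins.  Under
# exponential discounting the ranking could never flip, because the
# discount ratio between two fixed dates is constant over time.
v_a3, v_b3 = value(10, 4, 3), value(15, 6, 3)
print(v_a0 < v_b0, v_a3 > v_b3)  # → True True
```

This preference reversal is exactly why classical "optimal" controls lose relevance and equilibrium (subgame-perfect) policies are studied instead.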
Optimal Controls for Forward-Backward Stochastic Differential Equations: Time-Inconsistency and Time-Consistent Solutions
This paper is concerned with an optimal control problem for a
forward-backward stochastic differential equation (FBSDE, for short) with a
recursive cost functional determined by a backward stochastic Volterra integral
equation (BSVIE, for short). It is found that such an optimal control problem
is time-inconsistent in general, even if the cost functional is reduced to a
classical Bolza type one as in Peng [50], Lim-Zhou [41], and Yong [74].
Therefore, instead of finding a global optimal control (which is
time-inconsistent), we will look for a time-consistent and locally optimal
equilibrium strategy, which can be constructed via the solution of an
associated equilibrium Hamilton-Jacobi-Bellman (HJB, for short) equation. A
verification theorem for the local optimality of the equilibrium strategy is
proved by means of the generalized Feynman-Kac formula for BSVIEs and some
stability estimates of the representation for parabolic partial differential
equations (PDEs, for short). Under certain conditions, it is proved that the
equilibrium HJB equation, which is a nonlocal PDE, admits a unique classical
solution. As special cases and applications, linear-quadratic problems, a
mean-variance model, a social planner problem with heterogeneous Epstein-Zin
utilities, and a Stackelberg game are briefly investigated. It turns out that
our framework covers not only the optimal control problems for FBSDEs
studied in [50,41,74], among others, but also problems with general
discounting and with nonlinear dependence on conditional expectations of the
terminal state, studied in Yong [75,77] and Björk-Khapko-Murgoci [7].
Time-Consistent Mean-Variance Portfolio Selection in Discrete and Continuous Time
It is well known that mean-variance portfolio selection is a
time-inconsistent optimal control problem in the sense that it does not satisfy
Bellman's optimality principle and therefore the usual dynamic programming
approach fails. We develop a time-consistent formulation of this problem,
which is based on a local notion of optimality called local mean-variance
efficiency, in a general semimartingale setting. We start in discrete time,
where the formulation is straightforward, and then find the natural extension
to continuous time. This complements and generalises the formulation by Basak
and Chabakauri (2010) and the corresponding example in Björk and Murgoci
(2010), where the treatment and the notion of optimality rely on an underlying
Markovian framework. We justify the continuous-time formulation by showing that
it coincides with the continuous-time limit of the discrete-time formulation.
The proof of this convergence is based on a global description of the locally
optimal strategy in terms of the structure condition and the
Föllmer-Schweizer decomposition of the mean-variance tradeoff. As a
byproduct, this also gives new convergence results for the Föllmer-Schweizer
decomposition, i.e. for locally risk-minimising strategies.
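The failure of Bellman's principle for the variance criterion stems from the law of total variance, Var(X) = E[Var(X | F_t)] + Var(E[X | F_t]): minimising conditional variance stage by stage ignores the second term. The following numerical sketch checks the identity on a hypothetical two-period return model (the model and its parameters are illustrative assumptions, not from the paper).

```python
import numpy as np

rng = np.random.default_rng(0)
# Two-period return X = X1 + X2, where X2's distribution depends on X1
# (hypothetical model): X1 is +/-1 with equal probability, X2 ~ N(0.5*X1, 1).
n = 100_000
x1 = rng.choice([-1.0, 1.0], size=n)
x2 = rng.normal(loc=0.5 * x1, scale=1.0)
x = x1 + x2

# Conditional moments given X1: E[X | X1] = 1.5*X1, Var(X | X1) = 1.
cond_mean = 1.5 * x1
cond_var = np.full(n, 1.0)

# Law of total variance: Var(X) = E[Var(X|X1)] + Var(E[X|X1]).
# The second term, Var(E[X|X1]), is the cross-period contribution that
# stage-by-stage variance minimisation cannot see -- the root of the
# time inconsistency of mean-variance portfolio selection.
lhs = x.var()
rhs = cond_var.mean() + cond_mean.var()
print(abs(lhs - rhs) < 0.1)  # → True (both ≈ 3.25)
```

In the exact model Var(X) = 1 + (1.5)^2 = 3.25, so the two sides agree up to Monte Carlo error.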
A tutorial on recursive models for analyzing and predicting path choice behavior
The problem at the heart of this tutorial consists in modeling the path
choice behavior of network users. This problem has been extensively studied in
transportation science, where it is known as the route choice problem. In this
literature, individuals' choices of paths are typically predicted using
discrete choice models. This article is a tutorial on a specific category of
discrete choice models, called recursive models, and it makes three main
contributions: First,
for the purpose of assisting future research on route choice, we provide a
comprehensive background on the problem, linking it to different fields
including inverse optimization and inverse reinforcement learning. Second, we
formally introduce the problem and the recursive modeling idea along with an
overview of existing models, their properties and applications. Third, we
extensively analyze illustrative examples from different angles so that a
novice reader can gain intuition on the problem and the advantages provided by
recursive models in comparison to path-based ones.
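To make the recursive modeling idea concrete, the sketch below evaluates a recursive logit model on a toy acyclic network. The network, node names, and link utilities are illustrative assumptions; the value-function recursion V(i) = log Σ_j exp(v(i,j) + V(j)), with V = 0 at the destination, is the standard recursive logit one, and link-by-link choice probabilities multiply into path probabilities without any path enumeration.

```python
import math

# Toy acyclic network (hypothetical): nodes a, b, c with destination d.
# v[(i, j)] is the instantaneous link utility (e.g. negative travel time).
v = {("a", "b"): -1.0, ("a", "c"): -1.5,
     ("b", "d"): -1.0, ("c", "d"): -0.5}
succ = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}

# Value function: expected maximum utility-to-destination, computed by
# backward recursion in reverse topological order, with V(d) = 0.
V = {"d": 0.0}
for node in ["b", "c", "a"]:
    V[node] = math.log(sum(math.exp(v[(node, j)] + V[j])
                           for j in succ[node]))

def link_prob(i, j):
    """Logit link choice probability P(j | i) = exp(v(i,j) + V(j) - V(i))."""
    return math.exp(v[(i, j)] + V[j] - V[i])

# A path's probability is the product of its link probabilities, so the
# model implicitly prices every path in the network at once.
p_abd = link_prob("a", "b") * link_prob("b", "d")
p_acd = link_prob("a", "c") * link_prob("c", "d")
print(round(p_abd + p_acd, 6))  # → 1.0
```

This is the contrast with path-based models drawn in the tutorial: a path-based logit would require enumerating a choice set of paths up front, whereas the recursive formulation needs only local link utilities and one backward pass over the network.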