Bounding stationary averages of polynomial diffusions via semidefinite programming
We introduce an algorithm based on semidefinite programming that yields
increasing (resp. decreasing) sequences of lower (resp. upper) bounds on
polynomial stationary averages of diffusions with polynomial drift vector and
diffusion coefficients. The bounds are obtained by optimising an objective,
determined by the stationary average of interest, over the set of real vectors
defined by certain linear equalities and semidefinite inequalities which are
satisfied by the moments of any stationary measure of the diffusion. We
exemplify the use of the approach through several applications: a Bayesian
inference problem; the computation of Lyapunov exponents of linear ordinary
differential equations perturbed by multiplicative white noise; and a
reliability problem from structural mechanics. Additionally, we prove that the
bounds converge to the infimum and supremum of the set of stationary averages
for certain SDEs associated with the computation of the Lyapunov exponents, and
we provide numerical evidence of convergence in more general settings.
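The abstract above optimizes over moment vectors constrained by linear equalities implied by the generator of the diffusion. As a minimal illustration of where those linear constraints come from (not the paper's SDP machinery, and using hypothetical parameter values), consider the Ornstein-Uhlenbeck process, for which the stationarity conditions E[(Lf)(X)] = 0 applied to monomials happen to close into a recursion that pins the moments down exactly:

```python
# Stationary moment constraints E[(Lf)(X)] = 0 for the
# Ornstein-Uhlenbeck diffusion dX = -theta*X dt + sigma*dW,
# whose generator is (Lf)(x) = -theta*x*f'(x) + (sigma^2/2)*f''(x).
# Taking f(x) = x^n yields the linear relation
#   -theta*n*m_n + (sigma^2/2)*n*(n-1)*m_{n-2} = 0.
# Unlike the general polynomial case, this recursion closes, so the
# stationary moments are determined exactly and no SDP is needed.

def ou_stationary_moments(theta, sigma, max_order):
    """Return [m_0, m_1, ..., m_max_order] for the stationary OU law."""
    m = [1.0, 0.0]  # m_0 = 1 (probability measure), m_1 = 0 (zero mean)
    for n in range(2, max_order + 1):
        m.append(sigma**2 * (n - 1) / (2 * theta) * m[n - 2])
    return m

moments = ou_stationary_moments(theta=1.0, sigma=1.0, max_order=4)
# Stationary law is N(0, sigma^2/(2*theta)) = N(0, 0.5), so
# m_2 = 0.5 and m_4 = 3 * 0.5^2 = 0.75, matching the recursion.
```

For a general polynomial drift the recursion does not close, which is exactly why the paper relaxes the problem to linear and semidefinite constraints on a truncated moment vector.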
The Euler-Maruyama approximation for the absorption time of the CEV diffusion
A standard convergence analysis of the simulation schemes for the hitting
times of diffusions typically requires non-degeneracy of their coefficients on
the boundary, which excludes the possibility of absorption. In this paper we
consider the CEV diffusion from mathematical finance and show how a weakly
consistent approximation for the absorption time can be constructed, using the
Euler-Maruyama scheme.
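As a hedged sketch of the setting (not the paper's consistency construction, and with all parameter values hypothetical): the driftless CEV diffusion dS = sigma * S^beta dW with 0 < beta < 1 can hit zero, and a naive Euler-Maruyama simulation records the first grid time at which the discrete path crosses zero:

```python
import random

def cev_absorption_time_em(s0, sigma, beta, h, t_max, rng):
    """One Euler-Maruyama path of dS = sigma * S^beta dW (0 < beta < 1);
    returns the first grid time at which the path falls to <= 0, or t_max
    if it is not absorbed by then.  The positive part max(s, 0) keeps the
    power S^beta defined once the scheme overshoots zero."""
    s, t = s0, 0.0
    while t < t_max:
        z = rng.gauss(0.0, 1.0)
        s = s + sigma * max(s, 0.0) ** beta * (h ** 0.5) * z
        t += h
        if s <= 0.0:
            return t   # discrete absorption time
    return t_max       # not absorbed on [0, t_max]

rng = random.Random(0)
times = [cev_absorption_time_em(1.0, 1.0, 0.5, 0.01, 10.0, rng)
         for _ in range(200)]
absorbed_fraction = sum(t < 10.0 for t in times) / len(times)
```

The point of the abstract is precisely that proving weak consistency of such a hitting-time approximation is delicate because the diffusion coefficient degenerates on the absorbing boundary.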
From Infinite to Finite Programs: Explicit Error Bounds with Applications to Approximate Dynamic Programming
We consider linear programming (LP) problems in infinite dimensional spaces
that are in general computationally intractable. Under suitable assumptions, we
develop an approximation bridge from the infinite-dimensional LP to tractable
finite convex programs in which the performance of the approximation is
quantified explicitly. To this end, we draw on recent developments in two
areas, randomized optimization and first-order methods, leading to a priori
as well as a posteriori performance guarantees. We illustrate the generality and
implications of our theoretical results in the special case of the long-run
average cost and discounted cost optimal control problems for Markov decision
processes on Borel spaces. The applicability of the theoretical results is
demonstrated through a constrained linear quadratic optimal control problem and
a fisheries management problem.
Comment: 30 pages, 5 figures
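As a toy illustration of the finite object being approximated (not the paper's approximation bridge, and with all numbers hypothetical): for a finite discounted-cost MDP, the infinite-dimensional LP of dynamic programming collapses to a finite convex program whose optimum is the value function, i.e. the unique fixed point of the Bellman operator, which we can locate by value iteration:

```python
# Two-state, two-action discounted MDP (all numbers hypothetical).
# The discounted-cost LP maximizes sum(v) subject to
#   v(s) <= c(s, a) + gamma * sum_t P(t | s, a) * v(t)  for all s, a,
# and its optimum is the Bellman fixed point computed below.

GAMMA = 0.9
COST = [[1.0, 2.0], [0.5, 3.0]]          # COST[s][a]
P = [[[0.8, 0.2], [0.3, 0.7]],           # P[s][a] = transition row
     [[0.5, 0.5], [0.9, 0.1]]]

def bellman(v):
    """One application of the Bellman optimality operator."""
    return [min(COST[s][a] + GAMMA * sum(P[s][a][t] * v[t] for t in range(2))
                for a in range(2))
            for s in range(2)]

v = [0.0, 0.0]
for _ in range(500):                     # gamma^500 is negligible
    v = bellman(v)
# v now satisfies v = min_a [c + gamma * P v] to numerical tolerance,
# i.e. it is feasible and optimal for the discounted-cost LP.
```

The paper's contribution is the harder direction: quantifying the error when the state space is a general Borel space and the LP must be truncated to something tractable.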
Optimization of mesh hierarchies in Multilevel Monte Carlo samplers
We perform a general optimization of the parameters in the Multilevel Monte
Carlo (MLMC) discretization hierarchy based on uniform discretization methods
with general approximation orders and computational costs. We optimize
hierarchies with geometric and non-geometric sequences of mesh sizes and show
that geometric hierarchies, when optimized, are nearly optimal and have the
same asymptotic computational complexity as non-geometric optimal hierarchies.
We discuss how enforcing constraints on parameters of MLMC hierarchies affects
the optimality of these hierarchies. These constraints include an upper and a
lower bound on the mesh size or enforcing that the number of samples and the
number of discretization elements are integers. We also discuss the optimal
tolerance splitting between the bias and the statistical error contributions
and its asymptotic behavior. To provide numerical grounds for our theoretical
results, we apply these optimized hierarchies together with the Continuation
MLMC Algorithm. The first example considers a three-dimensional elliptic
partial differential equation with random inputs. Its space discretization is
based on continuous piecewise trilinear finite elements and the corresponding
linear system is solved by either a direct or an iterative solver. The second
example considers a one-dimensional It\^o stochastic differential equation
discretized by a Milstein scheme.
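A minimal sketch of the kind of estimator whose hierarchy the paper optimizes (plain geometric refinement with factor 2, Euler-Maruyama rather than Milstein, and hypothetical sample counts rather than optimized ones): the MLMC telescoping sum estimates the coarsest level directly and each refinement as a coupled fine/coarse correction.

```python
import random

def mlmc_gbm_mean(s0, mu, sig, T, L, n_samples, rng):
    """Multilevel Monte Carlo estimate of E[S_T] for the GBM
    dS = mu*S dt + sig*S dW, using Euler-Maruyama on the geometric
    mesh hierarchy h_l = T / 2**l, l = 0..L.  The level-l correction
    couples fine (2**l steps) and coarse (2**(l-1) steps) paths by
    summing the fine Brownian increments pairwise."""
    est = 0.0
    for l in range(L + 1):
        n_f = 2 ** l
        h_f = T / n_f
        acc = 0.0
        for _ in range(n_samples[l]):
            dw = [rng.gauss(0.0, h_f ** 0.5) for _ in range(n_f)]
            sf = s0
            for w in dw:                     # fine path
                sf += mu * sf * h_f + sig * sf * w
            if l == 0:
                acc += sf                    # coarsest level: plain MC
            else:
                sc, h_c = s0, 2 * h_f
                for k in range(0, n_f, 2):   # coupled coarse path
                    sc += mu * sc * h_c + sig * sc * (dw[k] + dw[k + 1])
                acc += sf - sc               # level correction
        est += acc / n_samples[l]
    return est

rng = random.Random(1)
est = mlmc_gbm_mean(1.0, 0.05, 0.2, 1.0, L=4,
                    n_samples=[4000, 2000, 1000, 500, 250], rng=rng)
# exact value: E[S_T] = s0 * exp(mu * T) ≈ 1.0513
```

The paper's question is how to choose the mesh sizes and per-level sample counts optimally; the geometric factor-2 hierarchy and decreasing sample counts above are the usual heuristic, which the paper shows is nearly optimal when tuned.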
Risk-Sensitive Reinforcement Learning: A Constrained Optimization Viewpoint
The classic objective in a reinforcement learning (RL) problem is to find a
policy that minimizes, in expectation, a long-run objective such as the
infinite-horizon discounted or long-run average cost. In many practical
applications, optimizing the expected value alone is not sufficient, and it may
be necessary to include a risk measure in the optimization process, either as
the objective or as a constraint. Various risk measures have been proposed in
the literature, e.g., mean-variance tradeoff, exponential utility, the
percentile performance, value at risk, conditional value at risk, prospect
theory and its later enhancement, cumulative prospect theory. In this article,
we focus on the combination of risk criteria and reinforcement learning in a
constrained optimization framework, i.e., a setting where the goal is to find
a policy that optimizes the usual objective of infinite-horizon
discounted/average cost, while ensuring that an explicit risk constraint is
satisfied. We introduce the risk-constrained RL framework, cover popular risk
measures based on variance, conditional value-at-risk and cumulative prospect
theory, and present a template for a risk-sensitive RL algorithm. We survey
some of our recent work on this topic, covering problems encompassing
discounted cost, average cost, and stochastic shortest path settings, together
with the aforementioned risk measures in a constrained framework. This
non-exhaustive survey is aimed at giving a flavor of the challenges involved in
solving a risk-sensitive RL problem, and outlining some potential future
research directions.
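One of the risk measures surveyed above, conditional value-at-risk (CVaR), is simple to estimate from cost samples: VaR at level alpha is the alpha-quantile of the cost, and CVaR is the average cost in the worst (1 - alpha) tail. A risk-constrained RL algorithm would keep such an estimate below a user-supplied threshold. A minimal empirical estimator (index-rounding conventions vary across references):

```python
def empirical_var_cvar(costs, alpha):
    """Empirical VaR and CVaR of a cost sample at level alpha:
    VaR is the alpha-quantile, CVaR the mean of the worst
    (1 - alpha) fraction of costs (higher cost = worse)."""
    xs = sorted(costs)
    n = len(xs)
    idx = min(int(alpha * n), n - 1)  # index of the alpha-quantile
    var = xs[idx]
    tail = xs[idx:]                   # worst (1 - alpha) tail
    return var, sum(tail) / len(tail)

costs = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
var, cvar = empirical_var_cvar(costs, alpha=0.8)
# var = 9 (the 0.8-quantile), cvar = (9 + 10) / 2 = 9.5
```

Unlike variance, CVaR is a coherent risk measure that focuses only on the unfavorable tail, which is one reason it appears throughout the constrained formulations discussed above.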
An Optimization Approach to Weak Approximation of Lévy-Driven Stochastic Differential Equations
We propose an optimization approach to weak approximation of Lévy-driven stochastic differential equations. We employ a mathematical programming framework to obtain numerical upper and lower bounds on the target expectation, where the optimization procedure reduces to a polynomial programming problem. An advantage of our approach is that all we need is a closed form of the Lévy measure, rather than exact simulation of the increments or a shot-noise representation for the time-discretization approximation. We also investigate methods for simultaneous approximation at several intermediate time points.
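The abstract's key requirement is only a closed form of the Lévy measure. As a toy illustration of reading expectation information directly off the Lévy measure, with no path simulation and with hypothetical parameter values: for a compound Poisson process with Lévy measure nu(dx) = lam * F(dx), the cumulants of X_t are kappa_n(X_t) = t * integral of x^n against nu, so with Gaussian N(m, v) jump sizes the mean and variance of X_t come out exactly.

```python
def compound_poisson_mean_var(t, lam, m, v):
    """Mean and variance of X_t for a compound Poisson process with
    jump intensity lam and N(m, v) jump sizes, read straight off the
    Levy measure: kappa_1 = t*lam*E[J] and kappa_2 = t*lam*E[J^2],
    using E[J^2] = v + m^2 for Gaussian jumps."""
    mean = t * lam * m
    var = t * lam * (v + m * m)
    return mean, var

mean, var = compound_poisson_mean_var(t=2.0, lam=3.0, m=1.0, v=0.25)
# mean = 2 * 3 * 1 = 6.0, var = 2 * 3 * (0.25 + 1) = 7.5
```

The paper's polynomial-programming bounds generalize this idea: moments of the Lévy measure enter as data in an optimization problem, instead of being fed into a simulation scheme.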
Peak Value-at-Risk Estimation for Stochastic Differential Equations using Occupation Measures
This paper proposes an algorithm to upper-bound maximal quantile statistics
of a state function over the course of a Stochastic Differential Equation (SDE)
system execution. This chance-peak problem is posed as a nonconvex program
aiming to maximize the Value-at-Risk (VaR) of a state function along SDE state
distributions. The VaR problem is upper-bounded by an infinite-dimensional
Second-Order Cone Program in occupation measures through the use of one-sided
Cantelli or Vysochanskii-Petunin inequalities. These upper bounds on the true
quantile statistics may be approximated from above by a sequence of
Semidefinite Programs in increasing size using the moment-Sum-of-Squares
hierarchy when all data is polynomial. Effectiveness of this approach is
demonstrated on example stochastic polynomial dynamical systems.
Comment: 21 pages, 4 figures, 10 tables
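The Cantelli step in the abstract can be isolated: given only a mean and variance (which the paper obtains through occupation measures and the moment-SOS hierarchy, not assumed here), the one-sided Cantelli inequality P(X >= mean + k) <= var / (var + k^2) yields a distribution-free upper bound on any quantile.

```python
import math

def cantelli_var_bound(mean, var, eps):
    """Moment-based upper bound on the (1 - eps)-quantile (VaR at risk
    level eps): Cantelli's one-sided inequality gives
    VaR_{1-eps}(X) <= mean + sqrt(var * (1 - eps) / eps)."""
    return mean + math.sqrt(var * (1.0 - eps) / eps)

# For a standard normal, the true 95% quantile is about 1.645, while
# the moment bound is sqrt(0.95 / 0.05) = sqrt(19) ~= 4.359 -- valid
# but conservative, as any distribution-free bound must be.
bound = cantelli_var_bound(0.0, 1.0, 0.05)
```

The sharper Vysochanskii-Petunin inequality mentioned in the abstract tightens this bound under a unimodality assumption; the structure of the resulting cone program is the same.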