Shape-constrained Estimation of Value Functions
We present a fully nonparametric method to estimate the value function, via
simulation, in the context of expected infinite-horizon discounted rewards for
Markov chains. Estimating such value functions plays an important role in
approximate dynamic programming and applied probability in general. We
incorporate "soft information" into the estimation algorithm, such as knowledge
of convexity, monotonicity, or Lipschitz constants. In the presence of such
information, a nonparametric estimator for the value function can be computed
that is provably consistent as the simulated time horizon tends to infinity. As
an application, we apply our method to price tolling agreement contracts in
energy markets.
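The abstract above combines Monte Carlo estimation of a discounted value function with a shape constraint. A minimal sketch of that idea, under assumptions not taken from the paper (a toy birth-death chain with reward r(s) = s, and monotonicity as the "soft information", enforced by a hand-rolled pool-adjacent-violators projection):

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: project y onto non-decreasing sequences."""
    out = []
    for v in np.asarray(y, dtype=float):
        out.append([v, 1])  # [block mean, block size]
        # merge adjacent blocks while the monotonicity constraint is violated
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, c2 = out.pop()
            m1, c1 = out.pop()
            out.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([[m] * c for m, c in out])

rng = np.random.default_rng(0)
beta, n_paths, horizon = 0.9, 200, 60
states = np.arange(5)

def step(s):
    """Reflecting random walk on {0, ..., 4}."""
    if s == 0:
        return rng.integers(0, 2)
    if s == 4:
        return 4 - rng.integers(0, 2)
    return s + rng.choice([-1, 1])

# crude simulation estimate of V(s0) = E[sum_t beta^t r(s_t)], r(s) = s
raw = np.empty(len(states), dtype=float)
for s0 in states:
    total = 0.0
    for _ in range(n_paths):
        s, disc = s0, 1.0
        for _ in range(horizon):
            total += disc * s
            disc *= beta
            s = step(s)
    raw[s0] = total / n_paths

v_hat = pava(raw)  # impose the (assumed known) monotonicity of V
```

The projection step is the point: the raw Monte Carlo estimates can be noisy and non-monotone, while the projected estimate respects the known shape at no extra simulation cost.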
Online Stochastic Gradient Descent with Arbitrary Initialization Solves Non-smooth, Non-convex Phase Retrieval
In recent literature, a general two step procedure has been formulated for
solving the problem of phase retrieval. First, a spectral technique is used to
obtain a constant-error initial estimate, following which, the estimate is
refined to arbitrary precision by first-order optimization of a non-convex loss
function. Numerical experiments, however, seem to suggest that simply running
the iterative schemes from a random initialization may also lead to
convergence, albeit at the cost of slightly higher sample complexity. In this
paper, we prove that, in fact, constant step size online stochastic gradient
descent (SGD) converges from arbitrary initializations for the non-smooth,
non-convex amplitude squared loss objective. In this setting, online SGD is
also equivalent to the randomized Kaczmarz algorithm from numerical analysis.
Our analysis can easily be generalized to other single index models. It also
makes use of new ideas from stochastic process theory, including the notion of
a summary state space, which we believe will be of use for the broader field of
non-convex optimization.
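The equivalence mentioned in the abstract can be illustrated with a toy noiseless Gaussian instance (sizes, seed, and iteration count are illustrative choices, not taken from the paper): an online SGD step on the amplitude loss (|aᵀx| − bᵢ)², with the step scaled by ‖aᵢ‖², is exactly a randomized Kaczmarz projection onto the hyperplane aᵀx = ±bᵢ.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 20, 400                       # heavily oversampled toy instance
x_true = rng.standard_normal(n)
A = rng.standard_normal((m, n))
b = np.abs(A @ x_true)               # magnitude-only measurements

x = rng.standard_normal(n)           # arbitrary random initialization
for _ in range(100 * m):
    i = rng.integers(m)
    a = A[i]
    r = a @ x
    # SGD on (|a.x| - b_i)^2 with step 1/||a||^2 = randomized Kaczmarz
    # projection onto the hyperplane a.x = sign(a.x) * b_i
    x += (np.sign(r) * b[i] - r) / (a @ a) * a

# the signal is only identifiable up to a global sign
err = min(np.linalg.norm(x - x_true), np.linalg.norm(x + x_true))
```

Note the global sign ambiguity in the error: phase retrieval from magnitudes can only recover x up to ±1.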
Generalized Stochastic Gradient Learning
We study the properties of generalized stochastic gradient (GSG) learning in forward-looking models. We examine how the conditions for stability of standard stochastic gradient (SG) learning both differ from and are related to E-stability, which governs stability under least squares learning. SG algorithms are sensitive to units of measurement, and we show that there is a transformation of variables for which E-stability governs SG stability. GSG algorithms with constant gain have a deeper justification in terms of parameter drift, robustness and risk sensitivity.
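The constant-gain motivation in the last sentence can be seen in a small simulation. This is a generic illustration, not the paper's model: an agent tracks a slowly drifting regression coefficient b_t with a constant-gain SG update on the squared forecast error, where a decreasing gain would eventually stop adapting.

```python
import numpy as np

rng = np.random.default_rng(2)
gamma, T = 0.05, 5000                  # constant gain, sample length
b_true, b_hat = 1.0, 0.0               # drifting truth, agent's estimate
track_err = []
for t in range(T):
    b_true += 0.01 * rng.standard_normal()   # parameter drift (random walk)
    x = rng.standard_normal()
    y = b_true * x + 0.1 * rng.standard_normal()
    # constant-gain SG step on the forecast-error loss (y - b_hat*x)^2 / 2
    b_hat += gamma * x * (y - b_hat * x)
    track_err.append(abs(b_hat - b_true))

avg_late_err = float(np.mean(track_err[-1000:]))
```

The estimate never settles down exactly, but its tracking error stays bounded despite the drift, which is the robustness property a decreasing-gain scheme lacks. The update's dependence on the scale of x also hints at the units-of-measurement sensitivity discussed in the abstract.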
Recursive Aggregation of Estimators by Mirror Descent Algorithm with Averaging
We consider a recursive algorithm to construct an aggregated estimator from a
finite number of base decision rules in the classification problem. The
estimator approximately minimizes a convex risk functional under the
l1-constraint. It is defined by a stochastic version of the mirror descent
algorithm (i.e., of the method which performs gradient descent in the dual
space) with an additional averaging. The main result of the paper is an upper
bound for the expected accuracy of the proposed estimator. This bound is of the
order $\sqrt{(\log M)/t}$ with an explicit and small constant factor, where $M$
is the dimension of the problem and $t$ stands for the sample size. A similar
bound is proved for a more general setting that covers, in particular, the
regression model with squared loss.
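A minimal sketch of the scheme described above, on an invented toy problem (threshold classifiers as the base rules, squared loss, and a step size matching the $\sqrt{(\log M)/t}$ rate; none of these specifics come from the paper). With the entropic mirror map, gradient descent in the dual space becomes exponentiated-gradient updates over the simplex, and the final aggregate uses the extra averaging step:

```python
import numpy as np

rng = np.random.default_rng(3)
cutoffs = np.array([-1.0, -0.5, 0.0, 0.5, 1.0])
M, T = len(cutoffs), 2000
eta = np.sqrt(np.log(M) / T)       # step size behind the sqrt((log M)/t) rate

z = np.zeros(M)                     # dual variable (accumulated gradients)
w_bar = np.zeros(M)                 # averaged aggregation weights
for t in range(T):
    x = rng.standard_normal()
    y = np.sign(x) if x != 0 else 1.0
    f = np.sign(x - cutoffs)        # predictions of the M base rules
    w = np.exp(z - z.max())         # mirror map: dual point -> simplex
    w /= w.sum()
    g = 2.0 * (w @ f - y) * f       # stochastic gradient of (w.f - y)^2
    z -= eta * g                    # gradient step in the dual space
    w_bar += w / T                  # averaging of the iterates
```

Here the rule with cutoff 0 reproduces the label exactly, so the averaged weights should concentrate on it; the l1-constraint is satisfied automatically because every iterate lives on the simplex.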