A survey of max-type recursive distributional equations
In certain problems in a variety of applied probability settings (from
probabilistic analysis of algorithms to statistical physics), the central
requirement is to solve a recursive distributional equation of the form
X =^d g((\xi_i, X_i), i \geq 1). Here (\xi_i) and g(\cdot) are given, and the
X_i are independent copies of a random variable X whose distribution is the
unknown. We survey this area,
emphasizing examples where the function g(\cdot) is essentially a ``maximum''
or ``minimum'' function. We draw attention to the theoretical question of
endogeny: in the associated recursive tree process (X_i), are the X_i
measurable functions of the innovations process (\xi_i)?

Comment: Published at http://dx.doi.org/10.1214/105051605000000142 in the
Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute
of Mathematical Statistics (http://www.imstat.org).
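A fixed point of a max-type RDE such as the one in this abstract can be approximated numerically by a Monte Carlo "population dynamics" scheme: push an empirical pool of samples through the map g repeatedly. The specific recursion below (X =^d max(\xi, \beta X_1, \beta X_2) with uniform \xi and damping \beta < 1) is an illustrative assumption of mine, not an example from the survey; `pool_iteration` is likewise a hypothetical name.

```python
import random

def pool_iteration(pool, n_iter=100, beta=0.5, branching=2, rng=None):
    """Monte Carlo 'population dynamics' for a max-type RDE.

    Approximates a fixed point of the (illustrative) recursion
        X  =^d  max(xi, beta * X_1, ..., beta * X_k),
    with xi ~ Uniform(0, 1), by repeatedly pushing an empirical pool
    of samples through the map g."""
    rng = rng or random.Random(0)
    for _ in range(n_iter):
        pool = [
            max([rng.random()] + [beta * x for x in rng.sample(pool, branching)])
            for _ in range(len(pool))
        ]
    return pool

# Usage: the pool's empirical distribution approximates the fixed-point law.
pool = pool_iteration([0.0] * 1000)
```

With beta < 1 the pool stays in [0, 1]; the empirical distribution stabilizes after a few dozen sweeps.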
Hitting times and probabilities for imprecise Markov chains
We consider the problem of characterising expected hitting times and hitting probabilities for imprecise Markov chains. To this end, we consider three distinct ways in which imprecise Markov chains have been defined in the literature: as sets of homogeneous Markov chains, as sets of more general stochastic processes, and as game-theoretic probability models. Our first contribution is to show that all these different types of imprecise Markov chains have the same lower and upper expected hitting times, and similarly the hitting probabilities are the same for these three types. Moreover, we provide a characterisation of these quantities that directly generalises a similar characterisation for precise, homogeneous Markov chains.
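For precise homogeneous chains, expected hitting times h solve h = 1 + P h off the target set; the imprecise analogue replaces the single row P_i by a minimum over a credal set. A minimal value-iteration sketch, assuming each state's credal set is given by a finite list of candidate transition rows (its extreme points, over which a linear lower expectation is attained); the function name and data layout are my own, not from the paper:

```python
def lower_hitting_times(credal_rows, target, n_iter=500):
    """Lower expected hitting times of `target` for an imprecise Markov
    chain whose credal sets are finitely generated.

    Iterates the imprecise analogue of h = 1 + P h:
        h(i) = 0                                        if i in target,
        h(i) = 1 + min_{p in rows(i)} sum_j p[j] h(j)   otherwise.
    """
    n = len(credal_rows)
    h = [0.0] * n
    for _ in range(n_iter):
        h = [
            0.0 if i in target
            else 1.0 + min(sum(p[j] * h[j] for j in range(n))
                           for p in credal_rows[i])
            for i in range(n)
        ]
    return h

# Two states, target {1}; state 0 jumps to 1 with probability 0.5 or 0.6
# depending on which row of its credal set is chosen.
rows = [[[0.5, 0.5], [0.4, 0.6]], [[0.0, 1.0]]]
h = lower_hitting_times(rows, target={1})
# h[0] -> 1/0.6 (the most favourable row is selected), h[1] -> 0
```

In the precise case (one row per state) this reduces to ordinary value iteration for h = 1 + P h, matching the characterisation the abstract refers to.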
Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema
The asymptotic behavior of stochastic gradient algorithms is studied. Relying
on results from differential geometry (Lojasiewicz gradient inequality), the
single limit-point convergence of the algorithm iterates is demonstrated and
relatively tight bounds on the convergence rate are derived. In sharp contrast
to the existing asymptotic results, the new results presented here allow the
objective function to have multiple and non-isolated minima. The new results
also offer new insights into the asymptotic properties of several classes of
recursive algorithms which are routinely used in engineering, statistics,
machine learning and operations research.
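The setting of the abstract, non-isolated minima, can be illustrated on a toy objective whose minimum set is a whole manifold, e.g. f(x, y) = (x^2 + y^2 - 1)^2 / 4, minimised on the entire unit circle. The sketch below is my own illustration of single limit-point behaviour under noisy gradients and decaying steps, not the algorithm or analysis of the paper; all names and constants are assumptions.

```python
import random

def sgd_circle(n_steps=5000, seed=0):
    """SGD on f(x, y) = (x^2 + y^2 - 1)^2 / 4, whose minimum set is the
    whole unit circle (non-isolated minima).

    Gradient of f: (r^2 - 1) * (x, y) with r^2 = x^2 + y^2.
    Gaussian gradient noise plus a decaying step size a_n ~ n^(-0.7)."""
    rng = random.Random(seed)
    x, y = 2.0, 0.5
    for n in range(n_steps):
        a = 0.1 / (n + 1) ** 0.7                      # decaying step size
        r2 = x * x + y * y
        gx = (r2 - 1.0) * x + 0.05 * rng.gauss(0, 1)  # noisy gradient
        gy = (r2 - 1.0) * y + 0.05 * rng.gauss(0, 1)
        x, y = x - a * gx, y - a * gy
    return x, y

x, y = sgd_circle()
# (x, y) should settle near some single point of the unit circle,
# i.e. x*x + y*y close to 1
```

Which point of the circle the iterates settle at depends on the noise realisation; that a single limit point exists at all, despite the continuum of minima, is the kind of conclusion the Lojasiewicz-inequality argument delivers.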