    Asymptotic properties and optimization of some non-Markovian stochastic processes

    We study the limit behavior of certain classes of dependent random sequences (processes) that do not possess the Markov property. Assuming these processes depend on a control parameter, we show that optimization of the control can be reduced to a nonlinear optimization problem. Under certain hypotheses we establish the stability of such optimization problems.
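
    As a toy illustration of this reduction (not from the paper; the process, cost, and parameter values below are all assumptions), one can take a dependent, non-Markovian MA(1) sequence whose long-run average cost is a smooth function of the control parameter, and hand that function to an ordinary nonlinear optimizer:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def longrun_cost(theta, n=200_000, seed=0):
    # MA(1) sequence X_k = eps_k + theta * eps_{k-1}: dependent, and
    # non-Markov as a scalar process. Its long-run mean square is
    # 1 + theta**2, so the empirical average below approximates a smooth
    # deterministic function of the control parameter theta.
    eps = np.random.default_rng(seed).normal(size=n + 1)
    x = eps[1:] + theta * eps[:-1]
    return float(np.mean(x ** 2))

# A fixed seed (common random numbers) makes the simulated cost deterministic
# in theta, so the control problem reduces to ordinary nonlinear optimization.
res = minimize_scalar(longrun_cost, bounds=(-2.0, 2.0), method="bounded")
print(res.x)  # close to the deterministic minimizer theta = 0
```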

    Rocking Subdiffusive Ratchets: Origin, Optimization and Efficiency

    We study the origin, parameter optimization, and thermodynamic efficiency of isothermal rocking ratchets based on fractional subdiffusion within a generalized non-Markovian Langevin equation approach. A corresponding multi-dimensional Markovian embedding dynamics is realized using a set of auxiliary Brownian particles elastically coupled to the central Brownian particle (see the video on the journal web site). We show that anomalous subdiffusive transport emerges from an interplay of nonlinear response and viscoelastic effects for fractional Brownian motion in periodic potentials with broken space-inversion symmetry driven by a time-periodic field. The anomalous transport becomes optimal for subthreshold driving when the driving period matches a characteristic time scale of interwell transitions. It can also be optimized by varying the temperature, the amplitude of the periodic potential, and the driving strength. The useful work done against a load shows a parabolic dependence on the load strength. It grows sublinearly with time, and the corresponding thermodynamic efficiency decays algebraically in time because the energy supplied by the driving field scales linearly with time. Nevertheless, over an appreciably long time scale it compares well with the efficiency of normal-diffusion rocking ratchets.
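
    A minimal sketch of the Markovian embedding idea, under assumed illustrative parameters (the potential shape, kernel discretization, and all constants are my choices, not the paper's): the power-law memory kernel of the generalized Langevin equation is approximated by a sum of exponentials, each realized as an overdamped auxiliary Brownian particle elastically coupled to the central, inertial one.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Markovian embedding of the power-law memory kernel eta(t) ~ t**(-alpha)
alpha = 0.5                            # subdiffusion exponent (assumed)
N, b, nu0 = 6, 5.0, 0.01               # embedding depth, rate spacing (assumed)
nu = nu0 * b ** np.arange(N)           # relaxation rates nu_i = k_i / eta_i
k = nu ** alpha                        # spring constants: Prony-type weights
eta = k / nu                           # frictions of the auxiliary particles

kT = 0.1                               # thermal energy (assumed)
V0, A, Omega, m = 1.0, 0.5, 0.05, 1.0  # illustrative ratchet parameters

def force(x, t):
    # V(x) = -V0*(sin(2*pi*x) + 0.25*sin(4*pi*x)) breaks space-inversion
    # symmetry; A*cos(Omega*t) is the time-periodic rocking drive.
    return (V0 * (2*np.pi*np.cos(2*np.pi*x) + np.pi*np.cos(4*np.pi*x))
            + A * np.cos(Omega * t))

dt, steps = 1e-3, 500_000
x, v = 0.0, 0.0
y = np.zeros(N)                        # positions of the auxiliary particles

for step in range(steps):
    t = step * dt
    # central inertial particle, elastically coupled to every auxiliary one
    v += (force(x, t) - np.sum(k * (x - y))) / m * dt
    x += v * dt
    # overdamped auxiliaries with independent thermal noises (Euler-Maruyama)
    xi = rng.normal(0.0, np.sqrt(2.0 * eta * kT * dt))
    y += (k * (x - y) * dt + xi) / eta

print(f"mean velocity estimate: {x / (steps * dt):.4e}")
```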

    Convergence and Convergence Rate of Stochastic Gradient Search in the Case of Multiple and Non-Isolated Extrema

    The asymptotic behavior of stochastic gradient algorithms is studied. Relying on results from differential geometry (the Lojasiewicz gradient inequality), the single-limit-point convergence of the algorithm iterates is demonstrated and relatively tight bounds on the convergence rate are derived. In sharp contrast to existing asymptotic results, the results presented here allow the objective function to have multiple and non-isolated minima. They also offer new insights into the asymptotic properties of several classes of recursive algorithms routinely used in engineering, statistics, machine learning, and operations research.
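
    A small numerical illustration of the setting (the objective and step sizes are assumed for the example, not taken from the paper): the function f(w) = (|w|^2 - 1)^2 attains its minimum on the entire unit circle, a set of non-isolated minima, yet the noisy-gradient iterates settle at a single limit point on that circle.

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(w):
    # f(w) = (|w|**2 - 1)**2 is minimized on the whole unit circle:
    # a continuum of non-isolated minima.
    return 4.0 * (w @ w - 1.0) * w

w = np.array([1.5, 0.5])
for n in range(1, 200_001):
    g = grad(w) + rng.normal(0.0, 0.1, size=2)  # unbiased noisy gradient
    w = w - (0.1 / n) * g                       # step sizes gamma_n = a / n

# the iterates settle at one point of the circle (|w| close to 1),
# illustrating single-limit-point convergence
print(w, np.linalg.norm(w))
```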

    Asymptotic Bias of Stochastic Gradient Search

    The asymptotic behavior of the stochastic gradient algorithm with a biased gradient estimator is analyzed. Relying on arguments from dynamical systems theory (chain recurrence) and differential geometry (the Yomdin theorem and the Lojasiewicz inequality), tight bounds on the asymptotic bias of the iterates generated by such an algorithm are derived. The obtained results hold under mild conditions and cover a broad class of high-dimensional nonlinear algorithms. Using these results, the asymptotic properties of policy-gradient (reinforcement) learning and adaptive population Monte Carlo sampling are studied. Relying on the same results, the asymptotic behavior of recursive maximum split-likelihood estimation in hidden Markov models is analyzed as well.
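
    The phenomenon the bounds quantify can be seen in a one-dimensional toy sketch (the objective, bias, and constants are assumptions for illustration): with a systematically perturbed gradient estimator, the stochastic gradient iterates still converge, but to a point whose distance from the true minimizer scales with the estimator's bias.

```python
import numpy as np

rng = np.random.default_rng(2)

bias = 0.05  # systematic perturbation of the gradient estimate (assumed)
w = 1.0
for n in range(1, 100_001):
    # the true gradient of f(w) = w**2 / 2 is w; the estimator adds a bias
    g = w + bias + rng.normal(0.0, 0.1)
    w -= (1.0 / n) * g

# the iterate settles near -bias instead of the true minimizer 0:
# the asymptotic bias of the limit scales with the estimator's bias
print(w)
```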