
    Asymptotic Bias of Stochastic Gradient Search

    The asymptotic behavior of the stochastic gradient algorithm with a biased gradient estimator is analyzed. Relying on arguments from dynamical systems theory (chain recurrence) and differential geometry (the Yomdin theorem and the Łojasiewicz inequality), tight bounds on the asymptotic bias of the iterates generated by such an algorithm are derived. The obtained results hold under mild conditions and cover a broad class of high-dimensional nonlinear algorithms. Using these results, the asymptotic properties of policy-gradient (reinforcement) learning and adaptive population Monte Carlo sampling are studied. Relying on the same results, the asymptotic behavior of recursive maximum split-likelihood estimation in hidden Markov models is also analyzed.
    Comment: arXiv admin note: text overlap with arXiv:0907.102
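The setting can be illustrated with a minimal sketch (not from the paper; the quadratic objective, bias level, and step sizes are all illustrative assumptions): a stochastic gradient iteration whose gradient estimator carries a constant bias, so the iterates settle near a biased stationary point rather than the true minimizer.

```python
import numpy as np

rng = np.random.default_rng(0)
bias, noise_sd = 0.1, 0.1     # constant bias and noise of the gradient estimator
x = 1.0                       # objective f(x) = 0.5 * x**2, true minimizer at 0
for t in range(20000):
    grad_est = x + bias + noise_sd * rng.standard_normal()  # biased, noisy gradient
    x -= grad_est / (t + 10)                                # Robbins-Monro step sizes
# the iterates settle near the biased stationary point x* = -bias, not at 0;
# the size of the offset is controlled by the bias of the estimator
```

Here the asymptotic bias of the iterates is proportional to the bias of the gradient estimator, which is the kind of quantitative relationship the paper's bounds make precise.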

    On the computation of directional scale-discretized wavelet transforms on the sphere

    We review scale-discretized wavelets on the sphere, which are directional and allow one to probe oriented structure in data defined on the sphere. Furthermore, scale-discretized wavelets allow, in practice, the exact synthesis of a signal from its wavelet coefficients. We present exact and efficient algorithms to compute the scale-discretized wavelet transform of band-limited signals on the sphere. These algorithms are implemented in the publicly available S2DW code. We release a new version of S2DW that is parallelized and contains additional code optimizations. Note that scale-discretized wavelets can be viewed as a directional generalization of needlets. Finally, we outline future improvements to the algorithms presented, which can be achieved by exploiting a new sampling theorem on the sphere developed recently by some of the authors.
    Comment: 13 pages, 3 figures, Proceedings of Wavelets and Sparsity XV, SPIE Optics and Photonics 2013. Code is publicly available at http://www.s2dw.org
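The exact-synthesis property rests on the wavelet and scaling-function windows forming a partition of unity in harmonic space. A minimal sketch of that idea (illustrative, not the S2DW construction itself; the tiling function and dyadic scales are assumptions) checks the property numerically for a harmonic-space tiling up to a band-limit:

```python
import numpy as np

def phi(xi):
    # smooth decreasing cutoff: 1 for xi <= 1, 0 for xi >= 2, cos^2 ramp between
    out = np.ones_like(xi, dtype=float)
    ramp = (xi > 1) & (xi < 2)
    out[ramp] = np.cos(0.5 * np.pi * (xi[ramp] - 1)) ** 2
    out[xi >= 2] = 0.0
    return out

L, J = 256, 7            # band-limit and number of wavelet scales (2**(J+1) >= L)
ell = np.arange(L, dtype=float)
scaling_sq = phi(ell)    # squared scaling-function window covering low ell
# squared wavelet windows built by telescoping, so they are non-negative
wavelet_sq = [phi(ell / 2 ** (j + 1)) - phi(ell / 2 ** j) for j in range(J + 1)]
total = scaling_sq + sum(wavelet_sq)
# exact reconstruction <=> the windows form a partition of unity in harmonic space
assert np.allclose(total, 1.0)
```

The telescoping construction guarantees the windows sum to one for every multipole below the band-limit, which is what makes synthesis from wavelet coefficients exact rather than approximate.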

    A Bayesian approach to the study of white dwarf binaries in LISA data: The application of a reversible jump Markov chain Monte Carlo method

    The Laser Interferometer Space Antenna (LISA) places new demands on data-analysis efforts in its all-sky gravitational-wave survey, simultaneously recording thousands of galactic compact-object binary foreground sources and tens to hundreds of background sources such as binary black hole mergers and extreme-mass-ratio inspirals. We approach this problem with an adaptive and fully automatic Reversible Jump Markov Chain Monte Carlo sampler, able to sample from the joint posterior density function (as established by Bayes' theorem) for a given mixture of signals "out of the box", handling the total number of signals as an additional unknown parameter besides the unknown parameters of each individual source and the noise floor. Using examples from the LISA Mock Data Challenge that implement the full response of LISA in its TDI description, we show that this sampler successfully extracts monochromatic double white dwarf signals from colored instrumental noise and additional foreground and background noise in a global fitting approach. We present two examples with a fixed number of signals (MCMC sampling) and one example with an unknown number of signals (RJ-MCMC), the latter also illustrating an experimental adaptation of the model-indicator proposal densities during the main sampling stage. We note that the observed runtimes and degeneracies in parameter extraction limit the examples shown to the extraction of a low but realistic number of signals.
    Comment: 18 pages, 9 figures, 3 tables, accepted for publication in PRD, revised version
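A toy sketch of the trans-dimensional idea (not the paper's sampler; the candidate frequencies, priors, and proposal scales are all assumptions) treats the number of active monochromatic sources in a 1D time series as unknown, using birth/death moves whose new amplitude is drawn from its prior so that prior and proposal densities cancel in the acceptance ratio:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 200)
freqs = np.array([3.0, 7.0, 11.0, 17.0, 23.0])  # candidate source frequencies (toy)
sigma, tau = 0.3, 2.0                           # noise level, amplitude prior scale
truth = {1: 2.0, 3: 2.0}                        # two sources are actually present
data = sigma * rng.standard_normal(t.size)
for i, a in truth.items():
    data = data + a * np.sin(2 * np.pi * freqs[i] * t)

def loglike(amps):
    model = np.zeros_like(t)
    for i, a in amps.items():
        model = model + a * np.sin(2 * np.pi * freqs[i] * t)
    return -0.5 * np.sum((data - model) ** 2) / sigma**2

amps, k_trace = {}, []          # current model: active sources and their amplitudes
ll = loglike(amps)
for step in range(6000):
    inactive = [i for i in range(len(freqs)) if i not in amps]
    u = rng.random()            # fixed move-type probabilities keep the chain valid
    if u < 1 / 3:
        if amps:                # within-model move: random-walk on one amplitude
            i = int(rng.choice(list(amps)))
            prop = dict(amps)
            prop[i] = amps[i] + 0.2 * rng.standard_normal()
            ll_new = loglike(prop)
            log_acc = ll_new - ll + (amps[i] ** 2 - prop[i] ** 2) / (2 * tau**2)
            if np.log(rng.random()) < log_acc:
                amps, ll = prop, ll_new
    elif u < 2 / 3:
        if inactive:            # birth: new amplitude drawn from its N(0, tau^2) prior
            i = int(rng.choice(inactive))
            prop = dict(amps)
            prop[i] = tau * rng.standard_normal()
            ll_new = loglike(prop)
            log_acc = ll_new - ll + np.log(len(inactive) / len(prop))
            if np.log(rng.random()) < log_acc:
                amps, ll = prop, ll_new
    else:
        if amps:                # death: remove one active source
            i = int(rng.choice(list(amps)))
            prop = {j: a for j, a in amps.items() if j != i}
            ll_new = loglike(prop)
            log_acc = ll_new - ll + np.log(len(amps) / (len(inactive) + 1))
            if np.log(rng.random()) < log_acc:
                amps, ll = prop, ll_new
    if step >= 2000:            # record the model dimension after burn-in
        k_trace.append(len(amps))

k_mode = Counter(k_trace).most_common(1)[0][0]  # posterior mode of the signal count
```

The key design point, shared with the paper's sampler, is that the number of signals is a sampled quantity: the posterior over the model indicator emerges from how often the chain visits each dimension, and the built-in Occam penalty suppresses spurious low-amplitude sources.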

    On a variance related to the Ewens sampling formula

    A one-parameter multivariate distribution, called the Ewens sampling formula, was introduced in 1972 to model the mutation phenomenon in genetics. The case discussed in this note goes back to Lynch's theorem in random binary search tree theory. We examine an additive statistic, a sum of dependent random variables, and derive an upper bound on its variance in terms of the sum of the variances of the summands. The asymptotically best constant in this estimate is established as the dimension increases. The approach is based on approximating the extremal eigenvalues of appropriate integral operators and matrices.
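The objects involved can be sketched numerically (an illustration, not the paper's method; the weights a_j = 1/j and the parameter values are assumptions): sample Ewens partitions via the Chinese restaurant process and compare the variance of an additive statistic with the sum of the variances of its dependent summands.

```python
import numpy as np
from collections import Counter

def ewens_partition(n, theta, rng):
    """Sample the block sizes of an Ewens(theta) partition of n via the CRP."""
    tables = []
    for i in range(n):
        if rng.random() < theta / (theta + i):
            tables.append(1)        # customer opens a new table (new block)
        else:                       # joins a table with prob. proportional to its size
            j = rng.choice(len(tables), p=np.array(tables) / i)
            tables[j] += 1
    return tables

rng = np.random.default_rng(1)
n, theta, reps = 50, 1.0, 2000
stats, samples = [], []
for _ in range(reps):
    sizes = ewens_partition(n, theta, rng)
    c = Counter(sizes)              # C_j = number of blocks of size j
    stats.append(sum(cj / j for j, cj in c.items()))  # additive statistic, a_j = 1/j
    samples.append(c)
var_stat = np.var(stats)            # variance of the sum of dependent summands
sum_var = sum(np.var([c.get(j, 0) for c in samples]) / j**2
              for j in range(1, n + 1))
# the result described above controls var_stat by a constant times sum_var
```

The counts C_j are dependent (they must satisfy sum_j j*C_j = n), which is why bounding the variance of the sum by the sum of variances requires an argument rather than independence.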