
    Analyzing weak lensing of the cosmic microwave background using the likelihood function

    Future experiments will produce high-resolution temperature maps of the cosmic microwave background (CMB) and are expected to reveal the signature of gravitational lensing by intervening large-scale structure. We construct all-sky maximum-likelihood estimators that use the lensing effect to estimate the projected density (convergence) of these structures, its power spectrum, and its cross-correlation with other observables. This contrasts with earlier quadratic-estimator approaches, which Taylor expanded the observed CMB temperature to linear order in the lensing deflection angle; those approaches gave estimators for the temperature-convergence correlation in terms of the CMB three-point correlation function, and for the convergence power spectrum in terms of the CMB four-point correlation function, both of which can be biased and nonoptimal due to terms beyond linear order. We show that for sufficiently weak lensing, the maximum-likelihood estimator reduces to the computationally less demanding quadratic estimator. We compare the maximum-likelihood and quadratic approaches by evaluating the root-mean-square (rms) error and bias of the reconstructed convergence map in a numerical simulation; both the rms error and the bias are of order 1 percent for Planck and of order 10–20 percent for a 1 arcminute beam experiment. We conclude that for recovering lensing information from temperature data acquired by these experiments, the quadratic estimator is close to optimal, but further work will be required to determine whether this is also the case for lensing of the CMB polarization field.
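
    For orientation, the structure behind the quadratic estimator can be written down explicitly. The relations below are the standard flat-sky form (as in the Hu–Okamoto conventions), not necessarily this paper's notation: lensing remaps the temperature field, and to linear order in the deflection the remapping couples Fourier modes with different wavevectors.

```latex
% Linear-order Taylor expansion of the lensed temperature (standard
% flat-sky conventions; not necessarily this paper's notation):
\[
  \tilde T(\hat{\mathbf n}) \approx T(\hat{\mathbf n})
    + \nabla\phi(\hat{\mathbf n}) \cdot \nabla T(\hat{\mathbf n}).
\]
% Averaged over CMB realizations at fixed lensing, this couples Fourier
% modes with different wavevectors, with a response linear in the potential:
\[
  \bigl\langle \tilde T(\boldsymbol\ell)\,\tilde T(\mathbf L - \boldsymbol\ell) \bigr\rangle_{\rm CMB}
    = f(\boldsymbol\ell, \mathbf L)\,\phi(\mathbf L), \qquad \mathbf L \neq 0,
\]
\[
  f(\boldsymbol\ell, \mathbf L)
    = C^{TT}_{\ell}\,\mathbf L \cdot \boldsymbol\ell
    + C^{TT}_{|\mathbf L - \boldsymbol\ell|}\,\mathbf L \cdot (\mathbf L - \boldsymbol\ell).
\]
```

    A suitably normalized, inverse-variance-weighted sum over pairs of observed modes therefore estimates the lensing potential itself, and its power spectrum becomes a four-point function of the temperature map; this is the structure that the maximum-likelihood construction generalizes beyond linear order.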

    Monte Carlo evaluation of the equilibrium isotope effects using the Takahashi-Imada factorization of the Feynman path integral

    The Feynman path integral approach for computing equilibrium isotope effects and isotope fractionation corrects the approximations made in standard methods, although at significantly increased computational cost. We describe an accelerated path integral approach based on three ingredients: the fourth-order Takahashi-Imada factorization of the path integral, thermodynamic integration with respect to mass, and centroid virial estimators for the relevant free energy derivatives. While the first ingredient speeds up convergence to the quantum limit, the second and third improve statistical convergence. The combined method is applied to compute the equilibrium constants for the isotope exchange reactions H2 + D = H + HD and H2 + D2 = 2HD.
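
    To make the first ingredient concrete, here is a minimal, self-contained numerical check of the Takahashi-Imada factorization on a toy problem: a 1D harmonic oscillator treated by deterministic matrix squaring on a grid, rather than by the paper's Monte Carlo machinery. The model, grid, and parameters are illustrative assumptions; the point is only that the effective potential W = V + (tau^2/24)(V')^2 (with hbar = m = 1) upgrades the second-order Trotter error in the trace to fourth order.

```python
import numpy as np

# Grid check of the Takahashi-Imada (TI) factorization for a 1D harmonic
# oscillator, V(x) = x^2/2 with m = hbar = omega = 1. Deterministic
# illustration only, not the paper's Monte Carlo implementation.
beta = 4.0                      # inverse temperature (illustrative choice)
x = np.linspace(-8, 8, 600)     # coordinate grid, wide enough for beta = 4
dx = x[1] - x[0]

def free_energy(P, use_ti):
    tau = beta / P
    V = 0.5 * x**2
    # TI effective potential: W = V + tau^2 (V')^2 / 24, with V'(x) = x here
    W = V + (tau**2 / 24.0) * x**2 if use_ti else V
    # Free-particle propagator <x|exp(-tau T)|x'> discretized on the grid
    K = np.exp(-(x[:, None] - x[None, :])**2 / (2 * tau)) \
        / np.sqrt(2 * np.pi * tau) * dx
    D = np.exp(-0.5 * tau * W)
    M = D[:, None] * K * D[None, :]   # exp(-tau W/2) exp(-tau T) exp(-tau W/2)
    Z = np.trace(np.linalg.matrix_power(M, P))
    return -np.log(Z) / beta

exact = np.log(2 * np.sinh(beta / 2)) / beta   # exact oscillator free energy
for P in (4, 8, 16, 32):
    err2 = free_energy(P, use_ti=False) - exact
    err4 = free_energy(P, use_ti=True) - exact
    print(f"P={P:3d}  primitive err={err2:+.2e}  TI err={err4:+.2e}")
```

    Doubling the number of imaginary-time slices P should shrink the primitive (second-order) error by roughly a factor of 4 and the TI error by roughly a factor of 16, which is the convergence acceleration the abstract refers to.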

    Reconstruction of lensing from the cosmic microwave background polarization

    Gravitational lensing of the cosmic microwave background (CMB) polarization field has been recognized as a potentially valuable probe of the cosmological density field. We apply likelihood-based techniques to the problem of lensing of CMB polarization and show that if the B-mode polarization is mapped, then likelihood-based techniques allow significantly better lensing reconstruction than is possible with the earlier quadratic estimator approach. With this method, the ultimate limit to lensing reconstruction is not set by the lensed CMB power spectrum. Second-order corrections are known to produce a curl component of the lensing deflection field that cannot be described by a potential; we show that this does not significantly affect the reconstruction at noise levels greater than 0.25 microK arcmin. The reduction of the mean squared error in the lensing reconstruction relative to the quadratic method ranges from as much as a factor of two at a noise level of 1.4 microK arcmin to a factor of ten at 0.25 microK arcmin, depending on the angular scale of interest.
    Comment: matches the PRD accepted version. 28 pages, 8 figures.

    Efficient Optimization of Loops and Limits with Randomized Telescoping Sums

    We consider optimization problems in which the objective requires an inner loop with many steps or is the limit of a sequence of increasingly costly approximations. Meta-learning, training recurrent neural networks, and optimization of the solutions to differential equations are all examples of optimization problems with this character. In such problems, it can be expensive to compute the objective function value and its gradient, but truncating the loop or using less accurate approximations can induce biases that damage the overall solution. We propose randomized telescope (RT) gradient estimators, which represent the objective as the sum of a telescoping series and sample linear combinations of terms to provide cheap unbiased gradient estimates. We identify conditions under which RT estimators achieve optimization convergence rates independent of the length of the loop or the required accuracy of the approximation. We also derive a method for tuning RT estimators online to maximize a lower bound on the expected decrease in loss per unit of computation. We evaluate our adaptive RT estimators on a range of applications, including meta-optimization of learning rates, variational inference of ODE parameters, and training an LSTM to model long sequences.
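
    The unbiasedness mechanism is easy to demonstrate on a scalar toy problem. Below is a minimal sketch of an RT estimator in its Russian-roulette form: draw a random truncation level N and reweight each retained term by the inverse tail probability 1/P(N >= n). The target series (sum_n 0.5^n/n = log 2) and the geometric truncation distribution are illustrative choices, not the paper's applications, which target gradients.

```python
import numpy as np

# Randomized-telescope (Russian roulette) estimator of S = sum_n Delta_n:
# sample a truncation N, return sum_{n<=N} Delta_n / P(N >= n).
# Unbiased because E[ 1{N>=n} / P(N>=n) ] = 1 for every term n.
rng = np.random.default_rng(0)

delta = lambda n: 0.5 ** n / n           # Delta_n; sum_n Delta_n = log 2
p = 0.4                                  # P(N = n) = p (1-p)^(n-1), n = 1, 2, ...
tail = lambda n: (1 - p) ** (n - 1)      # P(N >= n)

def rt_estimate():
    N = rng.geometric(p)                 # random truncation level
    return sum(delta(n) / tail(n) for n in range(1, N + 1))

samples = np.array([rt_estimate() for _ in range(100_000)])
print(f"RT mean = {samples.mean():.4f}   target log 2 = {np.log(2):.4f}")
print(f"expected terms per estimate = {1 / p:.1f}")
```

    Note that Delta_n here decays geometrically faster than 1/P(N >= n) grows, which keeps the variance finite; choosing that trade-off automatically is what the paper's online tuning procedure addresses.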

    Low Complexity Regularization of Linear Inverse Problems

    Inverse problems and their regularization are a central theme in contemporary signal processing, where the goal is to reconstruct an unknown signal from partial, indirect, and possibly noisy measurements of it. A now-standard method for recovering the unknown signal is to solve a convex optimization problem that enforces some prior knowledge about its structure. This has proved efficient in many problems routinely encountered in imaging sciences, statistics, and machine learning. This chapter reviews recent advances in the field where the regularization prior promotes solutions conforming to some notion of simplicity or low complexity. Popular examples of such priors include sparsity and group sparsity (to capture the compressibility of natural signals and images), total variation and analysis sparsity (to promote piecewise regularity), and low rank (a natural extension of sparsity to matrix-valued data). Our aim is to provide a unified treatment of all these regularizations under a single umbrella, namely the theory of partial smoothness. This framework is very general and accommodates all the low-complexity regularizers just mentioned, as well as many others. Partial smoothness turns out to be the canonical way to encode low-dimensional models that can be linear spaces or more general smooth manifolds. This review is intended to serve as a one-stop shop for understanding the theoretical properties of the so-regularized solutions. It covers a large spectrum, including: (i) recovery guarantees and stability to noise, both in terms of ℓ²-stability and model (manifold) identification; (ii) sensitivity analysis to perturbations of the parameters involved (in particular the observations), with applications to unbiased risk estimation; (iii) convergence properties of the forward-backward proximal splitting scheme, which is particularly well suited to solving the corresponding large-scale regularized optimization problem.
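
    As a concrete instance of point (iii), here is a minimal forward-backward (proximal gradient, i.e. ISTA) sketch for the l1-regularized least-squares problem min_x 0.5*||Ax - b||^2 + lambda*||x||_1. The synthetic data, problem sizes, and parameter values are illustrative assumptions, not drawn from the chapter.

```python
import numpy as np

# Forward-backward splitting for sparse recovery:
#   forward step  = gradient descent on the smooth term 0.5*||Ax - b||^2,
#   backward step = prox of lam*||.||_1, i.e. soft-thresholding.
rng = np.random.default_rng(1)
n_obs, n_dim, n_nonzero = 60, 200, 5

A = rng.standard_normal((n_obs, n_dim)) / np.sqrt(n_obs)
x_true = np.zeros(n_dim)
x_true[rng.choice(n_dim, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)
b = A @ x_true + 0.01 * rng.standard_normal(n_obs)

lam = 0.02
step = 1.0 / np.linalg.norm(A, 2) ** 2    # 1/L, L = Lipschitz constant of the gradient

soft = lambda v, t: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)  # prox of t*||.||_1

x = np.zeros(n_dim)
for _ in range(500):
    x = soft(x - step * A.T @ (A @ x - b), step * lam)  # forward, then backward

print("recovered support:", np.flatnonzero(np.abs(x) > 1e-3))
print("true support:     ", np.flatnonzero(x_true))
```

    The backward (prox) step is what interacts with the low-dimensional model: after enough iterations the iterates typically land on, and stay on, the correct sparse support, which is the model-identification property analyzed in the review.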

    High-order Path Integral Monte Carlo methods for solving quantum dot problems

    The conventional second-order Path Integral Monte Carlo method is plagued by the sign problem in solving many-fermion systems. This is due to the large number of antisymmetric free-fermion propagators that are needed to extract the ground-state wave function at large imaginary time. In this work, we show that optimized fourth-order Path Integral Monte Carlo methods, which use no more than 5 free-fermion propagators, can yield accurate quantum dot energies for up to 20 polarized electrons with the use of the Hamiltonian energy estimator.
    Comment: 14 pages, 4 figures; submitted to PRE. Revised with a new figure and a larger-N calculation.
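
    The antisymmetric free-fermion propagator the abstract counts is a determinant of single-particle free propagators over all particle pairings. The following minimal sketch computes it for a handful of polarized fermions; the dimensionality, particle number, and parameters are illustrative assumptions (hbar = m = 1), and the quantum dot interaction and the paper's fourth-order propagators are not included.

```python
import math
import numpy as np

def rho_free(r, rp, tau):
    """Single-particle free propagator <r|exp(-tau T)|r'>, hbar = m = 1."""
    d = r.shape[-1]
    return (2 * np.pi * tau) ** (-d / 2) * np.exp(-np.sum((r - rp) ** 2) / (2 * tau))

def rho_fermi(R, Rp, tau):
    """Antisymmetrized N-fermion free propagator: determinant of the matrix
    of single-particle propagators over all pairings, divided by N!."""
    N = len(R)
    M = np.array([[rho_free(R[i], Rp[j], tau) for j in range(N)] for i in range(N)])
    return np.linalg.det(M) / math.factorial(N)

rng = np.random.default_rng(2)
R = rng.standard_normal((3, 2))             # 3 polarized electrons in 2D
Rp = R + 0.1 * rng.standard_normal((3, 2))  # slightly displaced configuration
print("rho_F =", rho_fermi(R, Rp, 0.5))     # can be negative: the sign problem
```

    Products of many such determinants fluctuate in sign, which is why reducing the number of propagators needed, here to at most 5, directly tames the sign problem.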