
    Recursive numerical calculus of one-loop tensor integrals

    A numerical approach to compute tensor integrals in one-loop calculations is presented. The algorithm is based on a recursion relation which allows one to express high-rank tensor integrals as a function of lower-rank ones. At each level of iteration only inverse square roots of Gram determinants appear. For the phase-space regions where Gram determinants are so small that numerical problems are expected, we give general prescriptions on how to construct reliable approximations to the exact result without performing Taylor expansions. Working in 4+epsilon dimensions does not require an analytic separation of ultraviolet and infrared/collinear divergences, and, apart from trivial integrals that we compute explicitly, no additional ones besides the standard set of scalar one-loop integrals are needed.
    Comment: Typo corrected in formula 79. 22 pages, Latex, 1 figure, uses axodraw.sty
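The numerical stability criterion above hinges on the size of Gram determinants. As a minimal illustration (not the paper's recursion), the sketch below computes the Gram determinant of a set of four-momenta in the Minkowski metric and flags points where it is dangerously small; the momenta and the threshold are hypothetical:

```python
import numpy as np

METRIC = np.diag([1.0, -1.0, -1.0, -1.0])  # mostly-minus Minkowski metric

def gram_determinant(momenta):
    """Gram determinant det(p_i . p_j) for a list of four-momenta."""
    G = np.array([[pi @ METRIC @ pj for pj in momenta] for pi in momenta])
    return np.linalg.det(G)

# hypothetical phase-space point: two back-to-back light-like momenta
p1 = np.array([1.0, 0.0, 0.0, 1.0])
p2 = np.array([1.0, 0.0, 0.0, -1.0])
d = gram_determinant([p1, p2])
unstable = abs(d) < 1e-8  # hypothetical threshold for "small" determinants
```

In a real one-loop code this flag would switch the evaluation to the approximate prescriptions the abstract describes.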

    An approximate empirical Bayesian method for large-scale linear-Gaussian inverse problems

    We study Bayesian inference methods for solving linear inverse problems, focusing on hierarchical formulations where the prior or the likelihood function depends on unspecified hyperparameters. In practice, these hyperparameters are often determined via an empirical Bayesian method that maximizes the marginal likelihood function, i.e., the probability density of the data conditional on the hyperparameters. Evaluating the marginal likelihood, however, is computationally challenging for large-scale problems. In this work, we present a method to approximately evaluate marginal likelihood functions, based on a low-rank approximation of the update from the prior covariance to the posterior covariance. We show that this approximation is optimal in a minimax sense. Moreover, we provide an efficient algorithm to implement the proposed method, based on a combination of the randomized SVD and a spectral approximation method to compute square roots of the prior covariance matrix. Several numerical examples demonstrate the good performance of the proposed method.
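To fix what is being approximated: for the linear-Gaussian model the marginal likelihood has a closed form, and a direct (non-low-rank) evaluation is straightforward for small problems. The sketch below shows this exact baseline; the function name and toy inputs are hypothetical, and the paper's contribution is avoiding exactly this dense computation at scale:

```python
import numpy as np

def log_marginal_likelihood(y, A, C, Gamma):
    """Exact log p(y | hyperparameters) for y = A x + e with
    x ~ N(0, C) and e ~ N(0, Gamma), so y ~ N(0, A C A^T + Gamma).
    Dense evaluation; infeasible for large-scale problems."""
    S = A @ C @ A.T + Gamma
    _, logdet = np.linalg.slogdet(S)
    n = len(y)
    return -0.5 * (n * np.log(2.0 * np.pi) + logdet + y @ np.linalg.solve(S, y))

# toy 1-D example; hyperparameters would enter through C and Gamma
y = np.array([0.0])
val = log_marginal_likelihood(y, np.eye(1), np.eye(1), np.eye(1))
```

An empirical Bayesian method would maximize `val` over the hyperparameters parameterizing `C` and `Gamma`.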

    New Square-Root Factorization of Inverse Toeplitz Matrices

    Square-root (in particular, Cholesky) factorization of Toeplitz matrices and of their inverses is a classical area of research. The Schur algorithm yields directly the Cholesky factorization of a symmetric Toeplitz matrix, whereas the Levinson algorithm does the same for the inverse matrix. The objective of this letter is to use results from the theory of rational orthonormal functions to derive square-root factorizations of the inverse of an n × n positive definite Toeplitz matrix. The main result is a new factorization based on the Takenaka-Malmquist functions that is parameterized by the roots of the corresponding auto-regressive polynomial of order n. We also briefly discuss the connection between our analysis and classical results such as Schur polynomials and the Gohberg-Semencul inversion formula.
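To make the object of study concrete, here is a small NumPy sketch (not the letter's Takenaka-Malmquist construction) that builds a positive definite Toeplitz matrix and checks that the Cholesky factor of its inverse is a square-root factorization in the stated sense; the entries are hypothetical:

```python
import numpy as np

# small symmetric positive definite Toeplitz matrix (hypothetical entries)
c = np.array([2.0, 0.5, 0.25, 0.125])          # first column
T = c[np.abs(np.subtract.outer(np.arange(4), np.arange(4)))]

# square-root (Cholesky) factorization of the inverse: T^{-1} = L L^T
T_inv = np.linalg.inv(T)
L = np.linalg.cholesky(T_inv)
residual = np.max(np.abs(L @ L.T - T_inv))
```

Algorithms like Levinson's obtain such a factor in O(n^2) operations without forming `T_inv` explicitly; the dense route above is only for illustration.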

    Root finding with threshold circuits

    We show that for any constant d, complex roots of degree d univariate rational (or Gaussian rational) polynomials---given by a list of coefficients in binary---can be computed to a given accuracy by a uniform TC^0 algorithm (a uniform family of constant-depth polynomial-size threshold circuits). The basic idea is to compute the inverse function of the polynomial by a power series. We also discuss an application to the theory VTC^0 of bounded arithmetic.
    Comment: 19 pages, 1 figure
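The core idea, computing the inverse function of a polynomial as a power series, can be illustrated numerically. The following fixed-point series reversion is a plain floating-point sketch, not the paper's TC^0 circuit construction, and the function name is mine:

```python
import numpy as np
from numpy.polynomial import polynomial as P

def revert_series(f, n):
    """Coefficients up to degree n of the compositional inverse g of a
    power series f with f[0] = 0 and f[1] != 0, i.e. f(g(y)) = y + O(y^{n+1}).
    Iterates the fixed point g = (y - sum_{k>=2} f[k] g^k) / f[1];
    each pass fixes at least one more coefficient."""
    g = np.array([0.0, 1.0 / f[1]])
    for _ in range(n):
        tail = np.zeros(1)
        gk = P.polymul(g, g)[: n + 1]            # g^2, truncated at degree n
        for k in range(2, len(f)):
            tail = P.polyadd(tail, f[k] * gk)[: n + 1]
            gk = P.polymul(gk, g)[: n + 1]       # next power g^{k+1}
        g = P.polysub([0.0, 1.0], tail)[: n + 1] / f[1]
    return g

# inverse of f(x) = x + x^2 is y - y^2 + 2y^3 - 5y^4 + ... (signed Catalan numbers)
g = revert_series([0.0, 1.0, 1.0], 4)
```

The paper's point is that each coefficient of such a series, and hence each root to a given accuracy, is computable by constant-depth threshold circuits.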

    On Taking Square Roots without Quadratic Nonresidues over Finite Fields

    We present a novel idea to compute square roots over finite fields, without being given any quadratic nonresidue and without assuming any unproven hypothesis. The algorithm is deterministic and the proof is elementary. In some cases, the square root algorithm runs in $\tilde{O}(\log^2 q)$ bit operations over finite fields with $q$ elements. As an application, we construct a deterministic primality proving algorithm, which runs in $\tilde{O}(\log^3 N)$ for some integers $N$.
    Comment: 14 pages
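For contrast with the paper's general deterministic method, the classical easy case needs no nonresidue either: for primes p ≡ 3 (mod 4), a square root of a residue a is simply a^((p+1)/4) mod p. A minimal sketch:

```python
def sqrt_mod_p(a, p):
    """Modular square root for primes p ≡ 3 (mod 4): r = a^((p+1)/4) mod p.
    Classical easy case only; the paper's contribution is a deterministic
    algorithm for general finite fields without a known nonresidue."""
    assert p % 4 == 3
    r = pow(a, (p + 1) // 4, p)
    return r if (r * r) % p == a % p else None  # None: a is a nonresidue

assert sqrt_mod_p(2, 7) == 4      # 4^2 = 16 ≡ 2 (mod 7)
assert sqrt_mod_p(3, 7) is None   # 3 is a quadratic nonresidue mod 7
```

The hard case the paper addresses is p ≡ 1 (mod 4), where classical deterministic methods (e.g. Tonelli-Shanks) require a quadratic nonresidue as input.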

    New Structured Matrix Methods for Real and Complex Polynomial Root-finding

    We combine the known methods for univariate polynomial root-finding and for computations in the Frobenius matrix algebra with our novel techniques to advance the numerical solution of a univariate polynomial equation, and in particular the numerical approximation of the real roots of a polynomial. Our analysis and experiments show the efficiency of the resulting algorithms.
    Comment: 18 pages
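The Frobenius matrix connection can be shown in a few lines: the roots of a monic polynomial are the eigenvalues of its Frobenius (companion) matrix. The sketch below uses NumPy's dense eigensolver rather than the paper's structured methods:

```python
import numpy as np

def roots_via_companion(coeffs):
    """Roots of x^n + a_{n-1} x^{n-1} + ... + a_0 as eigenvalues of the
    Frobenius companion matrix; coeffs = [a_0, ..., a_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)       # subdiagonal of ones
    C[:, -1] = -np.asarray(coeffs)   # last column carries the coefficients
    return np.linalg.eigvals(C)

r = np.sort_complex(roots_via_companion([-2.0, -1.0]))  # x^2 - x - 2 = (x-2)(x+1)
```

Structured (Toeplitz-like) representations of this matrix are what make the companion-matrix approach competitive for high degrees.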

    flatIGW - an inverse algorithm to compute the Density of States of lattice Self Avoiding Walks

    We show that the Density of States (DoS) for lattice Self Avoiding Walks can be estimated by using an inverse algorithm, called flatIGW, whose step-growth rules are dynamically adjusted by requiring the energy histogram to be locally flat. Here, the (attractive) energy associated with a configuration is taken to be proportional to the number of non-bonded nearest-neighbor pairs (contacts). The energy histogram is able to explicitly direct the growth of a walk because the step-growth rule of the Interacting Growth Walk \cite{IGW} samples the available nearest-neighbor sites according to the number of contacts they would make. We have obtained the complex Fisher zeros corresponding to the DoS, estimated for square lattice walks of various lengths, and located the $\theta$ temperature by extrapolating the finite-size values of the real zeros to their asymptotic value, $\sim 1.49$ (reasonably close to the known value, $\sim 1.50$ \cite{barkema}).
    Comment: 18 pages, 7 eps figures; parts of the manuscript are rewritten so as to improve clarity of presentation; an extra reference added
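Since the energy of a configuration is just its contact count, that bookkeeping is easy to sketch. The snippet below counts contacts of a square-lattice self-avoiding walk; it illustrates only the energy definition, not the flatIGW growth rule itself:

```python
def count_contacts(walk):
    """Number of non-bonded nearest-neighbor pairs (contacts) of a
    square-lattice self-avoiding walk given as a list of (x, y) sites."""
    sites = set(walk)
    bonded = {frozenset(pair) for pair in zip(walk, walk[1:])}
    contacts = 0
    for x, y in walk:
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb in sites and frozenset(((x, y), nb)) not in bonded:
                contacts += 1
    return contacts // 2  # each contact was counted from both of its ends

# U-shaped 4-site walk: exactly one contact, between its two endpoints
assert count_contacts([(0, 0), (1, 0), (1, 1), (0, 1)]) == 1
```

In flatIGW, a histogram over this contact energy is what the dynamically adjusted growth rules are required to keep locally flat.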

    On the matrix square root via geometric optimization

    This paper is triggered by the preprint "\emph{Computing Matrix Squareroot via Non Convex Local Search}" by Jain et al. (\textit{\textcolor{blue}{arXiv:1507.05854}}), which analyzes gradient descent for computing the square root of a positive definite matrix. Contrary to the claims of~\citet{jain2015}, our experiments reveal that Newton-like methods compute matrix square roots rapidly and reliably, even for highly ill-conditioned matrices and without requiring commutativity. We observe that gradient descent converges very slowly, primarily due to tiny step-sizes and ill-conditioning. We derive an alternative first-order method based on geodesic convexity: our method admits a transparent convergence analysis ($< 1$ page), attains a linear rate, and displays reliable convergence even for rank-deficient problems. Though superior to gradient descent, our method is ultimately also outperformed by a well-known scaled Newton method. Nevertheless, the primary value of our work is conceptual: it shows that for deriving gradient-based methods for the matrix square root, \emph{the manifold geometric view of positive definite matrices can be much more advantageous than the Euclidean view}.
    Comment: 8 pages, 12 plots; this version contains several more references and more words about the rank-deficient case
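For reference, the kind of Newton-like iteration the experiments favor can be sketched as a Denman-Beavers iteration; this is a simple unscaled variant, not the scaled Newton method the paper ultimately recommends:

```python
import numpy as np

def sqrtm_denman_beavers(A, iters=25):
    """Principal square root of a positive definite matrix via the
    Denman-Beavers coupled Newton iteration:
    Y_{k+1} = (Y_k + Z_k^{-1})/2,  Z_{k+1} = (Z_k + Y_k^{-1})/2,
    with Y_k -> sqrt(A) and Z_k -> inv(sqrt(A))."""
    Y = np.array(A, dtype=float)
    Z = np.eye(len(A))
    for _ in range(iters):
        # tuple assignment: both updates use the previous Y and Z
        Y, Z = 0.5 * (Y + np.linalg.inv(Z)), 0.5 * (Z + np.linalg.inv(Y))
    return Y

A = np.array([[4.0, 0.0], [0.0, 9.0]])
S = sqrtm_denman_beavers(A)
```

Scaling each iterate (as in the scaled Newton method the paper compares against) sharply reduces the iteration count for ill-conditioned inputs.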