
    On the matrix square root via geometric optimization

    This paper is triggered by the preprint "Computing Matrix Squareroot via Non Convex Local Search" by Jain et al. (arXiv:1507.05854), which analyzes gradient descent for computing the square root of a positive definite matrix. Contrary to the claims of Jain et al. (2015), our experiments reveal that Newton-like methods compute matrix square roots rapidly and reliably, even for highly ill-conditioned matrices and without requiring commutativity. We observe that gradient descent converges very slowly, primarily due to tiny step-sizes and ill-conditioning. We derive an alternative first-order method based on geodesic convexity: our method admits a transparent convergence analysis (< 1 page), attains a linear rate, and displays reliable convergence even for rank-deficient problems. Though superior to gradient descent, our method is ultimately outperformed by a well-known scaled Newton method. Nevertheless, the primary value of our work is conceptual: it shows that for deriving gradient-based methods for the matrix square root, the manifold geometric view of positive definite matrices can be much more advantageous than the Euclidean view. Comment: 8 pages, 12 plots; this version contains several more references and more words about the rank-deficient case.
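    As a rough illustration of the kind of Newton-type iteration the paper compares against, the sketch below runs the classical (unscaled) Newton iteration X_{k+1} = (X_k + X_k^{-1} M)/2 with X_0 = I on a random symmetric positive definite matrix. Function names and the test matrix are illustrative; this is neither the paper's scaled Newton method nor its geodesically convex first-order method, and the unscaled iteration can be unstable for ill-conditioned inputs, which is one reason scaled variants are preferred in practice.

```python
import numpy as np

def newton_sqrtm(M, iters=20):
    """Minimal (unscaled) Newton iteration for the principal square root
    of a symmetric positive definite matrix M.

    Starting from X_0 = I (which commutes with M), the iteration
        X_{k+1} = (X_k + X_k^{-1} M) / 2
    converges quadratically for reasonably conditioned M.  This is only a
    plain illustration, not the scaled Newton method or the geodesically
    convex first-order method discussed in the abstract above.
    """
    X = np.eye(M.shape[0])
    for _ in range(iters):
        X = 0.5 * (X + np.linalg.solve(X, M))   # X^{-1} M via a linear solve
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 50))
    M = A @ A.T + 1e-3 * np.eye(50)             # symmetric positive definite test matrix
    X = newton_sqrtm(M)
    print("residual ||X^2 - M||_F =", np.linalg.norm(X @ X - M))
```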

    Randomized Rounding for the Largest Simplex Problem

    The maximum-volume j-simplex problem asks to compute the j-dimensional simplex of maximum volume inside the convex hull of a given set of n points in Q^d. We give a deterministic approximation algorithm for this problem which achieves an approximation ratio of e^{j/2 + o(j)}. The problem is known to be NP-hard to approximate within a factor of c^j for some constant c > 1. Our algorithm also gives a factor e^{j + o(j)} approximation for the problem of finding the principal j x j submatrix of a rank-d positive semidefinite matrix with the largest determinant. We achieve our approximation by rounding solutions to a generalization of the D-optimal design problem, or, equivalently, the dual of an appropriate smallest enclosing ellipsoid problem. Our arguments give a short and simple proof of a restricted invertibility principle for determinants.
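    To make the largest-determinant principal submatrix problem concrete, here is a naive greedy baseline. It is not the paper's algorithm (which rounds a D-optimal design solution) and carries no approximation guarantee of the e^{j + o(j)} kind; the function name and test matrix are illustrative.

```python
import numpy as np

def greedy_max_det_submatrix(M, j):
    """Greedy baseline for the largest-determinant principal j x j submatrix
    of a positive semidefinite matrix M: at each step, add the index that
    most increases the determinant of the selected principal submatrix.
    """
    n = M.shape[0]
    chosen = []
    for _ in range(j):
        best_i, best_det = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            idx = chosen + [i]
            d = np.linalg.det(M[np.ix_(idx, idx)])
            if d > best_det:
                best_i, best_det = i, d
        chosen.append(best_i)
    return chosen

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    V = rng.standard_normal((8, 30))              # rank-8 factor, so d = 8
    M = V.T @ V                                   # 30 x 30 PSD matrix of rank 8
    idx = greedy_max_det_submatrix(M, j=5)
    print("chosen indices:", idx)
    print("determinant:", np.linalg.det(M[np.ix_(idx, idx)]))
```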

    04401 Abstracts Collection -- Algorithms and Complexity for Continuous Problems

    From 26.09.04 to 01.10.04, the Dagstuhl Seminar "Algorithms and Complexity for Continuous Problems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Compressive Wave Computation

    This paper considers large-scale simulations of wave propagation phenomena. We argue that it is possible to accurately compute a wavefield by decomposing it onto a largely incomplete set of eigenfunctions of the Helmholtz operator, chosen at random, and that this provides a natural way of parallelizing wave simulations for memory-intensive applications. This paper shows that L1-Helmholtz recovery makes sense for wave computation, and identifies a regime in which it is provably effective: the one-dimensional wave equation with coefficients of small bounded variation. Under suitable assumptions we show that the number of eigenfunctions needed to evolve a sparse wavefield defined on N points, accurately with very high probability, is bounded by C log(N) log(log(N)), where C is related to the desired accuracy and can be made to grow at a much slower rate than N when the solution is sparse. The PDE estimates that underlie this result are new to the authors' knowledge and may be of independent mathematical interest; they include an L1 estimate for the wave equation, an estimate of extension of eigenfunctions, and a bound for eigenvalue gaps in Sturm-Liouville problems. Numerical examples are presented in one spatial dimension and show that as few as 10 percent of all eigenfunctions can suffice for accurate results. Finally, we argue that the compressive viewpoint suggests a competitive parallel algorithm for an adjoint-state inversion method in reflection seismology. Comment: 45 pages, 4 figures.
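    The sketch below illustrates only the L1-recovery ingredient in the simplest setting: with constant coefficients in one dimension the relevant eigenfunctions are discrete sine modes, so recovering a spatially sparse vector from its projections onto a random subset of those modes is an ordinary basis-pursuit problem, solved here as a linear program. The problem sizes, variable names, and use of SciPy's linprog are illustrative assumptions; this is not the paper's wave solver.

```python
import numpy as np
from scipy.optimize import linprog

# Toy illustration: the eigenfunctions of the discrete 1D Laplacian with
# Dirichlet boundary conditions are sine modes.  We project a spatially sparse
# vector onto a random subset of those modes and recover it by basis pursuit
# (min ||u||_1 subject to A u = y), written as a linear program.
N, m, k = 128, 40, 5                        # grid points, # random eigenmodes, sparsity
rng = np.random.default_rng(2)

p = np.arange(1, N + 1)
modes = rng.choice(N, size=m, replace=False) + 1
A = np.sqrt(2.0 / (N + 1)) * np.sin(np.pi * np.outer(modes, p) / (N + 1))

u_true = np.zeros(N)                        # spatially sparse snapshot
u_true[rng.choice(N, size=k, replace=False)] = rng.standard_normal(k)
y = A @ u_true                              # projections onto the random eigenmodes

# Basis pursuit as an LP over z = [u, t]: minimize sum(t) with -t <= u <= t, A u = y.
c = np.concatenate([np.zeros(N), np.ones(N)])
I = np.eye(N)
A_ub = np.block([[I, -I], [-I, -I]])
b_ub = np.zeros(2 * N)
A_eq = np.hstack([A, np.zeros((m, N))])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
              bounds=[(None, None)] * N + [(0, None)] * N, method="highs")
u_rec = res.x[:N]
print("relative recovery error:",
      np.linalg.norm(u_rec - u_true) / np.linalg.norm(u_true))
```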

    Finding Significant Fourier Coefficients: Clarifications, Simplifications, Applications and Limitations

    Ideas from Fourier analysis have been used in cryptography for the last three decades. Akavia, Goldwasser and Safra unified some of these ideas to give a complete algorithm that finds significant Fourier coefficients of functions on any finite abelian group. Their algorithm stimulated a lot of interest in the cryptography community, especially in the context of `bit security'. This manuscript attempts to be a friendly and comprehensive guide to the tools and results in this field. The intended readership is cryptographers who have heard about these tools and seek an understanding of their mechanics and their usefulness and limitations. A compact overview of the algorithm is presented with emphasis on the ideas behind it. We show how these ideas can be extended to a `modulus-switching' variant of the algorithm. We survey some applications of this algorithm, and explain that several results should be taken in the right context. In particular, we point out that some of the most important bit security problems are still open. Our original contributions include: a discussion of the limitations on the usefulness of these tools; an answer to an open question about the modular inversion hidden number problem.
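    As a minimal sketch of the elementary building block behind such algorithms, the code below estimates a single Fourier (Walsh) coefficient of a function on Z_2^n by random sampling; the AGS algorithm organizes such estimates so that all significant coefficients can be found without enumerating the whole group. The function names and the noisy-parity test function are illustrative assumptions, not part of the paper.

```python
import numpy as np

def estimate_walsh_coefficient(f, a, n, samples=20000, rng=None):
    """Estimate the Fourier (Walsh) coefficient
        f_hat(a) = E_x[ f(x) * (-1)^{<a, x>} ]
    of f : {0,1}^n -> R by averaging over uniformly random sample points.
    """
    rng = rng or np.random.default_rng()
    X = rng.integers(0, 2, size=(samples, n))      # uniform points of Z_2^n
    chars = (-1.0) ** ((X @ a) % 2)                # character values (-1)^{<a, x>}
    vals = np.array([f(x) for x in X])
    return float(np.mean(vals * chars))

if __name__ == "__main__":
    n = 10
    secret = np.zeros(n, dtype=int)
    secret[[1, 4]] = 1
    noise_rng = np.random.default_rng(3)

    def f(x):
        # A noisy parity: agrees with the character indexed by `secret` 90% of
        # the time, so its Fourier coefficient at `secret` is about 0.8.
        sign = 1.0 if noise_rng.random() < 0.9 else -1.0
        return sign * (-1.0) ** (int(x @ secret) % 2)

    print("estimated coefficient:", estimate_walsh_coefficient(f, secret, n))
```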

    Theoretical and Experimental Analysis of a Randomized Algorithm for Sparse Fourier Transform Analysis

    We analyze a sublinear RAlSFA (Randomized Algorithm for Sparse Fourier Analysis) that finds a near-optimal B-term sparse representation R for a given discrete signal S of length N, in time and space poly(B, log(N)), following the approach given in [GGIMS]. Its time cost poly(log(N)) should be compared with the superlinear O(N log N) time requirement of the Fast Fourier Transform (FFT). A straightforward implementation of the RAlSFA, as presented in the theoretical paper [GGIMS], turns out to be very slow in practice. Our main result is a greatly improved and practical RAlSFA. We introduce several new ideas and techniques that speed up the algorithm. Both rigorous and heuristic arguments for parameter choices are presented. Our RAlSFA constructs, with probability at least 1 - delta, a near-optimal B-term representation R in time poly(B) log(N) log(1/delta) / epsilon^2 log(M) such that ||S - R||^2 <= (1 + epsilon) ||S - R_opt||^2. Furthermore, this RAlSFA implementation already beats FFTW for not unreasonably large N. We extend the algorithm to higher-dimensional cases, both theoretically and numerically. The crossover point lies at N = 70000 in one dimension, and at N = 900 for data on an N x N grid in two dimensions, for small-B signals in the presence of noise. Comment: 21 pages, 8 figures, submitted to Journal of Computational Physics.
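    For orientation, the sketch below computes the benchmark that RAlSFA approximates, the slow way: by Parseval's identity the best B-term representation R_opt keeps the B largest-magnitude DFT coefficients, and RAlSFA's goal is to come within a (1 + epsilon) factor of it in time poly(B, log N) without reading all N samples. The full FFT here is used only to define that benchmark; the signal and parameter choices are illustrative.

```python
import numpy as np

def best_B_term(S, B):
    """Best B-term Fourier representation of S, computed the slow way:
    run a full FFT and keep the B largest-magnitude coefficients, which by
    Parseval's identity minimizes ||S - R||^2 over all B-term representations.
    """
    S_hat = np.fft.fft(S)
    top = np.argsort(np.abs(S_hat))[-B:]            # indices of the B largest coefficients
    R_hat = np.zeros_like(S_hat)
    R_hat[top] = S_hat[top]
    return np.fft.ifft(R_hat)

if __name__ == "__main__":
    N, B = 4096, 4
    rng = np.random.default_rng(4)
    t = np.arange(N)
    freqs = rng.choice(N, size=B, replace=False)
    S = sum(np.exp(2j * np.pi * f * t / N) for f in freqs)                    # B-sparse signal ...
    S = S + 0.05 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))     # ... plus noise
    R = best_B_term(S, B)
    print("relative error of the B-term benchmark:",
          np.linalg.norm(S - R) / np.linalg.norm(S))
```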

    Sampling-based proofs of almost-periodicity results and algorithmic applications

    We give new combinatorial proofs of known almost-periodicity results for sumsets of sets with small doubling in the spirit of Croot and Sisask, whose almost-periodicity lemma has had far-reaching implications in additive combinatorics. We provide an alternative (and L^p-norm-free) point of view, which allows proofs to be easily converted into probabilistic algorithms that decide membership in almost-periodic sumsets of dense subsets of F_2^n. As an application, we give a new algorithmic version of the quasipolynomial Bogolyubov-Ruzsa lemma recently proved by Sanders. Together with the results of the last two authors, this implies an algorithmic version of the quadratic Goldreich-Levin theorem in which the number of terms in the quadratic Fourier decomposition of a given function is quasipolynomial in the error parameter, compared with the exponential dependence previously proved by the authors. It also improves the running time of the algorithm to have quasipolynomial dependence instead of an exponential one. We also give an application to the problem of finding large subspaces in sumsets of dense sets. Green showed that the sumset of a dense subset of F_2^n contains a large subspace. Using Fourier-analytic methods, Sanders proved that such a subspace must have dimension bounded below by a constant times the density times n. We provide an alternative (and L^p-norm-free) proof of a comparable bound, which is analogous to a recent result of Croot, Laba and Sisask in the integers. Comment: 28 pages.
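    As a toy illustration of a sampling-based membership test (not the almost-periodicity machinery itself), the sketch below decides whether x lies in the sumset A + A of a dense subset A of F_2^n by estimating, over random y, how often both y and x + y land in A. The set sizes, the acceptance threshold, and the bitmask representation are illustrative assumptions.

```python
import numpy as np

def in_sumset_sampled(A, x, n, samples=5000, threshold=0.01, rng=None):
    """Sampling-based test for x in A + A, where A is a dense subset of F_2^n
    and elements are represented as n-bit integers (XOR is addition in F_2^n).

    x lies in A + A iff some y satisfies y in A and x + y in A; for dense A such
    witnesses are typically abundant, so we estimate their density over random y.
    One-sided error: an x in A + A with very few representations can be missed.
    """
    rng = rng or np.random.default_rng()
    ys = rng.integers(0, 2 ** n, size=samples)
    hit_rate = np.mean([(int(y) in A) and ((x ^ int(y)) in A) for y in ys])
    return hit_rate >= threshold

if __name__ == "__main__":
    n = 16
    rng = np.random.default_rng(5)
    # A dense random subset of F_2^n with density 1/8.
    A = set(int(v) for v in rng.choice(2 ** n, size=2 ** n // 8, replace=False))
    x = int(rng.integers(0, 2 ** n))
    print("x in A + A (sampled)?", in_sumset_sampled(A, x, n, rng=rng))
```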