
    Multireference Alignment using Semidefinite Programming

    The multireference alignment problem consists of estimating a signal from multiple noisy shifted observations. Inspired by existing Unique-Games approximation algorithms, we provide a semidefinite program (SDP) based relaxation which approximates the maximum likelihood estimator (MLE) for the multireference alignment problem. Although we show that the MLE problem is Unique-Games hard to approximate within any constant, we observe that our poly-time approximation algorithm for the MLE appears to perform quite well in typical instances, outperforming existing methods. In an attempt to explain this behavior we provide stability guarantees for our SDP under a random noise model on the observations. This case is more challenging to analyze than traditional semi-random instances of Unique-Games: the noise model is on vertices of a graph and translates into dependent noise on the edges. Interestingly, we show that if certain positivity constraints in the SDP are dropped, its solution becomes equivalent to performing phase correlation, a popular method used for pairwise alignment in imaging applications. Finally, we show how symmetry reduction techniques from matrix representation theory can simplify the analysis and computation of the SDP, greatly decreasing its computational cost.
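
    The abstract notes that dropping the SDP's positivity constraints recovers phase correlation for pairwise alignment. As a point of reference, here is a minimal sketch of pairwise phase correlation for cyclically shifted 1D signals; it is not the paper's SDP relaxation, and the signal length, noise level, and function name are illustrative assumptions.

```python
import numpy as np

def phase_correlation_shift(x, y):
    """Estimate the cyclic shift s such that y is approximately np.roll(x, s)."""
    X, Y = np.fft.fft(x), np.fft.fft(y)
    cross = np.conj(X) * Y
    # Keep only the phase: the inverse FFT then peaks at the shift.
    cross /= np.maximum(np.abs(cross), 1e-12)
    return int(np.argmax(np.fft.ifft(cross).real))

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
y = np.roll(x, 17) + 0.1 * rng.standard_normal(64)  # noisy, cyclically shifted copy
print(phase_correlation_shift(x, y))  # expect 17 at this noise level
```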

    Singular random matrix decompositions: Jacobians.

    For a singular random matrix Y, we find the Jacobians associated with the following decompositions: QR, Polar, Singular Value (SVD), L'U, L'DM and modified QR (QDR). Similarly, we find the Jacobians of the following decompositions of the cross-product matrix S = Y'Y: Spectral, Cholesky, L'DL and symmetric non-negative definite square root.
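
    The Jacobian calculations themselves are the paper's contribution and are not reproduced here; the sketch below merely illustrates, for a deliberately rank-deficient Y, some of the decompositions the abstract names (QR, polar, SVD, and the spectral decomposition and symmetric non-negative definite square root of S = Y'Y), using standard NumPy/SciPy routines with illustrative dimensions.

```python
import numpy as np
from scipy.linalg import qr, polar, svd, eigh

rng = np.random.default_rng(1)
# A deliberately rank-deficient ("singular") 6 x 4 matrix Y of rank 2.
Y = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

Q, R = qr(Y, mode='economic')            # QR decomposition
U, P = polar(Y)                          # polar decomposition, Y = U P
Uy, s, Vt = svd(Y, full_matrices=False)  # singular value decomposition

S = Y.T @ Y                              # cross-product matrix S = Y'Y (rank 2)
w, V = eigh(S)                           # spectral decomposition S = V diag(w) V'
w = np.clip(w, 0.0, None)                # round-off can leave tiny negative eigenvalues
S_half = (V * np.sqrt(w)) @ V.T          # symmetric non-negative definite square root
print(np.allclose(S_half @ S_half, S))   # True up to round-off
```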

    The Lyapunov matrix equation. Matrix analysis from a computational perspective

    Decay properties of the solution X to the Lyapunov matrix equation AX + XA^T = D are investigated. Their exploitation in understanding the matrix properties of the equation, and in developing new numerical solution strategies when D is not low rank but possibly sparse, is also briefly discussed. Comment: This work is a contribution to the Seminar series "Topics in Mathematics" of the PhD Program of the Mathematics Department, Universit\`a di Bologna, Italy.
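
    As a small illustration of the setting, the sketch below solves AX + XA^T = D with a dense solver for a banded, stable A and a diagonal (sparse, full-rank) D, the regime in which the decay of the entries of X becomes relevant; the specific matrices and sizes are illustrative assumptions, not taken from the text.

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(2)
n = 200
# A stable, banded (tridiagonal) A and a sparse but full-rank right-hand side D.
A = -2.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
D = np.diag(rng.standard_normal(n))

X = solve_continuous_lyapunov(A, D)  # dense solver for A X + X A^T = D

# With banded, stable A the entries of X typically decay away from the
# sparsity pattern of D, which is the phenomenon the text studies.
print(abs(X[0, 0]), abs(X[0, n - 1]))
```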

    Spectral Properties of Schr\"odinger Operators Arising in the Study of Quasicrystals

    We survey results that have been obtained for self-adjoint operators, and especially Schr\"odinger operators, associated with mathematical models of quasicrystals. After presenting general results that hold in arbitrary dimensions, we focus our attention on the one-dimensional case, and in particular on several key examples. The most prominent of these is the Fibonacci Hamiltonian, for which much is known by now and to which an entire section is devoted here. Other examples that are discussed in detail are given by the more general class of Schr\"odinger operators with Sturmian potentials. We put some emphasis on the methods that have been introduced quite recently in the study of these operators, many of them coming from hyperbolic dynamics. We conclude with a multitude of numerical calculations that illustrate the validity of the known rigorous results and suggest conjectures for further exploration. Comment: 56 pages.
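
    For readers who want to reproduce the flavor of the numerical calculations mentioned at the end, here is a minimal sketch of a finite truncation of the discrete Schr\"odinger operator with the Fibonacci potential; the coupling constant and system size are illustrative assumptions, and the rigorous results surveyed concern the operator on the whole line, which such a truncation only approximates.

```python
import numpy as np

def fibonacci_potential(n, lam):
    """Fibonacci potential: v(k) = lam if (k * alpha) mod 1 lies in [1 - alpha, 1)."""
    alpha = (np.sqrt(5.0) - 1.0) / 2.0
    k = np.arange(1, n + 1)
    return lam * (((k * alpha) % 1.0) >= 1.0 - alpha)

def fibonacci_hamiltonian(n, lam):
    """Finite truncation of the discrete Schroedinger operator with Fibonacci potential."""
    off = np.ones(n - 1)
    return np.diag(fibonacci_potential(n, lam)) + np.diag(off, 1) + np.diag(off, -1)

# Spectrum of a finite truncation; gaps in the spectrum open up as lam grows.
eigs = np.linalg.eigvalsh(fibonacci_hamiltonian(987, lam=1.0))
print(eigs.min(), eigs.max())
```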

    Kernel Analog Forecasting: Multiscale Test Problems

    Data-driven prediction is becoming increasingly widespread as the volume of data available grows and as algorithmic development matches this growth. The nature of the predictions made, and the manner in which they should be interpreted, depends crucially on the extent to which the variables chosen for prediction are Markovian, or approximately Markovian. Multiscale systems provide a framework in which this issue can be analyzed. In this work kernel analog forecasting methods are studied from the perspective of data generated by multiscale dynamical systems. The problems chosen exhibit a variety of different Markovian closures, using both averaging and homogenization; furthermore, settings where scale separation is not present and the predicted variables are non-Markovian are also considered. The studies provide guidance for the interpretation of data-driven prediction methods when used in practice. Comment: 30 pages, 14 figures; clarified several ambiguous parts, added references, and a comparison with Lorenz' original method (Sec. 4.5).
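
    As context for what "analog forecasting" means here, below is a minimal Nadaraya--Watson-style sketch: the forecast at a query state is a kernel-weighted average of the historical successors of similar past states. The kernels actually studied in this line of work are more refined and data-adapted, so the bandwidth, data, and lead time below are purely illustrative assumptions.

```python
import numpy as np

def kernel_analog_forecast(train_x, train_y, query, bandwidth):
    """Forecast as a kernel-weighted average of the successors of past analogs.

    train_x: (m, d) observed states, train_y: (m, p) the states tau steps later,
    query: (d,) the current state; a Gaussian kernel weights analogs by similarity.
    """
    d2 = np.sum((train_x - query) ** 2, axis=1)
    w = np.exp(-d2 / bandwidth)
    w /= w.sum()
    return w @ train_y

# Illustrative data: a scalar time series, forecasting tau = 5 steps ahead.
rng = np.random.default_rng(3)
series = 0.1 * np.cumsum(rng.standard_normal(1000))
tau = 5
X, Y = series[:-tau, None], series[tau:, None]
print(kernel_analog_forecast(X[:900], Y[:900], X[950], bandwidth=0.5))
```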

    The MM Alternative to EM

    The EM algorithm is a special case of a more general algorithm called the MM algorithm. Specific MM algorithms often have nothing to do with missing data. The first M step of an MM algorithm creates a surrogate function that is optimized in the second M step. In minimization, MM stands for majorize--minimize; in maximization, it stands for minorize--maximize. This two-step process always drives the objective function in the right direction. Construction of MM algorithms relies on recognizing and manipulating inequalities rather than calculating conditional expectations. This survey walks the reader through the construction of several specific MM algorithms. The potential of the MM algorithm in solving high-dimensional optimization and estimation problems is its most attractive feature. Our applications to random graph models, discriminant analysis and image restoration showcase this ability. Comment: Published at http://dx.doi.org/10.1214/08-STS264 in the Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
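
    To make the majorize--minimize two-step concrete, here is a minimal sketch using a classical example that is not among the paper's applications: the geometric median. The surrogate below majorizes the objective via a simple inequality, and minimizing it yields the familiar Weiszfeld update; the function name, iteration count, and tolerance are illustrative.

```python
import numpy as np

def geometric_median_mm(points, iters=100, eps=1e-9):
    """MM iteration for the geometric median of the rows of `points`.

    The objective f(x) = sum_i ||x - y_i|| is majorized at the current iterate x_k
    by the quadratic surrogate sum_i ||x - y_i||**2 / (2 * ||x_k - y_i||) + const,
    which touches f at x_k; minimizing the surrogate gives the update below.
    """
    x = points.mean(axis=0)
    for _ in range(iters):
        w = 1.0 / np.maximum(np.linalg.norm(points - x, axis=1), eps)
        x = (w[:, None] * points).sum(axis=0) / w.sum()
    return x

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
print(geometric_median_mm(pts))
```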

    Algorithmic and Statistical Perspectives on Large-Scale Data Analysis

    In recent years, ideas from statistics and scientific computing have begun to interact in increasingly sophisticated and fruitful ways with ideas from computer science and the theory of algorithms to aid in the development of improved worst-case algorithms that are useful for large-scale scientific and Internet data analysis problems. In this chapter, I will describe two recent examples---one having to do with selecting good columns or features from a (DNA Single Nucleotide Polymorphism) data matrix, and the other having to do with selecting good clusters or communities from a data graph (representing a social or information network)---that drew on ideas from both areas and that may serve as a model for exploiting complementary algorithmic and statistical perspectives in order to solve applied large-scale data analysis problems. Comment: 33 pages. To appear in Uwe Naumann and Olaf Schenk, editors, "Combinatorial Scientific Computing," Chapman and Hall/CRC Press, 2012.
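
    The column-selection example alluded to is commonly carried out with statistical leverage scores; the sketch below samples columns of a data matrix with probability proportional to their rank-k leverage scores. The rank, sample size, and matrix dimensions are illustrative assumptions, and the routine is a generic sketch rather than the chapter's exact algorithm.

```python
import numpy as np

def leverage_score_column_sample(A, rank, n_cols, rng):
    """Sample columns of A with probability proportional to rank-k leverage scores."""
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    lev = np.sum(Vt[:rank] ** 2, axis=0)   # leverage score of each column w.r.t. top-k subspace
    p = lev / lev.sum()
    idx = rng.choice(A.shape[1], size=n_cols, replace=False, p=p)
    return idx, A[:, idx]

rng = np.random.default_rng(4)
# Illustrative "samples x features" matrix with approximate low-rank structure.
A = rng.standard_normal((200, 50)) @ rng.standard_normal((50, 1000))
cols, C = leverage_score_column_sample(A, rank=10, n_cols=30, rng=rng)
print(sorted(cols)[:5], C.shape)
```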