985 research outputs found

    Convex recovery from interferometric measurements

    This note formulates a deterministic recovery result for vectors $x$ from quadratic measurements of the form $(Ax)_i \overline{(Ax)_j}$ for some left-invertible $A$. Recovery is exact, or stable in the noisy case, when the couples $(i,j)$ are chosen as edges of a well-connected graph. One possible way of obtaining the solution is as a feasible point of a simple semidefinite program. Furthermore, we show how the proportionality constant in the error estimate depends on the spectral gap of a data-weighted graph Laplacian. Such quadratic measurements have found applications in phase retrieval, angular synchronization, and more recently interferometric waveform inversion.
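    As a rough illustration of the kind of semidefinite feasibility problem the abstract refers to, the hedged Python sketch below (using cvxpy; the function name, toy data, and edge set are ours, not the paper's) matches a Hermitian PSD matrix to the edge measurements $(Ax)_i \overline{(Ax)_j}$ and reads $y = Ax$ off its top eigenvector, recovering $x$ up to a global phase.

```python
# A minimal sketch, assuming noiseless measurements on a connected edge set
# that includes the diagonal; this is an illustration, not the paper's exact program.
import numpy as np
import cvxpy as cp

def recover_interferometric(A, edges, B):
    """A: (m, n) left-invertible matrix; edges: list of (i, j) pairs;
    B: dict mapping (i, j) -> measured (Ax)_i * conj((Ax)_j)."""
    m = A.shape[0]
    M = cp.Variable((m, m), hermitian=True)
    constraints = [M >> 0]
    constraints += [M[i, j] == B[(i, j)] for (i, j) in edges]
    # Feasibility problem: any PSD matrix matching the data will do here.
    cp.Problem(cp.Minimize(0), constraints).solve()
    # Top eigenvector of M recovers y = Ax up to a global phase.
    w, V = np.linalg.eigh(M.value)
    y = np.sqrt(max(w[-1], 0.0)) * V[:, -1]
    return np.linalg.pinv(A) @ y  # undo A (left-invertible by assumption)

# Toy usage: measurements on a cycle graph plus the diagonal.
rng = np.random.default_rng(0)
m, n = 8, 3
A = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)
y = A @ x
edges = [(i, i) for i in range(m)] + [(i, (i + 1) % m) for i in range(m)]
B = {(i, j): y[i] * np.conj(y[j]) for (i, j) in edges}
x_hat = recover_interferometric(A, edges, B)  # equals x up to a global phase
```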

    Fast Graph Laplacian regularized kernel learning via semidefinite-quadratic-linear programming.

    Wu, Xiaoming. Thesis (M.Phil.)--Chinese University of Hong Kong, 2011. Includes bibliographical references (p. 30-34). Abstracts in English and Chinese.
    Abstract --- p.i
    Acknowledgement --- p.iv
    Chapter 1 --- Introduction --- p.1
    Chapter 2 --- Preliminaries --- p.4
    Chapter 2.1 --- Kernel Learning Theory --- p.4
    Chapter 2.1.1 --- Positive Semidefinite Kernel --- p.4
    Chapter 2.1.2 --- The Reproducing Kernel Map --- p.6
    Chapter 2.1.3 --- Kernel Tricks --- p.7
    Chapter 2.2 --- Spectral Graph Theory --- p.8
    Chapter 2.2.1 --- Graph Laplacian --- p.8
    Chapter 2.2.2 --- Eigenvectors of Graph Laplacian --- p.9
    Chapter 2.3 --- Convex Optimization --- p.10
    Chapter 2.3.1 --- From Linear to Conic Programming --- p.11
    Chapter 2.3.2 --- Second-Order Cone Programming --- p.12
    Chapter 2.3.3 --- Semidefinite Programming --- p.12
    Chapter 3 --- Fast Graph Laplacian Regularized Kernel Learning --- p.14
    Chapter 3.1 --- The Problems --- p.14
    Chapter 3.1.1 --- MVU --- p.16
    Chapter 3.1.2 --- PCP --- p.17
    Chapter 3.1.3 --- Low-Rank Approximation: from SDP to QSDP --- p.18
    Chapter 3.2 --- Previous Approach: from QSDP to SDP --- p.20
    Chapter 3.3 --- Our Formulation: from QSDP to SQLP --- p.21
    Chapter 3.4 --- Experimental Results --- p.23
    Chapter 3.4.1 --- The Results --- p.25
    Chapter 4 --- Conclusion --- p.28
    Bibliography --- p.3
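    The graph Laplacian and its eigenvectors listed in Chapter 2.2 can be sketched in a few lines; the construction below (a symmetrized kNN graph with Gaussian weights) is a common default and an assumption of ours, not taken from the thesis.

```python
# Minimal sketch: unnormalized graph Laplacian L = D - W of a kNN graph,
# and its eigendecomposition (the smooth eigenvectors come first).
import numpy as np

def graph_laplacian(X, k=5, sigma=1.0):
    """Build L = D - W on the rows of X using a symmetrized kNN graph."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.zeros_like(d2)
    for i in range(X.shape[0]):
        nbrs = np.argsort(d2[i])[1:k + 1]           # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / (2 * sigma ** 2))
    W = np.maximum(W, W.T)                          # symmetrize the graph
    return np.diag(W.sum(axis=1)) - W

L = graph_laplacian(np.random.default_rng(0).standard_normal((50, 3)))
evals, evecs = np.linalg.eigh(L)                    # evals[0] is 0 for a connected graph
```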

    Distributed Maximum Likelihood Sensor Network Localization

    We propose a class of convex relaxations to solve the sensor network localization problem, based on a maximum likelihood (ML) formulation. This class, as well as the tightness of the relaxations, depends on the noise probability density function (PDF) of the collected measurements. We derive a computationally efficient edge-based version of this ML convex relaxation class and design a distributed algorithm that enables the sensor nodes to solve these edge-based convex programs locally by communicating only with their close neighbors. This algorithm relies on the alternating direction method of multipliers (ADMM); it converges to the centralized solution, can run asynchronously, and is resilient to computation errors. Finally, we compare our proposed distributed scheme with other available methods, both analytically and numerically, and we argue for the added value of ADMM, especially for large-scale networks.
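    To make the ADMM mechanics concrete, here is a hedged, minimal consensus example: each node holds a private scalar measurement and all nodes must agree on a common estimate. This is the textbook global-consensus form (x-, z-, and dual-updates) shown only for illustration; the paper's edge-based relaxation and fully distributed, asynchronous protocol differ.

```python
# Toy consensus ADMM: minimize sum_i (1/2)(x_i - a_i)^2 subject to x_i = z.
import numpy as np

def consensus_admm(a, rho=1.0, iters=100):
    n = len(a)
    x = np.zeros(n)      # local estimates, one per node
    u = np.zeros(n)      # scaled dual variables
    z = 0.0              # consensus variable
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)   # local prox step at each node
        z = np.mean(x + u)                      # averaging / consensus step
        u = u + x - z                           # dual ascent
    return z

a = np.array([1.0, 2.0, 6.0, 3.0])
print(consensus_admm(a))   # converges to the average of a, i.e. 3.0
```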

    Approximate Computation and Implicit Regularization for Very Large-scale Data Analysis

    Database theory and database practice are typically the domain of computer scientists who adopt what may be termed an algorithmic perspective on their data. This perspective is very different from the more statistical perspective adopted by statisticians, scientific computing researchers, machine learning researchers, and others who work on what may be broadly termed statistical data analysis. In this article, I will address fundamental aspects of this algorithmic-statistical disconnect, with an eye to bridging the gap between these two very different approaches. A concept that lies at the heart of this disconnect is that of statistical regularization, a notion that has to do with how robust the output of an algorithm is to the noise properties of the input data. Although it is nearly completely absent from computer science, which historically has taken the input data as given and modeled algorithms discretely, regularization in one form or another is central to nearly every application domain that applies algorithms to noisy data. By using several case studies, I will illustrate, both theoretically and empirically, the nonobvious fact that approximate computation, in and of itself, can implicitly lead to statistical regularization. This and other recent work suggests that, by exploiting in a more principled way the statistical properties implicit in worst-case algorithms, one can in many cases satisfy the bicriteria of having algorithms that are scalable to very large-scale databases and that also have good inferential or predictive properties.
    Comment: To appear in the Proceedings of the 2012 ACM Symposium on Principles of Database Systems (PODS 2012).
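    As a hedged illustration of the general point (this is not one of the article's case studies), the sketch below compares an "approximate" solver, gradient descent on least squares stopped early, against an explicit ridge estimator: the early-stopped iterate sits closer to the ridge solution than the exact least-squares fit does, i.e. the approximation itself acts as regularization. The toy data and names are invented.

```python
# Minimal sketch: early stopping as implicit regularization on least squares.
import numpy as np

rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:5] = 1.0
y = X @ beta_true + 0.5 * rng.standard_normal(n)

def early_stopped_ls(X, y, steps, lr=1e-3):
    """Plain gradient descent on ||y - Xb||^2, truncated after `steps` iterations."""
    b = np.zeros(X.shape[1])
    for _ in range(steps):
        b -= lr * X.T @ (X @ b - y)
    return b

ridge = np.linalg.solve(X.T @ X + 10.0 * np.eye(p), X.T @ y)   # explicit regularization
approx = early_stopped_ls(X, y, steps=200)                     # approximate computation
exact = np.linalg.lstsq(X, y, rcond=None)[0]                   # exact least squares

# The truncated (approximate) solution is shrunk toward the ridge solution,
# while the exact fit is not: approximation behaves like regularization.
print(np.linalg.norm(approx - ridge), np.linalg.norm(exact - ridge))
```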