    Omnidirectional Bats, Point-to-Plane Distances, and the Price of Uniqueness

    We study simultaneous localization and mapping with a device that uses reflections to measure its distance from walls. Such a device can be realized acoustically with a synchronized collocated source and receiver; it behaves like a bat with no capacity for directional hearing or vocalizing. In this paper we generalize our previous work in 2D and show that the 3D case is not a simple extension but rather a fundamentally different inverse problem. While the 2D problem generically has a unique solution, in 3D uniqueness is always absent in rooms with fewer than nine walls. In addition to a complete characterization of the ambiguities that arise from this non-uniqueness, we propose a solution that is robust to inexact measurements, similar in spirit to analogous results for Euclidean Distance Matrices. Our theoretical results have important consequences for the design of collocated range-only SLAM systems, and we support them with an array of computer experiments.
    Comment: 5 pages, 8 figures, submitted to ICASSP 201
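    As a hedged illustration of the forward measurement model only (the paper's inverse problem runs the other way, recovering both the trajectory and the walls), a wall can be modeled as a plane $\{p : \langle n_i, p\rangle = b_i\}$ with unit normal $n_i$, so a collocated device at position $x$ measures the point-to-plane distances $|\langle n_i, x\rangle - b_i|$. The numpy sketch and the toy room below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def point_to_plane_distances(x, normals, offsets):
    """First-echo distances from a collocated source/receiver at x.

    Wall i is the plane {p : <normals[i], p> = offsets[i]} with a unit
    normal; the measured distance is |<normals[i], x> - offsets[i]|.
    """
    return np.abs(normals @ x - offsets)

# Hypothetical toy room: a 4 m x 5 m x 3 m axis-aligned box (six walls).
normals = np.array([
    [ 1.0, 0.0, 0.0], [-1.0, 0.0, 0.0],
    [ 0.0, 1.0, 0.0], [ 0.0,-1.0, 0.0],
    [ 0.0, 0.0, 1.0], [ 0.0, 0.0,-1.0],
])
offsets = np.array([4.0, 0.0, 5.0, 0.0, 3.0, 0.0])

x = np.array([1.0, 2.0, 1.5])  # device position
print(point_to_plane_distances(x, normals, offsets))
# -> [3.  1.  3.  2.  1.5 1.5]
```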

    Scalable and Robust Community Detection with Randomized Sketching

    This paper explores and analyzes the unsupervised clustering of large partially observed graphs. We propose a scalable and provable randomized framework for clustering graphs generated from the stochastic block model. The clustering is first applied to a sub-matrix of the graph's adjacency matrix associated with a reduced graph sketch constructed using random sampling. Then, the clusters of the full graph are inferred from the clusters extracted from the sketch using a correlation-based retrieval step. Uniform random node sampling is shown to reduce the computational complexity relative to clustering the full graph when the cluster sizes are balanced. A new random degree-based node sampling algorithm is presented which significantly improves the performance of the clustering algorithm even when clusters are unbalanced. This algorithm improves the phase transitions for matrix-decomposition-based clustering with regard to computational complexity and minimum cluster size, which are shown to be nearly dimension-free in the low inter-cluster connectivity regime. A third sampling technique is shown to improve balance by randomly sampling nodes based on spatial distribution. We provide analysis and numerical results using a convex clustering algorithm based on matrix completion.
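    The two-stage pipeline can be sketched in a few lines of Python. The sketch below is an illustration, not the authors' implementation: it substitutes plain spectral clustering for the paper's matrix-completion-based clustering of the sampled sub-matrix and uses uniform node sampling only; all function names and parameters are assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def sketch_cluster(A, m, k, seed=0):
    """Cluster a graph from a random node sketch (illustrative only).

    A : (n, n) symmetric 0/1 adjacency matrix
    m : number of uniformly sampled nodes in the sketch
    k : number of clusters
    """
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    S = rng.choice(n, size=m, replace=False)  # uniform node sample

    # Stage 1: cluster the m x m sketch; plain spectral clustering
    # stands in for the paper's matrix-completion-based step.
    _, vecs = np.linalg.eigh(A[np.ix_(S, S)].astype(float))
    labels_S = KMeans(n_clusters=k, n_init=10).fit_predict(vecs[:, -k:])

    # Stage 2: correlation-based retrieval: attach every node to the
    # cluster whose sampled members it is most densely connected to.
    labels = np.empty(n, dtype=int)
    for i in range(n):
        scores = [A[i, S[labels_S == c]].mean() for c in range(k)]
        labels[i] = int(np.argmax(scores))
    return labels

# Toy stochastic block model: two balanced blocks, p_in=0.3, p_out=0.05.
rng = np.random.default_rng(1)
n, k = 400, 2
z = np.repeat([0, 1], n // 2)
P = np.where(z[:, None] == z[None, :], 0.3, 0.05)
A = np.triu((rng.random((n, n)) < P).astype(int), 1)
A = A + A.T
labels = sketch_cluster(A, m=100, k=k)
acc = max(np.mean(labels == z), np.mean(labels == 1 - z))  # up to relabeling
print(f"agreement: {acc:.2f}")
```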

    The achievable performance of convex demixing

    Demixing is the problem of identifying multiple structured signals from a superimposed, undersampled, and noisy observation. This work analyzes a general framework, based on convex optimization, for solving demixing problems. When the constituent signals follow a generic incoherence model, this analysis leads to precise recovery guarantees. These results admit an attractive interpretation: each signal possesses an intrinsic degrees-of-freedom parameter, and demixing can succeed if and only if the dimension of the observation exceeds the total degrees of freedom of the constituent signals.
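    To make the setting concrete, here is a minimal cvxpy sketch (an illustrative assumption, not the paper's code) of one demixing instance: a signal sparse in the standard basis is superimposed with a signal sparse in a random orthonormal basis, a simple proxy for the generic incoherence model, and the pair is recovered by minimizing the sum of their $\ell_1$ norms subject to consistency with the observation.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, s = 100, 5

# Constituent 1: s-sparse in the identity basis.
x0 = np.zeros(n)
x0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

# Constituent 2: s-sparse in a random orthonormal basis Q
# (a proxy for the generic incoherence model).
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
y0 = np.zeros(n)
y0[rng.choice(n, s, replace=False)] = rng.standard_normal(s)

z = x0 + Q @ y0  # superimposed observation

# Convex demixing: find the sparsest consistent pair via l1 + l1.
x, y = cp.Variable(n), cp.Variable(n)
prob = cp.Problem(cp.Minimize(cp.norm1(x) + cp.norm1(y)),
                  [x + Q @ y == z])
prob.solve()
print("error in x:", np.linalg.norm(x.value - x0))
print("error in y:", np.linalg.norm(y.value - y0))
```

    In the language of the abstract, each $s$-sparse constituent contributes roughly $s$ degrees of freedom, and the program should succeed once the ambient dimension $n$ comfortably exceeds their total.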

    Information Recovery from Pairwise Measurements

    A variety of information processing tasks in practice involve recovering $n$ objects from single-shot graph-based measurements, particularly those taken over the edges of some measurement graph $\mathcal{G}$. This paper concerns the situation where each object takes a value from a group of $M$ different values, and where one is interested in recovering all these values based on observations of certain pairwise relations over $\mathcal{G}$. The imperfection of measurements presents two major challenges for information recovery: 1) inaccuracy: a (dominant) portion $1-p$ of measurements are corrupted; 2) incompleteness: a significant fraction of pairs are unobservable, i.e. $\mathcal{G}$ can be highly sparse. Under a natural random outlier model, we characterize the minimax recovery rate, that is, the critical threshold of the non-corruption rate $p$ below which exact information recovery is infeasible. This accommodates a very general class of pairwise relations. For various homogeneous random graph models (e.g. Erdős–Rényi random graphs, random geometric graphs, small-world graphs), the minimax recovery rate depends almost exclusively on the edge sparsity of the measurement graph $\mathcal{G}$, irrespective of other graphical metrics. This fundamental limit decays with the group size $M$ at a square-root rate before entering a connectivity-limited regime. Under the Erdős–Rényi random graph model, a tractable combinatorial algorithm is proposed to approach the limit for large $M$ ($M = n^{\Omega(1)}$), while order-optimal recovery is enabled by semidefinite programs in the small-$M$ regime. The extended (and most updated) version of this work can be found at http://arxiv.org/abs/1504.01369.
    Comment: This version is no longer updated -- please find the latest version at arXiv:1504.01369.
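    As a hedged illustration of the setup (not the paper's algorithm), the Python sketch below draws values in the cyclic group $\mathbb{Z}_M$, observes corrupted pairwise differences over an Erdős–Rényi measurement graph, and decodes naively by propagating differences along a BFS spanning tree after pinning one node. All names and parameters are illustrative; this naive decoder fails whenever a single tree edge is corrupted, which is exactly what motivates the robust combinatorial and semidefinite-programming procedures studied in the paper.

```python
import numpy as np
from collections import deque

rng = np.random.default_rng(0)
n, M = 200, 8               # number of objects, group size (Z_M)
p_edge, p_good = 0.1, 0.9   # edge density, non-corruption rate

x = rng.integers(M, size=n)  # hidden group elements

# Erdos-Renyi measurement graph: each observed edge carries
# x_i - x_j (mod M) with probability p_good, else a uniform outlier.
edges = {}
for i in range(n):
    for j in range(i + 1, n):
        if rng.random() < p_edge:
            good = rng.random() < p_good
            edges[(i, j)] = int((x[i] - x[j]) % M) if good else int(rng.integers(M))

# Naive decoder: pin x_0 (recovery is only defined up to a global
# shift) and propagate differences along a BFS spanning tree.
adj = {i: [] for i in range(n)}
for (i, j), y in edges.items():
    adj[i].append((j, -y))  # x_j = x_i - y  (mod M)
    adj[j].append((i, y))   # x_i = x_j + y  (mod M)

est = np.full(n, -1)        # unreached nodes stay -1
est[0] = x[0]
queue = deque([0])
while queue:
    i = queue.popleft()
    for j, d in adj[i]:
        if est[j] < 0:
            est[j] = (est[i] + d) % M
            queue.append(j)

print("fraction exactly recovered:", np.mean(est == x))
```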

    Convexity in source separation: Models, geometry, and algorithms

    Source separation or demixing is the process of extracting multiple components entangled within a signal. Contemporary signal processing presents a host of difficult source separation problems, from interference cancellation to background subtraction, blind deconvolution, and even dictionary learning. Despite the recent progress in each of these applications, advances in high-throughput sensor technology place demixing algorithms under pressure to accommodate extremely high-dimensional signals, separate an ever larger number of sources, and cope with more sophisticated signal and mixing models. These difficulties are exacerbated by the need for real-time action in automated decision-making systems. Recent advances in convex optimization provide a simple framework for efficiently solving numerous difficult demixing problems. This article provides an overview of the emerging field, explains the theory that governs the underlying procedures, and surveys algorithms that solve these problems efficiently. We aim to equip practitioners with a toolkit for constructing their own demixing algorithms that work, as well as concrete intuition for why they work.
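    As one worked instance of the ideas surveyed here (an illustrative sketch under assumed synthetic data, not code from the article), background subtraction can be cast as low-rank-plus-sparse demixing: split an observed matrix into a background penalized by the nuclear norm and a foreground penalized by the $\ell_1$ norm.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
m, n, r = 30, 30, 2

# Synthetic scene: a rank-r "background" plus a sparse "foreground".
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))
S0 = np.zeros((m, n))
mask = rng.random((m, n)) < 0.05
S0[mask] = 10 * rng.standard_normal(mask.sum())
Z = L0 + S0  # observed mixture

# Convex demixing: nuclear norm promotes low rank, l1 promotes sparsity.
L, S = cp.Variable((m, n)), cp.Variable((m, n))
lam = 1 / np.sqrt(max(m, n))  # standard robust-PCA weight
prob = cp.Problem(cp.Minimize(cp.normNuc(L) + lam * cp.sum(cp.abs(S))),
                  [L + S == Z])
prob.solve()
print("background error:", np.linalg.norm(L.value - L0))
print("foreground error:", np.linalg.norm(S.value - S0))
```

    The weight $1/\sqrt{\max(m, n)}$ is the standard robust-PCA choice; in practice one would tune it to the expected foreground sparsity.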