82 research outputs found

    Error analysis for denoising smooth modulo signals on a graph

    In many applications, we are given access to noisy modulo samples of a smooth function, with the goal of robustly unwrapping the samples, i.e., estimating the original samples of the function. In a recent work, Cucuringu and Tyagi proposed denoising the modulo samples by first representing them on the unit complex circle and then solving a smoothness-regularized least squares problem -- the smoothness measured w.r.t. the Laplacian of a suitable proximity graph $G$ -- on the product manifold of unit circles. This problem is a quadratically constrained quadratic program (QCQP) which is nonconvex, hence they proposed solving its sphere relaxation, leading to a trust-region subproblem (TRS). In terms of theoretical guarantees, $\ell_2$ error bounds were derived for (TRS). These bounds are, however, weak in general and do not really demonstrate the denoising performed by (TRS). In this work, we analyse (TRS) as well as an unconstrained relaxation of (QCQP). For both estimators we provide a refined analysis in the setting of Gaussian noise and derive noise regimes where they provably denoise the modulo observations w.r.t. the $\ell_2$ norm. The analysis is performed in a general setting where $G$ is any connected graph.
    Comment: 36 pages, 2 figures. Added Section 5 (Simulations) and made minor changes as per reviewers' comments.
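
    The unconstrained relaxation of (QCQP) admits a one-line closed form: lift the mod-1 samples onto the unit circle, apply graph-Laplacian smoothing, and read angles back off. A minimal sketch in Python -- the path graph, the test function $f(x) = 3x^2$, and the weight `lam` are illustrative choices, not the paper's experimental setup:

```python
import numpy as np

def denoise_modulo_graph(y_mod, L, lam):
    """Unconstrained-relaxation sketch: lift mod-1 samples onto the unit
    circle, smooth w.r.t. the graph Laplacian, and recover mod-1 estimates.

    y_mod : (n,) modulo-1 samples in [0, 1)
    L     : (n, n) Laplacian of the proximity graph G
    lam   : smoothness regularization weight
    """
    n = len(y_mod)
    h = np.exp(2j * np.pi * y_mod)                # circle representation
    # argmin_z ||z - h||^2 + lam * z^H L z  =>  (I + lam L) z = h
    z = np.linalg.solve(np.eye(n) + lam * L, h)
    z /= np.abs(z)                                # project back onto circles
    return (np.angle(z) / (2 * np.pi)) % 1.0      # denoised mod-1 estimates

# Path graph on n nodes: neighboring samples of a smooth f stay close mod 1.
n = 50
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
x = np.linspace(0, 1, n)
f_mod = (3 * x**2) % 1.0                          # clean mod-1 samples
est = denoise_modulo_graph(f_mod, L, lam=0.05)
```

    Note that the circle representation is what makes the smoothing legitimate: $f(x_i) \bmod 1$ jumps at wrap points, but $\exp(2\pi i\, y_i)$ varies smoothly there.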

    Tangent space estimation for smooth embeddings of Riemannian manifolds

    Numerous dimensionality reduction problems in data analysis involve the recovery of low-dimensional models or the learning of manifolds underlying sets of data. Many manifold learning methods require the estimation of the tangent space of the manifold at a point from locally available data samples. Local sampling conditions such as (i) the size of the neighborhood (sampling width) and (ii) the number of samples in the neighborhood (sampling density) affect the performance of learning algorithms. In this work, we propose a theoretical analysis of local sampling conditions for the estimation of the tangent space at a point P lying on an m-dimensional Riemannian manifold S in R^n. Assuming a smooth embedding of S in R^n, we estimate the tangent space T_P S by performing a Principal Component Analysis (PCA) on points sampled from the neighborhood of P on S. Our analysis explicitly takes into account the second-order properties of the manifold at P, namely the principal curvatures as well as the higher-order terms. We consider a random sampling framework and leverage recent results from random matrix theory to derive conditions on the sampling width and the local sampling density for an accurate estimation of tangent subspaces. We measure the estimation accuracy by the angle between the estimated tangent space and the true tangent space T_P S, and we give conditions for this angle to be bounded with high probability. In particular, we observe that the local sampling conditions are highly dependent on the correlation between the components in the second-order local approximation of the manifold. We finally provide numerical simulations to validate our theoretical findings.
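
    The local-PCA estimate and the principal-angle accuracy measure can be sketched on a toy surface; the paraboloid, the curvature values, and the sampling parameters below are illustrative assumptions rather than the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy manifold (illustrative): the surface z = (k1*x^2 + k2*y^2)/2 in R^3,
# with P at the origin, so the true tangent space T_P S is the xy-plane
# and k1, k2 play the role of the principal curvatures at P.
k1, k2 = 1.0, 2.0
n_samples, width = 200, 0.05             # sampling density and width

xy = width * (2 * rng.random((n_samples, 2)) - 1)   # neighborhood of P
pts = np.column_stack([xy, 0.5 * (k1 * xy[:, 0]**2 + k2 * xy[:, 1]**2)])

# Local PCA: center the neighborhood, keep the top-m right singular vectors.
centered = pts - pts.mean(axis=0)
_, _, Vt = np.linalg.svd(centered, full_matrices=False)
T_est = Vt[:2].T                         # estimated basis of T_P S (m = 2)

# Accuracy: sine of the largest principal angle to the true tangent plane.
T_true = np.eye(3)[:, :2]
P_true = T_true @ T_true.T
sin_angle = np.linalg.norm((np.eye(3) - P_true) @ T_est, 2)
```

    Shrinking `width` reduces the curvature-induced bias, while increasing `n_samples` controls the random fluctuation -- the trade-off the paper's sampling conditions quantify.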

    On denoising modulo 1 samples of a function

    Consider an unknown smooth function $f: [0,1] \rightarrow \mathbb{R}$, and say we are given $n$ noisy $\bmod\ 1$ samples of $f$, i.e., $y_i = (f(x_i) + \eta_i) \bmod 1$ for $x_i \in [0,1]$, where $\eta_i$ denotes noise. Given the samples $(x_i, y_i)_{i=1}^{n}$, our goal is to recover smooth, robust estimates of the clean samples $f(x_i) \bmod 1$. We formulate a natural approach for solving this problem which works with representations of mod 1 values over the unit circle. This amounts to solving a quadratically constrained quadratic program (QCQP) with nonconvex constraints involving points lying on the unit circle. Our proposed approach is based on solving its relaxation, which is a trust-region subproblem and hence solvable efficiently. We demonstrate its robustness to noise via extensive simulations on several synthetic examples, and provide a detailed theoretical analysis.
    Comment: 19 pages, 13 figures. To appear in AISTATS 2018. Corrected typos, and made minor stylistic changes throughout. Main results unchanged. Added Section I (and Figure 13) in appendix.
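
    The trust-region relaxation itself is also easy to sketch: keep the circle lifting, replace the $n$ per-coordinate constraints $|z_i| = 1$ by the single sphere constraint $\|z\|^2 = n$, and solve for the Lagrange multiplier by bisection. The path-graph Laplacian (natural once the $x_i$ are sorted), the test function, and the weight `lam` are illustrative assumptions, not the paper's exact experimental setup:

```python
import numpy as np

def trs_denoise_mod1(y, lam, iters=60):
    """Trust-region-relaxation sketch: lift mod-1 samples y onto the unit
    circle as h_i = exp(2*pi*i*y_i), then solve
        min_z ||z - h||^2 + lam * z^H L z   s.t.  ||z||^2 = n
    (L = path-graph Laplacian) by bisecting on the multiplier nu in the
    stationarity condition ((1 + nu) I + lam L) z = h."""
    n = len(y)
    h = np.exp(2j * np.pi * y)
    A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
    L = np.diag(A.sum(axis=1)) - A

    # ||z(nu)|| is strictly decreasing on nu > -1; the bracket below is
    # valid as long as lam * lambda_max(L) < 1.
    lo, hi = -1.0 + 1e-9, 1.0
    for _ in range(iters):
        nu = 0.5 * (lo + hi)
        z = np.linalg.solve((1 + nu) * np.eye(n) + lam * L, h)
        if np.linalg.norm(z) > np.sqrt(n):
            lo = nu
        else:
            hi = nu
    return (np.angle(z) / (2 * np.pi)) % 1.0

x = np.linspace(0, 1, 60)
y_clean = (2 * np.cos(2 * x)) % 1.0      # smooth f sampled mod 1
est = trs_denoise_mod1(y_clean, lam=0.1)
```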

    Multi-kernel unmixing and super-resolution using the Modified Matrix Pencil method

    Consider $L$ groups of point sources or spike trains, with the $l$'th group represented by $x_l(t)$. For a function $g: \mathbb{R} \rightarrow \mathbb{R}$, let $g_l(t) = g(t/\mu_l)$ denote a point spread function with scale $\mu_l > 0$, and with $\mu_1 < \cdots < \mu_L$. With $y(t) = \sum_{l=1}^{L} (g_l * x_l)(t)$, our goal is to recover the source parameters given samples of $y$, or given Fourier samples of $y$. This problem is a generalization of the usual super-resolution setup, wherein $L = 1$; we call this the multi-kernel unmixing super-resolution problem. Assuming access to Fourier samples of $y$, we derive an algorithm for estimating the source parameters of each group, along with precise non-asymptotic guarantees. Our approach estimates the group parameters sequentially in the order of increasing scale parameters, i.e., from group 1 to $L$. In particular, the estimation process at stage $1 \leq l \leq L$ involves (i) carefully sampling the tail of the Fourier transform of $y$, (ii) a deflation step wherein we subtract the contribution of the groups processed thus far from the obtained Fourier samples, and (iii) applying Moitra's modified Matrix Pencil method to a deconvolved version of the samples in (ii).
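
    The single-kernel ($L = 1$) Matrix Pencil step that each stage applies after deconvolution can be sketched as follows; the spike locations, amplitudes, and sample counts are made-up illustrative values, and the noiseless samples stand in for the deconvolved samples of step (iii):

```python
import numpy as np

# Classical Matrix Pencil: given uniform Fourier samples of a spike train,
# the spike locations appear as eigenvalues of a Hankel pencil.
t_true = np.array([0.1, 0.35])          # spike locations in [0, 1)
a_true = np.array([1.0, 0.5])           # amplitudes
K, M = len(t_true), 8                   # number of spikes, Fourier samples

m = np.arange(M)
f = (a_true * np.exp(-2j * np.pi * np.outer(m, t_true))).sum(axis=1)

P = 4                                   # pencil parameter, K <= P <= M - K
H0 = np.array([f[i:i + P] for i in range(M - P)])          # Hankel block
H1 = np.array([f[i + 1:i + P + 1] for i in range(M - P)])  # shifted block

# The K nonzero eigenvalues of pinv(H0) @ H1 equal exp(-2*pi*i*t_k);
# the remaining P - K eigenvalues are (numerically) zero.
eigs = np.linalg.eigvals(np.linalg.pinv(H0) @ H1)
top = eigs[np.argsort(-np.abs(eigs))[:K]]
t_est = np.sort((-np.angle(top) / (2 * np.pi)) % 1.0)
```

    The deflation and tail-sampling steps of the full multi-kernel algorithm wrap around this core and are not reproduced here.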

    Learning linear dynamical systems under convex constraints

    We consider the problem of identifying linear dynamical systems from a single trajectory. Recent results have predominantly focused on the setup where no structural assumption is made on the system matrix $A^* \in \mathbb{R}^{n \times n}$, and have consequently analyzed the ordinary least squares (OLS) estimator in detail. We assume prior structural information on $A^*$ is available, which can be captured in the form of a convex set $\mathcal{K}$ containing $A^*$. For the solution of the ensuing constrained least squares estimator, we derive non-asymptotic error bounds in the Frobenius norm which depend on the local size of the tangent cone of $\mathcal{K}$ at $A^*$. To illustrate the usefulness of this result, we instantiate it for the settings where (i) $\mathcal{K}$ is a $d$-dimensional subspace of $\mathbb{R}^{n \times n}$, or (ii) $A^*$ is $k$-sparse and $\mathcal{K}$ is a suitably scaled $\ell_1$ ball. In the regimes where $d, k \ll n^2$, our bounds improve upon those obtained from the OLS estimator.
    Comment: 17 pages.
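
    In setting (i), the constrained least squares estimator reduces to ordinary regression on the $d$ subspace coefficients. A noiseless sketch under illustrative assumptions (random basis matrices for $\mathcal{K}$ and made-up coefficients; the paper's bounds of course concern the noisy case):

```python
import numpy as np

rng = np.random.default_rng(1)

# K is a d-dimensional subspace of R^{n x n} spanned by known basis
# matrices B_1, ..., B_d, and A* = sum_j c_j B_j lies in it.
n, d, T = 4, 3, 30
B = rng.standard_normal((d, n, n))        # basis of the subspace K
c_true = np.array([0.2, -0.1, 0.05])
A_star = np.einsum('j,jkl->kl', c_true, B)

X = np.zeros((T + 1, n))                  # single trajectory x_{t+1} = A* x_t
X[0] = rng.standard_normal(n)
for t in range(T):
    X[t + 1] = A_star @ X[t]

# min_c sum_t || x_{t+1} - (sum_j c_j B_j) x_t ||^2 is linear in c:
# column j of the design matrix stacks the vectors B_j x_t over all t.
D = np.stack([(B[j] @ X[:-1].T).T.ravel() for j in range(d)], axis=1)
c_hat, *_ = np.linalg.lstsq(D, X[1:].ravel(), rcond=None)
A_hat = np.einsum('j,jkl->kl', c_hat, B)
```

    The point of the reduction is that only $d$ parameters are fit rather than $n^2$, which is what drives the improvement over OLS when $d \ll n^2$.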