
    An LS-Decomposition Approach for Robust Data Recovery in Wireless Sensor Networks

    Wireless sensor networks are widely adopted in military, civilian and commercial applications, which fuels an exponential explosion of sensory data. However, a major challenge in deploying effective sensing systems is the presence of massive missing entries, measurement noise, and anomaly readings. Existing works assume that sensory data matrices have low-rank structures. This does not hold in reality due to anomaly readings, causing serious performance degradation. In this paper, we introduce an LS-Decomposition approach for robust sensory data recovery, which decomposes a corrupted data matrix as the superposition of a low-rank matrix and a sparse anomaly matrix. First, we prove that LS-Decomposition solves a convex program with bounded approximation error. Second, using data sets from the IntelLab, GreenOrbs, and NBDC-CTD projects, we find that sensory data matrices contain anomaly readings. Third, we propose an accelerated proximal gradient algorithm and prove that it approximates the optimal solution with convergence rate $O(1/k^2)$ ($k$ is the number of iterations). Evaluations on real data sets show that our scheme achieves recovery error $\leq 5\%$ for sampling rate $\geq 50\%$ and almost exact recovery for sampling rate $\geq 60\%$, while state-of-the-art methods have error $10\% \sim 15\%$ at sampling rate $90\%$. Comment: 24 pages, 8 figures
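
    To make the decomposition concrete, here is a minimal numpy sketch of a plain (non-accelerated) proximal scheme for the low-rank-plus-sparse model above: singular-value thresholding handles the low-rank part and entrywise soft thresholding the sparse anomaly part. The function names, step parameter mu, and weight lam are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def svt(X, tau):
    """Singular-value thresholding: prox of tau * (nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    """Entrywise soft thresholding: prox of tau * (l1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def ls_decompose(M, mask, lam=0.1, mu=1.0, iters=200):
    """Recover a low-rank L and a sparse anomaly S so that
    mask * (L + S) fits the observed entries mask * M.
    M: data with unobserved entries set to 0; mask: 0/1 array."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        R = mask * (L + S - M)          # residual on observed entries
        L = svt(L - R / mu, 1.0 / mu)   # shrink singular values
        S = soft(S - R / mu, lam / mu)  # shrink entries
    return L, S
```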

    Low-Rank Approximation and Completion of Positive Tensors

    Unlike the matrix case, computing low-rank approximations of tensors is NP-hard and numerically ill-posed in general. Even computing the best rank-1 approximation of a tensor is NP-hard. In this paper, we use convex optimization to develop polynomial-time algorithms for low-rank approximation and completion of positive tensors. Our approach is to use algebraic topology to define a new (numerically well-posed) decomposition for positive tensors, which we show is equivalent to the standard tensor decomposition in important cases. Though computing this decomposition is a nonconvex optimization problem, we prove it can be exactly reformulated as a convex optimization problem. This allows us to construct polynomial-time randomized algorithms for computing this decomposition and for solving low-rank tensor approximation problems. Among the consequences is that best rank-1 approximations of positive tensors can be computed in polynomial time. Our framework is next extended to the tensor completion problem, where noisy entries of a tensor are observed and then used to estimate missing entries. We provide a polynomial-time algorithm that for specific cases requires a polynomial (in tensor order) number of measurements, in contrast to existing approaches that require an exponential number of measurements. These algorithms are extended to exploit sparsity in the tensor to reduce the number of measurements needed. We conclude by providing a novel interpretation of statistical regression problems with categorical variables as tensor completion problems, and numerical examples with synthetic data and data from a bioengineered metabolic network show the improved performance of our approach on this problem.
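
    The paper's convex algorithm is not reproduced here, but the object it targets is easy to state in code: the following generic alternating power-iteration sketch computes a rank-1 approximation $\lambda\, a \otimes b \otimes c$ of a 3-way positive tensor. This is a standard heuristic with no optimality guarantee, unlike the polynomial-time method above.

```python
import numpy as np

def rank1_approx(T, iters=100, rng=np.random.default_rng(0)):
    """Rank-1 approximation of a 3-way tensor by alternating power
    iterations; positive initialization keeps the factors positive
    when T is entrywise positive."""
    a = rng.random(T.shape[0]); a /= np.linalg.norm(a)
    b = rng.random(T.shape[1]); b /= np.linalg.norm(b)
    c = rng.random(T.shape[2]); c /= np.linalg.norm(c)
    for _ in range(iters):
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    lam = np.einsum('ijk,i,j,k->', T, a, b, c)  # optimal scale
    return lam, a, b, c
```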

    Missing Slice Recovery for Tensors Using a Low-rank Model in Embedded Space

    Let us consider a case where all of the elements in some continuous slices of tensor data are missing. In this case, the nuclear-norm and total variation regularization methods usually fail to recover the missing elements. The key problem is capturing the delay/shift-invariant structure. In this study, we consider a low-rank model in an embedded space of a tensor. For this purpose, we extend the delay embedding for a time series to a "multi-way delay-embedding transform" for a tensor, which takes a given incomplete tensor as the input and outputs a higher-order incomplete Hankel tensor. The higher-order tensor is then recovered by Tucker-based low-rank tensor factorization. Finally, an estimated tensor can be obtained by applying the inverse multi-way delay-embedding transform to the recovered higher-order tensor. Our experiments showed that the proposed method successfully recovered missing slices of some color images and functional magnetic resonance images. Comment: accepted for CVPR 2018
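
    The one-way building block, the classical delay embedding of a time series into a Hankel matrix, fits in a few lines of numpy; the multi-way transform applies the same Hankelization along every mode of the tensor. The function name and the small worked example are illustrative.

```python
import numpy as np

def delay_embed(x, tau):
    """Map a length-n series to a tau x (n - tau + 1) Hankel matrix."""
    n = len(x)
    return np.stack([x[i:i + n - tau + 1] for i in range(tau)])

x = np.arange(6.0)     # [0, 1, 2, 3, 4, 5]
H = delay_embed(x, 3)  # [[0, 1, 2, 3],
                       #  [1, 2, 3, 4],
                       #  [2, 3, 4, 5]]
```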

    Non-Convex Weighted Lp Nuclear Norm based ADMM Framework for Image Restoration

    Since the matrix formed by nonlocal similar patches in a natural image is of low rank, nuclear norm minimization (NNM) has been widely used in various image processing studies. Nonetheless, the nuclear-norm-based convex surrogate of the rank function usually over-shrinks the rank components and treats the different components equally, and thus may produce a result far from the optimum. To alleviate these limitations of the nuclear norm, in this paper we propose a new method for image restoration via non-convex weighted Lp nuclear norm minimization (NCW-NNM), which is able to more accurately enforce the image structural sparsity and self-similarity simultaneously. To make the proposed model tractable and robust, the alternating direction method of multipliers (ADMM) is adopted to solve the associated non-convex minimization problem. Experimental results on various types of image restoration problems, including image deblurring, image inpainting and image compressive sensing (CS) recovery, demonstrate that the proposed method outperforms many current state-of-the-art methods in both objective and perceptual quality. Comment: arXiv admin note: text overlap with arXiv:1611.0898
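
    A hedged sketch of the shrinkage step at the heart of weighted nuclear-norm models, shown for the p = 1 case: each singular value gets its own threshold, so large (structural) components are shrunk less than small (noisy) ones. The weighting rule in the comment is a common heuristic, not necessarily the paper's; the general Lp case replaces the closed-form shrinkage with an inner generalized-thresholding solve.

```python
import numpy as np

def weighted_svt(X, weights):
    """Weighted singular-value shrinkage: threshold the i-th singular
    value by weights[i] (weights paired with the singular values in
    descending order)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - weights, 0.0)) @ Vt

# A typical reweighting heuristic: w_i proportional to 1 / (sigma_i + eps),
# so dominant components survive while small ones are suppressed.
```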

    Simple and practical algorithms for $\ell_p$-norm low-rank approximation

    We propose practical algorithms for entrywise $\ell_p$-norm low-rank approximation, for $p = 1$ or $p = \infty$. The proposed framework, which is non-convex and gradient-based, is easy to implement and typically attains better approximations, faster, than the state of the art. From a theoretical standpoint, we show that the proposed scheme can attain $(1 + \varepsilon)$-OPT approximations. Our algorithms are not hyperparameter-free: they achieve the desiderata only assuming the algorithm's hyperparameters are known a priori, or are at least approximable. That is, our theory indicates which problem quantities need to be known in order to get a good solution within polynomial time, and it does not contradict recent inapproximability results, as in [46]. Comment: 16 pages, 11 figures, to appear in UAI 2018
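
    In the spirit of the non-convex, gradient-based framework described above, here is a minimal subgradient-descent sketch for the entrywise $\ell_1$ case; the step size, iteration count, and initialization scale are placeholder choices, and this is not the paper's algorithm.

```python
import numpy as np

def l1_lowrank(M, rank, lr=1e-3, iters=2000, rng=np.random.default_rng(0)):
    """Approximately minimize ||M - U V||_1 over factors U (m x r)
    and V (r x n) by subgradient descent."""
    m, n = M.shape
    U = 0.1 * rng.standard_normal((m, rank))
    V = 0.1 * rng.standard_normal((rank, n))
    for _ in range(iters):
        G = np.sign(U @ V - M)   # subgradient of the entrywise l1 loss
        U -= lr * G @ V.T
        V -= lr * U.T @ G
    return U, V
```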

    Flexible Low-Rank Statistical Modeling with Side Information

    We propose a general framework for reduced-rank modeling of matrix-valued data. By applying a generalized nuclear norm penalty we can directly model low-dimensional latent variables associated with rows and columns. Our framework flexibly incorporates row and column features, smoothing kernels, and other sources of side information by penalizing deviations from the row and column models. Moreover, a large class of these models can be estimated scalably using convex optimization. The computational bottleneck in each case is one singular value decomposition per iteration of a large but easy-to-apply matrix. Our framework generalizes traditional convex matrix completion and multi-task learning methods as well as maximum a posteriori estimation under a large class of popular hierarchical Bayesian models. Comment: 20 pages, 4 figures
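
    The per-iteration pattern described above, one SVD of an easy-to-apply matrix, is easiest to see in the plain convex matrix-completion special case that the framework generalizes. Below is a minimal proximal-gradient sketch; side-information penalties are omitted, and lam and the iteration count are placeholders.

```python
import numpy as np

def nuclear_prox_completion(M, mask, lam=1.0, iters=100):
    """min_Z 0.5 * ||mask * (Z - M)||_F^2 + lam * ||Z||_* via proximal
    gradient. M: data with unobserved entries set to 0; mask: 0/1.
    Each iteration costs exactly one SVD."""
    Z = np.zeros_like(M)
    for _ in range(iters):
        G = mask * (Z - M)  # gradient of the data-fit term
        U, s, Vt = np.linalg.svd(Z - G, full_matrices=False)
        Z = U @ np.diag(np.maximum(s - lam, 0.0)) @ Vt
    return Z
```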

    Sparse Generalized Principal Component Analysis for Large-scale Applications beyond Gaussianity

    Principal Component Analysis (PCA) is a dimension reduction technique. It produces inconsistent estimators when the dimensionality is moderate to high, which is often the case in modern large-scale applications where algorithm scalability and model interpretability are difficult to achieve, not to mention the prevalence of missing values. While existing sparse PCA methods alleviate inconsistency, they are constrained to the Gaussian assumption of classical PCA and fail to address algorithm scalability issues. We generalize sparse PCA to the broad exponential family of distributions in the high-dimensional setup, with built-in treatment for missing values. We also propose a family of iterative sparse generalized PCA (SG-PCA) algorithms such that, despite the non-convexity and non-smoothness of the optimization task, the loss function decreases in every iteration. In terms of ease and intuitiveness of parameter tuning, our sparsity-inducing regularization is far superior to the popular Lasso. Furthermore, to promote overall scalability, accelerated gradient is integrated for fast convergence, while a progressive screening technique gradually squeezes out nuisance dimensions of a large-scale problem for feasible optimization. High-dimensional simulation and real data experiments demonstrate the efficiency and efficacy of SG-PCA.
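
    For orientation, here is a minimal sketch of sparsity-inducing PCA in the Gaussian special case, via a truncated power iteration on the covariance matrix. The paper's SG-PCA algorithms handle general exponential-family losses, missing values, and acceleration, none of which this sketch attempts.

```python
import numpy as np

def sparse_pc(S, k, iters=100, rng=np.random.default_rng(0)):
    """Leading k-sparse principal component of a covariance matrix S:
    power step, keep the k largest-magnitude loadings, renormalize."""
    v = rng.random(S.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        v = S @ v
        idx = np.argsort(np.abs(v))[:-k]  # all but the k largest entries
        v[idx] = 0.0                      # hard-threshold to sparsity k
        v /= np.linalg.norm(v)
    return v
```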

    Static and Dynamic Robust PCA and Matrix Completion: A Review

    Principal Components Analysis (PCA) is one of the most widely used dimension reduction techniques. Robust PCA (RPCA) refers to the problem of PCA when the data may be corrupted by outliers. Recent work by Candès, Wright, Li, and Ma defined RPCA as the problem of decomposing a given data matrix into the sum of a low-rank matrix (true data) and a sparse matrix (outliers). The column space of the low-rank matrix then gives the PCA solution. This simple definition has led to a large amount of interesting new work on provably correct, fast, and practical solutions to RPCA. More recently, the dynamic (time-varying) version of the RPCA problem has been studied and a series of provably correct, fast, and memory-efficient tracking solutions have been proposed. Dynamic RPCA (or robust subspace tracking) is the problem of tracking data lying in a (slowly) changing subspace while being robust to sparse outliers. This article provides an exhaustive review of the last decade of literature on RPCA and its dynamic counterpart (robust subspace tracking), describing their theoretical guarantees, discussing the pros and cons of various approaches, and providing empirical comparisons of performance and speed. A brief overview of the (low-rank) matrix completion literature is also provided (the focus is on works not discussed in other recent reviews). Matrix completion refers to the problem of completing a low-rank matrix when only a subset of its entries is observed; it can be interpreted as a simpler special case of RPCA in which the indices of the outlier-corrupted entries are known. Comment: To appear in Proceedings of the IEEE, Special Issue on Rethinking PCA for Modern Datasets. arXiv admin note: text overlap with arXiv:1711.0949
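
    For reference, the convex program that this definition leads to is principal component pursuit: for an $n_1 \times n_2$ data matrix $M$,
    $$\min_{L,S} \|L\|_* + \lambda \|S\|_1 \quad \text{subject to} \quad L + S = M, \qquad \lambda = 1/\sqrt{\max(n_1, n_2)},$$
    where $\|\cdot\|_*$ is the nuclear norm and $\|\cdot\|_1$ the entrywise $\ell_1$ norm.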

    Introduction to Nonnegative Matrix Factorization

    In this paper, we introduce and provide a short overview of nonnegative matrix factorization (NMF). Several aspects of NMF are discussed, namely, the application in hyperspectral imaging, geometry and uniqueness of NMF solutions, complexity, algorithms, and its link with extended formulations of polyhedra. In order to put NMF into perspective, the more general problem class of constrained low-rank matrix approximation problems is first briefly introduced. Comment: 18 pages, 4 figures
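
    As a minimal working example of NMF itself, here are the classical Lee-Seung multiplicative updates for the Frobenius-norm objective, one of the baseline algorithm families an overview like this covers; eps and the iteration count are illustrative choices.

```python
import numpy as np

def nmf(X, r, iters=200, eps=1e-9, rng=np.random.default_rng(0)):
    """Approximate X (m x n, nonnegative) as W @ H with W (m x r) and
    H (r x n) nonnegative, via Lee-Seung multiplicative updates."""
    m, n = X.shape
    W = rng.random((m, r)) + eps
    H = rng.random((r, n)) + eps
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)  # update H with W fixed
        W *= (X @ H.T) / (W @ H @ H.T + eps)  # update W with H fixed
    return W, H
```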

    A Novel Approach to Quantized Matrix Completion Using Huber Loss Measure

    In this paper, we introduce a novel and robust approach to Quantized Matrix Completion (QMC). First, we propose a rank minimization problem with constraints induced by quantization bounds. Next, we form an unconstrained optimization problem by regularizing the rank function with the Huber loss. The Huber loss is leveraged to control violations of the quantization bounds due to two properties: (1) it is differentiable, and (2) it is less sensitive to outliers than the quadratic loss. A smooth rank approximation is utilized to promote a lower rank on the genuine data matrix. Thus, an unconstrained optimization problem with a differentiable objective function is obtained, allowing us to take advantage of the Gradient Descent (GD) technique. A rigorous theoretical analysis of the problem model and of the convergence of our algorithm to the global solution is provided. Another contribution of our work is that, unlike state-of-the-art methods, ours does not require projections or an initial rank estimate. In the numerical experiments, the noticeable outperformance of our proposed method over state-of-the-art methods, in both learning accuracy and computational complexity, is illustrated as the main contribution.
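
    A hedged sketch of the two ingredients named above, the Huber loss and the quantization-bound violation it penalizes; the smooth rank surrogate is left abstract (written phi below) since its exact form is specific to the paper, and lo/hi denote assumed per-entry quantization bounds.

```python
import numpy as np

def huber(t, delta=1.0):
    """Huber function: quadratic near zero, linear in the tails, so it
    is differentiable yet less outlier-sensitive than a quadratic."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t**2, delta * (a - 0.5 * delta))

def bound_violation(X, lo, hi):
    """Per-entry violation of the quantization interval [lo, hi]:
    zero inside the bounds, distance to the nearest bound outside."""
    return np.maximum(X - hi, 0.0) + np.maximum(lo - X, 0.0)

# Sketch of the unconstrained objective minimized by gradient descent:
#   F(X) = huber(bound_violation(X, lo, hi)).sum() + gamma * phi(X)
# where phi is a smooth rank surrogate (left abstract here).
```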