
    Weighted Low Rank Approximation for Background Estimation Problems

    Classical principal component analysis (PCA) is not robust to the presence of sparse outliers in the data. The use of the $\ell_1$ norm in the Robust PCA (RPCA) method successfully eliminates this weakness of PCA in separating the sparse outliers. In this paper, by attaching a simple weight to the Frobenius norm, we propose a weighted low rank (WLR) method that avoids the often computationally expensive algorithms relying on the $\ell_1$ norm. As a proof of concept, a background estimation model is presented and compared with two $\ell_1$ norm minimization algorithms. We illustrate that, as long as a simple weight matrix is inferred from the data, one can use the weighted Frobenius norm and achieve the same or better performance.
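    As a rough illustration of the idea, the sketch below fits a rank-r factorization under an entrywise-weighted Frobenius norm by alternating weighted least squares. It is a minimal stand-in, not the authors' WLR algorithm; the weight matrix W is assumed to be given (for example, inferred from the data as the abstract suggests).

```python
import numpy as np

def weighted_low_rank(A, W, r, iters=50, seed=0):
    """Approximately solve  min_{rank(X)<=r} ||W * (A - X)||_F^2
    by writing X = U @ V and alternating weighted least-squares updates.
    Illustrative sketch only, not the WLR algorithm from the paper."""
    m, n = A.shape
    rng = np.random.default_rng(seed)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((r, n))
    for _ in range(iters):
        # Update each column of V: a weighted least-squares problem in r unknowns.
        for j in range(n):
            w = W[:, j]
            Uw = U * w[:, None]                      # rows of U scaled by the weights
            V[:, j] = np.linalg.lstsq(Uw, w * A[:, j], rcond=None)[0]
        # Update each row of U symmetrically.
        for i in range(m):
            w = W[i, :]
            Vw = V * w[None, :]                      # columns of V scaled by the weights
            U[i, :] = np.linalg.lstsq(Vw.T, w * A[i, :], rcond=None)[0]
    return U @ V
```

    For background estimation, A would hold the vectorized video frames as columns, and the returned rank-r matrix serves as the background estimate.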

    A Batch-Incremental Video Background Estimation Model using Weighted Low-Rank Approximation of Matrices

    Principal component pursuit (PCP) is a state-of-the-art approach for background estimation problems. Due to their high computational cost, PCP algorithms, such as robust principal component analysis (RPCA) and its variants, are not feasible for processing high-definition videos. To avoid the curse of dimensionality in those algorithms, several methods have been proposed to solve the background estimation problem in an incremental manner. We propose a batch-incremental background estimation model using a special weighted low-rank approximation of matrices. Through experiments with real and synthetic video sequences, we demonstrate that our method is superior to state-of-the-art background estimation algorithms such as GRASTA, ReProCS, incPCP, and GFL.
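    A minimal sketch of the batch-incremental idea, with plain truncated SVDs standing in for the paper's weighted low-rank step: frames are processed in small batches, and the background basis learned from earlier batches is folded into each new batch.

```python
import numpy as np

def batch_incremental_background(frames, r=1, batch_size=20):
    """Hypothetical batch-incremental background sketch.
    frames: (pixels x num_frames) matrix of vectorized frames.
    Keeps a running rank-r basis by taking a truncated SVD of
    [previous basis scaled by its singular values | new batch]."""
    U, s = None, None
    backgrounds = []
    for start in range(0, frames.shape[1], batch_size):
        B = frames[:, start:start + batch_size]
        M = B if U is None else np.hstack([U * s, B])
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        U, s = U[:, :r], s[:r]
        # Background for this batch: projection of the batch onto the rank-r basis.
        backgrounds.append(U @ (U.T @ B))
    return np.hstack(backgrounds)
```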

    Weighted Low-Rank Approximation of Matrices: Some Analytical and Numerical Aspects

    This dissertation addresses some analytical and numerical aspects of a problem of weighted low-rank approximation of matrices. We propose and solve two different versions of weighted low-rank approximation problems. We demonstrate, in addition, how these formulations can be used efficiently to solve some classic problems in computer vision. We also present the superior performance of our algorithms over the existing state-of-the-art unweighted and weighted low-rank approximation algorithms. Classical principal component analysis (PCA) is constrained to have equal weighting on the elements of the matrix, which might lead to a degraded design in some problems. To address this fundamental flaw in PCA, Golub, Hoffman, and Stewart proposed and solved a problem of constrained low-rank approximation of matrices: for a given matrix $A = (A_1\;A_2)$, find a low-rank matrix $X = (A_1\;X_2)$ such that ${\rm rank}(X)$ is less than $r$, a prescribed bound, and $\|A-X\|$ is small. Motivated by the above formulation, we propose a weighted low-rank approximation problem that generalizes the constrained low-rank approximation problem of Golub, Hoffman, and Stewart. We study a general framework obtained by pointwise multiplication with the weight matrix and consider the following problem: for a given matrix $A\in\mathbb{R}^{m\times n}$, solve
    \[
    \min_{X}\|(A-X)\odot W\|_F^2 \quad {\rm subject~to~} {\rm rank}(X)\le r,
    \]
    where $\odot$ denotes the pointwise multiplication and $\|\cdot\|_F$ is the Frobenius norm of matrices.
    In the first part, we study a special version of the above general weighted low-rank approximation problem. Instead of using pointwise multiplication with the weight matrix, we use the regular matrix multiplication, replace the rank constraint by its convex surrogate, the nuclear norm, and consider the following problem:
    \[
    \hat{X} = \arg\min_X \Big\{\tfrac{1}{2}\|(A-X)W\|_F^2 + \tau\|X\|_\ast\Big\},
    \]
    where $\|\cdot\|_*$ denotes the nuclear norm of $X$. Considering its resemblance to the classic singular value thresholding problem, we call it the weighted singular value thresholding (WSVT) problem. As expected, the WSVT problem has no closed-form analytical solution in general, and a numerical procedure is needed to solve it. We introduce auxiliary variables and apply a simple and fast alternating direction method to solve WSVT numerically. Moreover, we present a convergence analysis of the algorithm and propose a mechanism for estimating the weight from the data. We demonstrate the performance of WSVT on two computer vision applications: background estimation from video sequences and facial shadow removal. In both cases, WSVT shows superior performance to all other models traditionally used.
    In the second part, we study the general framework of the proposed problem. For a special case of the weight, we study the limiting behavior of the solution to our problem, both analytically and numerically. In the limiting case of weights, as $(W_1)_{ij}\to\infty$ with $W_2=\mathbbm{1}$, the matrix of all ones, we show that the solutions to our weighted problem converge, and the limit is the solution to the constrained low-rank approximation problem of Golub et al. Additionally, by asymptotic analysis of the solution to our problem, we propose a rate of convergence. By doing this, we make explicit connections between a vast genre of weighted and unweighted low-rank approximation problems. In addition, we devise a novel and efficient numerical algorithm based on the alternating direction method for the special case of the weight and present a detailed convergence analysis. Our approach improves substantially over the existing weighted low-rank approximation algorithms proposed in the literature. We also explore the use of our algorithm on real-world problems in a variety of domains, such as computer vision and machine learning. Finally, for a special family of weights, we demonstrate an interesting property of the solution to the general weighted low-rank approximation problem; we devise two accelerated algorithms that exploit this property and present their effectiveness compared with the algorithm proposed in Chapter 4.
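    The WSVT problem above admits a simple splitting scheme. The sketch below is one possible ADMM-style solver, assuming a square weight matrix W; it is illustrative only and does not reproduce the dissertation's alternating direction method or its weight-estimation mechanism.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def wsvt_admm(A, W, tau, rho=1.0, iters=200):
    """Sketch of an ADMM solver for
        min_X 0.5 * ||(A - X) W||_F^2 + tau * ||X||_*,
    splitting X = Z so the nuclear-norm prox becomes a plain SVT step.
    W is assumed square (n x n); hyperparameters are illustrative."""
    m, n = A.shape
    X = np.zeros((m, n)); Z = np.zeros((m, n)); U = np.zeros((m, n))
    G = W @ W.T
    lhs = G + rho * np.eye(n)                 # system matrix for the X-update
    for _ in range(iters):
        # X-update: solve X (G + rho I) = A G + rho (Z - U).
        rhs = A @ G + rho * (Z - U)
        X = np.linalg.solve(lhs.T, rhs.T).T
        # Z-update: singular value thresholding.
        Z = svt(X + U, tau / rho)
        # Dual update.
        U = U + X - Z
    return Z
```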

    A Fast Algorithm for a Weighted Low Rank Approximation

    Matrix low-rank approximation methods, including classical PCA and the robust PCA (RPCA) method, have been applied to solve the background modeling problem in video analysis. Recently, it has been demonstrated that a special weighted low-rank approximation of matrices can be made robust to outliers, similar to the $\ell_1$-norm in the RPCA method. In this work, we propose a new algorithm that speeds up the existing algorithm for solving the special weighted low-rank approximation and demonstrate its use in the background estimation problem.

    On a Problem of Weighted Low-Rank Approximation of Matrices

    We study a weighted low-rank approximation that is inspired by a problem of constrained low-rank approximation of matrices as initiated by the work of Golub, Hoffman, and Stewart (Linear Algebra and Its Applications, 88-89 (1987), 317-327). Our results reduce to those of Golub, Hoffman, and Stewart in the limiting cases. We also propose an algorithm based on the alternating direction method to solve our weighted low-rank approximation problem and compare it with state-of-the-art general algorithms such as the weighted total alternating least squares and the EM algorithm.
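    For reference, the Golub-Hoffman-Stewart problem itself has a closed-form solution: project $A_2$ onto the range of $A_1$ and add the best rank-$(r-k)$ truncation of the residual. A minimal sketch, assuming $A_1$ has full column rank $k$ and $r \ge k$:

```python
import numpy as np

def ghs_constrained_low_rank(A1, A2, r):
    """Constrained low-rank approximation in the spirit of Golub-Hoffman-Stewart:
    keep the block A1 intact and pick X2 so that [A1 X2] has rank <= r and
    ||A2 - X2||_F is minimized. Assumes A1 has full column rank and r >= A1.shape[1]."""
    k = A1.shape[1]
    Q, _ = np.linalg.qr(A1)                     # orthonormal basis of range(A1)
    proj = Q @ (Q.T @ A2)                       # component of A2 inside range(A1)
    resid = A2 - proj                           # component orthogonal to range(A1)
    U, s, Vt = np.linalg.svd(resid, full_matrices=False)
    t = r - k                                   # remaining rank budget
    X2 = proj + U[:, :t] @ np.diag(s[:t]) @ Vt[:t, :]
    return np.hstack([A1, X2])
```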

    Restricted strong convexity and weighted matrix completion: Optimal bounds with noise

    We consider the matrix completion problem under a form of row/column weighted entrywise sampling, which includes uniform entrywise sampling as a special case. We analyze the associated random observation operator and prove that, with high probability, it satisfies a form of restricted strong convexity with respect to the weighted Frobenius norm. Using this property, we obtain as corollaries a number of error bounds on matrix completion in the weighted Frobenius norm under noisy sampling, for both exact and near low-rank matrices. Our results are based on measures of the "spikiness" and "low-rankness" of matrices that are less restrictive than the incoherence conditions imposed in previous work. Our technique involves an $M$-estimator that includes controls on both the rank and spikiness of the solution, and we establish non-asymptotic error bounds in the weighted Frobenius norm for recovering matrices lying within $\ell_q$-"balls" of bounded spikiness. Using information-theoretic methods, we show that no algorithm can achieve better estimates (up to a logarithmic factor) over these same sets, showing that our conditions on matrices and associated rates are essentially optimal.
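    The flavor of such an estimator can be sketched as a nuclear-norm-penalized least-squares fit over the observed entries, with a crude entrywise clip standing in for the spikiness control; the weighted sampling and the exact constraint set analyzed in the paper are not reproduced here.

```python
import numpy as np

def complete_matrix(Y, mask, lam, alpha, step=1.0, iters=300):
    """Sketch of a nuclear-norm-penalized matrix completion estimator:
    proximal gradient on  min_X 0.5 * ||mask * (X - Y)||_F^2 + lam * ||X||_*,
    followed by clipping entries to [-alpha, alpha] as a rough spikiness control.
    Y: observed matrix (arbitrary values where mask == 0); mask: 0/1 observation pattern."""
    X = np.zeros_like(Y)
    for _ in range(iters):
        grad = mask * (X - Y)                           # gradient of the data-fit term
        M = X - step * grad
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        X = U @ np.diag(np.maximum(s - step * lam, 0.0)) @ Vt   # nuclear-norm prox
        X = np.clip(X, -alpha, alpha)                   # keep entries non-spiky
    return X
```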

    Weighted Schatten $p$-Norm Minimization for Image Denoising and Background Subtraction

    Low-rank matrix approximation (LRMA), which aims to recover the underlying low-rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely Weighted Schatten $p$-Norm Minimization (WSNM), which generalizes NNM to Schatten $p$-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under a certain weight permutation, WSNM can be equivalently transformed into independent non-convex $l_p$-norm subproblems, whose global optima can be efficiently solved by the generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM can more effectively remove noise and model complex and dynamic scenes compared with state-of-the-art methods.
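    A sketch of the WSNM building block: each singular value is shrunk by a generalized soft-thresholding (GST) step for its own weighted $l_p$ penalty. The threshold formula below is the one commonly used in the generalized iterated shrinkage literature and is assumed here rather than taken from the paper.

```python
import numpy as np

def gst(sigma, w, p, inner=10):
    """Generalized soft-thresholding for  min_d 0.5*(d - sigma)^2 + w * d^p,  0 < p <= 1.
    Uses the fixed-point scheme of the generalized iterated shrinkage algorithm;
    the threshold formula is assumed from that literature."""
    tau = (2.0 * w * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + w * p * (2.0 * w * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if sigma <= tau:
        return 0.0
    d = sigma
    for _ in range(inner):
        d = sigma - w * p * d ** (p - 1.0)      # stationarity: d = sigma - w*p*d^(p-1)
    return d

def wsnm_prox(Y, weights, p):
    """One weighted-Schatten-p-norm proximal step on Y: shrink each singular value
    with its own weight via GST. Assumes weights are ordered to match the singular values."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([gst(si, wi, p) for si, wi in zip(s, weights)])
    return U @ np.diag(s_new) @ Vt
```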