
    A Fast Algorithm for a Weighted Low Rank Approximation

    Matrix low rank approximation methods, including classical PCA and the robust PCA (RPCA) method, have been applied to the background modeling problem in video analysis. Recently, it has been demonstrated that a special weighted low rank approximation of matrices can be made robust to outliers, similar to the $\ell_1$-norm in the RPCA method. In this work, we propose a new algorithm that speeds up the existing algorithm for solving the special weighted low rank approximation, and we demonstrate its use in the background estimation problem.

    Weighted Low-Rank Approximation of Matrices and Background Modeling

    We primarily study a special weighted low-rank approximation of matrices and then apply it to solve the background modeling problem. We propose two algorithms for this purpose: one operates in batch mode on the entire data, and the other operates in batch-incremental mode, naturally capturing more background variations while being computationally more efficient. Moreover, we propose a robust technique that learns the background frame indices from the data and does not require any training frames. We demonstrate through extensive experiments that, by inserting a simple weight in the Frobenius norm, it can be made robust to outliers similar to the $\ell_1$ norm. Our methods match or outperform several state-of-the-art online and batch background modeling methods in virtually all quantitative and qualitative measures. (Comment: arXiv admin note: text overlap with arXiv:1707.0028)
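
    To make the batch versus batch-incremental distinction concrete, here is a minimal NumPy sketch (all function and variable names are hypothetical, not taken from the paper): video frames are stacked as columns and a rank-r background is fit either to the whole matrix at once or chunk by chunk. The inner weighted fit is a generic EM-style iteration for weighted low-rank approximation with entry weights in [0, 1]; it is a stand-in for, not a reproduction of, the authors' algorithms.

```python
import numpy as np

def weighted_background(X, W, r=1, iters=20):
    """EM-style fit for  min_B ||W * (X - B)||_F  with rank(B) <= r, weights in [0, 1]."""
    B = X.copy()
    for _ in range(iters):
        # Impute weighted residuals, then project back to rank r via truncated SVD.
        U, s, Vt = np.linalg.svd(W * X + (1.0 - W) * B, full_matrices=False)
        B = (U[:, :r] * s[:r]) @ Vt[:r, :]
    return B

def background_batch_incremental(X, W, r=1, chunk=50):
    """Process frame columns chunk by chunk instead of all at once."""
    parts = [weighted_background(X[:, j:j + chunk], W[:, j:j + chunk], r)
             for j in range(0, X.shape[1], chunk)]
    return np.hstack(parts)

# Toy usage: 100 flattened 16x16 frames; weight 1 = trusted pixel, <1 = likely outlier.
X = np.random.rand(256, 100)
W = np.ones_like(X)
B_batch = weighted_background(X, W, r=1)
B_incr = background_batch_incremental(X, W, r=1, chunk=25)
```

    Downweighting pixels suspected to be foreground is what lets the weighted Frobenius objective behave robustly, in the spirit of the $\ell_1$-based methods the abstract compares against.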

    Weighted Low Rank Approximation for Background Estimation Problems

    Classical principal component analysis (PCA) is not robust to the presence of sparse outliers in the data. The use of the $\ell_1$ norm in the Robust PCA (RPCA) method successfully eliminates this weakness of PCA in separating the sparse outliers. In this paper, by attaching a simple weight to the Frobenius norm, we propose a weighted low rank (WLR) method that avoids the often computationally expensive algorithms relying on the $\ell_1$ norm. As a proof of concept, a background estimation model is presented and compared with two $\ell_1$ norm minimization algorithms. We illustrate that, as long as a simple weight matrix is inferred from the data, one can use the weighted Frobenius norm and achieve the same or better performance.
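
    As an illustration of what "a simple weight matrix inferred from the data" could look like, the sketch below (hypothetical names and weighting rule; the actual WLR scheme is in the paper) builds entry weights from residuals against a crude truncated-SVD background, so entries with large residuals, the likely sparse outliers, receive small weights before any weighted-Frobenius fit is run.

```python
import numpy as np

def infer_weights(X, r=1, scale=3.0):
    """Downweight entries with large residuals against an initial rank-r fit."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    B0 = (U[:, :r] * s[:r]) @ Vt[:r, :]                # crude initial background
    R = np.abs(X - B0)                                 # residual magnitudes
    sigma = np.median(R) + 1e-12                       # robust residual scale
    return 1.0 / (1.0 + (R / (scale * sigma)) ** 2)    # weights in (0, 1]

X = np.random.rand(256, 100)   # toy data matrix (pixels x frames)
W = infer_weights(X, r=1)      # feed into any weighted-Frobenius solver
```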

    On a Problem of Weighted Low-Rank Approximation of Matrices

    We study a weighted low rank approximation that is inspired by a problem of constrained low rank approximation of matrices as initiated by the work of Golub, Hoffman, and Stewart (Linear Algebra and Its Applications, 88-89 (1987), 317-327). Our results reduce to those of Golub, Hoffman, and Stewart in the limiting cases. We also propose an algorithm based on the alternating direction method to solve our weighted low rank approximation problem and compare it with state-of-the-art general algorithms such as the weighted total alternating least squares and the EM algorithm.
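
    For context on the alternating least squares baseline mentioned above, a generic weighted alternating least squares scheme for the factored problem can be sketched roughly as follows (this is not the paper's alternating direction method, and all names are hypothetical): with one factor fixed, each row of the other factor solves a small weighted least squares problem.

```python
import numpy as np

def wlra_als(X, W, r=2, iters=30, ridge=1e-8):
    """Alternating weighted least squares for  min_{U,V} sum_ij W_ij * (X_ij - (U V^T)_ij)^2."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    U = rng.standard_normal((m, r))
    V = rng.standard_normal((n, r))
    for _ in range(iters):
        for i in range(m):                       # update row i of U, V fixed
            G = V.T * W[i]                       # V^T diag(W[i, :])
            U[i] = np.linalg.solve(G @ V + ridge * np.eye(r), G @ X[i])
        for j in range(n):                       # update row j of V, U fixed
            G = U.T * W[:, j]                    # U^T diag(W[:, j])
            V[j] = np.linalg.solve(G @ U + ridge * np.eye(r), G @ X[:, j])
    return U, V

# Toy usage: recover a rank-2 approximation under entrywise weights.
X = np.random.rand(60, 40)
W = np.random.rand(60, 40)
U, V = wlra_als(X, W, r=2)
approx = U @ V.T
```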

    Weighted Schatten $p$-Norm Minimization for Image Denoising and Background Subtraction

    Low rank matrix approximation (LRMA), which aims to recover the underlying low rank matrix from its degraded observation, has a wide range of applications in computer vision. The latest LRMA methods resort to nuclear norm minimization (NNM) as a convex relaxation of the nonconvex rank minimization. However, NNM tends to over-shrink the rank components and treats the different rank components equally, limiting its flexibility in practical applications. We propose a more flexible model, namely the Weighted Schatten $p$-Norm Minimization (WSNM), which generalizes NNM to the Schatten $p$-norm minimization with weights assigned to different singular values. The proposed WSNM not only gives a better approximation to the original low-rank assumption, but also considers the importance of different rank components. We analyze the solution of WSNM and prove that, under a certain weight permutation, WSNM can be equivalently transformed into independent non-convex $\ell_p$-norm subproblems, whose global optima can be efficiently solved by the generalized iterated shrinkage algorithm. We apply WSNM to typical low-level vision problems, e.g., image denoising and background subtraction. Extensive experimental results show, both qualitatively and quantitatively, that the proposed WSNM can more effectively remove noise and model complex and dynamic scenes compared with state-of-the-art methods. (Comment: 13 pages, 11 figures)
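
    The per-singular-value decoupling described in the abstract can be sketched as below (hypothetical helper names, not the reference implementation): after an SVD, each singular value is shrunk by generalized soft-thresholding for its own weighted $\ell_p$ penalty, which is the core step of the generalized iterated shrinkage algorithm.

```python
import numpy as np

def gst(y, lam, p, inner_iters=10):
    """Generalized soft-thresholding for  min_x 0.5*(x - y)^2 + lam*|x|^p,  0 < p < 1."""
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
          + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    if abs(y) <= tau:
        return 0.0                               # below threshold: shrink to zero
    x = abs(y)
    for _ in range(inner_iters):                 # fixed-point iteration to the positive root
        x = abs(y) - lam * p * x ** (p - 1.0)
    return np.sign(y) * x

def wsnm_shrink(Y, weights, p=0.7):
    """One weighted Schatten p-norm proximal step:
       min_X 0.5*||X - Y||_F^2 + sum_i w_i * sigma_i(X)^p
       (decouples per singular value when the weights are non-descending)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    s_new = np.array([gst(si, wi, p) for si, wi in zip(s, weights)])
    return (U * s_new) @ Vt

# Toy usage: small weights on leading singular values preserve the dominant structure.
Y = np.random.rand(50, 50)
w = np.linspace(0.1, 2.0, 50)      # non-descending weights
X_hat = wsnm_shrink(Y, w, p=0.7)
```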