
    Low-rank matrix recovery with structural incoherence for robust face recognition

    Full text link
    We address the problem of robust face recognition, in which both training and test image data might be corrupted due to occlusion and disguise. From standard face recognition algorithms such as Eigenfaces to recently proposed sparse representation-based classification (SRC) methods, most prior works did not consider possible contamination of the data during training, and thus the associated performance might be degraded. Based on the recent success of low-rank matrix recovery, we propose a novel low-rank matrix approximation algorithm with structural incoherence for robust face recognition. Our method not only decomposes raw training data into a set of representative bases with corresponding sparse errors for better modeling of the face images; we further advocate structural incoherence between the bases learned from different classes. These bases are encouraged to be as independent as possible by the regularization on structural incoherence. We show that this provides additional discriminating ability beyond the original low-rank models, for improved performance. Experimental results on public face databases verify the effectiveness and robustness of our method, which is also shown to outperform state-of-the-art SRC-based approaches.
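The low-rank-plus-sparse decomposition this abstract builds on can be illustrated with a minimal alternating sketch: shrink singular values to get a low-rank term, then soft-threshold the residual to get a sparse error term. This is a generic sketch of the decomposition step only; the authors' actual algorithm additionally enforces structural incoherence between per-class bases, which is not modeled here.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(M, tau):
    """Entrywise soft-thresholding, used for the sparse error term."""
    return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

def low_rank_plus_sparse(D, lam=0.1, tau=1.0, iters=100):
    """Alternately fit D ~ A + E with A low-rank and E sparse.
    A naive alternating scheme, not the paper's optimizer."""
    A = np.zeros_like(D)
    E = np.zeros_like(D)
    for _ in range(iters):
        A = svt(D - E, tau)
        E = soft(D - A, lam)
    return A, E
```

The thresholds `lam` and `tau` are illustrative placeholders; in practice they come from the optimization formulation.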

    Completing Low-Rank Matrices with Corrupted Samples from Few Coefficients in General Basis

    Full text link
    Subspace recovery from corrupted and missing data is crucial for various applications in signal processing and information theory. To complete missing values and detect column corruptions, existing robust Matrix Completion (MC) methods mostly concentrate on recovering a low-rank matrix from few corrupted coefficients w.r.t. the standard basis, which, however, does not apply to more general bases, e.g., the Fourier basis. In this paper, we prove that the range space of an m×n matrix with rank r can be exactly recovered from few coefficients w.r.t. a general basis, even when r and the number of corrupted samples are both as high as O(min{m,n}/log³(m+n)). Our model covers previous ones as special cases, and robust MC can recover the intrinsic matrix with a higher rank. Moreover, we suggest a universal choice of the regularization parameter, λ = 1/√(log n). By our ℓ_{2,1} filtering algorithm, which has theoretical guarantees, we can further reduce the computational cost of our model. As an application, we also find that the solutions to extended robust Low-Rank Representation and to our extended robust MC are mutually expressible, so both our theory and our algorithm can be applied to the subspace clustering problem with missing values under certain conditions. Experiments verify our theories. Comment: To appear in IEEE Transactions on Information Theory
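The two concrete quantities the abstract names can be written down directly: the suggested universal regularization parameter λ = 1/√(log n), and the ℓ_{2,1} norm used by the filtering algorithm. Treating ℓ_{2,1} as the sum of column-wise ℓ2 norms is an assumption here, matching the column-corruption setting the paper describes.

```python
import numpy as np

def l21_norm(M):
    """l_{2,1} norm: sum of column-wise l2 norms, which promotes
    column sparsity (a whole column is zeroed or kept together)."""
    return np.linalg.norm(M, axis=0).sum()

def suggested_lambda(n):
    """The paper's universal regularization parameter lambda = 1/sqrt(log n)."""
    return 1.0 / np.sqrt(np.log(n))
```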

    Sparse Support Matrix Machines for the Classification of Corrupted Data

    Full text link
    University of Technology Sydney. Faculty of Engineering and Information Technology. The support matrix machine is fragile in the presence of outliers: even a few corrupted data points can arbitrarily degrade the quality of the approximation. What if a fraction of the columns are corrupted? In the real world, data is noisy, and many features may be redundant or even useless, which in turn hurts classification performance. It is therefore important to perform robust feature selection under robust metric learning, filtering out redundant features and ignoring noisy data points for more interpretable modelling. To overcome this challenge, in this work we propose a new model that addresses the classification of high-dimensional data by jointly optimizing the regularizer and the hinge loss. We combine the hinge loss and the regularization terms into a spectral elastic net penalty. The regularization term promotes structural sparsity and shares similar sparsity patterns across multiple predictors. It is a spectral extension of the conventional elastic net that combines low rank and joint sparsity to deal with complex, high-dimensional noisy data. We further extend this approach by combining recovery with feature selection and classification, which can significantly improve performance under the assumption that the data consists of a low-rank clean matrix plus a sparse noise matrix. We perform matrix recovery, feature selection and classification through joint minimization of a p,q-norm and the nuclear norm under incoherence and ambiguity conditions, and we are able to recover an intrinsic matrix of higher rank and recover data with much denser corruption. Although both methods take full advantage of the low-rank assumption to exploit the strong correlation between the columns and rows of each matrix and can extract useful features, they were originally built for binary classification problems.
    To improve robustness against data rich in outliers, we further extend this problem and present a novel multiclass support matrix machine that maximizes the inter-class margins (i.e., the margins between pairs of classes). We demonstrate the significance and advantage of our methods on several benchmark datasets, including person identification, face recognition and EEG classification. Results show that our methods achieve significantly better performance, in both time and accuracy, for classifying highly correlated matrix data, compared to state-of-the-art methods.
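The spectral elastic net penalty described above combines a low-rank term with a joint-sparsity term. A hypothetical sketch of such a penalty is below; the choice of nuclear norm plus row-wise ℓ_{2,1}, and the weights `alpha` and `beta`, are assumptions for illustration, not the thesis's exact formulation.

```python
import numpy as np

def spectral_elastic_net(W, alpha=1.0, beta=1.0):
    """Hypothetical penalty combining a nuclear-norm term (low rank)
    with an l_{2,1} term (joint row sparsity across predictors)."""
    nuclear = np.linalg.svd(W, compute_uv=False).sum()  # sum of singular values
    l21 = np.linalg.norm(W, axis=1).sum()               # sum of row l2 norms
    return alpha * nuclear + beta * l21
```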

    Non-convex Optimization for Machine Learning

    Full text link
    A vast majority of machine learning algorithms train their models and perform inference by solving optimization problems. In order to capture the learning and prediction problems accurately, structural constraints such as sparsity or low rank are frequently imposed, or else the objective itself is designed to be a non-convex function. This is especially true of algorithms that operate in high-dimensional spaces or that train non-linear models such as tensor models and deep networks. The freedom to express the learning problem as a non-convex optimization problem gives immense modeling power to the algorithm designer, but often such problems are NP-hard to solve. A popular workaround has been to relax non-convex problems to convex ones and use traditional methods to solve the (convex) relaxed optimization problems. However, this approach may be lossy and nevertheless presents significant challenges for large-scale optimization. On the other hand, direct approaches to non-convex optimization have met with resounding success in several domains and remain the methods of choice for the practitioner, as they frequently outperform relaxation-based techniques; popular heuristics include projected gradient descent and alternating minimization. However, these are often poorly understood in terms of their convergence and other properties. This monograph presents a selection of recent advances that bridge a long-standing gap in our understanding of these heuristics. The monograph will lead the reader through several widely used non-convex optimization techniques, as well as applications thereof. The goal of this monograph is both to introduce the rich literature in this area and to equip the reader with the tools and techniques needed to analyze these simple procedures for non-convex problems. Comment: The official publication is available from now publishers via http://dx.doi.org/10.1561/220000005
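Projected gradient descent, one of the heuristics named above, can be sketched for a sparsity-constrained least-squares problem, where the projection onto the non-convex set of k-sparse vectors is simply hard thresholding (this instance is known as iterative hard thresholding; the problem choice is illustrative):

```python
import numpy as np

def project_sparse(x, k):
    """Euclidean projection onto the (non-convex) set of k-sparse
    vectors: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def projected_gradient_descent(A, b, k, step=None, iters=200):
    """Minimize ||Ax - b||^2 subject to ||x||_0 <= k by alternating
    a gradient step with projection onto the constraint set."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / spectral norm squared
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = project_sparse(x - step * A.T @ (A @ x - b), k)
    return x
```

Despite the non-convex constraint, this simple procedure provably recovers the true sparse solution under conditions such as restricted isometry, which is exactly the kind of analysis the monograph covers.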

    A Supervised Low-Rank Matrix Decomposition for Matching

    Get PDF
    Human identification from images captured in unconstrained scenarios is still an unsolved problem, which finds applications in several areas, ranging from all the settings typical of video surveillance to robotics, metadata enrichment of social media content, and mobile applications. The most recent approaches rely on techniques such as sparse coding and low-rank matrix decomposition. These build a generative representation of the data that, on the one hand, attempts to capture all the information descriptive of an identity; on the other hand, training and testing become complex in order to make those algorithms robust against grossly corrupted data, which is typical of unconstrained scenarios.
    This thesis introduces a novel low-rank modeling framework for human identification. The approach is supervised, forgoes a generative representation, and focuses on learning the subspace of nuisance factors responsible for data corruption. The goal of the model is to learn how to project data onto the orthogonal complement of the nuisance-factor subspace, where data become invariant to nuisance factors, thus enabling the use of simple geometry to cope with unwanted corruptions and to do classification efficiently. The proposed approach inherently promotes class separation and is computationally efficient, especially at testing time. It has been evaluated for face recognition with grossly corrupted training and testing data, obtaining very promising results. The approach has also been challenged with a person re-identification experiment, showing results comparable with the state of the art.
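The core geometric idea, projecting data onto the orthogonal complement of a nuisance-factor subspace, can be sketched as follows. The nuisance basis `N` is assumed to be given here; learning it from labeled data is the thesis's contribution and is not shown.

```python
import numpy as np

def complement_projector(N):
    """Projector onto the orthogonal complement of span(N), where the
    columns of N span the (learned) nuisance-factor subspace."""
    Q, _ = np.linalg.qr(N)  # orthonormal basis for span(N)
    return np.eye(N.shape[0]) - Q @ Q.T

# After projection, any corruption lying in span(N) vanishes:
# P @ (x + N @ c) == P @ x for every coefficient vector c,
# so simple geometry (e.g. nearest-subspace rules) suffices for classification.
```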