103 research outputs found

    Sparse general non-negative matrix factorization based on left semi-tensor product

    Dimension reduction of large-scale, high-dimensional data is a challenging task, especially for face data in large-scale face recognition systems, where high dimensionality leads to large storage requirements and long recognition times. To further reduce recognition time and storage space in such systems, we build on the general non-negative matrix factorization based on the left semi-tensor product (GNMFL), proposed in our previous work, which removes the dimension-matching constraint, and propose a sparse variant, SGNMFL/L, for decomposing the large face data sets in large-scale face recognition systems. SGNMFL/L makes the decomposed basis matrix sparser and suppresses the decomposed coefficient matrix, so the dimensions of both the basis matrix and the coefficient matrix can be further reduced. Two sets of experiments on two databases demonstrate the effectiveness of the proposed SGNMFL/L. The experiments verify the effects of two hyper-parameters on the sparseness of the basis matrix factorized by SGNMFL/L, compare conventional NMF, sparse NMF (SNMF), GNMFL, and the proposed SGNMFL/L in terms of storage space and time efficiency, and compare their face recognition accuracies under different noises. Both the theoretical derivation and the experimental results show that SGNMFL/L effectively saves storage space and reduces computation time while achieving high recognition accuracy and strong robustness.
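The idea of making the basis matrix sparser via a penalty can be illustrated with a generic L1-penalised NMF multiplicative-update sketch. This is not the authors' SGNMFL/L (the left semi-tensor product formulation, which relaxes dimension matching, is not reproduced here); the function name, penalty weight `lam`, and sizes are illustrative assumptions.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.1, iters=200, seed=0):
    """Generic L1-sparsity-penalised NMF via multiplicative updates.

    Minimises ||V - W H||_F^2 + lam * ||W||_1 with W, H >= 0.
    (A sketch only; the paper's SGNMFL/L additionally uses a left
    semi-tensor product so the factors need not dimension-match.)
    """
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, r))
    H = rng.random((r, n))
    eps = 1e-9  # avoid division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        # the extra lam in the denominator shrinks W toward sparsity
        W *= (V @ H.T) / (W @ H @ H.T + lam + eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 30)))
W, H = sparse_nmf(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Larger `lam` drives more entries of `W` toward zero, trading reconstruction error for a sparser (and hence more compactly storable) basis.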

    Distributed Unmixing of Hyperspectral Data With Sparsity Constraint

    Spectral unmixing (SU) is a core data-processing problem in hyperspectral remote sensing. The main challenge in SU is identifying the endmembers and their weights accurately. For blind estimation of the signature and fractional abundance matrices, nonnegative matrix factorization (NMF) and its extensions are widely used. One constraint added to NMF is a sparsity constraint regularized by the L1/2 norm. In this paper, a new algorithm based on distributed optimization is proposed for spectral unmixing. The algorithm employs a network of single-node clusters in which each pixel of the hyperspectral image is treated as a node. The distributed unmixing problem with sparsity constraint is optimized with the diffusion LMS strategy, from which the update equations for the fractional abundance and signature matrices are obtained. Simulation results based on the defined performance metrics illustrate the advantage of the proposed algorithm over other methods in spectral unmixing of hyperspectral data: at SNR = 25 dB, the AAD and SAD of the proposed approach improve by about 6 and 27 percent, respectively, compared with distributed unmixing without the sparsity constraint.
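The setting can be made concrete with the standard linear mixing model and the SAD metric cited in the abstract. This sketch only simulates mixed pixels and computes the spectral angle distance; it does not implement the paper's diffusion-LMS distributed optimizer, and all sizes are illustrative assumptions.

```python
import numpy as np

def sad(s, s_hat):
    """Spectral angle distance (radians) between two endmember signatures."""
    cos = (s @ s_hat) / (np.linalg.norm(s) * np.linalg.norm(s_hat) + 1e-12)
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# Linear mixing model: each pixel x = M a + noise, with abundances
# a >= 0 summing to one (the usual SU constraints).
rng = np.random.default_rng(0)
bands, p, pixels = 50, 3, 100
M = rng.random((bands, p))                    # endmember signatures
A = rng.dirichlet(np.ones(p), size=pixels).T  # columns sum to one
X = M @ A + 0.001 * rng.standard_normal((bands, pixels))

same = sad(M[:, 0], M[:, 0])  # identical signatures -> angle ~ 0
diff = sad(M[:, 0], M[:, 1])  # distinct signatures -> larger angle
```

A lower SAD between an estimated and a true signature means a better endmember estimate, which is the sense in which the paper reports a ~27 percent improvement.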

    Incremental Graph Regulated Nonnegative Matrix Factorization for Face Recognition

    In real-world applications, we seldom obtain all images at once. For example, when a company hires a new employee, that person's images must be added to the recognition system; rerunning the whole face recognition algorithm from scratch would be time-consuming. To address this problem, this paper first proposes a novel incremental subspace method, incremental graph regularized nonnegative matrix factorization (IGNMF), which incorporates manifold regularization into the incremental nonnegative matrix factorization (INMF) algorithm, so that the geometric structure of the data is preserved under incremental learning. Second, since new face images often arrive as a batch, belonging to one person or to several people, we extend IGNMF to Batch-IGNMF (B-IGNMF), which performs incremental learning in batches. Experiments show that (1) the recognition rates of IGNMF and B-IGNMF are close to that of GNMF while running faster than GNMF; (2) the running times of IGNMF and B-IGNMF are close to INMF while their recognition rates outperform INMF; and (3) compared with other popular NMF-based incremental face recognition algorithms, IGNMF and B-IGNMF outperform them in both recognition rate and running time.
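The efficiency argument rests on a simple observation: once a basis has been learned, newly arrived samples can be encoded without refactorizing the whole dataset. The sketch below shows that generic incremental step; it is not the authors' IGNMF (the graph-regularisation term is omitted), and function names and sizes are illustrative assumptions.

```python
import numpy as np

EPS = 1e-9  # avoid division by zero in multiplicative updates

def nmf(V, r, iters=300, seed=0):
    """Plain multiplicative-update NMF on the initial batch of data."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r))
    H = rng.random((r, V.shape[1]))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + EPS)
        W *= (V @ H.T) / (W @ H @ H.T + EPS)
    return W, H

def encode_batch(W, V_new, iters=200, seed=1):
    """Incremental step: keep the learned basis W fixed and solve only
    for the coefficients of the newly arrived samples -- the cheap part
    of INMF-style updates (IGNMF additionally adds a graph term)."""
    rng = np.random.default_rng(seed)
    H_new = rng.random((W.shape[1], V_new.shape[1]))
    for _ in range(iters):
        H_new *= (W.T @ V_new) / (W.T @ W @ H_new + EPS)
    return H_new

rng = np.random.default_rng(2)
V = rng.random((40, 60))            # initial "image" matrix
W, H = nmf(V, r=8)
V_new = rng.random((40, 5))         # a new batch of 5 samples
H_new = encode_batch(W, V_new)
err = np.linalg.norm(V_new - W @ H_new) / np.linalg.norm(V_new)
```

Encoding a batch touches only the new columns, which is why batch-incremental schemes like B-IGNMF can approach INMF's running time.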

    Sparse feature learning for image analysis in segmentation, classification, and disease diagnosis.

    The success of machine learning algorithms generally depends on intermediate data representations, called features, that disentangle the hidden factors of variation in the data. Moreover, machine learning models are required to generalize, in order to reduce their specificity or bias toward the training dataset. Unsupervised feature learning is useful for exploiting the large amounts of unlabeled data available to capture these variations; the learned features, however, must capture the variational patterns in data space. In this dissertation, unsupervised feature learning with sparsity is investigated for sparse and local feature extraction, with applications to lung segmentation, interpretable deep models, and Alzheimer's disease classification. Nonnegative matrix factorization, autoencoders, and 3D convolutional autoencoders are used as architectures for unsupervised feature learning, and are investigated along with nonnegativity, sparsity, and part-based representation constraints for generalized and transferable feature extraction.
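The sparsity mechanism common to these models can be sketched with a toy one-hidden-layer autoencoder carrying an L1 penalty on its codes. This is a minimal stand-in, not the dissertation's architectures (which include 3D convolutional autoencoders); the hyper-parameters and sizes are illustrative assumptions.

```python
import numpy as np

def train_sparse_autoencoder(X, h, lam=1e-3, lr=0.1, iters=300, seed=0):
    """Toy one-hidden-layer ReLU autoencoder with an L1 code penalty.

    Loss = mean((Xhat - X)^2) + lam * mean(|H|); the L1 term pushes the
    hidden activations toward sparse, part-like features.
    """
    rng = np.random.default_rng(seed)
    d, n = X.shape
    W1 = 0.1 * rng.standard_normal((h, d)); b1 = np.zeros((h, 1))
    W2 = 0.1 * rng.standard_normal((d, h)); b2 = np.zeros((d, 1))
    losses = []
    for _ in range(iters):
        Z = W1 @ X + b1
        H = np.maximum(Z, 0.0)          # sparse codes (the features)
        Xhat = W2 @ H + b2
        losses.append(np.mean((Xhat - X) ** 2) + lam * np.mean(np.abs(H)))
        # manual backprop through the reconstruction + sparsity loss
        dXhat = 2.0 * (Xhat - X) / X.size
        dW2 = dXhat @ H.T; db2 = dXhat.sum(axis=1, keepdims=True)
        dH = W2.T @ dXhat + lam * np.sign(H) / H.size
        dZ = dH * (Z > 0)               # ReLU gate
        dW1 = dZ @ X.T; db1 = dZ.sum(axis=1, keepdims=True)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, losses

X = np.random.default_rng(1).random((10, 50))  # 50 samples, 10 features
W1, losses = train_sparse_autoencoder(X, h=4)
```

Raising `lam` trades reconstruction quality for sparser, more local codes, which is the same nonnegativity/sparsity trade-off the dissertation studies across NMF and autoencoder models.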