8 research outputs found

    Online low-rank representation learning for joint multi-subspace recovery and clustering

    Benefiting from global rank constraints, the low-rank representation (LRR) method has been shown to be an effective solution to subspace learning. However, this global mechanism also makes the LRR model ill-suited to large-scale or dynamic data. For large-scale data, the LRR method suffers from high time complexity; for dynamic data, it must recompute a complex rank minimization over the entire data set whenever new samples arrive, which is prohibitively expensive. Existing attempts at online LRR either take a stochastic approach or build the representation purely from a small sample set and treat new input as out-of-sample data. The former often requires multiple runs to perform well and thus takes longer to run, while the latter formulates online LRR as an out-of-sample classification problem and is less robust to noise. In this paper, a novel online low-rank representation subspace learning method is proposed for both large-scale and dynamic data. The proposed algorithm consists of two stages: static learning and dynamic updating. In the first stage, the subspace structure is learned from a small number of data samples. In the second stage, the intrinsic principal components of the entire data set are computed incrementally using the learned subspace structure, and the low-rank representation matrix is also solved incrementally by an efficient online singular value decomposition (SVD) algorithm. The time complexity is reduced dramatically for large-scale data, and repeated computation is avoided for dynamic problems. We further provide a theoretical analysis comparing the proposed online algorithm with the batch LRR method. Finally, experimental results on typical subspace recovery and subspace clustering tasks show that the proposed algorithm performs comparably to or better than batch methods, including batch LRR, and significantly outperforms state-of-the-art online methods.
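    The core of the dynamic-updating stage, incrementally extending an SVD as new columns arrive, can be illustrated with a Brand-style rank-one update. The sketch below, in NumPy, is an assumed simplification for illustration (function names are mine, and this is not the paper's exact algorithm): a new column is split into its projection onto the current subspace and an orthogonal residual, and a small (r+1)x(r+1) core matrix is re-decomposed instead of the full data matrix.

    ```python
    import numpy as np

    def svd_append_column(U, s, Vt, c, rank):
        """Update the factorization A ~= U @ diag(s) @ Vt when a new
        column c is appended to A, touching only an (r+1)x(r+1) matrix."""
        r, n = len(s), Vt.shape[1]
        p = U.T @ c                      # coordinates of c in the current subspace
        resid = c - U @ p                # component of c outside the subspace
        rho = np.linalg.norm(resid)
        j = resid / rho if rho > 1e-10 else np.zeros_like(c)
        # Small core matrix: [A, c] = [U, j] @ K @ [[Vt, 0], [0, 1]]
        K = np.zeros((r + 1, r + 1))
        K[:r, :r] = np.diag(s)
        K[:r, -1] = p
        K[-1, -1] = rho
        Uk, sk, Vtk = np.linalg.svd(K)   # cheap SVD of the small core
        U_new = np.hstack([U, j[:, None]]) @ Uk
        V_big = np.zeros((n + 1, r + 1))
        V_big[:n, :r] = Vt.T
        V_big[-1, -1] = 1.0
        V_new = V_big @ Vtk.T
        # Truncate back to the requested rank
        return U_new[:, :rank], sk[:rank], V_new[:, :rank].T
    ```

    Because only the small core matrix is re-decomposed, each update costs far less than recomputing the SVD of the full, growing data matrix, which is the source of the speedup over batch LRR described above.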

    Spoofing Detection in Automatic Speaker Verification Systems Using DNN Classifiers and Dynamic Acoustic Features


    A new sparse representation framework for compressed sensing MRI

    Compressed sensing based magnetic resonance imaging (MRI) via sparse representation (or transform) has recently attracted broad interest. Tight frame (TF)-based sparse representation is a promising approach in compressed sensing MRI. However, conventional TF-based sparse representation has difficulty exploiting the sparsity of the whole image: an image usually contains different structures and textures, while a single tight frame can only represent one particular kind of content well, so reconstructing a high-quality magnetic resonance (MR) image is a challenge. In this work, we propose a new sparse representation framework that fuses a double tight frame (DTF) into mixed-norm regularization for MR image reconstruction from undersampled k-space data. In this framework, the MR image is decomposed into smooth and nonsmooth regions. For the smooth regions, a wavelet TF-based weighted L1-norm regularization is developed to reconstruct the piecewise-smooth information of the image. For the nonsmooth regions, we introduce a curvelet TF-based robust L1,a-norm regularization, whose parameter preserves edge structural details and texture. To estimate a reasonable parameter, an adaptive parameter selection scheme is designed for the robust L1,a-norm regularization. Experimental results demonstrate that the proposed method achieves the best image reconstruction results compared with other existing methods in terms of both quantitative metrics and visual effect.
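    The general recipe behind such methods, minimizing a data-fidelity term on the sampled k-space plus a sparsity penalty, can be illustrated with a basic ISTA (iterative soft-thresholding) loop. The NumPy sketch below is a deliberately simplified stand-in, assuming the image itself is sparse and using a plain L1 penalty rather than the paper's double-tight-frame mixed-norm model; all names are illustrative.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding: the proximal operator of the L1 norm."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista_cs_mri(y, mask, lam=0.01, n_iter=300):
        """Solve min_x 0.5*||M F x - y||^2 + lam*||x||_1 by ISTA,
        where F is the orthonormal 2-D FFT and M a k-space sampling mask."""
        x = np.zeros(mask.shape)
        for _ in range(n_iter):
            # Gradient of the data term: F^H M (F x - y)
            grad = np.fft.ifft2(mask * (np.fft.fft2(x, norm="ortho") - y),
                                norm="ortho").real
            # Unitary FFT => Lipschitz constant 1, so step size 1 is safe
            x = soft(x - grad, lam)
        return x
    ```

    Frame-based methods replace the identity transform here with wavelet or curvelet analysis/synthesis operators and threshold the frame coefficients instead, which is where the paper's region-dependent wavelet/curvelet split and adaptive parameter enter.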

    Connections Between Nuclear-Norm and Frobenius-Norm-Based Representations

    No full text