Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization with KL Divergence for Large Sparse Datasets
Nonnegative Matrix Factorization (NMF) with Kullback-Leibler Divergence
(NMF-KL) is one of the most significant NMF problems and equivalent to
Probabilistic Latent Semantic Indexing (PLSI), which has been successfully
applied in many applications. For sparse count data, a Poisson distribution and
KL divergence provide sparse models and sparse representation, which describe
the random variation better than a normal distribution and Frobenius norm.
Specifically, sparse models provide a more concise understanding of the appearance
of attributes over latent components, while sparse representation provides
concise interpretability of the contribution of latent components over
instances.
However, minimizing NMF with KL divergence is much more difficult
than minimizing NMF with Frobenius norm, and sparse models, sparse
representation, and fast algorithms for large sparse datasets remain
challenges for NMF with KL divergence. In this paper, we propose a fast
parallel randomized coordinate descent algorithm with fast convergence on
large sparse datasets that achieves sparse models and sparse representation.
In our experiments, the proposed algorithm outperforms existing methods for
this problem.
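For context, the objective being minimized is the (generalized) KL divergence D(V || WH) over nonnegative factors W and H. The sketch below uses the classic multiplicative updates of Lee and Seung for KL-NMF as a baseline illustration of this objective; it is not the paper's randomized coordinate descent method, and the variable names are my own.

```python
import numpy as np

def nmf_kl(V, k, iters=200, eps=1e-10, seed=0):
    """Baseline multiplicative updates for NMF under KL divergence
    (Lee & Seung). Illustrative only: the paper's own method is a
    parallel randomized coordinate descent, not these updates."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(iters):
        WH = W @ H + eps
        # H update: elementwise scaling by (W^T (V / WH)) / (column sums of W)
        H *= (W.T @ (V / WH)) / (W.sum(axis=0, keepdims=True).T + eps)
        WH = W @ H + eps
        # W update: elementwise scaling by ((V / WH) H^T) / (row sums of H)
        W *= ((V / WH) @ H.T) / (H.sum(axis=1, keepdims=True).T + eps)
    return W, H
```

These updates monotonically decrease the KL objective but converge slowly, which is exactly the gap that faster per-coordinate schemes aim to close on large sparse inputs.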
Toward a Robust Sparse Data Representation for Wireless Sensor Networks
Compressive sensing has been successfully used for optimized operations in
wireless sensor networks. However, raw data collected by sensors may be neither
originally sparse nor easily transformed into a sparse data representation.
This paper addresses the problem of transforming source data collected by
sensor nodes into a sparse representation with a few nonzero elements. Our
contributions that address three major issues include: 1) an effective method
that extracts population sparsity of the data, 2) a sparsity ratio guarantee
scheme, and 3) a customized learning algorithm of the sparsifying dictionary.
We introduce an unsupervised neural network to extract an intrinsic sparse
coding of the data. The sparse codes are generated at the activation of the
hidden layer using a sparsity nomination constraint and a shrinking mechanism.
Our analysis using real data samples shows that the proposed method outperforms
conventional sparsity-inducing methods.

Comment: 8 pages
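The shrinking mechanism described above can be illustrated with soft-thresholding, a standard way to zero out small hidden activations and produce sparse codes. The snippet below is a minimal sketch under that assumption; the function names, the linear encoder, and the threshold `lam` are illustrative and not the paper's actual network or training rule.

```python
import numpy as np

def soft_threshold(z, lam):
    """Shrinkage: zero out activations with magnitude <= lam,
    shrink the rest toward zero by lam."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def sparse_codes(X, W, lam=0.5):
    """One hidden-layer encoding followed by a shrinking nonlinearity.
    Hypothetical illustration of how a shrinkage step yields sparse
    codes at the hidden layer; raising lam raises the sparsity ratio."""
    return soft_threshold(X @ W, lam)
```

With `lam = 0` the codes reduce to the plain linear activations (almost never exactly zero); increasing `lam` trades reconstruction fidelity for a guaranteed fraction of zero entries, which is the lever a sparsity-ratio guarantee scheme would tune.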