120 research outputs found
Non-negative matrix factorization for medical imaging
A non-negative matrix factorization approach to dimensionality reduction is proposed to aid classification of images. The original images can be stored as lower-dimensional columns of a matrix that hold degrees of belonging to feature components, so they can be used in the training phase of the classification at lower runtime and without loss of accuracy. The extracted features can be visually examined and the images reconstructed with limited error. The proof of concept is performed on a benchmark of handwritten digits, followed by an application to histopathological colorectal cancer slides. Results are encouraging, though dealing with real-world medical data raises a number of issues. (Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech)
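The dimensionality reduction described above can be sketched with the classic Lee–Seung multiplicative updates for NMF under the Frobenius norm; this is a generic baseline, not the specific pipeline of the paper, and all names below are illustrative:

```python
import numpy as np

def nmf(V, k, iters=200, seed=0):
    """Factor a nonnegative matrix V (m x n) into W (m x k) @ H (k x n)
    using Lee & Seung multiplicative updates for the Frobenius norm.
    Columns of H are the low-dimensional encodings of the input columns."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-4
    H = rng.random((k, n)) + 1e-4
    eps = 1e-10  # guard against division by zero
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Toy example: each column of V stands in for one flattened image.
V = np.random.default_rng(1).random((20, 30))
W, H = nmf(V, k=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

A classifier would then be trained on the columns of `H` instead of the original images, which is where the runtime saving in the abstract comes from.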
Recovering Multiple Nonnegative Time Series From a Few Temporal Aggregates
Motivated by electricity consumption metering, we extend existing nonnegative matrix factorization (NMF) algorithms to use linear measurements as observations, instead of matrix entries. The objective is to estimate multiple time series at a fine temporal scale from temporal aggregates measured on each individual series. Furthermore, our algorithm is extended to take individual autocorrelation into account for better estimation, using a recent convex relaxation of the quadratically constrained quadratic program. Extensive experiments on synthetic and real-world electricity consumption datasets illustrate the effectiveness of our matrix recovery algorithms.
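The measurement model above, where each observation is a sum over a block of consecutive entries of one series rather than a single matrix entry, can be sketched with a hypothetical projected-gradient fit (the paper's actual algorithm, including the autocorrelation term, is more involved):

```python
import numpy as np

def aggregate(V, block):
    """Temporal aggregation: sum each run of `block` consecutive rows
    of V (time x series), giving one observation per block per series."""
    T, n = V.shape
    return V.reshape(T // block, block, n).sum(axis=1)

def nmf_from_aggregates(A, block, T, k, iters=2000, lr=0.02, seed=0):
    """Illustrative sketch: fit nonnegative factors W (T x k), H (k x n)
    so that aggregating W @ H reproduces the observed aggregates A."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    W = rng.random((T, k))
    H = rng.random((k, n))
    for _ in range(iters):
        R = aggregate(W @ H, block) - A      # residual in aggregate space
        G = np.repeat(R, block, axis=0)      # back-project to the fine scale
        # simultaneous gradient step, then projection onto nonnegativity
        W, H = W - lr * (G @ H.T), H - lr * (W.T @ G)
        W = np.clip(W, 0, None)
        H = np.clip(H, 0, None)
    return W, H
```

The key point the abstract makes is that `A`, not `V`, is observed; the fine-scale series are recovered only through the low-rank nonnegative structure.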
Fast Parallel Randomized Algorithm for Nonnegative Matrix Factorization with KL Divergence for Large Sparse Datasets
Nonnegative Matrix Factorization (NMF) with Kullback-Leibler divergence (NMF-KL) is one of the most significant NMF problems and is equivalent to Probabilistic Latent Semantic Indexing (PLSI), which has been successfully applied in many applications. For sparse count data, a Poisson distribution and KL divergence provide sparse models and sparse representations, which describe the random variation better than a normal distribution and the Frobenius norm. In particular, sparse models give a more concise picture of how attributes appear over latent components, while sparse representations give a concise interpretation of how latent components contribute to instances. However, minimizing NMF with KL divergence is much more difficult than minimizing NMF with the Frobenius norm, and sparse models, sparse representations, and fast algorithms for large sparse datasets remain open challenges for NMF with KL divergence. In this paper, we propose a fast parallel randomized coordinate descent algorithm with fast convergence on large sparse datasets that achieves sparse models and sparse representations. In our experiments, the proposed algorithm outperforms existing methods on this problem.
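For reference, the standard multiplicative updates for KL-divergence NMF look as follows; this is the classic Lee–Seung baseline that the paper's randomized coordinate-descent algorithm is designed to improve upon, not the proposed method itself:

```python
import numpy as np

def nmf_kl(V, k, iters=300, seed=0):
    """Lee & Seung multiplicative updates for NMF under the generalized
    KL (I-)divergence D(V || W @ H). Baseline only, not the paper's
    randomized coordinate-descent algorithm."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k)) + 1e-3
    H = rng.random((k, n)) + 1e-3
    eps = 1e-10  # numerical guard
    for _ in range(iters):
        WH = W @ H + eps
        H *= (W.T @ (V / WH)) / (W.sum(axis=0)[:, None] + eps)
        WH = W @ H + eps
        W *= ((V / WH) @ H.T) / (H.sum(axis=1)[None, :] + eps)
    return W, H
```

Each update touches every entry of `V`, which is why dense multiplicative updates scale poorly on large sparse count matrices and why coordinate-descent schemes over the nonzeros are attractive.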
PNNL - Using Matrix and Tensor Factorization for Analyzing Radiation Transport Data
The Disruptive Technologies Group in the National Security Directorate of Pacific Northwest National Laboratory teamed up with students at Embry-Riddle Aeronautical University on a research project that aims to develop quantitative methods for characterizing features in radiation transport simulation data and comparing features across different computational approaches. Understanding how radiation particles are transported throughout a system and interact with shielding is extremely computationally expensive. Reduced order models (ROMs) can be used to significantly increase the speed of these calculations. This project focuses on analysis of the simulated radiation transport for Cobalt-60, Cesium-137, and Technetium-99. A ROM may be developed from several formalisms, and the feature vectors of each can then be analyzed. The methods considered here include principal component analysis (PCA), non-negative matrix factorization (NNMF), and CP tensor decomposition (CPT). By comparing the signal from fitted Lorentzian profiles to spectral features, we evaluate whether each ROM is capable of accurately displaying the radiation signal traces in the data.
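The PCA formalism mentioned above can be sketched as a truncated SVD of a snapshot matrix; this is a generic PCA-based ROM illustration with hypothetical names, not the project's actual pipeline:

```python
import numpy as np

def pca_rom(X, r):
    """Hypothetical PCA-based reduced-order model: keep the top-r
    principal components of simulation snapshots X (samples x features)."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    components = Vt[:r]               # r feature vectors (the ROM basis)
    scores = (X - mu) @ components.T  # low-dimensional representation
    X_hat = scores @ components + mu  # rank-r reconstruction
    return components, scores, X_hat
```

The rows of `components` play the role of the "feature vectors" the project compares across PCA, NNMF, and CPT; the reconstruction error measures how faithfully the ROM reproduces the signal traces.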
Block-Simultaneous Direction Method of Multipliers: A proximal primal-dual splitting algorithm for nonconvex problems with multiple constraints
We introduce a generalization of the linearized Alternating Direction Method of Multipliers to optimize a real-valued function f of multiple arguments with potentially multiple constraints g_i on each of them. The function f may be nonconvex as long as it is convex in every argument, while the constraints g_i need to be convex but not smooth. If f is smooth, the proposed Block-Simultaneous Direction Method of Multipliers (bSDMM) can be interpreted as a proximal analog to inexact coordinate descent methods under constraints. Unlike alternative approaches for joint solvers of multiple-constraint problems, we do not require linear operators of a constraint function to be invertible or linked to each other. bSDMM is well suited for a range of optimization problems, in particular for data analysis, where f is the likelihood function of a model and g could be a transformation matrix describing e.g. finite differences or basis transforms. We apply bSDMM to the non-negative matrix factorization task of a hyperspectral unmixing problem and demonstrate convergence and the effectiveness of multiple constraints on both matrix factors. The algorithms are implemented in Python and released as an open-source package. (Comment: 13 pages, 4 figures)
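The core idea, block-wise updates of a smooth f with proximal steps enforcing non-smooth constraints on each argument, can be illustrated on NMF with a simple proximal alternating scheme; note this is a minimal single-constraint sketch in the PALM style, not bSDMM itself, which handles several constraints per block via ADMM-type dual variables:

```python
import numpy as np

def prox_nonneg(X):
    # Proximal operator of the indicator of the nonnegative orthant.
    return np.clip(X, 0, None)

def nmf_proximal(V, k, iters=300, seed=0):
    """Sketch of proximal block updates for f(W, H) = 0.5 ||V - W H||_F^2,
    with one non-smooth constraint (nonnegativity) per block. Step sizes
    come from each block's Lipschitz constant."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(iters):
        LW = np.linalg.norm(H @ H.T, 2) + 1e-10   # Lipschitz const. of the W-block
        W = prox_nonneg(W - ((W @ H - V) @ H.T) / LW)
        LH = np.linalg.norm(W.T @ W, 2) + 1e-10   # Lipschitz const. of the H-block
        H = prox_nonneg(H - (W.T @ (W @ H - V)) / LH)
    return W, H
```

In bSDMM the single `prox_nonneg` per block would be replaced by a simultaneous treatment of several proximal operators, each possibly composed with its own linear operator.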