A multilevel approach for nonnegative matrix factorization
Nonnegative Matrix Factorization (NMF) is the problem of approximating a nonnegative matrix by the product of two low-rank nonnegative matrices, and has been shown to be particularly useful in many applications, e.g., in text mining, image processing, and computational biology. In this paper, we explain how algorithms for NMF can be embedded into the framework of multilevel methods in order to accelerate their convergence. This technique can be applied in situations where the data admit a good approximate representation in a lower-dimensional space through linear transformations preserving nonnegativity. A simple multilevel strategy is described and is experimentally shown to significantly speed up three popular NMF algorithms (alternating nonnegative least squares, multiplicative updates, and hierarchical alternating least squares) on several standard image datasets.

Keywords: nonnegative matrix factorization, algorithms, multigrid and multilevel methods, image processing
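As context for the baselines being accelerated, here is a minimal sketch of one of the three algorithms named above, the multiplicative-update rule of Lee and Seung; the function name and defaults are illustrative assumptions, and the paper's multilevel acceleration layer is not shown.

```python
import numpy as np

def nmf_multiplicative_updates(V, rank, n_iter=200, eps=1e-10, seed=0):
    """Approximate a nonnegative matrix V as W @ H with W, H >= 0,
    using Lee-Seung multiplicative updates (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        # Each update multiplies by a nonnegative ratio, so W and H
        # stay elementwise nonnegative throughout.
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Example usage on random nonnegative data:
V = np.random.default_rng(1).random((100, 80))
W, H = nmf_multiplicative_updates(V, rank=10)
```

A multilevel scheme of the kind the abstract describes would wrap such an update loop: restrict V to a coarser representation, iterate cheaply there, then prolong the factors back as a warm start at the fine level.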
Multilevel Artificial Neural Network Training for Spatially Correlated Learning
Multigrid modeling algorithms are a technique used to accelerate relaxation models running on a hierarchy of similar graphlike structures. We introduce and demonstrate a new method for training neural networks which uses multilevel methods. Using an objective function derived from a graph-distance metric, we perform orthogonally-constrained optimization to find optimal prolongation and restriction maps between graphs. We compare and contrast several methods for performing this numerical optimization, and additionally present some new theoretical results on upper bounds for this type of objective function. Once calculated, these optimal maps between graphs form the core of Multiscale Artificial Neural Network (MsANN) training, a new procedure we present which simultaneously trains a hierarchy of neural network models of varying spatial resolution. Parameter information is passed between members of this hierarchy according to standard coarsening and refinement schedules from the multiscale modelling literature. In our machine learning experiments, these models are able to learn faster than default training, achieving a comparable level of error in an order of magnitude fewer training examples.

Comment: Manuscript (24 pages) and Supplementary Material (4 pages). Updated January 2019 to reflect new formulation of MsANN structure and new training procedure.
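The coarse-to-fine parameter transfer can be pictured with a short sketch; the helper names, the pairwise-averaging prolongation, and the choice of restriction as the transpose of prolongation are assumptions for illustration, not the paper's graph-distance-optimized maps.

```python
import numpy as np

def pairwise_average_prolongation(n_coarse):
    """Hypothetical prolongation map: each coarse node spawns two fine
    nodes. Columns are orthonormal (P.T @ P = I), mirroring the
    orthogonality constraint described in the abstract."""
    P = np.zeros((2 * n_coarse, n_coarse))
    for j in range(n_coarse):
        P[2 * j, j] = P[2 * j + 1, j] = 1.0 / np.sqrt(2.0)
    return P

def prolong_weights(W_coarse, P_out, P_in):
    """Lift a coarse layer's weight matrix to the next finer level."""
    return P_out @ W_coarse @ P_in.T

def restrict_weights(W_fine, P_out, P_in):
    """Project fine-level weights back down; restriction is taken here
    as the adjoint (transpose) of prolongation."""
    return P_out.T @ W_fine @ P_in
```

With orthonormal columns, restricting immediately after prolonging recovers the coarse weights exactly (since P.T @ P = I), which is one reason orthogonal constraints are attractive for this kind of hierarchy.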