Semi and weighted semi-nonnegative matrix factorization: comparative study
Master's dissertation (advisor: Jacques Wainer), Universidade Estadual de Campinas, Instituto de Computação. Abstract: Algorithms that involve matrix factorization have been the object of intense study in recent years, generating a wide range of techniques and applications for many different problems. Given an input data matrix X, the simplest matrix factorization problem can be defined as the task of finding matrices F and G, usually of low rank, such that X ≈ FG. I consider two variations of the matrix factorization problem: Semi-Nonnegative Matrix Factorization (SNMF), which requires the matrix G to be nonnegative, and Weighted Semi-Nonnegative Matrix Factorization (WSNMF), which additionally deals with cases where the input data has missing or uncertain values.
This dissertation aims to compare different algorithms and strategies to solve these problems, focusing on two main strategies: Constrained Alternating Least Squares and Multiplicative Updates.
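The factorization X ≈ FG with G ≥ 0 and F unconstrained that the dissertation studies can be sketched compactly. The following is a minimal illustration, not the dissertation's own implementation: it combines an unconstrained least-squares step for F with the multiplicative update for G in the style of Ding, Li and Jordan's semi-NMF; all variable names are illustrative.

```python
import numpy as np

def semi_nmf(X, k, n_iter=200, eps=1e-9, seed=0):
    """Sketch of semi-NMF: X ~ F @ G with G >= 0, F unconstrained.

    F-step: exact least squares. G-step: multiplicative update that
    preserves nonnegativity (A+ and A- are the positive and negative
    parts of a matrix A).
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    G = rng.random((k, n)) + eps                  # nonnegative init
    F = None
    pos = lambda A: (np.abs(A) + A) / 2
    neg = lambda A: (np.abs(A) - A) / 2
    for _ in range(n_iter):
        # F-step: F = X G^T (G G^T)^{-1}, no sign constraint
        F = X @ G.T @ np.linalg.pinv(G @ G.T)
        # G-step: multiplicative update keeps G entrywise nonnegative
        FtX, FtF = F.T @ X, F.T @ F
        num = pos(FtX) + neg(FtF) @ G
        den = neg(FtX) + pos(FtF) @ G + eps
        G *= np.sqrt(num / den)
    return F, G
```

Note that X may contain negative entries here; only G is constrained, which is exactly what distinguishes semi-NMF from plain NMF.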
Two Algorithms for Orthogonal Nonnegative Matrix Factorization with Application to Clustering
Approximate matrix factorization techniques with both nonnegativity and
orthogonality constraints, referred to as orthogonal nonnegative matrix
factorization (ONMF), have been recently introduced and shown to work
remarkably well for clustering tasks such as document classification. In this
paper, we introduce two new methods to solve ONMF. First, we show a mathematical
equivalence between ONMF and a weighted variant of spherical k-means, from
which we derive our first method, a simple EM-like algorithm. This also allows
us to determine when ONMF should be preferred to k-means and spherical k-means.
Our second method is based on an augmented Lagrangian approach. Standard ONMF
algorithms typically enforce nonnegativity for their iterates while trying to
achieve orthogonality at the limit (e.g., using a proper penalization term or a
suitably chosen search direction). Our method works the opposite way:
orthogonality is strictly imposed at each step while nonnegativity is
asymptotically obtained, using a quadratic penalty. Finally, we show that the
two proposed approaches compare favorably with standard ONMF algorithms on
synthetic, text, and image data sets. Comment: 17 pages, 8 figures. New numerical experiments (document and synthetic data sets).
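The equivalence the abstract draws between ONMF and clustering rests on the observation that when the rows of G are orthogonal and nonnegative, each data column is weighted by at most one row of G. A hedged sketch of an EM-like alternation in that spirit (assignment step, then a rank-one refit of each centre) is below; it is an illustration of the idea, not the paper's exact algorithm, and all names are assumptions.

```python
import numpy as np

def onmf_em(X, k, n_iter=50, seed=0):
    """EM-like ONMF sketch exploiting the link to spherical k-means.

    E-step: assign each column of X to one centre (this enforces
    disjoint supports for the rows of G, hence G @ G.T is diagonal).
    M-step: refit each unit-norm centre from its assigned columns.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k))
    F /= np.linalg.norm(F, axis=0)
    G = np.zeros((k, n))
    for _ in range(n_iter):
        # E-step: one cluster per column, nonnegative weight
        scores = F.T @ X                          # k x n projections
        labels = np.argmax(np.abs(scores), axis=0)
        G = np.zeros((k, n))
        for j in range(n):
            i = labels[j]
            G[i, j] = max(scores[i, j], 0.0)
        # M-step: rank-one refit of each centre on its own columns
        for i in range(k):
            cols = labels == i
            if cols.any():
                v = X[:, cols] @ G[i, cols]
                nv = np.linalg.norm(v)
                if nv > 0:
                    F[:, i] = v / nv
    return F, G
```

Because each column of G has at most one nonzero entry, G @ G.T comes out diagonal, which is the orthogonality that standard ONMF algorithms only reach in the limit.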
Bi-Objective Nonnegative Matrix Factorization: Linear Versus Kernel-Based Models
Nonnegative matrix factorization (NMF) is a powerful class of feature
extraction techniques that has been successfully applied in many fields, namely
in signal and image processing. Current NMF techniques have been limited to a
single-objective problem in either its linear or nonlinear kernel-based
formulation. In this paper, we propose to revisit the NMF as a multi-objective
problem, in particular a bi-objective one, where the objective functions
defined in both input and feature spaces are taken into account. By taking
advantage of the sum-weighted method from the literature on multi-objective
optimization, the proposed bi-objective NMF determines a set of nondominated,
Pareto optimal, solutions instead of a single optimal decomposition. Moreover,
the corresponding Pareto front is studied and approximated. Experimental
results on unmixing real hyperspectral images confirm the efficiency of the
proposed bi-objective NMF compared with the state-of-the-art methods.
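The sum-weighted scalarization the abstract relies on turns the two objectives into a family of single-objective problems, one per weight α, whose minimizers trace an approximation of the Pareto front. The toy sketch below illustrates just that mechanism with two simple quadratic objectives and a grid-search minimizer standing in for the NMF costs and solver (all of which are assumptions for illustration).

```python
import numpy as np

def weighted_sum_front(f1, f2, solve, alphas):
    """Sum-weighted scalarization: minimise alpha*f1 + (1-alpha)*f2
    for each alpha and keep only the nondominated objective pairs."""
    points = []
    for a in alphas:
        x = solve(lambda t: a * f1(t) + (1 - a) * f2(t))
        points.append((f1(x), f2(x)))
    # discard dominated points to approximate the Pareto front
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p
                       for q in points)]

def grid_min(f, lo=-2.0, hi=2.0, n=4001):
    """Crude stand-in for a real solver: 1-D grid search."""
    xs = np.linspace(lo, hi, n)
    return xs[np.argmin([f(x) for x in xs])]
```

A usage example with conflicting objectives f1(x) = (x-1)^2 and f2(x) = (x+1)^2: sweeping alphas over [0, 1] yields a set of nondominated trade-off points rather than a single optimum, which is the behaviour the bi-objective NMF claims for its decompositions.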
Missing Spectrum-Data Recovery in Cognitive Radio Networks Using Piecewise Constant Nonnegative Matrix Factorization
In this paper, we propose a missing spectrum data recovery technique for
cognitive radio (CR) networks using Nonnegative Matrix Factorization (NMF). It
is shown that the spectrum measurements collected from secondary users (SUs)
can be factorized as the product of a channel gain matrix and an activation
matrix. Then, an NMF method with piecewise constant activation coefficients is
introduced to analyze the measurements and estimate the missing spectrum data.
The proposed optimization problem is solved by a Majorization-Minimization
technique. Numerical simulations verify that the proposed technique is
able to accurately estimate the missing spectrum data in the presence of noise
and fading. Comment: 6 pages, 6 figures, accepted for presentation at the MILCOM'15 Conference.
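The recovery idea, stripped of the paper's piecewise-constant prior on the activations, is plain weighted NMF: a binary mask marks the observed measurements, the factorization is fit only on those entries, and the low-rank product fills in the missing ones. The sketch below shows that simplified mechanism with standard masked multiplicative updates; it is not the paper's Majorization-Minimization algorithm, and every name in it is an assumption.

```python
import numpy as np

def masked_nmf(X, M, k, n_iter=1000, eps=1e-9, seed=0):
    """Weighted NMF sketch for missing-data recovery.

    M is a 0/1 mask (1 = observed). Updates only fit the observed
    entries; the returned X_hat = F @ H provides estimates for the
    masked-out (missing) entries as well.
    """
    rng = np.random.default_rng(seed)
    m, n = X.shape
    F = rng.random((m, k)) + eps
    H = rng.random((k, n)) + eps
    for _ in range(n_iter):
        # masked multiplicative updates (Frobenius loss on observed entries)
        F *= ((M * X) @ H.T) / ((M * (F @ H)) @ H.T + eps)
        H *= (F.T @ (M * X)) / (F.T @ (M * (F @ H)) + eps)
    return F @ H
```

Since both factors stay entrywise nonnegative under multiplicative updates, the recovered matrix is nonnegative by construction, matching the physical meaning of spectrum power measurements.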
Low-Rank Matrix Approximation with Weights or Missing Data is NP-hard
Weighted low-rank approximation (WLRA), a dimensionality reduction technique
for data analysis, has been successfully used in several applications, such as
in collaborative filtering to design recommender systems or in computer vision
to recover structure from motion. In this paper, we study the computational
complexity of WLRA and prove that it is NP-hard to find an approximate
solution, even when a rank-one approximation is sought. Our proofs are based on
a reduction from the maximum-edge biclique problem, and apply to strictly
positive weights as well as binary weights (the latter corresponding to
low-rank matrix approximation with missing data). Comment: Proof of Lemma 4 (Lemma 3 in v1) has been corrected. Some remarks and comments have been added. Accepted in SIAM Journal on Matrix Analysis and Applications.