Predefined Sparseness in Recurrent Sequence Models
Inducing sparseness while training neural networks has been shown to yield
models with a lower memory footprint but similar effectiveness to dense models.
However, sparseness is typically induced starting from a dense model, and thus
this advantage does not hold during training. We propose techniques to enforce
sparseness upfront in recurrent sequence models for NLP applications, to also
benefit training. First, in language modeling, we show how to increase hidden
state sizes in recurrent layers without increasing the number of parameters,
leading to more expressive models. Second, for sequence labeling, we show that
word embeddings with predefined sparseness lead to similar performance as dense
embeddings, at a fraction of the number of trainable parameters.Comment: the SIGNLL Conference on Computational Natural Language Learning
(CoNLL, 2018
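One simple way to realize the first idea (a larger hidden state at an unchanged parameter count) is a block-diagonal recurrent weight matrix: several small dense blocks together cost as many parameters as one smaller fully dense matrix, while the concatenated hidden state is larger. The sketch below illustrates that accounting in NumPy; the block-diagonal pattern, sizes, and tanh cell are illustrative assumptions, not necessarily the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dense baseline: hidden size 4 -> 4 * 4 = 16 recurrent parameters.
dense_h = 4
dense_params = dense_h * dense_h

# Predefined-sparse alternative: four independent 2x2 blocks on the
# diagonal also use 4 * (2 * 2) = 16 parameters, but the hidden state
# is twice as large (size 8).
n_blocks, block = 4, 2
sparse_h = n_blocks * block
blocks = [rng.standard_normal((block, block)) for _ in range(n_blocks)]
W_hh = np.zeros((sparse_h, sparse_h))
for i, B in enumerate(blocks):
    W_hh[i * block:(i + 1) * block, i * block:(i + 1) * block] = B

sparse_params = sum(B.size for B in blocks)
assert sparse_params == dense_params  # same trainable parameter count
assert sparse_h == 2 * dense_h        # larger hidden state

# One recurrent step with a plain tanh cell over the enlarged state
# (the input is assumed to be already projected to size sparse_h):
h = np.zeros(sparse_h)
x_proj = rng.standard_normal(sparse_h)
h = np.tanh(W_hh @ h + x_proj)
```

Because the zero pattern is fixed before training, the off-block entries never need to be stored or updated, so the memory advantage holds during training as well as at inference.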
Non-negative matrix factorization with sparseness constraints
Non-negative matrix factorization (NMF) is a recently developed technique for
finding parts-based, linear representations of non-negative data. Although it
has successfully been applied in several applications, it does not always
result in parts-based representations. In this paper, we show how explicitly
incorporating the notion of 'sparseness' improves the resulting decompositions.
Additionally, we provide complete MATLAB code both for standard NMF and for our
extension. Our hope is that this will further the application of these methods
to solving novel data-analysis problems.
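The abstract refers to MATLAB code for standard NMF and its sparseness-constrained extension; as a rough NumPy equivalent, the sketch below shows Hoyer's sparseness measure together with standard multiplicative NMF updates (Lee and Seung). The data matrix, rank, and iteration count are illustrative assumptions; the full constrained method would additionally project the factors to a target sparseness level after each update, which is omitted here.

```python
import numpy as np

def hoyer_sparseness(x):
    """Sparseness of a vector: 1 for a one-hot vector, 0 for a uniform one."""
    n = x.size
    l1 = np.abs(x).sum()
    l2 = np.sqrt((x ** 2).sum())
    return (np.sqrt(n) - l1 / l2) / (np.sqrt(n) - 1)

rng = np.random.default_rng(0)
V = rng.random((20, 30))      # non-negative data matrix (assumed shape)
r = 5                         # inner rank of the factorization
W = rng.random((20, r)) + 1e-4
H = rng.random((r, 30)) + 1e-4
eps = 1e-9                    # guards against division by zero

# Standard multiplicative updates for min ||V - WH||_F with W, H >= 0.
for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The updates keep `W` and `H` non-negative by construction; the sparseness-constrained variant would monitor `hoyer_sparseness` on the columns of `W` (or rows of `H`) and project them back to the desired level, which is what yields parts-based representations even when plain NMF does not.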