An MDL framework for sparse coding and dictionary learning
The power of sparse signal modeling with learned over-complete dictionaries
has been demonstrated in a variety of applications and fields, from signal
processing to statistical inference and machine learning. However, the
statistical properties of these models, such as under-fitting or over-fitting
given sets of data, are still not well characterized in the literature. As a
result, the success of sparse modeling depends on hand-tuning critical
parameters for each dataset and application. This work addresses this issue by
providing a practical and objective characterization of sparse models by means
of the Minimum Description Length (MDL) principle -- a well established
information-theoretic approach to model selection in statistical inference. The
resulting framework derives a family of efficient sparse coding and dictionary
learning algorithms which, by virtue of the MDL principle, are completely
parameter free. Furthermore, the framework makes it possible to incorporate
additional prior information, such as Markovian dependencies, into existing
models, or to
define completely new problem formulations, including in the matrix analysis
area, in a natural way. These virtues will be demonstrated with parameter-free
algorithms for the classic image denoising and classification problems, and for
low-rank matrix recovery in video applications.
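To make the two-part MDL criterion concrete, here is a minimal, hypothetical Python sketch (not the paper's actual algorithm): a greedy coder keeps adding dictionary atoms only while the total codelength, bits for the quantized coefficients plus bits for the residual, keeps shrinking. The uniform atom-index code, the 16-bit coefficient quantization, and the Gaussian residual model are all illustrative assumptions.

    import numpy as np

    def description_length(x, D, support, coef, bits_per_coef=16):
        """Two-part codelength: bits to name and quantize each nonzero
        coefficient, plus ideal bits for the residual under a Gaussian fit."""
        n, k = len(x), len(support)
        residual = x - D[:, support] @ coef
        sigma2 = max(np.mean(residual ** 2), 1e-12)
        l_model = k * (np.log2(D.shape[1]) + bits_per_coef)
        l_data = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
        return l_model + l_data

    def mdl_sparse_code(x, D, max_k=20):
        """Greedy (OMP-style) coding that stops once adding another atom
        no longer reduces the total description length."""
        support, best = [], (np.inf, None, None)
        residual = x.copy()
        for _ in range(max_k):
            atom = int(np.argmax(np.abs(D.T @ residual)))
            if atom in support:
                break
            support.append(atom)
            coef, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
            cost = description_length(x, D, support, coef)
            if cost >= best[0]:          # adding this atom did not pay off
                support.pop()
                break
            residual = x - D[:, support] @ coef
            best = (cost, list(support), coef)
        return best                      # (codelength, support, coefficients)

Because the stopping rule is a codelength comparison rather than a sparsity level or error threshold, nothing here needs hand-tuning, which is the sense in which such algorithms are parameter free.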
Constraining Implicit Space with Minimum Description Length: An Unsupervised Attention Mechanism across Neural Network Layers
Inspired by the adaptation phenomenon of neuronal firing, we propose the
regularity normalization (RN) as an unsupervised attention mechanism (UAM)
which computes the statistical regularity in the implicit space of neural
networks under the Minimum Description Length (MDL) principle. Treating the
neural network optimization process as a partially observable model selection
problem, UAM constrains the implicit space by a normalization factor, the
universal code length. We compute this universal code incrementally across
neural network layers and demonstrate its flexibility to include data priors
such as top-down attention and other oracle information. Empirically, our
approach outperforms existing normalization methods in tackling limited,
imbalanced, and non-stationary input distributions in image classification,
classic control, procedurally-generated reinforcement learning, generative
modeling, handwriting generation and question answering tasks with various
neural network architectures. Lastly, UAM tracks dependency and critical
learning stages across layers and recurrent time steps of deep networks.
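The following toy sketch only gestures at the shape of the mechanism; the paper computes an incremental universal code, for which this running-Gaussian surprise estimate is a crude, invented surrogate. A layer's output is rescaled by the estimated codelength of its own activations.

    import numpy as np

    class RegularityNormalization:
        """Toy stand-in for the paper's universal-code computation: keep a
        running Gaussian model of a layer's activations and rescale the
        output by its estimated codelength (surprise) under that model."""

        def __init__(self, eps=1e-6):
            self.n, self.mean, self.m2, self.eps = 0, 0.0, 0.0, eps

        def update(self, a):
            # Welford's online mean/variance over all activations seen so far
            for x in np.ravel(a):
                self.n += 1
                d = x - self.mean
                self.mean += d / self.n
                self.m2 += d * (x - self.mean)

        def __call__(self, a):
            self.update(a)
            var = self.m2 / max(self.n - 1, 1) + self.eps
            # mean Gaussian codelength (nats) of the incoming activations
            nll = 0.5 * np.mean((a - self.mean) ** 2 / var
                                + np.log(2 * np.pi * var))
            return a / max(nll, self.eps)  # normalize by estimated regularity

Because the statistics are updated online, the normalizer adapts as the input distribution drifts, which is the property the non-stationary experiments exercise.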
Evaluating Overfit and Underfit in Models of Network Community Structure
A common data mining task on networks is community detection, which seeks an
unsupervised decomposition of a network into structural groups based on
statistical regularities in the network's connectivity. Although many methods
exist, the No Free Lunch theorem for community detection implies that each
makes some kind of tradeoff, and no algorithm can be optimal on all inputs.
Thus, different algorithms will over- or underfit on different inputs, finding
more, fewer, or just different communities than is optimal, and evaluation
methods that use a metadata partition as a ground truth will produce misleading
conclusions about general accuracy. Here, we present a broad evaluation of
over- and underfitting in community detection, comparing the behavior of 16
state-of-the-art community detection algorithms on a novel and structurally
diverse corpus of 406 real-world networks. We find that (i) algorithms vary
widely both in the number of communities they find and in their corresponding
composition, given the same input, (ii) algorithms can be clustered into
distinct high-level groups based on similarities of their outputs on real-world
networks, and (iii) these differences induce wide variation in accuracy on link
prediction and link description tasks. We introduce a new diagnostic for
evaluating overfitting and underfitting in practice, and use it to roughly
divide community detection methods into general and specialized learning
algorithms. Across methods and inputs, Bayesian techniques based on the
stochastic block model and a minimum description length approach to
regularization represent the best general learning approach, but can be
outperformed under specific circumstances. These results introduce both a
theoretically principled approach to evaluate over- and underfitting in models
of network community structure and a realistic benchmark by which new methods
may be evaluated and compared.
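A simplified Python sketch of a link-prediction diagnostic of this kind follows; it is not the paper's exact procedure, and the block-density scorer and AUC-style comparison are assumptions. The idea: hide some edges, and test whether the inferred partition ranks them above random non-edges.

    import numpy as np

    def link_prediction_score(adj, partition, holdout_frac=0.1, seed=0):
        """Hide a fraction of edges, score node pairs by the observed edge
        density between their communities, and report how often a held-out
        edge outranks a random non-edge (~0.5 = chance; higher = the
        partition generalizes rather than overfits)."""
        rng = np.random.default_rng(seed)
        groups = np.asarray(partition)
        edges = np.argwhere(np.triu(adj, 1) > 0)
        held = edges[rng.random(len(edges)) < holdout_frac]
        train = adj.copy()
        train[held[:, 0], held[:, 1]] = train[held[:, 1], held[:, 0]] = 0

        k = groups.max() + 1
        sizes = np.bincount(groups, minlength=k).astype(float)
        pairs = np.outer(sizes, sizes)        # simplified: ignores self-pairs
        counts = np.zeros((k, k))
        for r in range(k):
            for s in range(k):
                counts[r, s] = train[np.ix_(groups == r, groups == s)].sum()
        dens = counts / np.maximum(pairs, 1.0)

        nonedges = np.argwhere((adj == 0)
                               & (np.triu(np.ones_like(adj), 1) > 0))
        sample = nonedges[rng.choice(len(nonedges), size=len(held))]
        wins = [dens[groups[i], groups[j]] > dens[groups[u], groups[v]]
                for (i, j), (u, v) in zip(held, sample)]
        return float(np.mean(wins)) if wins else float("nan")

An underfitting method (too few, too coarse communities) and an overfitting one (many spurious communities) both score poorly here, which is what lets the diagnostic separate general from specialized algorithms.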
A Unified Theory of Dual-Process Control
Dual-process theories play a central role in both psychology and
neuroscience, figuring prominently in fields ranging from executive control to
reward-based learning to judgment and decision making. In each of these
domains, two mechanisms appear to operate concurrently, one relatively high in
computational complexity, the other relatively simple. Why is neural
information processing organized in this way? We propose an answer to this
question based on the notion of compression. The key insight is that
dual-process structure can enhance adaptive behavior by allowing an agent to
minimize the description length of its own behavior. We apply a single model
based on this observation to findings from research on executive control,
reward-based learning, and judgment and decision making, showing that seemingly
diverse dual-process phenomena can be understood as domain-specific
consequences of a single underlying set of computational principles.
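The compression claim can be illustrated with a toy calculation (the Bernoulli-style default policy and all numbers below are invented): behavior that follows a cheap habitual controller costs few bits to describe, while deliberate deviations must pay for themselves.

    import numpy as np

    def behavior_codelength(actions, default_probs):
        """Bits needed to encode an action sequence under a simple habitual
        (default) policy; deviations from the habit cost extra bits."""
        return -np.sum(np.log2([default_probs[a] for a in actions]))

    # a habit that almost always picks action 0:
    default = {0: 0.9, 1: 0.1}
    habitual = behavior_codelength([0, 0, 0, 0], default)    # ~0.6 bits
    deliberate = behavior_codelength([1, 1, 0, 1], default)  # ~10.1 bits

On this view, engaging the complex controller is worthwhile exactly when the behavior it produces recovers more than the extra description length it incurs, which is the trade-off the single model formalizes across domains.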
HyperVAE: A Minimum Description Length Variational Hyper-Encoding Network
We propose a framework called HyperVAE for encoding distributions of
distributions. When a target distribution is modeled by a VAE, its neural
network parameters \theta are drawn from a distribution p(\theta) that is
modeled by a hyper-level VAE. We propose a variational inference scheme using
Gaussian mixture models to implicitly encode the parameters \theta into a
low-dimensional Gaussian distribution. Given a target distribution, we predict the
posterior distribution of the latent code, then use a matrix-network decoder to
generate a posterior distribution q(\theta). HyperVAE can encode the parameters
\theta in full, in contrast to the common hyper-network practice of generating
only scale and bias vectors as target-network parameters. Thus HyperVAE
preserves much more information about the model for each task in the latent
space. We discuss HyperVAE using the minimum description length (MDL) principle
and show that it helps HyperVAE to generalize. We evaluate HyperVAE in density
estimation tasks, outlier detection and discovery of novel design classes,
demonstrating its efficacy.
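A skeletal PyTorch sketch of the full-parameter idea follows; it omits the paper's Gaussian-mixture inference and matrix-network decoder, and all names and shapes are invented for illustration.

    from math import prod
    import torch
    import torch.nn as nn

    class HyperDecoder(nn.Module):
        """Illustrative hyper-decoder: map a latent code z to the FULL weight
        tensors theta of a small target network, rather than emitting only
        per-layer scale and bias vectors as many hyper-networks do."""

        def __init__(self, z_dim, target_shapes):
            super().__init__()
            self.shapes = list(target_shapes)       # e.g. [(16, 8), (16,)]
            n_params = sum(prod(s) for s in self.shapes)
            self.net = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                     nn.Linear(128, n_params))

        def forward(self, z):
            flat, out, i = self.net(z), [], 0
            for s in self.shapes:
                n = prod(s)
                out.append(flat[..., i:i + n].reshape(*z.shape[:-1], *s))
                i += n
            return out          # full parameter tensors for one target network

    # sample a task-specific network from a latent code:
    dec = HyperDecoder(z_dim=8, target_shapes=[(16, 8), (16,)])
    theta = dec(torch.randn(8))  # [weight (16, 8), bias (16,)]

Decoding every weight, rather than only scales and biases of a shared backbone, is what lets the latent space retain the per-task model information the MDL analysis relies on.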
Asymptotics of Discrete MDL for Online Prediction
Minimum Description Length (MDL) is an important principle for induction and
prediction, with strong relations to optimal Bayesian learning. This paper
deals with learning non-i.i.d. processes by means of two-part MDL, where the
underlying model class is countable. We consider the online learning framework,
i.e. observations come in one by one, and the predictor is allowed to update
its state of mind after each time step. We identify two ways of predicting by
MDL in this setup, namely a static and a dynamic one. (A third variant,
hybrid MDL, turns out to be inferior.) We prove that, under the only
assumption that the data is generated by a distribution contained in the model
class, the MDL predictions converge to the true values almost surely. This is
accomplished by proving finite bounds on the quadratic, the Hellinger, and the
Kullback-Leibler loss of the MDL learner, which are however exponentially worse
than for Bayesian prediction. We demonstrate that these bounds are sharp, even
for model classes containing only Bernoulli distributions. We show how these
bounds imply regret bounds for arbitrary loss functions. Our results apply to a
wide range of setups, namely sequence prediction, pattern classification,
regression, and universal induction in the sense of Algorithmic Information
Theory, among others.
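A minimal Python sketch of the dynamic two-part variant for a countable Bernoulli class follows (the candidate grid and its codelengths are invented for illustration; the static variant would instead select a single model once rather than re-selecting at every step).

    import numpy as np

    def two_part_mdl_predict(xs, thetas, prior_bits):
        """Dynamic two-part MDL: after each observation, re-select the theta
        minimizing codelength(theta) + codelength(data | theta), and predict
        the next bit with that single model."""
        preds, ones = [], 0
        for t, x in enumerate(xs):
            # codelength of the first t observations under each candidate
            ll = -(ones * np.log2(thetas)
                   + (t - ones) * np.log2(1 - thetas))
            best = int(np.argmin(prior_bits + ll))
            preds.append(thetas[best])   # MDL prediction for P(x_t = 1)
            ones += x
        return preds

    # countable class theta_k = 1/(k+2), with an illustrative ~log2(k)+1
    # bits to encode the model index k itself:
    thetas = np.array([1.0 / (k + 2) for k in range(1, 50)])
    bits = np.log2(np.arange(1, 50)) + 1
    print(two_part_mdl_predict([0, 0, 1, 0, 1], thetas, bits)[:3])

Predicting with the single minimizing model, rather than mixing over the whole class with weights 2^{-L(model)}, is precisely the step that costs MDL the exponential factor relative to Bayesian prediction in the stated bounds.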
Minimum Description Length Control
We propose a novel framework for multitask reinforcement learning based on
the minimum description length (MDL) principle. In this approach, which we term
MDL-control (MDL-C), the agent learns the common structure among the tasks with
which it is faced and then distills it into a simpler representation which
facilitates faster convergence and generalization to new tasks. In doing so,
MDL-C naturally balances adaptation to each task with epistemic uncertainty
about the task distribution. We motivate MDL-C via formal connections between
the MDL principle and Bayesian inference, derive theoretical performance
guarantees, and demonstrate MDL-C's empirical effectiveness on both discrete
and high-dimensional continuous control tasks.
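A rough PyTorch sketch of the flavor of such an objective follows (not the paper's derivation; the policy-gradient form, the names, and the beta weighting are assumptions): a KL penalty toward a shared default policy plays the role of the description-length term.

    import torch
    import torch.nn.functional as F

    def mdlc_style_loss(task_logits, default_logits, advantages, actions,
                        beta=0.1):
        """Illustrative MDL-C-flavored objective: a policy-gradient term plus
        a KL 'description length' penalty toward a shared default policy that
        distills structure common to all tasks. beta trades task reward
        against compression. `actions` is a LongTensor of action indices."""
        logp = F.log_softmax(task_logits, dim=-1)
        chosen = logp.gather(-1, actions.unsqueeze(-1)).squeeze(-1)
        pg = -(advantages * chosen).mean()
        kl = F.kl_div(F.log_softmax(default_logits, dim=-1),
                      logp.exp(), reduction="batchmean")  # KL(task || default)
        return pg + beta * kl

Driving each task policy toward a common, simpler default is what distills the shared structure: routine tasks are handled cheaply by the default, while genuinely novel tasks justify a larger divergence from it.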