Forecasting the CATS benchmark with the Double Vector Quantization method
The Double Vector Quantization method, a long-term forecasting method based
on the SOM algorithm, has been used to predict the 100 missing values of the
CATS competition data set. An analysis of the proposed time series is provided
to estimate the dimension of the auto-regressive part of this nonlinear
auto-regressive forecasting method. Based on this analysis, experimental results
using the Double Vector Quantization (DVQ) method are presented and discussed.
As one of the features of the DVQ method is its ability to predict scalars as
well as vectors of values, the number of iterative predictions needed to reach
the prediction horizon is also examined. The stability of the method over the
long term allows reliable values to be obtained for a rather long forecasting
horizon.
Comment: Accepted for publication in Neurocomputing, Elsevier.
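For illustration only, a minimal sketch of the forecasting loop described above, with k-means standing in for the two SOM quantizers; all names (dvq_forecast, n_codes, ...) are hypothetical and this is not the authors' implementation:

```python
import numpy as np
from sklearn.cluster import KMeans

def dvq_forecast(series, d=4, n_codes=16, horizon=100, n_runs=200, seed=0):
    """Hypothetical sketch of double-quantization forecasting.

    Regressors x_t = series[t:t+d] and their one-step deformations
    x_{t+1} - x_t are both quantized (k-means stands in for the SOM).
    Forecasts are produced by repeatedly drawing a deformation code
    conditioned on the current regressor code, then averaging the
    Monte Carlo trajectories.
    """
    rng = np.random.default_rng(seed)
    X = np.array([series[t:t + d] for t in range(len(series) - d)], dtype=float)
    D = X[1:] - X[:-1]                          # deformations between consecutive regressors
    km_x = KMeans(n_codes, n_init=10, random_state=seed).fit(X[:-1])
    km_d = KMeans(n_codes, n_init=10, random_state=seed).fit(D)

    # empirical conditional frequencies P(deformation code | regressor code)
    cond = np.full((n_codes, n_codes), 1e-12)
    for cx, cd in zip(km_x.labels_, km_d.labels_):
        cond[cx, cd] += 1.0
    cond /= cond.sum(axis=1, keepdims=True)

    runs = np.empty((n_runs, horizon))
    for r in range(n_runs):
        x = np.asarray(series[-d:], dtype=float)
        for h in range(horizon):
            cx = km_x.predict(x[None, :])[0]
            cd = rng.choice(n_codes, p=cond[cx])
            x = x + km_d.cluster_centers_[cd]   # apply the drawn deformation prototype
            runs[r, h] = x[-1]                  # newest value = next forecast
    return runs.mean(axis=0)
```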
Greedy vector quantization
We investigate the greedy version of the $L^p$-optimal vector quantization
problem for an $\mathbb{R}^d$-valued random vector $X \in L^p$. We show the
existence of a sequence $(a_N)_{N \ge 1}$ such that $a_N$ minimizes
$a \mapsto \big\| \min_{1 \le i \le N-1} |X-a_i| \wedge |X-a| \big\|_{L^p}$
(the $L^p$-mean quantization error at level $N$ induced by
$(a_1,\ldots,a_{N-1},a)$). We show that this sequence produces $L^p$-rate
optimal $N$-tuples $(a_1,\ldots,a_N)$ (i.e. the $L^p$-mean
quantization error at level $N$ induced by $(a_1,\ldots,a_N)$ goes to $0$ at
rate $N^{-1/d}$). Greedy optimal sequences also satisfy, under natural
additional assumptions, the distortion mismatch property: the $N$-tuples
remain rate optimal with respect to the $L^q$-norms, $p \le q < p+d$.
Finally, we propose optimization methods to compute greedy sequences, adapted
from the usual Lloyd's I and Competitive Learning Vector Quantization procedures,
either in their deterministic (implementable when $d=1$) or stochastic
versions.
Comment: 31 pages, 4 figures, a few typos corrected (now an extended version of
an eponymous paper to appear in Journal of Approximation Theory)
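As a rough illustration of the greedy principle only (not the Lloyd's I / CLVQ procedures of the paper), a sketch that grows an empirical quantization grid one point at a time, choosing each new point from a random candidate pool so as to minimize the empirical $L^p$ error; greedy_quantizer and n_candidates are hypothetical names:

```python
import numpy as np

def greedy_quantizer(samples, N, p=2, n_candidates=200, seed=0):
    """Greedy L^p quantization sketch: each new point is chosen from a random
    candidate pool (a crude stand-in for Lloyd's I / CLVQ optimization) so as
    to minimize the empirical L^p error of the enlarged grid."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples, dtype=float)
    if samples.ndim == 1:
        samples = samples[:, None]
    grid = []
    dist_to_grid = np.full(len(samples), np.inf)   # min_i |X - a_i| for the current grid
    for _ in range(N):
        cand = samples[rng.choice(len(samples), size=n_candidates)]
        d_cand = np.linalg.norm(samples[:, None, :] - cand[None, :, :], axis=2)
        # empirical L^p error if each candidate were appended to the grid
        err = np.mean(np.minimum(dist_to_grid[:, None], d_cand) ** p, axis=0)
        best = cand[np.argmin(err)]
        grid.append(best)
        dist_to_grid = np.minimum(dist_to_grid, np.linalg.norm(samples - best, axis=1))
    return np.array(grid)

# e.g. grid = greedy_quantizer(np.random.default_rng(1).normal(size=(5000, 2)), N=50)
```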
Role of homeostasis in learning sparse representations
Neurons in the input layer of primary visual cortex in primates develop
edge-like receptive fields. One approach to understanding the emergence of this
response is to posit that neural activity has to represent sensory data
efficiently with respect to the statistics of natural scenes. Furthermore, it is
believed that such an efficient coding is achieved using a competition across
neurons so as to generate a sparse representation, that is, where a relatively
small number of neurons are simultaneously active. Indeed, different models of
sparse coding, coupled with Hebbian learning and homeostasis, have been
proposed that successfully match the observed emergent response. However, the
specific role of homeostasis in learning such sparse representations is still
largely unknown. By quantitatively assessing the efficiency of the neural
representation during learning, we derive a cooperative homeostasis mechanism
that optimally tunes the competition between neurons within the sparse coding
algorithm. We apply this homeostasis while learning small patches taken from
natural images and compare its efficiency with state-of-the-art algorithms.
Results show that while different sparse coding algorithms give similar coding
results, homeostasis provides an optimal balance for the representation of
natural images within the population of neurons. Competition in sparse coding
is optimized when it is fair. By helping to optimize statistical
competition across neurons, homeostasis is crucial in providing a more
efficient solution to the emergence of independent components.
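As a hedged illustration of the idea (a gain-based simplification, not the paper's specific homeostasis mechanism), a sketch of matching-pursuit sparse coding with Hebbian learning in which homeostatic gains keep atom selection roughly uniform; all names and parameters are hypothetical:

```python
import numpy as np

def sparse_code_with_homeostasis(patches, n_atoms=64, n_active=5,
                                 n_epochs=10, lr=0.05, eta_h=0.01, seed=0):
    """Matching-pursuit sparse coding with Hebbian learning and a gain-based
    homeostasis: gains are adapted so that all atoms are selected about equally
    often, i.e. the competition between neurons stays 'fair'."""
    rng = np.random.default_rng(seed)
    n_patches, dim = patches.shape
    D = rng.standard_normal((n_atoms, dim))
    D /= np.linalg.norm(D, axis=1, keepdims=True)
    gain = np.ones(n_atoms)                  # homeostatic gains
    usage = np.full(n_atoms, 1.0 / n_atoms)  # running selection frequency

    for _ in range(n_epochs):
        for x in patches[rng.permutation(n_patches)]:
            residual = x.copy()
            for _ in range(n_active):                   # matching pursuit
                corr = D @ residual
                k = np.argmax(gain * np.abs(corr))      # homeostasis biases the competition
                a = corr[k]
                residual -= a * D[k]
                D[k] += lr * a * residual               # Hebbian update on the residual
                D[k] /= np.linalg.norm(D[k])
                usage *= (1.0 - eta_h)                  # running estimate of selection frequency
                usage[k] += eta_h
            gain = (1.0 / n_atoms) / (usage + 1e-12)    # boost under-used atoms
    return D
```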
Automated Pruning for Deep Neural Network Compression
In this work we present a method to improve the pruning step of the current
state-of-the-art methodology to compress neural networks. The novelty of the
proposed pruning technique is in its differentiability, which allows pruning to
be performed during the backpropagation phase of the network training. This
enables end-to-end learning and strongly reduces the training time. The
technique is based on a family of differentiable pruning functions and a new
regularizer specifically designed to enforce pruning. The experimental results
show that the joint optimization of both the thresholds and the network weights
makes it possible to reach a higher compression rate, reducing the number of weights of
the pruned network by a further 14% to 33% compared to the current
state-of-the-art. Furthermore, we believe that this is the first study to
analyze the generalization capabilities, in transfer learning tasks, of the
features extracted by a pruned network. To achieve this goal, we show that
the representations learned using the proposed pruning methodology maintain the
same effectiveness and generality of those learned by the corresponding
non-compressed network on a set of different recognition tasks.
Comment: 8 pages, 5 figures. Published as a conference paper at ICPR 2018.
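To illustrate the general idea of differentiable pruning (a sigmoid-gate stand-in, not the paper's specific family of pruning functions or its regularizer), a minimal PyTorch sketch in which a learnable threshold receives gradients alongside the weights; DifferentiablePruning and its parameters are hypothetical:

```python
import torch
import torch.nn as nn

class DifferentiablePruning(nn.Module):
    """Pruning folded into backprop: weights pass through a smooth thresholding
    function with a learnable threshold, so both weights and thresholds receive
    gradients during training."""
    def __init__(self, weight_shape, sharpness=50.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(weight_shape) * 0.01)
        self.threshold = nn.Parameter(torch.tensor(0.01))  # learned jointly with the weights
        self.sharpness = sharpness

    def pruned_weight(self):
        # smooth gate: ~1 when |w| >> threshold, ~0 when |w| << threshold
        gate = torch.sigmoid(self.sharpness * (self.weight.abs() - self.threshold))
        return self.weight * gate

    def sparsity_regularizer(self):
        # pushes the expected number of surviving weights down
        gate = torch.sigmoid(self.sharpness * (self.weight.abs() - self.threshold))
        return gate.mean()

# usage sketch: add layer.sparsity_regularizer() (times a coefficient) to the task
# loss during training, then prune exactly at |w| <= threshold afterwards.
```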
Two-scale large deviations for chemical reaction kinetics through second quantization path integral
Motivated by the study of rare events for a typical genetic switching model
in systems biology, in this paper we aim to establish the general two-scale
large deviations for chemical reaction systems. We build a formal approach to
explicitly obtain the large deviation rate functionals for the considered
two-scale processes based upon the second-quantization path integral technique.
We obtain three important types of large deviation results when the underlying two
time scales are in three different regimes. This is achieved through a singular
perturbation analysis of the rate functionals obtained by the path integral. We
find that the three regimes possess the same deterministic mean-field limit but
completely different chemical Langevin approximations. The obtained results are
natural extensions of the classical large volume limit for chemical reactions.
We also discuss the implications for single-molecule Michaelis-Menten
kinetics. Our framework and results can be applied to understand general
multi-scale systems, including diffusion processes.
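For context, the classical large-volume limit mentioned above takes, in the single-scale case, the standard rate-functional form below (with propensities a_j and stoichiometric vectors \nu_j); the paper's two-scale functionals generalize this form and are not reproduced here:

```latex
% Classical (single-scale) large-volume rate functional for a reaction network
% with propensities a_j(x) and stoichiometric vectors \nu_j; the two-scale
% functionals of the paper generalize this form.
I_T(\varphi) \;=\; \int_0^T \sup_{p}\Big[\,\langle p,\dot\varphi(t)\rangle
  \;-\; \sum_j a_j\big(\varphi(t)\big)\big(e^{\langle p,\,\nu_j\rangle}-1\big)\Big]\,dt .
```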