S-estimation of hidden Markov models
A method for robust estimation of dynamic mixtures of multivariate distributions is proposed. The EM algorithm is modified by replacing the classical M-step
with high-breakdown S-estimation of location and scatter, performed using the
bisquare multivariate S-estimator. Estimates are obtained by solving a system of estimating equations characterized by component-specific sets of weights based on
robust Mahalanobis-type distances. Convergence of the resulting algorithm is proved,
and its finite-sample behavior is investigated by means of a brief simulation study and
an application to a multivariate time series of daily returns for seven stock markets.
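As a rough illustration of the robust M-step described above, the sketch below performs one weighted location/scatter update for a single mixture component, with Tukey bisquare weights computed from Mahalanobis-type distances. The tuning constant `c`, the function names, and the use of E-step responsibilities as multiplicative weights are assumptions made for illustration only; the paper's bisquare multivariate S-estimator solves a full system of estimating equations rather than a single reweighting step.

```python
import numpy as np

def mahalanobis_sq(X, mu, Sigma):
    # Squared Mahalanobis distances of the rows of X from (mu, Sigma).
    diff = X - mu
    inv = np.linalg.inv(Sigma)
    return np.einsum("ij,jk,ik->i", diff, inv, diff)

def bisquare_weight(d2, c=4.0):
    # Tukey bisquare weight as a function of the distance;
    # the tuning constant c is an illustrative choice, not the paper's.
    d = np.sqrt(d2)
    w = (1.0 - (d / c) ** 2) ** 2
    w[d > c] = 0.0
    return w

def weighted_location_scatter_step(X, mu, Sigma, resp, c=4.0):
    # One reweighting step for a single mixture component;
    # 'resp' holds the E-step responsibilities for that component.
    d2 = mahalanobis_sq(X, mu, Sigma)
    w = resp * bisquare_weight(d2, c)          # component-specific robust weights
    mu_new = (w[:, None] * X).sum(0) / w.sum()
    diff = X - mu_new
    Sigma_new = (w[:, None] * diff).T @ diff / w.sum()
    return mu_new, Sigma_new
```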
Entropy-based parametric estimation of spike train statistics
We consider the evolution of a network of neurons, focusing on the asymptotic
behavior of the spike dynamics rather than the membrane-potential dynamics. In this
context, the spike response is not sought as a deterministic response but as a
conditional probability: "reading out the code" consists of inferring such a
probability. This probability is computed from empirical raster plots using
the framework of thermodynamic formalism in ergodic theory. This gives us a
parametric statistical model where the probability has the form of a Gibbs
distribution. In this respect, this approach generalizes the seminal and
profound work of Schneidman and collaborators. A minimal presentation of the
formalism is reviewed here, while a general algorithmic estimation method is
proposed, yielding fast, convergent implementations. It is also made explicit how
several spike observables (entropy, rate, synchronizations, correlations) are
obtained in closed form from the parametric estimation. This paradigm not only
allows us to estimate the spike statistics, given a design choice, but also
allows us to compare different models, thus answering comparative questions about the
neural code such as: "are correlations (or time synchrony, or a given set of
spike patterns, ...) significant with respect to rate coding only?" A numerical
validation of the method is proposed and the perspectives regarding spike-train
code analysis are also discussed.
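A minimal sketch of the kind of Gibbs (maximum-entropy) model the abstract refers to, restricted for simplicity to an Ising-like potential over instantaneous binary spike words with firing-rate and pairwise terms. The parameter values, the tiny network size, and the brute-force normalization are assumptions for illustration; the paper's framework also covers time-lagged spike patterns and comes with a dedicated estimation algorithm.

```python
import itertools
import numpy as np

def gibbs_spike_model(lambdas_rate, lambdas_pair):
    # Gibbs distribution over binary spike words w in {0,1}^N with potential
    # H(w) = sum_i a_i w_i + sum_{i<j} b_ij w_i w_j (an Ising-like special case).
    N = len(lambdas_rate)
    words = np.array(list(itertools.product([0, 1], repeat=N)))
    H = words @ lambdas_rate
    for (i, j), b in lambdas_pair.items():
        H += b * words[:, i] * words[:, j]
    p = np.exp(H)
    p /= p.sum()                      # partition function by enumeration
    return words, p

# Observables (rates, pairwise correlations) follow in closed form as
# expectations under the fitted distribution; the parameter values below
# are placeholders chosen only to make the example run.
words, p = gibbs_spike_model(np.array([-1.0, -1.2, -0.8]),
                             {(0, 1): 0.5, (1, 2): -0.3})
rates = p @ words                                        # E[w_i]
corr01 = p @ (words[:, 0] * words[:, 1]) - rates[0] * rates[1]
```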
Optimal Kullback-Leibler Aggregation via Information Bottleneck
In this paper, we present a method for reducing a regular, discrete-time
Markov chain (DTMC) to another DTMC with a given, typically much smaller number
of states. The cost of reduction is defined as the Kullback-Leibler divergence
rate between a projection of the original process through a partition function
and a DTMC on the correspondingly partitioned state space. Finding the reduced
model with minimal cost is computationally expensive, as it requires an
exhaustive search among all state space partitions, and an exact evaluation of
the reduction cost for each candidate partition. Our approach deals with the
latter problem by minimizing an upper bound on the reduction cost instead of
minimizing the exact cost. The proposed upper bound is easy to compute, and it
is tight if the original chain is lumpable with respect to the partition. Then,
we express the problem in the form of information bottleneck optimization, and
propose using the agglomerative information bottleneck algorithm to search for a
sub-optimal partition greedily rather than exhaustively. The theory is
illustrated with examples and one application scenario in the context of
modeling bio-molecular interactions.
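The greedy search mentioned above can be pictured with the following sketch, which agglomeratively merges DTMC states, at each step joining the pair of aggregated states whose merger is cheapest under a mass-weighted divergence between their outgoing transition rows. The specific merge cost, the variable names, and the simplification of not aggregating the column space are illustrative assumptions, not the paper's exact bound.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    # Kullback-Leibler divergence between two probability vectors.
    p = np.clip(p, eps, None); q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def merge_cost(pi_a, row_a, pi_b, row_b):
    # Cost of merging two aggregated states: a weighted Jensen-Shannon-type
    # divergence between their outgoing transition distributions, the usual
    # agglomerative-IB merge criterion (the paper's bound may differ).
    pi = pi_a + pi_b
    row = (pi_a * row_a + pi_b * row_b) / pi
    return pi_a * kl(row_a, row) + pi_b * kl(row_b, row)

def greedy_aggregate(P, pi, m):
    # Greedily merge states of a DTMC with transition matrix P (NumPy array)
    # and stationary distribution pi until only m aggregated states remain.
    groups = [[i] for i in range(len(pi))]
    mass = list(pi)
    rows = [P[i].copy() for i in range(len(pi))]
    while len(groups) > m:
        _, a, b = min(((merge_cost(mass[a], rows[a], mass[b], rows[b]), a, b)
                       for a in range(len(groups))
                       for b in range(a + 1, len(groups))),
                      key=lambda t: t[0])
        new_mass = mass[a] + mass[b]
        rows[a] = (mass[a] * rows[a] + mass[b] * rows[b]) / new_mass
        mass[a] = new_mass
        groups[a] += groups[b]
        del groups[b], mass[b], rows[b]
        # merged rows stay on the original state space for simplicity;
        # a full implementation would also aggregate the columns.
    return groups
```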
Source Coding When the Side Information May Be Delayed
For memoryless sources, delayed side information at the decoder does not
improve the rate-distortion function. However, this is not the case for more
general sources with memory, as demonstrated by a number of works focusing on
the special case of (delayed) feedforward. In this paper, a setting is studied
in which the encoder is potentially uncertain about the delay with which
measurements of the side information are acquired at the decoder. Assuming a
hidden Markov model for the sources, a single-letter characterization is first
given for the setup in which the side-information delay is arbitrary and known
at the encoder, and the reconstruction at the destination is required to be
(near) lossless. Then, for a delay of zero or one source symbol, a
single-letter characterization is given of the rate-distortion region for the
case where the side information may or may not be delayed, unbeknownst to the encoder.
The characterization is further extended to allow for additional information to
be sent when the side information is not delayed. Finally, examples for binary
and Gaussian sources are provided.
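For concreteness, a generic hidden Markov factorization of the joint source/side-information process is written below in standard notation; this is only the usual textbook form of such a model, and the paper's precise assumptions on the state, source, and side-information processes may differ.

```latex
% S_i: hidden state, X_i: source symbol seen by the encoder,
% Y_i: side-information symbol available (possibly with delay d) at the decoder.
\[
  p(s^n, x^n, y^n)
    = p(s_1)\,\prod_{i=2}^{n} p(s_i \mid s_{i-1})
      \,\prod_{i=1}^{n} p(x_i, y_i \mid s_i),
  \qquad \text{decoder observes } Y^{\,n-d} \text{ for delay } d.
\]
```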
Privacy-Preserving Adversarial Networks
We propose a data-driven framework for optimizing privacy-preserving data
release mechanisms to attain the information-theoretically optimal tradeoff
between minimizing distortion of useful data and concealing specific sensitive
information. Our approach employs adversarially trained neural networks to
implement randomized mechanisms and to perform a variational approximation of
mutual information privacy. We validate our Privacy-Preserving Adversarial
Networks (PPAN) framework via proof-of-concept experiments on discrete and
continuous synthetic data, as well as the MNIST handwritten digits dataset. For
synthetic data, our model-agnostic PPAN approach achieves tradeoff points very
close to the optimal tradeoffs derived analytically from model
knowledge. In experiments with the MNIST data, we visually demonstrate a
learned tradeoff between minimizing the pixel-level distortion versus
concealing the written digit.
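A minimal GAN-style sketch of the kind of training loop the abstract describes, written in PyTorch: a randomized release mechanism keeps the released data close to the useful data, while an adversary's cross-entropy in guessing the sensitive bit, serving as a variational proxy for the mutual information, is pushed up. The toy data model, the network sizes, the squared-error distortion, and the trade-off weight LAMBDA are assumptions for illustration, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions and trade-off weight, chosen for illustration only.
DIM_X, DIM_Z, LAMBDA = 4, 4, 2.0

mechanism = nn.Sequential(nn.Linear(DIM_X + 1, 32), nn.ReLU(), nn.Linear(32, DIM_Z))
adversary = nn.Sequential(nn.Linear(DIM_Z, 32), nn.ReLU(), nn.Linear(32, 2))
opt_m = torch.optim.Adam(mechanism.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
ce = nn.CrossEntropyLoss()

for step in range(2000):
    # Toy data: the sensitive bit S shifts the mean of the useful data X.
    s = torch.randint(0, 2, (128,))
    x = torch.randn(128, DIM_X) + s.float().unsqueeze(1)
    noise = torch.randn(128, 1)                      # randomization input to the mechanism
    z = mechanism(torch.cat([x, noise], dim=1))      # released data

    # Adversary step: tighten the variational bound on I(S;Z) by minimizing
    # the cross-entropy of its guess of S from the released Z.
    loss_a = ce(adversary(z.detach()), s)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()

    # Mechanism step: trade off distortion against leakage, i.e. minimize
    # the squared error to X while *increasing* the adversary's loss.
    loss_m = ((x - z) ** 2).mean() - LAMBDA * ce(adversary(z), s)
    opt_m.zero_grad(); loss_m.backward(); opt_m.step()
```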