Expectation-Maximization Binary Clustering for Behavioural Annotation
We present a variant of the well-known Expectation-Maximization Clustering
algorithm that is constrained to generate partitions of the input space into
high and low values. The motivation for splitting input variables into high and
low values is to favour the semantic interpretation of the final clustering.
The Expectation-Maximization binary Clustering is especially useful when a
bimodal conditional distribution of the variables is expected, or at least when
a binary discretization of the input space is deemed meaningful. Furthermore,
the algorithm accounts for the reliability of the input data: the larger their
uncertainty, the smaller their role in the final clustering. We show here its
suitability for behavioural annotation of movement trajectories. However, it
can be considered a general-purpose algorithm for the clustering or
segmentation of multivariate data or time series.
Expectation-maximization for logistic regression
We present a family of expectation-maximization (EM) algorithms for binary
and negative-binomial logistic regression, drawing a sharp connection with the
variational-Bayes algorithm of Jaakkola and Jordan (2000). Indeed, our results
allow a version of this variational-Bayes approach to be re-interpreted as a
true EM algorithm. We study several interesting features of the algorithm, and
of this previously unrecognized connection with variational Bayes. We also
generalize the approach to sparsity-promoting priors, and to an online method
whose convergence properties are easily established. This latter method
compares favorably with stochastic-gradient descent in situations with marked
collinearity.
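A compact instance of this EM family uses Pólya-Gamma-type latent variables: the E-step reduces to the closed-form weights ω_i = tanh(ψ_i/2)/(2ψ_i) at the current linear predictor ψ = Xβ, and the M-step to a single weighted least-squares solve. The sketch below assumes that scheme for plain binary logistic regression; the `ridge` stabiliser and the stopping rule are additions of mine, not part of the paper.

```python
import numpy as np

def em_logistic(X, y, n_iter=100, tol=1e-10, ridge=1e-8):
    """EM for binary logistic regression via Polya-Gamma-style augmentation.
    E-step: omega_i = tanh(psi_i / 2) / (2 psi_i) at psi = X beta.
    M-step: solve (X^T diag(omega) X) beta = X^T (y - 1/2).
    Sketch only; `ridge` is a small stabiliser I added for ill-conditioned X."""
    n, p = X.shape
    beta = np.zeros(p)
    kappa = X.T @ (y - 0.5)                     # fixed right-hand side
    for _ in range(n_iter):
        psi = X @ beta
        # E-step: expected weights; tanh(psi/2)/(2 psi) -> 1/4 as psi -> 0,
        # so clamping tiny psi keeps the formula numerically safe
        psi_safe = np.where(np.abs(psi) < 1e-8, 1e-8, psi)
        omega = np.tanh(psi_safe / 2.0) / (2.0 * psi_safe)
        # M-step: one weighted least-squares solve
        beta_new = np.linalg.solve(X.T @ (omega[:, None] * X)
                                   + ridge * np.eye(p), kappa)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new
        beta = beta_new
    return beta
```

Note that the right-hand side X^T(y - 1/2) never changes across iterations, so each step costs one weighted linear solve, the same per-iteration shape as iteratively reweighted least squares.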
A Tutorial on the Expectation-Maximization Algorithm Including Maximum-Likelihood Estimation and EM Training of Probabilistic Context-Free Grammars
The paper gives a brief review of the expectation-maximization algorithm
(Dempster, Laird, and Rubin 1977) in the comprehensible framework of discrete
mathematics. In Section 2, two prominent estimation methods, relative-frequency
estimation and maximum-likelihood estimation, are presented. Section 3 is
dedicated to the expectation-maximization algorithm and a simpler variant, the
generalized expectation-maximization algorithm. In Section 4, two loaded dice
are rolled. A more interesting example is presented in Section 5: the
estimation of probabilistic context-free grammars.
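The loaded-dice example of Section 4 is small enough to sketch directly: the E-step computes each roll's posterior responsibility for the hidden die, and the M-step is exactly relative-frequency estimation on the resulting expected counts, which is the connection the tutorial draws between its estimation methods and EM. The code below is an illustrative rendering of that setup, not Prescher's; initialisation, iteration count, and the synthetic data are mine.

```python
import numpy as np

def em_two_dice(rolls, n_iter=100, seed=0):
    """EM for a mixture of two loaded dice (faces coded 0..5): each roll is
    produced by die A with probability lam, else by die B; which die was used
    is hidden. Illustrative sketch in the spirit of the tutorial's Section 4."""
    rng = np.random.default_rng(seed)
    lam = 0.5                                    # mixing weight P(die A)
    p = rng.dirichlet(np.ones(6), size=2)        # face probabilities of A and B
    for _ in range(n_iter):
        # E-step: posterior responsibility of die A for each roll
        a = lam * p[0, rolls]
        b = (1.0 - lam) * p[1, rolls]
        r = a / (a + b)
        # M-step: relative-frequency estimation on the expected counts
        lam = r.mean()
        for k, wts in enumerate((r, 1.0 - r)):
            counts = np.bincount(rolls, weights=wts, minlength=6)
            p[k] = counts / counts.sum()
    return lam, p

# Example: rolls from a fair die mixed with a die loaded towards the last face
rng = np.random.default_rng(1)
data = np.concatenate([rng.integers(0, 6, 600),
                       rng.choice(6, 400, p=[.05, .05, .05, .05, .1, .7])])
lam_hat, p_hat = em_two_dice(data)
```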
Foreword
This report reviews the Expectation-Maximization (EM) algorithm and applies it to the data segmentation problem, yielding the Expectation-Maximization Segmentation (EMS) algorithm. The EMS algorithm requires batch processing of the data and can be applied to mode-switching, or jumping, linear dynamical state-space models. The EMS algorithm consists of an optimal fusion of fixed-interval Kalman smoothing and discrete optimization. The next section gives a short introduction to the EM algorithm with some background and convergence results. The data segmentation problem is then defined, and the EM algorithm is applied to it; the report closes with simulation results and some concluding remarks.
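As a rough illustration of the fusion described above, the sketch below alternates a scalar fixed-interval (RTS) Kalman smoother with a pointwise discrete re-assignment of measurement-noise modes. The random-walk model, the two candidate noise variances, and the simple coordinate-ascent loop are all illustrative assumptions of mine; the report's EMS algorithm is formulated for general jump linear state-space models processed in batch.

```python
import numpy as np

def kalman_smooth(y, R_seq, Q=0.01, m0=0.0, P0=1.0):
    """Scalar fixed-interval (RTS) Kalman smoother for a random walk
    x_t = x_{t-1} + w_t observed as y_t = x_t + v_t, with per-step
    measurement variance R_seq[t]. Model choice is illustrative."""
    T = len(y)
    m_f, P_f = np.zeros(T), np.zeros(T)          # filtered mean / variance
    m_p, P_p = np.zeros(T), np.zeros(T)          # predicted mean / variance
    m, P = m0, P0
    for t in range(T):
        m_p[t], P_p[t] = m, P + Q                # predict
        K = P_p[t] / (P_p[t] + R_seq[t])         # Kalman gain
        m_f[t] = m_p[t] + K * (y[t] - m_p[t])    # measurement update
        P_f[t] = (1.0 - K) * P_p[t]
        m, P = m_f[t], P_f[t]
    m_s = m_f.copy()
    for t in range(T - 2, -1, -1):               # backward (RTS) pass
        G = P_f[t] / P_p[t + 1]
        m_s[t] = m_f[t] + G * (m_s[t + 1] - m_p[t + 1])
    return m_s

def ems_sketch(y, R=(0.05, 2.0), n_iter=10):
    """Coordinate ascent in the spirit of EMS: given the mode sequence,
    run fixed-interval smoothing; given the smoothed states, re-assign each
    step's noise mode by pointwise likelihood. Constants are illustrative."""
    R = np.asarray(R)
    s = np.zeros(len(y), dtype=int)              # initial mode sequence
    for _ in range(n_iter):
        x = kalman_smooth(y, R[s])
        # discrete step: per-sample negative log-likelihood of each mode
        resid2 = (y - x) ** 2
        nll = 0.5 * (np.log(R)[None, :] + resid2[:, None] / R[None, :])
        s = nll.argmin(axis=1)
    return kalman_smooth(y, R[s]), s             # final smooth under final modes
```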
