
    Convergence of a Particle-based Approximation of the Block Online Expectation Maximization Algorithm

    Online variants of the Expectation Maximization (EM) algorithm have recently been proposed to perform parameter inference with large data sets or data streams, in independent latent models and in hidden Markov models. Nevertheless, the convergence properties of these algorithms remain an open problem, at least in the hidden Markov case. This contribution deals with a new online EM algorithm which updates the parameter at deterministic times. Convergence results have been derived even for general latent models such as hidden Markov models; these results rely on the assumption that some intermediate quantities are available in closed form or can be approximated by Monte Carlo methods, provided the Monte Carlo error vanishes rapidly enough. In this paper, we propose an algorithm which approximates these quantities using Sequential Monte Carlo methods. The convergence of this algorithm and of an averaged version is established, and their performance is illustrated through Monte Carlo experiments.
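    As a rough illustration of the idea, the sketch below runs a block-wise online EM pass on a toy independent latent Gaussian model, approximating the intermediate conditional expectations by self-normalized importance sampling and refreshing the parameter only at deterministic block boundaries. The model, the block sizes and the Monte Carlo scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Toy independent latent model:  X_t ~ N(mu, 1) (latent),  Y_t = X_t + eps_t,
# eps_t ~ N(0, 1), with mu unknown.  The intermediate quantity E[X_t | Y_t; mu]
# is approximated by self-normalized importance sampling, and the parameter is
# updated only at deterministic, growing block boundaries.

rng = np.random.default_rng(0)
T, n_particles, true_mu = 6000, 200, 2.0
x_latent = rng.normal(true_mu, 1.0, size=T)
y = x_latent + rng.normal(0.0, 1.0, size=T)

def smoothed_stat(y_t, mu, n):
    """Monte Carlo approximation of E[X_t | Y_t = y_t; mu]."""
    particles = rng.normal(mu, 1.0, size=n)      # sample from the prior N(mu, 1)
    log_w = -0.5 * (y_t - particles) ** 2        # Gaussian observation weights
    w = np.exp(log_w - log_w.max())
    return np.sum(w * particles) / np.sum(w)

mu_hat, t, block = 0.0, 0, 1
while t < T:
    size = 50 * block                            # deterministic, growing block sizes
    stats = [smoothed_stat(y_s, mu_hat, n_particles) for y_s in y[t:t + size]]
    mu_hat = float(np.mean(stats))               # M-step: here theta_bar(s) = s
    print(f"block {block:2d}  mu_hat = {mu_hat:.3f}")
    t += size
    block += 1
```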

    Identifiability and consistent estimation of nonparametric translation hidden Markov models with general state space

    This paper considers hidden Markov models where the observations are given as the sum of a latent state, which lies in a general state space, and some independent noise with unknown distribution. It is shown that these fully nonparametric translation models are identifiable, with respect to both the distribution of the latent variables and the distribution of the noise, essentially under a light-tail assumption on the latent variables. Two nonparametric estimation methods are proposed, and we prove that the corresponding estimators are consistent for the weak convergence topology. These results are illustrated with numerical experiments.
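    The short sketch below simulates one illustrative instance of the translation model studied here: an AR(1) latent chain observed through additive noise whose law is treated as unknown. The AR(1) dynamics and the Laplace noise are assumptions made for the example only.

```python
import numpy as np

# Translation hidden Markov model: the observation is the latent Markov state
# plus independent noise with "unknown" distribution, Y_t = X_t + eps_t.
rng = np.random.default_rng(1)
T, phi = 2000, 0.7
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + rng.normal(0.0, 1.0)   # latent chain on a general state space
eps = rng.laplace(0.0, 0.5, size=T)                # noise whose law is to be estimated
y = x + eps                                        # observed translation model
```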

    Rocky Road to Dublin: the Influence of the French Nouvelle Vague on Irish Documentary Film

    Rocky Road to Dublin was certainly one of the first, if not the very first, Irish film ever selected for inclusion in the world-famous Cannes Festival. Unfortunately, this was in 1968, and Jean-Luc Godard, along with other nouvelle vague filmmakers, insisted on closing down the festival after only a few days. We will examine the exchanges that occurred between Irish and French culture in the making of this independent documentary film, how it was received, and the film’s notoriety in Ireland and in France from 1968 until today. We will question Lennon’s ‘personal attempt to reconstruct with a camera the plight of an island community which survived more than 700 years of English occupation, and then nearly sank under the weight of its own heroes and clergy.’1 The film’s aesthetics will be examined by relating Lennon’s voice-over and commentary to Coutard’s visual style. Two major excerpts from the film will be highlighted before we turn to the film’s relevance over the last two decades.

    Learning the distribution of latent variables in paired comparison models with round-robin scheduling

    The paired comparison data considered in this paper originate from the comparison of a large number N of individuals in couples. The dataset is a collection of results of contests between two individuals, each of whom has faced n opponents, where n is much smaller than N. Individuals are represented by independent and identically distributed random parameters characterizing their abilities. The paper studies the maximum likelihood estimator of the distribution of these parameters. The analysis relies on the construction of a graphical model encoding the conditional dependencies of the observations, which are the outcomes of the first n contests each individual is involved in. This graphical model makes it possible to prove geometric loss-of-memory properties and to deduce the asymptotic behavior of the likelihood function. The focus is set on graphical models obtained from round-robin scheduling of these contests. Following a classical construction in learning theory, the asymptotic likelihood is used to measure the performance of the maximum likelihood estimator. Risk bounds for this estimator are finally obtained from sub-Gaussian deviation results for Markov chains applied to the graphical model.
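    The sketch below generates data of this type: i.i.d. latent abilities, contests scheduled by the classical circle-method round robin, and only the first n contests of each individual kept. The Gaussian ability distribution and the Bradley-Terry (logistic) outcome model are illustrative assumptions; the paper's framework is not tied to them.

```python
import numpy as np

# Round-robin paired comparison data with i.i.d. latent abilities.
rng = np.random.default_rng(2)
N, n = 200, 5                                   # many individuals, few contests each (n << N)
abilities = rng.normal(0.0, 1.0, size=N)        # i.i.d. latent abilities

def round_robin_pairs(n_players, n_rounds):
    """First n_rounds of the classical circle-method round-robin schedule."""
    players = list(range(n_players))
    for _ in range(n_rounds):
        yield [(players[i], players[n_players - 1 - i]) for i in range(n_players // 2)]
        players = [players[0]] + [players[-1]] + players[1:-1]   # rotate all but the first

results = []                                    # (i, j, 1 if i beats j else 0)
for pairs in round_robin_pairs(N, n):
    for i, j in pairs:
        p_win = 1.0 / (1.0 + np.exp(abilities[j] - abilities[i]))   # logistic contest model
        results.append((i, j, int(rng.random() < p_win)))

print(len(results), "contest results, e.g.", results[:3])
```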

    Online Sequential Monte Carlo smoother for partially observed stochastic differential equations

    This paper introduces a new algorithm to approximate smoothed additive functionals for partially observed stochastic differential equations. The method relies on a recent procedure which allows such approximations to be computed online, i.e. as the observations are received, with a computational complexity growing linearly with the number of Monte Carlo samples. This online smoother cannot be used directly for partially observed stochastic differential equations since the transition density of the latent process is usually unknown. We prove that a similar algorithm may still be defined for partially observed continuous processes by replacing this unknown quantity with an unbiased estimator obtained, for instance, using general Poisson estimators. We prove that this estimator is consistent, and its performance is illustrated using data from two models.
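    As a structural illustration, the sketch below runs a forward-only particle approximation of a smoothed additive functional for a discretized Ornstein-Uhlenbeck process observed in noise. Two simplifications are assumed: the Euler transition density stands in for the unbiased (Poisson-type) transition-density estimator discussed in the paper, and the quadratic-cost recursion is used instead of the linear-complexity variant.

```python
import numpy as np

# Forward-only particle approximation of a smoothed additive functional
# sum_t s(X_{t-1}, X_t) for a noisily observed, Euler-discretized OU process.
rng = np.random.default_rng(3)
T, N, dt = 200, 100, 0.1
theta, sigma, sigma_obs = 1.0, 1.0, 0.5

# simulate the latent diffusion (Euler scheme) and noisy observations
x = np.zeros(T)
for t in range(1, T):
    x[t] = x[t - 1] - theta * x[t - 1] * dt + sigma * np.sqrt(dt) * rng.normal()
y = x + sigma_obs * rng.normal(size=T)

def log_trans(x_prev, x_new):
    """Euler transition log-density (stand-in for an unbiased estimator)."""
    mean = x_prev - theta * x_prev * dt
    return -0.5 * (x_new - mean) ** 2 / (sigma ** 2 * dt)

def s_incr(x_prev, x_new):
    """Example additive functional increment."""
    return x_prev * x_new

particles = rng.normal(0.0, 1.0, size=N)
tau = np.zeros(N)                                       # running per-particle statistics
logw = -0.5 * (y[0] - particles) ** 2 / sigma_obs ** 2

for t in range(1, T):
    w = np.exp(logw - logw.max()); w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                    # multinomial resampling
    new_particles = (particles[idx] - theta * particles[idx] * dt
                     + sigma * np.sqrt(dt) * rng.normal(size=N))   # bootstrap move
    # forward-only update of the smoothed additive functional
    lt = log_trans(particles[:, None], new_particles[None, :])    # (old, new) pairs
    bw = np.exp(lt - lt.max(axis=0)) * w[:, None]
    bw /= bw.sum(axis=0)                                # backward weights per new particle
    tau = (bw * (tau[:, None] + s_incr(particles[:, None], new_particles[None, :]))).sum(axis=0)
    particles = new_particles
    logw = -0.5 * (y[t] - particles) ** 2 / sigma_obs ** 2

w = np.exp(logw - logw.max()); w /= w.sum()
print("online estimate of the smoothed additive functional:", float(np.sum(w * tau)))
```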

    Consistent estimation of the filtering and marginal smoothing distributions in nonparametric hidden Markov models

    In this paper, we consider the filtering and smoothing recursions in nonparametric finite state space hidden Markov models (HMMs) when the parameters of the model are unknown and replaced by estimators. We provide an explicit and time-uniform control of the filtering and smoothing errors in total variation norm as a function of the parameter estimation errors. We prove that the risk for the filtering and smoothing errors may be uniformly upper bounded by the risk of the estimators. It has recently been proved that statistical inference for finite state space nonparametric HMMs is possible. We study how the spectral methods developed in the parametric setting may be extended to the nonparametric framework, and we give explicit upper bounds for the L2-risk of the nonparametric spectral estimators. When the observation space is compact, this provides explicit rates for the filtering and smoothing errors in total variation norm. The performance of the spectral method is assessed with simulated data, both for the estimation of the (nonparametric) conditional distribution of the observations and for the estimation of the marginal smoothing distributions.
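    A minimal sketch of the plug-in principle studied here: the forward filtering recursion of a finite state space HMM is run with an estimated transition matrix and estimated emission densities in place of the true ones. The two-state chain and the Gaussian plug-in emissions below are illustrative stand-ins for the nonparametric (e.g. spectral) estimators.

```python
import numpy as np

def gauss_pdf(v, m, s):
    return np.exp(-0.5 * ((v - m) / s) ** 2) / (s * np.sqrt(2.0 * np.pi))

def plug_in_filter(y, Q_hat, f_hat, pi0):
    """Filtering distributions computed with estimated parameters Q_hat, f_hat."""
    T, K = len(y), len(pi0)
    phi = np.zeros((T, K))
    p = pi0 * np.array([f(y[0]) for f in f_hat])
    phi[0] = p / p.sum()
    for t in range(1, T):
        pred = phi[t - 1] @ Q_hat                       # one-step prediction
        p = pred * np.array([f(y[t]) for f in f_hat])   # plug-in emission densities
        phi[t] = p / p.sum()                            # normalized filter
    return phi

# toy usage: two hidden states, plug-in Gaussian emissions
rng = np.random.default_rng(4)
Q_hat = np.array([[0.9, 0.1], [0.2, 0.8]])
f_hat = [lambda v: gauss_pdf(v, -1.0, 1.0), lambda v: gauss_pdf(v, 1.5, 1.0)]
pi0 = np.array([0.5, 0.5])
y_obs = rng.normal(1.5, 1.0, size=50)
print(plug_in_filter(y_obs, Q_hat, f_hat, pi0)[-1])     # filter at the last time step
```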