
    Sensitivity analysis in HMMs with application to likelihood maximization

    This paper considers sensitivity analysis in Hidden Markov Models with continuous state and observation spaces. We propose an Infinitesimal Perturbation Analysis (IPA) of the filtering distribution with respect to some parameters of the model. We describe a methodology for turning any algorithm that estimates the filtering density, such as Sequential Monte Carlo methods, into an algorithm that estimates its gradient. The resulting IPA estimator is proven to be asymptotically unbiased and consistent, with computational complexity linear in the number of particles. We apply this analysis to the problem of identifying unknown parameters of the model from a sequence of observations: we derive an IPA estimator for the gradient of the log-likelihood, which can be used in a gradient method for likelihood maximization. We illustrate the method with several numerical experiments.
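
    To make the idea concrete, here is a minimal sketch, not taken from the paper, of pathwise (IPA-style) differentiation of a particle-filter log-likelihood estimate. It assumes a toy state-space model x_t = rho*x_{t-1} + eps_t, y_t = x_t + theta*eta_t in which the unknown parameter theta enters only the observation density, so the weight derivatives can be accumulated analytically while the particle trajectories stay fixed for given random draws; the paper's estimator covers the general case.

```python
import numpy as np

def pf_loglik_and_ipa_grad(y, theta, n_particles=1000, rho=0.9, sigma_x=1.0, seed=0):
    """Bootstrap particle filter for the toy model
        x_t = rho * x_{t-1} + sigma_x * eps_t,   y_t = x_t + theta * eta_t,
    returning the log-likelihood estimate and its pathwise (IPA-style)
    derivative with respect to the observation noise level theta.
    Since theta enters only the observation density, the particles do not
    depend on theta for fixed random draws and the weight derivatives can
    be accumulated analytically."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, sigma_x, size=n_particles)   # initial particles
    loglik, grad = 0.0, 0.0
    for yt in y:
        # Gaussian observation density g_theta(y | x) = N(y; x, theta^2)
        resid = yt - x
        g = np.exp(-0.5 * (resid / theta) ** 2) / (np.sqrt(2 * np.pi) * theta)
        dg = g * (resid**2 / theta**3 - 1.0 / theta)  # analytic d g / d theta
        loglik += np.log(g.mean())
        grad += dg.sum() / g.sum()        # pathwise derivative of log mean weight
        # multinomial resampling and propagation (theta-free given fixed draws)
        idx = rng.choice(n_particles, size=n_particles, p=g / g.sum())
        x = rho * x[idx] + sigma_x * rng.normal(size=n_particles)
    return loglik, grad

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xt, ys = 0.0, []
    for _ in range(200):                  # simulate data with true theta = 0.5
        xt = 0.9 * xt + rng.normal()
        ys.append(xt + 0.5 * rng.normal())
    ll, score = pf_loglik_and_ipa_grad(np.array(ys), theta=0.7)
    print(f"log-likelihood estimate: {ll:.2f}, IPA score estimate: {score:.2f}")
```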

    Stochastic Approximation Methods for Systems Over an Infinite Horizon

    The paper develops efficient and general stochastic approximation (SA) methods for improving the operation of parametrized systems of either the continuous or discrete-event dynamical type whose behavior is of interest over a long time period. For example, one might wish to optimize or improve the stationary (or average cost per unit time) performance by adjusting the system's parameters. The number of applications and the associated literature are growing at a rapid rate, partly due to increasing activity in computing pathwise derivatives and adapting them to the average-cost problem. Although the original motivation and the examples come from an interest in the infinite-horizon problem, the techniques and results are of general applicability in SA. We present an update and review of powerful ordinary differential equation-type methods, in a fairly general context based on weak convergence ideas. The results and proof techniques apply to a wide variety of problems, and exploiting the full potential of these ideas can greatly simplify and extend much current work. Their breadth, as well as the relative ease of using the basic ideas, is illustrated in detail via typical examples drawn from discrete-event dynamical systems, piecewise deterministic dynamical systems, and a stochastic differential equation model. In these illustrations, we use either infinitesimal perturbation analysis-type, mean square derivative-type, or finite-difference-type estimators. Markov and non-Markov models are discussed, and algorithms for distributed/asynchronous updating as well as fully synchronous schemes are developed.
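
    As an illustration of the kind of recursion being analyzed, the sketch below is an assumption-laden toy, not drawn from the paper: a Robbins-Monro stochastic approximation on the service rate of a single-server queue, using a two-sided finite-difference gradient estimator with common random numbers, where each simulation segment continues the previous one in the spirit of the average-cost-per-unit-time setting.

```python
import numpy as np

def segment_cost(w, mu, lam, arrivals_u, services_u, c_server=2.0):
    """Average cost over one simulation segment of a single-server queue
    (Lindley recursion), starting from waiting time w.  Returns the segment's
    average cost and the final waiting time so the next segment can continue
    the same trajectory (infinite-horizon flavour)."""
    total = 0.0
    for ua, us in zip(arrivals_u, services_u):
        a = -np.log(ua) / lam        # exponential interarrival time
        s = -np.log(us) / mu         # exponential service time
        w = max(0.0, w + s - a)      # Lindley recursion for the waiting time
        total += w + c_server * mu   # holding cost plus cost of service capacity
    return total / len(arrivals_u), w

def sa_optimize(lam=0.8, mu0=1.0, n_iter=400, seg=200, delta=0.05, seed=0):
    """Robbins-Monro stochastic approximation with a two-sided finite-difference
    gradient estimator and common random numbers across the +/- perturbations."""
    rng = np.random.default_rng(seed)
    mu, w_plus, w_minus = mu0, 0.0, 0.0
    for k in range(n_iter):
        ua, us = rng.random(seg), rng.random(seg)            # common random numbers
        j_plus, w_plus = segment_cost(w_plus, mu + delta, lam, ua, us)
        j_minus, w_minus = segment_cost(w_minus, mu - delta, lam, ua, us)
        grad = (j_plus - j_minus) / (2 * delta)
        step = 0.5 / (k + 10)                                # decreasing SA step sizes
        mu = float(np.clip(mu - step * grad, lam + 0.05, 5.0))  # keep the queue stable
    return mu

if __name__ == "__main__":
    print("approximately optimal service rate:", round(sa_optimize(), 3))
```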

    The robustness of democratic consensus

    In linear models of consensus dynamics, the states of the various agents converge to a value which is a convex combination of the agents' initial states. We call the dynamics democratic if, in the large-scale limit (number of agents going to infinity), the vector of convex weights converges to 0 uniformly. Democracy is a relevant property which naturally shows up in opinion dynamics models and cooperative algorithms such as consensus over a network: it says that each agent's measurement/opinion plays a negligible role in the asymptotic behavior of the global system. It can be seen as a relaxation of average consensus, where all agents have exactly the same weight in the final value, a weight which itself becomes negligible for a large number of agents.
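
    The sketch below is a hypothetical numerical check of the democracy property, not material from the paper. It assumes a symmetric Erdős–Rényi communication graph (taken to be connected) with self-loops and row-normalized weights; the consensus value is then w^T x(0) with w the left Perron eigenvector of the update matrix, and the largest entry of w should shrink as the number of agents grows.

```python
import numpy as np

def consensus_weights(adj):
    """Left Perron eigenvector of the row-stochastic matrix obtained by
    normalizing each row of the adjacency matrix (with self-loops added):
    these are the convex weights the agents' initial states receive in the
    consensus value."""
    a = adj + np.eye(adj.shape[0])            # self-loops keep every row nonzero
    p = a / a.sum(axis=1, keepdims=True)      # row-stochastic update matrix
    vals, vecs = np.linalg.eig(p.T)
    w = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))  # eigenvector for eigenvalue 1
    return w / w.sum()

def max_weight(n, p_edge=0.1, seed=0):
    """Largest consensus weight on a symmetric Erdős–Rényi graph with n agents."""
    rng = np.random.default_rng(seed)
    adj = (rng.random((n, n)) < p_edge).astype(float)
    adj = np.maximum(adj, adj.T)              # undirected communication graph
    return consensus_weights(adj).max()

if __name__ == "__main__":
    for n in (50, 200, 800):
        print(n, round(max_weight(n), 4))     # the max weight should shrink with n
```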

    Maximum likelihood estimation by Monte Carlo simulation: Toward data-driven stochastic modeling

    We propose a gradient-based simulated maximum likelihood estimation method to estimate unknown parameters in a stochastic model without assuming that the likelihood function of the observations is available in closed form. A key element is the development of Monte Carlo-based estimators of the density of the output process and its derivatives, using only knowledge of the dynamics of the model. We present the theory of these estimators and demonstrate how our approach can handle various types of model structures. We also support our findings and illustrate the merits of our approach with numerical results.
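
    A minimal sketch of the general recipe, under assumptions of our own choosing rather than the paper's: a toy output process Y = Z1 + theta*Z2^2 whose density has no obvious closed form, a Gaussian-kernel Monte Carlo estimator of that density and of its theta-derivative (obtained pathwise from the simulated samples), and plain gradient ascent on the resulting simulated log-likelihood.

```python
import numpy as np

def simulate(theta, z1, z2):
    """Toy output process Y = Z1 + theta * Z2**2; the pathwise derivative
    dY/dtheta = Z2**2 comes directly from the model dynamics."""
    return z1 + theta * z2**2, z2**2

def sim_loglik_and_grad(obs, theta, z1, z2, h=0.3):
    """Kernel (Monte Carlo) estimates of the output density and of its
    theta-derivative, assembled into the simulated log-likelihood of the
    observations and its gradient."""
    y, dy = simulate(theta, z1, z2)
    u = (obs[:, None] - y[None, :]) / h                 # (n_obs, n_sim) kernel arguments
    k = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)      # Gaussian kernel
    f = k.mean(axis=1) / h                              # density estimate at each observation
    df = (u * k * dy[None, :]).mean(axis=1) / h**2      # its derivative w.r.t. theta
    return np.sum(np.log(f)), np.sum(df / f)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_obs, true_theta = 400, 0.5
    obs = rng.normal(size=n_obs) + true_theta * rng.normal(size=n_obs) ** 2
    z1, z2 = rng.normal(size=5000), rng.normal(size=5000)   # common random numbers
    theta = 0.0
    for _ in range(200):                                # gradient ascent on the simulated log-likelihood
        _, g = sim_loglik_and_grad(obs, theta, z1, z2)
        theta += 0.1 * g / n_obs
    print("estimated theta:", round(theta, 3))          # should land near 0.5
```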