    Maximizing Expected Utility for Stochastic Combinatorial Optimization Problems

    We study stochastic versions of a broad class of combinatorial problems in which the weights of the elements in the input are uncertain. The class includes shortest paths, minimum weight spanning trees, minimum weight matchings, and other combinatorial problems such as knapsack. We observe that the expected value is inadequate for capturing different types of risk-averse or risk-prone behaviors, and instead we consider a more general objective: maximizing the expected utility of the solution for a given utility function, rather than the expected weight (which becomes a special case). Under the assumption that there is a pseudopolynomial time algorithm for the exact version of the problem (this holds for all the problems mentioned above), we obtain the following approximation results for several important classes of utility functions: (1) if the utility function u is continuous, upper-bounded by a constant, and satisfies u(x) → 0 as x → +∞, we obtain a polynomial time approximation algorithm with an additive error ε for any constant ε > 0; (2) if u is a concave increasing function, we obtain a polynomial time approximation scheme (PTAS); (3) if u is increasing and has a bounded derivative, we also obtain a PTAS. Our results recover or generalize several prior results on stochastic shortest path, stochastic spanning tree, and stochastic knapsack.
Our algorithm for utility maximization makes use of the separability of the exponential utility and a technique for decomposing a general utility function into exponential utility functions, which may be useful in other stochastic optimization problems. Comment: 31 pages. A preliminary version appeared in the Proceedings of the 52nd Annual IEEE Symposium on Foundations of Computer Science (FOCS 2011); this version contains several new results (results (2) and (3) in the abstract).
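The separability the comment refers to can be illustrated numerically: for an exponential utility u(x) = exp(-λx) and independent element weights, the expected utility of a sum factorizes into a product of per-element expectations. A minimal sketch (the weight distributions below are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 0.5  # risk parameter of the exponential utility u(x) = exp(-lam * x)

# Two independent stochastic element weights (illustrative distributions).
x1 = rng.exponential(1.0, size=200_000)
x2 = rng.gamma(2.0, 1.0, size=200_000)

# For exponential utility, the expectation over a sum factorizes:
# E[exp(-lam * (X1 + X2))] = E[exp(-lam * X1)] * E[exp(-lam * X2)]
joint = np.mean(np.exp(-lam * (x1 + x2)))
factored = np.mean(np.exp(-lam * x1)) * np.mean(np.exp(-lam * x2))

print(round(joint, 3), round(factored, 3))  # the two estimates agree
```

This factorization is what lets the expected utility of a combinatorial solution be handled element by element, which is the property the decomposition technique exploits.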

    An empirical Bayes procedure for the selection of Gaussian graphical models

    A new methodology for model determination in decomposable graphical Gaussian models is developed. The Bayesian paradigm is used and, for each given graph, a hyper inverse Wishart prior distribution on the covariance matrix is considered. This prior distribution depends on hyper-parameters. It is well known that the model's posterior distribution is sensitive to the specification of these hyper-parameters, and no completely satisfactory method for choosing them is available. To avoid this problem, we suggest adopting an empirical Bayes strategy, that is, a strategy in which the values of the hyper-parameters are determined from the data; typically, the hyper-parameters are fixed at their maximum likelihood estimates. To compute these maximum likelihood estimates, we propose a Markov chain Monte Carlo version of the Stochastic Approximation EM (SAEM) algorithm. Moreover, we introduce a new sampling scheme in the space of graphs that improves on the add-and-delete proposal of Armstrong et al. (2009). We illustrate the efficiency of this new scheme on simulated and real datasets.
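The SAEM idea used here can be sketched on a toy conjugate model where the simulation step can be done exactly (standing in for the MCMC draw); the model and step sizes below are illustrative, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy latent-variable model: z ~ N(theta, 1), y_i = z + N(0, 1).
# The marginal MLE of the hyper-parameter theta is the sample mean of y.
true_theta = 2.0
z_true = rng.normal(true_theta, 1.0)
y = z_true + rng.normal(0.0, 1.0, size=50)
n, ybar = len(y), y.mean()

theta, s = 0.0, 0.0
for k in range(1, 2001):
    # Simulation step (stand-in for an MCMC draw): sample the latent z
    # from its exact conditional p(z | y, theta).
    post_mean = (theta + n * ybar) / (n + 1)
    z = rng.normal(post_mean, np.sqrt(1.0 / (n + 1)))
    # Stochastic approximation step with decreasing step size gamma_k.
    gamma = 1.0 / k
    s = s + gamma * (z - s)
    # Maximization step: theta maximizing the completed likelihood.
    theta = s

print(round(theta, 2), round(ybar, 2))  # theta settles near the MLE ybar
```

The stochastic approximation average smooths out the simulation noise, which is why a single (MCMC) draw per iteration suffices, in contrast to Monte Carlo EM, which averages many draws per iteration.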

    Parametric estimation of complex mixed models based on meta-model approach

    Complex biological processes are usually studied over time in a collection of individuals. Longitudinal data are then available, and the statistical challenge is to better understand the underlying biological mechanisms. The standard statistical approach is the mixed-effects model, with regression functions that are now highly developed to describe the biological processes precisely (solutions of multi-dimensional ordinary differential equations or of partial differential equations). When there is no analytical solution, a classical estimation approach relies on coupling a stochastic version of the EM algorithm (SAEM) with an MCMC algorithm. This procedure requires many evaluations of the regression function, which is clearly prohibitive when a time-consuming solver is used to compute it. In this work, a meta-model relying on a Gaussian process emulator is proposed to replace this regression function. The new source of uncertainty due to this approximation can be incorporated into the model, which leads to what is called a mixed meta-model. A bound on the distance between the maximum likelihood estimates in this mixed meta-model and the maximum likelihood estimates obtained with the exact mixed model is guaranteed. Finally, numerical simulations are performed to illustrate the efficiency of this approach.
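The emulator step can be sketched with a minimal noiseless Gaussian process interpolant: a few runs of an "expensive" solver train a GP whose posterior mean then replaces the solver inside the estimation loop. The solver and kernel length-scale below are hypothetical placeholders:

```python
import numpy as np

# Hypothetical "expensive" regression function (stand-in for an ODE solver).
def solver(t):
    return np.sin(t) * np.exp(-0.1 * t)

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Train the emulator on a small design of solver runs.
t_train = np.linspace(0.0, 10.0, 12)
y_train = solver(t_train)
K = rbf(t_train, t_train) + 1e-8 * np.eye(len(t_train))  # jitter for stability
alpha = np.linalg.solve(K, y_train)

# GP posterior mean: cheap predictions replacing further solver calls.
t_new = np.linspace(0.0, 10.0, 200)
y_emul = rbf(t_new, t_train) @ alpha

err = np.max(np.abs(y_emul - solver(t_new)))
print(err)
```

In the paper's setting the emulation error is precisely the extra uncertainty that gets folded into the mixed meta-model rather than ignored.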

    Classical Estimation of Multivariate Markov-Switching Models using MSVARlib

    This paper introduces an upgraded version of MSVARlib, a Gauss and OxGauss compliant library focusing on multivariate Markov-switching regressions in their most general specification. This new set of procedures allows one to estimate, through classical optimization methods, models belonging to the MSI(M)(AH)-VARX ``intercept regime dependent'' family. This research enhances the first package, MSVARlib 1.1, which was deeply inspired by the works of Hamilton and Krolzig. Besides the extension to a generalized multivariate regression framework, it notably augments the range of models with a possibly unlimited finite number of Markov states, offers automatic or manual initialization procedures, and adds new statistical tests. The first part of this article provides the basic theoretical grounds of the related Markov-switching models. The following sections illustrate the programs through univariate and multivariate examples: one is based on a non-linear reading of the American unemployment rate; a second study focuses on coincident stochastic models of US recessions and slowdowns. The paper concludes on possible extensions and new applications. Detailed guidelines in the appendices and tutorial programs are provided to help the reader handle the Gauss package and the joined replication files.
    Keywords: multivariate Markov-switching regressions, hidden Markov models, non-linear regressions, open source Gauss library, business cycle, EM algorithm, Kitagawa-Hamilton filtering, recession detection models, MSVAR, MS-VAR, Hamilton's model, Krolzig MSVAR library, filtered probabilities, smoothed probabilities.
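The filtered probabilities listed among the keywords come from the Hamilton filter, which alternates a one-step prediction through the transition matrix with a Bayes update against the regime-wise likelihoods. A minimal sketch for a two-regime Gaussian model (the series, means, and transition matrix below are illustrative, not from the package):

```python
import numpy as np

def hamilton_filter(y, P, mu, sigma):
    """Filtered regime probabilities for Markov switching between
    Gaussian regimes: y_t ~ N(mu[s_t], sigma[s_t])."""
    def npdf(x, m, s):
        return np.exp(-0.5 * ((x - m) / s) ** 2) / (s * np.sqrt(2 * np.pi))

    # Start from the ergodic (stationary) distribution of P.
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    prob = pi / pi.sum()

    filtered = []
    for yt in y:
        pred = P.T @ prob              # prediction step
        lik = npdf(yt, mu, sigma)      # regime-wise likelihoods
        post = pred * lik
        prob = post / post.sum()       # filtering (Bayes) step
        filtered.append(prob)
    return np.array(filtered)

# Illustrative two-regime series: a low-mean then a high-mean segment.
rng = np.random.default_rng(2)
y = np.concatenate([rng.normal(0, 1, 100), rng.normal(4, 1, 100)])
P = np.array([[0.95, 0.05], [0.05, 0.95]])  # persistent transition matrix
probs = hamilton_filter(y, P, mu=np.array([0.0, 4.0]),
                        sigma=np.array([1.0, 1.0]))
print(probs[:3, 1].round(2), probs[-3:, 1].round(2))
```

Smoothed probabilities are obtained from the same recursions by an additional backward pass (Kim smoother), which the library's keywords also mention.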

    Convergence of MCEM and Related Algorithms for Hidden Markov Random Field

    The Monte Carlo EM (MCEM) algorithm is a stochastic version of the EM algorithm that uses MCMC methods to approximate the conditional distribution of the hidden data. In the context of hidden Markov field model-based image segmentation, the behavior of this algorithm has been illustrated in experimental studies, but few theoretical results have been established. In this paper, new results on MCEM for parameter estimation of the observed data model are presented, showing that under suitable regularity conditions the sequence of MCEM estimates converges to a maximizer of the likelihood of the model. A variant of the Monte Carlo step in the MCEM algorithm is proposed, leading to the Generalized Simulated Field (GSF) algorithm, and it is shown that the two procedures have similar properties.
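The MCEM recipe itself (approximate the E-step expectation by an average over simulated hidden data, then maximize) can be sketched on a much simpler hidden-data problem than image segmentation; the censored-Gaussian model below is a toy illustration, not the paper's Markov field setting:

```python
import numpy as np

rng = np.random.default_rng(3)

# Right-censored Gaussian sample: z_i ~ N(theta, 1), but values above c
# are only recorded as "> c". The z_i above c are the hidden data.
theta_true, c = 1.0, 1.5
z = rng.normal(theta_true, 1.0, size=400)
censored = z > c
y = np.where(censored, c, z)

def draw_truncated(m, lo, size, rng):
    """Rejection sampler for N(m, 1) truncated to (lo, +inf)."""
    out = np.empty(0)
    while out.size < size:
        cand = rng.normal(m, 1.0, size=4 * size)
        out = np.concatenate([out, cand[cand > lo]])
    return out[:size]

theta, M = 0.0, 5000
for _ in range(40):
    # Monte Carlo E-step: estimate E[z | z > c, theta] from M draws.
    ez_cens = draw_truncated(theta, c, M, rng).mean()
    # M-step: the complete-data MLE of theta is the completed sample mean.
    theta = np.where(censored, ez_cens, y).mean()

print(round(theta, 2))  # close to theta_true
```

In the hidden Markov field case the truncated-normal draws are replaced by MCMC samples of the label field, which is exactly where the convergence analysis of the paper becomes delicate.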

    Convergence of a Particle-based Approximation of the Block Online Expectation Maximization Algorithm

    Online variants of the Expectation Maximization (EM) algorithm have recently been proposed to perform parameter inference with large data sets or data streams, in independent latent models and in hidden Markov models. Nevertheless, the convergence properties of these algorithms remain an open problem, at least in the hidden Markov case. This contribution deals with a new online EM algorithm that updates the parameter at certain deterministic times. Convergence results have been derived even in general latent models such as hidden Markov models. These properties rely on the assumption that some intermediate quantities are available in closed form or can be approximated by Monte Carlo methods when the Monte Carlo error vanishes rapidly enough. In this paper, we propose an algorithm that approximates these quantities using Sequential Monte Carlo methods. The convergence of this algorithm and of an averaged version is established, and their performance is illustrated through Monte Carlo experiments.
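The Sequential Monte Carlo building block the paper relies on can be sketched as a bootstrap particle filter: particles are propagated through the state dynamics, reweighted by the observation likelihood, and resampled. The linear-Gaussian model below is illustrative only (it is a case where exact recursions also exist, which makes it easy to sanity-check):

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative state-space model:
#   x_t = 0.9 * x_{t-1} + N(0, 1),   y_t = x_t + N(0, 1).
T, N = 100, 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + rng.normal()
y = x + rng.normal(size=T)

# Bootstrap particle filter: the SMC approximation of quantities that
# are not available in closed form.
particles = rng.normal(0.0, 1.0, size=N)
filt_mean = np.zeros(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(size=N)  # propagate
    logw = -0.5 * (y[t] - particles) ** 2             # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    filt_mean[t] = np.sum(w * particles)              # weighted filtered mean
    idx = rng.choice(N, size=N, p=w)                  # multinomial resampling
    particles = particles[idx]

rmse = np.sqrt(np.mean((filt_mean - x) ** 2))
print(round(rmse, 2))
```

The paper's contribution concerns what happens when such particle approximations, whose error does not vanish at a fixed particle count, are plugged into the intermediate quantities of the block online EM recursion.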