
    Critical Slowing-Down in SU(2) Landau Gauge-Fixing Algorithms

    We study the problem of critical slowing-down for gauge-fixing algorithms (Landau gauge) in SU(2) lattice gauge theory on a 2-dimensional lattice. We consider five such algorithms, and lattice sizes ranging from $8^2$ to $36^2$ (up to $64^2$ in the case of Fourier acceleration). We measure four different observables and find that, for each given algorithm, they all have the same relaxation time within error bars. We obtain that the so-called {\em Los Alamos} method has dynamic critical exponent $z \approx 2$, the {\em overrelaxation} method and the {\em stochastic overrelaxation} method have $z \approx 1$, the so-called {\em Cornell} method has $z$ slightly smaller than 1, and the {\em Fourier acceleration} method completely eliminates critical slowing-down. A detailed discussion and analysis of the tuning of these algorithms is also presented.
    Comment: 40 pages (including 10 figures). A few modifications, incorporating referee's suggestions, without the length reduction required for publication.
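    The scaling claim above can be made concrete: critical slowing-down means the relaxation time grows as $\tau \propto L^z$, so $z$ is estimated from a log-log fit of measured relaxation times against lattice size. A minimal sketch with hypothetical placeholder data (not the paper's measurements):

```python
# Minimal sketch of estimating a dynamic critical exponent z from
# relaxation times: fit tau(L) ~ L^z on a log-log scale.
# The lattice sizes and tau values below are hypothetical placeholders.
import numpy as np

L = np.array([8, 12, 16, 24, 36])             # hypothetical lattice sizes
tau = np.array([3.1, 6.8, 12.0, 26.5, 58.0])  # hypothetical relaxation times

# Linear fit of log(tau) vs log(L): the slope estimates z.
z, log_c = np.polyfit(np.log(L), np.log(tau), 1)
print(f"estimated dynamic critical exponent z = {z:.2f}")
```

    An algorithm that eliminates critical slowing-down, such as the Fourier-accelerated method above, would show a slope near zero on the same plot.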

    The EM Algorithm in Multivariate Gaussian Mixture Models using Anderson Acceleration

    Over the years, analysts have used the EM algorithm to obtain maximum likelihood estimates from incomplete data for various models. The general algorithm admits several appealing properties, such as strong global convergence; however, the rate of convergence is linear, which in some cases may be unacceptably slow. This work is primarily concerned with applying Anderson acceleration to the EM algorithm for Gaussian mixture models (GMM) in hopes of alleviating slow convergence. As a preamble, we provide a review of maximum likelihood estimation and derive the EM algorithm in detail. The iterates that correspond to the GMM are then formulated and examples are provided. These examples show that faster convergence is experienced when the data are well separated, whereas much slower convergence is seen whenever the sample is poorly separated. The Anderson acceleration method is then presented, and its connection to the EM algorithm is discussed. The work is concluded by applying Anderson acceleration to the EM algorithm, which reduces the number of iterations required to obtain convergence.
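    Since Anderson acceleration treats EM as a fixed-point map $\theta \mapsto G(\theta)$ and extrapolates from recent residuals, it helps to see the underlying map. Below is a minimal, self-contained EM step for a multivariate GMM (an illustrative sketch, not the thesis code; all names are mine):

```python
# One EM fixed-point iteration for a K-component multivariate Gaussian
# mixture. Anderson acceleration would wrap this map and extrapolate.
import numpy as np
from scipy.stats import multivariate_normal

def em_step(X, weights, means, covs):
    """Map current GMM parameters to their EM update. X has shape (n, d)."""
    n = X.shape[0]
    K = len(weights)
    # E-step: responsibilities r[i, k] = P(component k | x_i, current params)
    r = np.column_stack([
        weights[k] * multivariate_normal.pdf(X, means[k], covs[k])
        for k in range(K)
    ])
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form updates from the weighted sufficient statistics
    nk = r.sum(axis=0)
    weights = nk / n
    means = [r[:, k] @ X / nk[k] for k in range(K)]
    covs = [(r[:, k, None] * (X - means[k])).T @ (X - means[k]) / nk[k]
            for k in range(K)]
    return weights, means, covs
```

    Anderson acceleration would keep the last $m$ iterates of this map (flattened into parameter vectors) and combine their residuals to extrapolate the next estimate, which is where the reported reduction in iteration counts comes from.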

    Relaxed Ordered Subsets Algorithm for Image Restoration of Confocal Microscopy

    The expectation-maximization (EM) algorithm for maximum-likelihood image recovery converges very slowly. Thus, the ordered subsets EM (OS-EM) algorithm has been widely used in image reconstruction for tomography due to an order-of-magnitude acceleration over the EM algorithm. However, OS-EM is not guaranteed to converge. The recently proposed ordered subsets, separable paraboloidal surrogates (OS-SPS) algorithm with relaxation has been shown to converge to the optimal point while providing fast convergence. In this paper, we develop a relaxed OS-SPS algorithm for image restoration. Because data acquisition is different in image restoration than in tomography, we adopt a different strategy for choosing subsets in image restoration, which uses pixel location rather than projection angles. Simulation results show that the order-of-magnitude acceleration of the relaxed OS-SPS algorithm can be achieved in restoration. Thus the speed and the convergence guarantee of the OS algorithm are advantageous for image restoration as well.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85875/1/Fessler174.pd
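    The subset-selection idea lends itself to a short illustration: with no projection angles to cycle through, subsets can instead be formed from interleaved pixel locations. A hypothetical sketch (the function name and decimation factor are mine, not from the paper):

```python
# Hypothetical illustration of forming ordered subsets from pixel
# locations: partition the image grid into block*block interleaved
# subsets (here 4 subsets from a 2x2 decimation).
import numpy as np

def pixel_subsets(nrows, ncols, block=2):
    """Partition flat pixel indices into block**2 interleaved subsets."""
    rows, cols = np.meshgrid(np.arange(nrows), np.arange(ncols), indexing="ij")
    subset_id = (rows % block) * block + (cols % block)
    return [np.flatnonzero((subset_id == s).ravel()) for s in range(block**2)]

subsets = pixel_subsets(8, 8)   # 4 subsets, each holding 16 of the 64 pixels
```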

    Relaxed Ordered-Subset Algorithm for Penalized-Likelihood Image Restoration

    The expectation-maximization (EM) algorithm for maximum-likelihood image recovery is guaranteed to converge, but it converges slowly. Its ordered-subset version (OS-EM) is used widely in tomographic image reconstruction because of its order-of-magnitude acceleration compared with the EM algorithm, but it does not guarantee convergence. Recently, the ordered-subset, separable-paraboloidal-surrogate (OS-SPS) algorithm with relaxation has been shown to converge to the optimal point while providing fast convergence. We adapt the relaxed OS-SPS algorithm to the problem of image restoration. Because data acquisition in image restoration is different from that in tomography, we employ a different strategy for choosing subsets, using pixel locations rather than projection angles. Simulation results show that the relaxed OS-SPS algorithm can provide an order-of-magnitude acceleration over the EM algorithm for image restoration. This new algorithm now provides the speed and guaranteed convergence necessary for efficient image restoration.
    Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/85918/1/Fessler68.pd
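    The role of relaxation in restoring convergence can be sketched abstractly: each subset update is scaled by a diminishing step size. The skeleton below is a hedged illustration only; update_on_subset is a hypothetical placeholder for the OS-SPS surrogate step on one subset and is not defined in either abstract:

```python
# Sketch of a relaxed ordered-subsets loop: unrelaxed OS iterations cycle
# forever around the optimum, while a diminishing relaxation alpha_n -> 0
# restores convergence. update_on_subset(x, s) is a hypothetical stand-in
# for the surrogate-based increment computed from subset s alone.
def relaxed_os(x, subsets, update_on_subset, n_epochs, gamma=0.1):
    n = 0
    for _ in range(n_epochs):
        for s in subsets:
            alpha = 1.0 / (1.0 + gamma * n)      # diminishing relaxation
            x = x + alpha * update_on_subset(x, s)
            n += 1
    return x
```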

    On the Variance Covariance Matrix of the Maximum Likelihood Estimator of a Discrete Mixture

    The estimation of models involving discrete mixtures is a common practice in econometrics, for example to account for unobserved heterogeneity. However, the literature is relatively uninformative about the measurement of the precision of the parameters. This note provides an analytical expression for the observed information matrix in terms of the gradient and Hessian of the latent model when the number of components of the discrete mixture is known. This in turn allows for the estimation of the variance-covariance matrix of the ML estimator of the parameters. I further discuss two possible applications of the result: the acceleration of the EM algorithm and specification testing with the information matrix test.
    Keywords: Discrete Mixtures; EM Algorithm; Variance-Covariance Matrix; Observed Information
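    The end use of the result is straightforward: once the observed information at the ML estimate is available (the note expresses it via the gradient and Hessian of the latent model), the variance-covariance matrix is its inverse. A hedged sketch, where hessian is a hypothetical user-supplied function returning the log-likelihood Hessian:

```python
# From observed information to the variance-covariance matrix of the MLE.
# hessian(theta) is a hypothetical callable returning the Hessian of the
# observed-data log likelihood at theta.
import numpy as np

def mle_vcov(theta_hat, hessian):
    """Variance-covariance of the MLE via the observed information."""
    info = -hessian(theta_hat)     # observed information: I_obs = -H(theta_hat)
    vcov = np.linalg.inv(info)     # asymptotic variance-covariance matrix
    return vcov                    # standard errors: np.sqrt(np.diag(vcov))
```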

    An Unsupervised Approach for Automatic Activity Recognition based on Hidden Markov Model Regression

    Using supervised machine learning approaches to recognize human activities from on-body wearable accelerometers generally requires a large amount of labelled data. When ground-truth information is not available, or is too expensive, time-consuming, or difficult to collect, one has to rely on unsupervised approaches. This paper presents a new unsupervised approach for human activity recognition from raw acceleration data measured using inertial wearable sensors. The proposed method is based upon joint segmentation of multidimensional time series using a Hidden Markov Model (HMM) in a multiple regression context. The model is learned in an unsupervised framework using the Expectation-Maximization (EM) algorithm, where no activity labels are needed. The proposed method takes into account the sequential appearance of the data and is therefore suited to temporal acceleration data, allowing activities to be detected accurately. It provides both segmentation and classification of the human activities. Experimental results are provided to demonstrate the efficiency of the proposed approach with respect to standard supervised and unsupervised classification approaches.
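    A loose analogue of this pipeline can be run with off-the-shelf tools. The paper learns an HMM in a multiple-regression context; the sketch below instead fits a plain Gaussian-emission HMM to raw tri-axial acceleration data with hmmlearn and reads the decoded state sequence as an unsupervised segmentation (placeholder data; not the authors' model):

```python
# Unsupervised segmentation of acceleration data with a Gaussian HMM.
# Simplified analogue of the paper's HMM-regression approach: states play
# the role of activities, and no labels are used during fitting.
import numpy as np
from hmmlearn.hmm import GaussianHMM

X = np.random.randn(5000, 3)    # placeholder for (time steps, 3-axis accel)

model = GaussianHMM(n_components=6, covariance_type="full", n_iter=50)
model.fit(X)                    # Baum-Welch (EM), entirely label-free
states = model.predict(X)       # Viterbi decoding: one state per time step
```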

    MM Algorithms for Minimizing Nonsmoothly Penalized Objective Functions

    In this paper, we propose a general class of algorithms for optimizing an extensive variety of nonsmoothly penalized objective functions that satisfy certain regularity conditions. The proposed framework utilizes the majorization-minimization (MM) algorithm as its core optimization engine. The resulting algorithms rely on iterated soft-thresholding, implemented componentwise, allowing for fast, stable updating that avoids the need for any high-dimensional matrix inversion. We establish a local convergence theory for this class of algorithms under weaker assumptions than previously considered in the statistical literature. We also demonstrate the exceptional effectiveness of new acceleration methods, originally proposed for the EM algorithm, in this class of problems. Simulation results and a microarray data example are provided to demonstrate the algorithm's capabilities and versatility.
    Comment: A revised version of this paper has been published in the Electronic Journal of Statistics.
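    A small instance of the scheme makes the "soft-thresholding, no matrix inversion" point concrete. For l1-penalized least squares, majorizing the quadratic term with a separable surrogate yields componentwise soft-threshold updates (this is the familiar ISTA-style iteration, shown under assumptions for concreteness rather than the paper's general framework):

```python
# MM via separable quadratic majorization for the lasso objective
# 0.5*||Ax - y||^2 + lam*||x||_1: each iteration is a gradient step
# followed by a componentwise soft-threshold; no matrix is inverted.
import numpy as np

def soft(v, t):
    """Componentwise soft-thresholding operator."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def mm_lasso(A, y, lam, n_iters=500):
    L = np.linalg.norm(A, 2) ** 2      # spectral norm squared majorizes A'A
    x = np.zeros(A.shape[1])
    for _ in range(n_iters):
        grad = A.T @ (A @ x - y)
        x = soft(x - grad / L, lam / L)
    return x
```

    The EM-style acceleration methods the paper evaluates would be layered on top of exactly this kind of fixed-point iteration.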

    Convergence Results for the EM Approach to Mixtures of Experts Architectures

    The Expectation-Maximization (EM) algorithm is an iterative approach to maximum likelihood parameter estimation. Jordan and Jacobs (1993) recently proposed an EM algorithm for the mixture of experts architecture of Jacobs, Jordan, Nowlan and Hinton (1991) and the hierarchical mixture of experts architecture of Jordan and Jacobs (1992). They showed empirically that the EM algorithm for these architectures yields significantly faster convergence than gradient ascent. In the current paper we provide a theoretical analysis of this algorithm. We show that the algorithm can be regarded as a variable metric algorithm whose search direction has a positive projection on the gradient of the log likelihood. We also analyze the convergence of the algorithm and provide an explicit expression for the convergence rate. In addition, we describe an acceleration technique that yields a significant speedup in simulation experiments.
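    For concreteness, the variable-metric reading of the result can be written as follows (a standard statement of this analysis; the notation for the metric matrix $P$ is assumed):

```latex
% EM as variable-metric gradient ascent: the update premultiplies the
% log-likelihood gradient by a positive-definite matrix P, so the search
% direction always has a positive projection on the gradient.
\Theta^{(k+1)} \;=\; \Theta^{(k)} \;+\; P\!\left(\Theta^{(k)}\right)
\left.\frac{\partial \ell}{\partial \Theta}\right|_{\Theta = \Theta^{(k)}},
\qquad P\!\left(\Theta^{(k)}\right) \succ 0 .
```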

    A broadband stable addition theorem for the two dimensional MLFMA

    Integral equations arising from the time-harmonic Maxwell equations contain the Green function of the Helmholtz equation as the integration kernel. The structure of this Green function has allowed the development of so-called fast multipole methods (FMMs), i.e. methods for accelerating the matrix-vector products that are required for the iterative solution of integral equations. Arguably the most widely used FMM is the multilevel fast multipole algorithm (MLFMA). It allows the simulation of electrically large structures that are intractable with direct or iterative solvers without acceleration. The practical importance of the MLFMA is made all the more clear by its implementation in various commercial EM software packages such as FEKO and CST Microwave Studio.
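    For reference, the kernel in question is the two-dimensional Helmholtz Green function; under an assumed $e^{j\omega t}$ time convention it reads:

```latex
% 2-D Green function of the Helmholtz equation (e^{j omega t} convention
% assumed); H_0^{(2)} is the zeroth-order Hankel function of the second kind.
G(\boldsymbol{\rho}, \boldsymbol{\rho}')
  = -\frac{j}{4}\, H_0^{(2)}\!\left(k \,\bigl|\boldsymbol{\rho} - \boldsymbol{\rho}'\bigr|\right).
```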