
    Critical Slowing-Down in SU(2) Landau Gauge-Fixing Algorithms

    We study the problem of critical slowing-down for gauge-fixing algorithms (Landau gauge) in SU(2) lattice gauge theory on a 2-dimensional lattice. We consider five such algorithms, and lattice sizes ranging from $8^2$ to $36^2$ (up to $64^2$ in the case of Fourier acceleration). We measure four different observables and we find that for each given algorithm they all have the same relaxation time within error bars. We obtain that: the so-called Los Alamos method has dynamic critical exponent $z \approx 2$, the overrelaxation method and the stochastic overrelaxation method have $z \approx 1$, the so-called Cornell method has $z$ slightly smaller than 1, and the Fourier acceleration method completely eliminates critical slowing-down. A detailed discussion and analysis of the tuning of these algorithms is also presented. Comment: 40 pages (including 10 figures). A few modifications, incorporating referee's suggestions, without the length reduction required for publication.
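
    The dynamic critical exponent $z$ quantifies how the relaxation time grows with lattice size, $\tau \sim L^z$. As a rough illustration of how $z$ can be extracted from such measurements, here is a minimal Python sketch fitting this power law in log-log coordinates; the lattice sizes mirror those quoted in the abstract, but the relaxation times are invented placeholders, not the paper's data.

        import numpy as np

        # Hypothetical relaxation times tau(L) for one algorithm (illustrative
        # numbers only; the paper's measured values are not reproduced here).
        L = np.array([8, 12, 16, 24, 36])             # linear lattice sizes (L^2 lattices)
        tau = np.array([3.1, 6.8, 12.0, 26.5, 58.0])  # relaxation times in sweeps

        # Critical slowing-down ansatz: tau ~ c * L^z, i.e.
        # log(tau) = z * log(L) + log(c), so z is the slope of a log-log fit.
        z, log_c = np.polyfit(np.log(L), np.log(tau), 1)
        print(f"estimated dynamic critical exponent z ~ {z:.2f}")

    An algorithm that eliminates critical slowing-down, as the Fourier acceleration method is found to do here, would show $z$ close to 0 in such a fit.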

    Learning Active Basis Models by EM-Type Algorithms

    The EM algorithm is a convenient tool for maximum likelihood model fitting when the data are incomplete or when there are latent variables or hidden states. In this review article we explain that the EM algorithm is a natural computational scheme for learning image templates of object categories when the learning is not fully supervised. We represent an image template by an active basis model, which is a linear composition of a selected set of localized, elongated and oriented wavelet elements that are allowed to slightly perturb their locations and orientations to account for the deformations of object shapes. The model can be easily learned when the objects in the training images are in the same pose, and appear at the same location and scale; this is often called supervised learning. In the situation where the objects may appear at different unknown locations, orientations and scales in the training images, we have to incorporate the unknown locations, orientations and scales as latent variables in the image generation process, and learn the template by EM-type algorithms. The E-step imputes the unknown locations, orientations and scales based on the currently learned template. This step can be considered self-supervision: it uses the current template to recognize the objects in the training images. The M-step then relearns the template based on the imputed locations, orientations and scales, which is essentially the same as supervised learning. So the EM learning process iterates between recognition and supervised learning. We illustrate this scheme by several experiments. Comment: Published at http://dx.doi.org/10.1214/09-STS281 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
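
    As a concrete illustration of the recognition/relearning loop described above, here is a heavily simplified Python sketch. It assumes the only latent variable is the object's location, and it replaces the active basis model with a plain mean-image template (no wavelet elements, orientations or scales); the function and variable names are hypothetical, not taken from the paper's code.

        import numpy as np

        def em_template_learning(images, tsize, n_iter=10):
            """Toy EM-type template learning with location as the only
            latent variable. The template is a plain mean image, a
            stand-in for the paper's active basis model (which composes
            perturbable oriented wavelet elements)."""
            th, tw = tsize
            # Initialize the template from a fixed patch of the first image.
            template = images[0][:th, :tw].astype(float)
            for _ in range(n_iter):
                patches = []
                for img in images:
                    # E-step ("recognition"): impute the unknown location by
                    # scanning for the window best matched to the template.
                    best_win, best_score = None, -np.inf
                    for i in range(img.shape[0] - th + 1):
                        for j in range(img.shape[1] - tw + 1):
                            win = img[i:i + th, j:j + tw].astype(float)
                            score = float(np.sum(win * template))
                            if score > best_score:
                                best_win, best_score = win, score
                    patches.append(best_win)
                # M-step ("supervised relearning"): refit the template from
                # the imputed, now-aligned patches.
                template = np.mean(patches, axis=0)
            return template

    In the full model the M-step also reselects and perturbs the wavelet elements of the active basis; in this toy version that step collapses to an average over the aligned patches.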