35 research outputs found

    Random-cluster representation of the Blume-Capel model

    The so-called diluted-random-cluster model may be viewed as a random-cluster representation of the Blume--Capel model. It has three parameters: a vertex parameter $a$, an edge parameter $p$, and a cluster weighting factor $q$. Stochastic comparisons of measures are developed for the `vertex marginal' when $q \in [1,2]$, and the `edge marginal' when $q \in [1,\infty)$. Taken in conjunction with arguments used earlier for the random-cluster model, these permit a rigorous study of part of the phase diagram of the Blume--Capel model.
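    Schematically, and hedging on the exact normalization used in the paper, a diluted-random-cluster weight combines a site-dilution factor with the usual Fortuin--Kasteleyn edge and cluster factors. A plausible form consistent with the three parameters named above, for a configuration given by a vertex set $\psi$ and an edge set $\omega$ spanned by $\psi$, is:

    ```latex
    % Sketch only: the precise definition is in the paper. This shows how the
    % three parameters a, p, q could enter the weight. Here E_psi is the set of
    % lattice edges with both endpoints in psi, and k(psi, omega) counts the
    % connected components (clusters) of the configuration.
    \mu(\psi,\omega) \;\propto\;
        a^{|\psi|}\, p^{|\omega|}\, (1-p)^{|E_\psi| - |\omega|}\, q^{k(\psi,\omega)}
    ```

    Setting $a = 1$ and taking $\psi$ to be the whole vertex set recovers the standard random-cluster weight.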

    Dynamic Critical Behavior of a Swendsen-Wang-Type Algorithm for the Ashkin-Teller Model

    We study the dynamic critical behavior of a Swendsen-Wang-type algorithm for the Ashkin--Teller model. We find that the Li--Sokal bound on the autocorrelation time ($\tau_{\mathrm{int},\mathcal{E}} \ge \mathrm{const} \times C_H$) holds along the self-dual curve of the symmetric Ashkin--Teller model, and is almost but not quite sharp. The ratio $\tau_{\mathrm{int},\mathcal{E}} / C_H$ appears to tend to infinity either as a logarithm or as a small power ($0.05 \leq p \leq 0.12$). In an appendix we discuss the problem of extracting estimates of the exponential autocorrelation time. Comment: 59 pages including 3 figures.
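    The integrated autocorrelation time $\tau_{\mathrm{int}}$ that the Li--Sokal bound constrains can be estimated from a Monte Carlo time series by summing the normalized autocorrelation function with an automatic window. A minimal sketch (not the authors' code), tested here on a synthetic AR(1) series whose exact value $\tau_{\mathrm{int}} = (1+\phi)/(2(1-\phi))$ is known:

    ```python
    import numpy as np

    def integrated_autocorrelation_time(x, c=5.0):
        """Estimate tau_int = 1/2 + sum_t rho(t) with an automatic window:
        stop summing once the lag t exceeds c times the running estimate
        (Sokal-style windowing; illustrative implementation)."""
        x = np.asarray(x, dtype=float)
        n = len(x)
        x = x - x.mean()
        var = np.dot(x, x) / n
        tau = 0.5
        for t in range(1, n // 2):
            rho = np.dot(x[:-t], x[t:]) / (n * var)  # normalized autocorrelation at lag t
            tau += rho
            if t >= c * tau:  # window grows with the estimate itself
                break
        return tau

    # Synthetic AR(1) series: exact tau_int = (1 + phi) / (2 * (1 - phi)) = 9.5
    rng = np.random.default_rng(0)
    phi = 0.9
    x = np.empty(200_000)
    x[0] = 0.0
    for i in range(1, len(x)):
        x[i] = phi * x[i - 1] + rng.standard_normal()

    print(integrated_autocorrelation_time(x))
    ```

    For a Swendsen-Wang run, `x` would be the energy time series, and $C_H$ the corresponding specific heat.
    
    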

    On the Coupling Time of the Heat-Bath Process for the Fortuin–Kasteleyn Random–Cluster Model

    We consider the coupling from the past implementation of the random-cluster heat-bath process, and study its random running time, or coupling time. We focus on hypercubic lattices embedded on tori, in dimensions one to three, with cluster fugacity at least one. We make a number of conjectures regarding the asymptotic behaviour of the coupling time, motivated by rigorous results in one dimension and Monte Carlo simulations in dimensions two and three. Amongst our findings, we observe that, for generic parameter values, the distribution of the appropriately standardized coupling time converges to a Gumbel distribution, and that the standard deviation of the coupling time is asymptotic to an explicit universal constant multiple of the relaxation time. Perhaps surprisingly, we observe these results to hold both off criticality, where the coupling time closely mimics the coupon collector's problem, and also at the critical point, provided the cluster fugacity is below the value at which the transition becomes discontinuous. Finally, we consider analogous questions for the single-spin Ising heat-bath process.
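    The coupon-collector connection mentioned above is easy to illustrate: the standardized completion time $(T - n \ln n)/n$ of the coupon collector's problem converges to a Gumbel law whose mean is the Euler--Mascheroni constant $\gamma \approx 0.5772$. A minimal simulation (illustrative only, not the paper's CFTP implementation), using the fact that $T$ is a sum of independent geometric waiting times:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 1000        # number of coupons
    trials = 2000   # independent repetitions

    # Waiting time for the k-th new coupon is Geometric with success
    # probability (n - k) / n; T is the sum over k = 0, ..., n - 1.
    p = (n - np.arange(n)) / n
    T = rng.geometric(p, size=(trials, n)).sum(axis=1)

    # Standardized coupling-time analogue; its mean should approach
    # the Euler-Mascheroni constant (mean of the standard Gumbel law).
    Z = (T - n * np.log(n)) / n
    print(Z.mean())
    ```

    A histogram of `Z` visibly matches the Gumbel density already at this modest system size.
    
    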

    A study of the Phase Diagram of the Micellar Binary Solution Models I: Mean Field Approach

    The mean-field approximation is used to investigate the phase diagrams, on a three-dimensional lattice, of micellar binary solutions in the presence of a chemical potential of the amphiphiles, for different values of the coupling interactions. New phases appear between the water-rich and amphiphile-rich ones. First- and second-order transitions, as well as tricritical, multicritical, and critical end-points, are obtained. By a simple low-temperature expansion argument, we have confirmed that the states obtained via the mean-field approximation are states of lowest free energy.
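    Mean-field self-consistency equations of the kind used in such lattice studies are typically solved by fixed-point iteration. As a hedged illustration, here is a generic spin-1, Blume-Capel-type mean-field equation with coordination number z (not the specific micellar model of the paper; all parameter names are illustrative):

    ```python
    import math

    def mean_field_magnetization(beta, J=1.0, Delta=0.0, z=6, iters=500):
        """Fixed-point iteration for the spin-1 mean-field equation
            m = 2 sinh(beta z J m) / (exp(beta Delta) + 2 cosh(beta z J m)),
        where J is the pair coupling, Delta a crystal-field term, and z the
        lattice coordination number (z = 6 for the simple cubic lattice).
        Illustrative sketch, not the model studied in the paper."""
        m = 0.5  # generic nonzero starting point
        for _ in range(iters):
            h = beta * z * J * m
            m = 2.0 * math.sinh(h) / (math.exp(beta * Delta) + 2.0 * math.cosh(h))
        return m

    print(mean_field_magnetization(beta=0.5))  # ordered phase: m well away from 0
    print(mean_field_magnetization(beta=0.1))  # disordered phase: m iterates to 0
    ```

    Scanning `beta` and `Delta` and recording where `m` jumps (first order) versus vanishes continuously (second order) is the basic mechanism by which such mean-field phase diagrams, including tricritical points, are mapped out.
    
    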

    Scalable balanced training of conditional generative adversarial neural networks on image data

    We propose a distributed approach to train deep convolutional conditional generative adversarial network (DC-CGAN) models. Our method reduces the imbalance between generator and discriminator by partitioning the training data according to data labels, and enhances scalability by performing a parallel training where multiple generators are concurrently trained, each one of them focusing on a single data label. Performance is assessed in terms of inception score, Fréchet inception distance, and image quality on the MNIST, CIFAR10, CIFAR100, and ImageNet1k datasets, showing a significant improvement in comparison to state-of-the-art techniques for training DC-CGANs. Weak scaling is attained on all four datasets using up to 1000 processes and 2000 NVIDIA V100 GPUs on the OLCF supercomputer Summit.
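    The label-partitioning step can be sketched in a few lines: each class label is assigned to one worker, whose generator then trains only on that shard. A schematic version (assumed round-robin assignment; not the authors' implementation):

    ```python
    from collections import defaultdict

    def partition_by_label(samples, labels, n_workers):
        """Group samples by class label, then assign labels to workers
        round-robin, so each worker's generator sees a single-label (or
        few-label) shard. Schematic sketch of the partitioning idea only."""
        by_label = defaultdict(list)
        for s, y in zip(samples, labels):
            by_label[y].append(s)
        shards = [dict() for _ in range(n_workers)]
        for i, y in enumerate(sorted(by_label)):
            shards[i % n_workers][y] = by_label[y]
        return shards

    # Toy example: 100 samples over 10 classes, split across 4 workers.
    samples = list(range(100))
    labels = [i % 10 for i in range(100)]
    shards = partition_by_label(samples, labels, 4)
    print([sorted(sh) for sh in shards])
    ```

    With 10 classes and 4 workers, some workers hold more labels than others; in the paper's setting the number of generators matches the label partition, which is what keeps each generator-discriminator pair balanced.
    
    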