
    How quickly can we sample a uniform domino tiling of the 2L x 2L square via Glauber dynamics?

    The prototypical problem we study here is the following. Given a $2L\times 2L$ square, there are approximately $\exp(4KL^2/\pi)$ ways to tile it with dominos, i.e. with horizontal or vertical $2\times 1$ rectangles, where $K\approx 0.916$ is Catalan's constant [Kasteleyn '61, Temperley-Fisher '61]. A conceptually simple (even if computationally not the most efficient) way of sampling uniformly one among so many tilings is to introduce a Markov chain algorithm (Glauber dynamics) where, with rate 1, two adjacent horizontal dominos are flipped to vertical dominos, or vice-versa. The unique invariant measure is the uniform one, and a classical question [Wilson 2004, Luby-Randall-Sinclair 2001] is to estimate the time $T_{mix}$ it takes to approach equilibrium (i.e. the running time of the algorithm). In [Luby-Randall-Sinclair 2001, Randall-Tetali 2000], fast mixing was proven: $T_{mix}=O(L^C)$ for some finite $C$. Here, we go much beyond and show that $cL^2 \le T_{mix} \le L^{2+o(1)}$. Our result applies to rather general domain shapes (not just the $2L\times 2L$ square), provided that the typical height function associated to the tiling is macroscopically planar in the large-$L$ limit under the uniform measure (this is the case, for instance, for the Temperley-type boundary conditions considered in [Kenyon 2000]). Also, our method extends to some other types of tilings of the plane, for instance the tilings associated to dimer coverings of the hexagon or square-hexagon lattices.
    Comment: to appear in PTRF; 42 pages, 9 figures; v2: typos corrected, references added
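    As a concrete illustration of the dynamics described above (not code from the paper), the following sketch runs a discrete-time analogue of the rate-1 flip chain on the 2L x 2L square; the partner-map representation, the all-horizontal initial state, and the fixed step count are illustrative choices.

```python
import random

def glauber_domino_sampler(L, steps, seed=0):
    """Discrete-time analogue of Glauber dynamics on domino tilings
    of the 2L x 2L square.

    State: partner[(r, c)] = the cell paired with (r, c) by a domino.
    Move: pick a uniformly random 2x2 block; if it is exactly covered
    by two parallel dominoes, rotate them (horizontal <-> vertical).
    """
    rng = random.Random(seed)
    n = 2 * L
    # initial tiling: every row covered by horizontal dominoes
    partner = {}
    for r in range(n):
        for c in range(0, n, 2):
            partner[(r, c)] = (r, c + 1)
            partner[(r, c + 1)] = (r, c)
    for _ in range(steps):
        r = rng.randrange(n - 1)
        c = rng.randrange(n - 1)
        a, b, d, e = (r, c), (r, c + 1), (r + 1, c), (r + 1, c + 1)
        if partner[a] == b and partner[d] == e:      # two horizontal dominoes
            partner[a], partner[b] = d, e            # flip them to vertical
            partner[d], partner[e] = a, b
        elif partner[a] == d and partner[b] == e:    # two vertical dominoes
            partner[a], partner[d] = b, e            # flip them to horizontal
            partner[b], partner[e] = a, d
    return partner
```

    The paper's bounds suggest that on the order of L^2 (up to L^{o(1)} factors) units of continuous time suffice for this chain to approach the uniform distribution.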

    Random sampling of lattice configurations using local Markov chains

    Algorithms based on Markov chains are ubiquitous across scientific disciplines, as they provide a method for extracting statistical information about large, complicated systems. Although these algorithms may be applied to arbitrary graphs, many physical applications are more naturally studied under the restriction to regular lattices. We study several local Markov chains on lattices, exploring how small changes to some parameters can greatly influence the efficiency of the algorithms. We begin by examining a natural Markov chain that arises in the context of "monotonic surfaces", where some point on a surface is slightly raised or lowered at each step, but with a greater rate of raising than lowering. We show that this chain is rapidly mixing (converges quickly to equilibrium) using a coupling argument; the novelty of our proof is that it requires defining an exponentially increasing distance function on pairs of surfaces, allowing us to derive near-optimal results in many settings. Next, we present new methods for lower bounding the time local chains may take to converge to equilibrium. For many models that we study, there seems to be a phase transition as a parameter is changed, so that the chain is rapidly mixing above a critical point and slow mixing below it. Unfortunately, it is not always possible to make this intuition rigorous. We present the first proofs of slow mixing for three sampling problems motivated by statistical physics and nanotechnology: independent sets on the triangular lattice (the hard-core lattice gas model), weighted even orientations of the two-dimensional Cartesian lattice (the 8-vertex model), and non-saturated Ising (tile-based self-assembly). Previous proofs of slow mixing for other models have been based on contour arguments that allow us to prove that a bottleneck in the state space constricts the mixing. The standard contour arguments do not seem to apply to these problems, so we modify this approach by introducing the notion of "fat contours" that can have nontrivial area. We use these to prove that the local chains defined for these models are slow mixing. Finally, we study another important issue that arises in the context of phase transitions in physical systems, namely how the boundary of a lattice can affect the efficiency of the Markov chain. We examine a local chain on the perfect and near-perfect matchings of the square-octagon lattice, and show that for one boundary condition the chain will mix in polynomial time, while for another it will mix exponentially slowly. Strikingly, the two boundary conditions differ at only four vertices. These are the first rigorous proofs of such a phenomenon on lattice graphs.
    Ph.D. Committee Chair: Randall, Dana; Committee Member: Heitsch, Christine; Committee Member: Mihail, Milena; Committee Member: Trotter, Tom; Committee Member: Vigoda, Eric
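    For intuition about the biased surface chain mentioned above, here is a minimal sketch (not from the thesis): a single-site chain on an integer height function in which the Lipschitz constraint and the height bounds stand in for the monotone-surface state space, and p_up > 1/2 encodes the greater rate of raising than lowering.

```python
import random

def biased_surface_chain(n, steps, p_up=0.6, seed=0):
    """Biased single-site dynamics on an integer height function.

    At each step a random interior site proposes to move up with
    probability p_up or down with 1 - p_up; the move is accepted only
    if it keeps |h[i] - h[i +/- 1]| <= 1 (a simple stand-in for the
    monotone-surface constraint) and 0 <= h[i] <= n.
    """
    rng = random.Random(seed)
    h = [0] * n                                # boundary heights pinned at 0
    for _ in range(steps):
        i = rng.randrange(1, n - 1)            # pick an interior site
        d = 1 if rng.random() < p_up else -1   # biased proposal
        new = h[i] + d
        if 0 <= new <= n and abs(new - h[i - 1]) <= 1 and abs(new - h[i + 1]) <= 1:
            h[i] = new
    return h
```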

    Exact sampling with Markov chains

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Mathematics, 1996. Includes bibliographical references (p. 79-83). By David Bruce Wilson.

    Classical simulation of measurement-based quantum computation on higher-genus surface-code states

    We consider the efficiency of classically simulating measurement-based quantum computation on surface-code states. We devise a method for calculating the elements of the probability distribution for the classical output of the quantum computation. The operational cost of this method is polynomial in the size of the surface-code state, but in the worst case scales as $2^{2g}$ in the genus $g$ of the surface embedding the code. However, there are states in the code space for which the simulation becomes efficient. In general, the simulation cost is exponential in the entanglement contained in a certain effective state, capturing the encoded state, the encoding and the local post-measurement states. The same efficiencies hold, with additional assumptions on the temporal order of measurements and on the tessellations of the code surfaces, for the harder task of sampling from the distribution of the computational output.
    Comment: 21 pages, 13 figures

    Statistical mechanics of dimers on quasiperiodic tilings

    We study classical dimers on two-dimensional quasiperiodic Ammann-Beenker (AB) tilings. Despite the lack of periodicity, we prove that each infinite tiling admits 'perfect matchings' in which every vertex is touched by one dimer. We introduce an auxiliary 'AB^*' tiling obtained from the AB tiling by deleting all 8-fold coordinated vertices. The AB^* tiling is again two-dimensional, infinite, and quasiperiodic. The AB^* tiling has a single connected component, which admits perfect matchings. We find that in all perfect matchings, dimers on the AB^* tiling lie along disjoint one-dimensional loops and ladders, separated by 'membranes', sets of edges where dimers are absent. As a result, the dimer partition function of the AB^* tiling factorizes into the product of dimer partition functions along these structures. We compute the partition function and free energy per edge on the AB^* tiling using an analytic transfer matrix approach. Returning to the AB tiling, we find that membranes in the AB^* tiling become 'pseudomembranes', sets of edges which collectively host at most one dimer. This leads to a remarkable discrete scale invariance in the matching problem. The structure suggests that the AB tiling should exhibit highly inhomogeneous and slowly decaying connected dimer correlations. Using Monte Carlo simulations, we find evidence supporting this supposition in the form of connected dimer correlations consistent with power-law behaviour. Within the set of perfect matchings, we find quasiperiodic analogues to the staggered and columnar phases observed in periodic systems.
    Comment: 33 pages, 26 figures
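    The quasiperiodic loops and ladders call for their own transfer matrices, but the mechanics of the method can be shown on the simplest possible case: counting dimer matchings of a one-dimensional chain with fugacity z per dimer. This is a toy sketch, not the paper's computation.

```python
import numpy as np

def chain_dimer_partition_function(n_sites, z=1.0):
    """Dimer partition function of a 1D chain via a transfer matrix.

    Z_k counts matchings (not necessarily perfect) of a path with k
    vertices, weighted by z per dimer.  The recursion
    Z_k = Z_{k-1} + z * Z_{k-2} is encoded in a 2x2 transfer matrix.
    """
    T = np.array([[1.0, z],
                  [1.0, 0.0]])
    v = np.array([1.0, 1.0])        # (Z_1, Z_0)
    for _ in range(n_sites - 1):
        v = T @ v                   # (Z_k, Z_{k-1}) -> (Z_{k+1}, Z_k)
    return v[0]

print(chain_dimer_partition_function(4))  # 1 + 3z + z^2 = 5 at z = 1
```

    In the thermodynamic limit the free energy per site of such a chain is the logarithm of the leading eigenvalue of T; the paper's analytic computation on the AB^* tiling proceeds in the same spirit along its one-dimensional structures.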

    Fault-tolerance in two-dimensional topological systems

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs. Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them. Since the advent of a threshold theorem for quantum computers much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.
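    The decoding step can be pictured with a toy stand-in for that integer program: a brute-force search for the lowest-weight error consistent with a measured syndrome. The parity-check matrix and example below are hypothetical illustration code, not the decoder used in the thesis.

```python
from itertools import combinations
import numpy as np

def most_likely_error(H, syndrome):
    """Brute-force minimum-weight decoding for a small binary code.

    H is a parity-check matrix over GF(2); we search, in order of
    increasing weight, for an error e with H @ e = syndrome (mod 2).
    The thesis solves the same syndrome-consistency problem at scale
    with an integer program over a 3D space-time syndrome history.
    """
    n = H.shape[1]
    for weight in range(n + 1):
        for support in combinations(range(n), weight):
            e = np.zeros(n, dtype=int)
            e[list(support)] = 1
            if np.array_equal(H @ e % 2, syndrome % 2):
                return e
    return None

# toy example: 3-bit repetition code, single bit-flip on qubit 1
H = np.array([[1, 1, 0],
              [0, 1, 1]])
print(most_likely_error(H, np.array([1, 1])))   # -> [0 1 0]
```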

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    A clinical system for the measurement of regional metabolic rates in the brain.

    The study of the chemical events that regulate the function of the human brain is particularly difficult. The introduction by Hounsfield, in 1973, of a tomographic technique based on the attenuation of X-rays by tissues has proved invaluable in the study of the morphology of the brain. An extension of this technique, employing the concepts of computerised tomography in combination with the use of specific molecules labelled with positron emitters, is now making the direct regional measurement of metabolic rates during life possible. Although some positron tomography systems are available commercially, they do not necessarily fulfil the specific needs of all researchers. Faced with the problem of quantitating the regional distribution of the essential neurotransmitter, dopamine, in the human brain, a positron tomography system, which forms the basis of this work, was designed and built based on a series of experiments aimed at optimizing spatial resolution and detection efficiency. The performance of the tomograph has been evaluated through a series of phantom studies, and the system has been used to measure the local cerebral metabolic rate of glucose and the local distribution of dopamine in the healthy and diseased brain. It is felt that the ability of this tomograph to resolve metabolic structures in the brain as small as 10^3 mm^3 will only be surpassed at the cost of unduly increasing the radiation dose to the subject. The results of positron tomographic studies performed using different positron-labelled molecules and those obtained using X-ray computerised tomographic techniques and magnetic resonance techniques in the same subject have been compared. The results have been found to be complementary, each technique providing a clue to the proper understanding of the functioning of the brain.

    Errata and Addenda to Mathematical Constants

    We humbly and briefly offer corrections and supplements to Mathematical Constants (2003) and Mathematical Constants II (2019), both published by Cambridge University Press. Comments are always welcome.
    Comment: 162 pages

    Inferring Geodesic Cerebrovascular Graphs: Image Processing, Topological Alignment and Biomarkers Extraction

    A vectorial representation of the vascular network that embodies quantitative features - location, direction, scale, and bifurcations - has many potential neuro-vascular applications. Patient-specific models support computer-assisted surgical procedures in neurovascular interventions, while analyses on multiple subjects are essential for group-level studies on which clinical prediction and therapeutic inference ultimately depend. This has motivated the development of a variety of methods to segment the cerebrovascular system. Nonetheless, a number of limitations - data-driven inhomogeneities, anatomical intra- and inter-subject variability, the lack of exhaustive ground-truth, the need for operator-dependent processing pipelines, and the highly non-linear vascular domain - still make the automatic inference of the cerebrovascular topology an open problem. In this thesis, brain vessels’ topology is inferred by focusing on their connectedness. With a novel framework, the brain vasculature is recovered from 3D angiographies by solving a connectivity-optimised anisotropic level-set over a voxel-wise tensor field representing the orientation of the underlying vasculature. Assuming that vessels join along minimal paths, a connectivity paradigm is formulated to automatically determine the vascular topology as an over-connected geodesic graph. Ultimately, deep-brain vascular structures are extracted with geodesic minimum spanning trees. The inferred topologies are then aligned with similar ones for labelling and propagating information over a non-linear vectorial domain, where the branching pattern of a set of vessels transcends a subject-specific quantized grid. Using a multi-source embedding of a vascular graph, the pairwise registration of topologies is performed with state-of-the-art graph matching techniques employed in computer vision. Functional biomarkers are determined over the neurovascular graphs with two complementary approaches. Efficient approximations of blood flow and pressure drop account for autoregulation and compensation mechanisms in the whole network in the presence of perturbations, using lumped-parameter analog-equivalents from clinical angiographies. Also, a localised NURBS-based parametrisation of bifurcations is introduced to model fluid-solid interactions by means of hemodynamic simulations using an isogeometric analysis framework, where both geometry and solution profile at the interface share the same homogeneous domain. Experimental results on synthetic and clinical angiographies validated the proposed formulations. Perspectives and future works are discussed for the group-wise alignment of cerebrovascular topologies over a population, towards defining cerebrovascular atlases, and for further topological optimisation strategies and risk prediction models for therapeutic inference. Most of the algorithms presented in this work are available as part of the open-source package VTrails.
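    As a toy illustration of the spanning-tree step, the sketch below reduces an over-connected weighted vascular graph to a minimum spanning tree. It assumes networkx is available; the vessel names and geodesic lengths are hypothetical, and the actual framework (VTrails) derives such graphs from its level-set and tensor-field stages rather than from hand-written edge lists.

```python
import networkx as nx

def geodesic_spanning_tree(edges):
    """Toy stand-in for the geodesic-graph extraction step.

    Given weighted edges (u, v, geodesic_length) of an over-connected
    vascular graph, return the spanning tree that keeps every node
    connected at minimal total geodesic length.
    """
    G = nx.Graph()
    G.add_weighted_edges_from(edges)
    return nx.minimum_spanning_tree(G, weight="weight")

# hypothetical toy graph: nodes are bifurcation points, weights are
# geodesic path lengths along the vessel centrelines
tree = geodesic_spanning_tree([
    ("ICA", "MCA", 12.0), ("ICA", "ACA", 9.5),
    ("MCA", "ACA", 20.0), ("MCA", "M2", 7.2),
])
print(sorted(tree.edges(data="weight")))
```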