
    Embedding Schemes for Interconnection Networks

    Graph embeddings play an important role in interconnection network and VLSI design. Designing efficient embedding strategies for simulating one network by another and determining the number of layers required to build a VLSI chip are just two of the many areas in which graph embeddings are used. In the area of network simulation, we develop efficient, small-dilation embeddings of a butterfly network into a butterfly network of a different size and/or type. The genus of a graph gives an indication of how many layers are required to build a circuit. We have determined the exact genus for the permutation network called the star graph, and have given a lower bound for the genus of the permutation network called the pancake graph. The star graph has been proposed as an alternative to the binary hypercube and, therefore, we compare the genus of the star graph with that of the binary hypercube. Another type of embedding that is helpful in determining the number of layers is a book embedding. We develop upper and lower bounds on the pagenumber of a book embedding of the k-ary hypercube, along with an upper bound on the cumulative pagewidth.
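
    For orientation on the genus bounds mentioned here, the standard starting point is Euler's formula; the following well-known computation (illustrative background only, not the dissertation's star-graph argument) works the lower bound out for the binary hypercube:

    ```latex
    % Euler-formula lower bound on genus (standard background fact).
    % For a connected graph G = (V, E) 2-cell embedded in an orientable
    % surface of genus g with F faces, Euler's formula gives
    \[
      |V| - |E| + F \;=\; 2 - 2g .
    \]
    % If G is bipartite, every face is bounded by at least 4 edges and
    % every edge borders at most 2 faces, so 4F <= 2|E|, and therefore
    \[
      g \;\ge\; 1 - \frac{|V|}{2} + \frac{|E|}{4} .
    \]
    % For the binary hypercube Q_n, |V| = 2^n and |E| = n 2^{n-1}, giving
    % g(Q_n) >= 1 + (n - 4) 2^{n-3}, which in fact equals the known exact
    % genus of the hypercube (Ringel; Beineke and Harary).
    ```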

    The Hierarchy of Hereditary Sorting Operators

    We consider the following general model of a sorting procedure: we fix a hereditary permutation class $\mathcal{C}$, which corresponds to the operations that the procedure is allowed to perform in a single step. The input of sorting is a permutation $\pi$ of the set $[n]=\{1,2,\dotsc,n\}$, i.e., a sequence where each element of $[n]$ appears once. In every step, the sorting procedure picks a permutation $\sigma$ of length $n$ from $\mathcal{C}$, and rearranges the current permutation of numbers by composing it with $\sigma$. The goal is to transform the input $\pi$ into the sorted sequence $1,2,\dotsc,n$ in as few steps as possible. This model of sorting captures not only classical sorting algorithms, like insertion sort or bubble sort, but also sorting by series of devices, like stacks or parallel queues, as well as sorting by block operations commonly considered, e.g., in the context of genome rearrangement. Our goal is to describe the possible asymptotic behavior of the worst-case number of steps needed when sorting with a hereditary permutation class. As the main result, we show that any hereditary permutation class $\mathcal{C}$ falls into one of five distinct categories. Disregarding the classes that cannot sort all permutations, the number of steps needed to sort any permutation of $[n]$ with $\mathcal{C}$ is either $\Theta(n^2)$, a function between $O(n)$ and $\Omega(\sqrt{n})$, a function between $O(\log^2 n)$ and $\Omega(\log n)$, or $1$, and for each of these cases we provide a structural characterization of the corresponding hereditary classes.
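
    A minimal executable sketch of this sorting model (the function names are mine, not the paper's): one step composes the current sequence with a permutation sigma from the allowed class, and a strategy restricted to adjacent transpositions recovers a slow form of bubble sort with its Theta(n^2) worst case.

    ```python
    # Sketch of the abstract sorting model: one step rearranges the current
    # sequence pi by composing it with a permutation sigma from the allowed
    # class C; the element at position sigma[i] moves to position i.

    def apply_step(pi: list[int], sigma: list[int]) -> list[int]:
        """One sorting step: compose pi with sigma (0-indexed)."""
        return [pi[sigma[i]] for i in range(len(pi))]

    def sort_with_class(pi: list[int], choose_sigma) -> int:
        """Apply steps until sorted; return the number of steps taken.

        `choose_sigma` is a strategy mapping the current sequence to a
        permutation from the allowed class C (assumption: the strategy
        always makes progress, so the loop terminates).
        """
        steps = 0
        target = sorted(pi)
        while pi != target:
            pi = apply_step(pi, choose_sigma(pi))
            steps += 1
        return steps

    # Example strategy: sigma fixes one adjacent inversion per step, so C
    # consists of adjacent transpositions; the number of steps equals the
    # number of inversions, i.e. Theta(n^2) in the worst case.
    def one_adjacent_swap(pi: list[int]) -> list[int]:
        sigma = list(range(len(pi)))
        for i in range(len(pi) - 1):
            if pi[i] > pi[i + 1]:
                sigma[i], sigma[i + 1] = sigma[i + 1], sigma[i]
                break
        return sigma

    print(sort_with_class([3, 1, 2], one_adjacent_swap))  # prints 2
    ```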

    Plasma-balls in large N gauge theories and localized black holes

    We argue for the existence of plasma-balls - meta-stable, nearly homogeneous lumps of gluon plasma at just above the deconfinement energy density - in a class of large N confining gauge theories that undergo first order deconfinement transitions. Plasma-balls decay over a time scale of order N^2 by thermally radiating hadrons at the deconfinement temperature. In gauge theories that have a dual description that is well approximated by a theory of gravity in a warped geometry, we propose that plasma-balls map to a family of classically stable finite energy black holes localized in the IR. We present a conjecture for the qualitative nature of large mass black holes in such backgrounds, and numerically construct these black holes in a particular class of warped geometries. These black holes have novel properties; in particular their temperature approaches a nonzero constant value at large mass. Black holes dual to plasma-balls shrink as they decay by Hawking radiation; towards the end of this process they resemble ten dimensional Schwarzschild black holes, which we propose are dual to small plasma-balls. Our work may find practical applications in the study of the physics of localized black holes from a dual viewpoint.
    Comment: harvmac, 33 pages + 7 appendices + 14 figures; program code downloadable from http://schwinger.harvard.edu/~wiseman/IRblackholes; v2: minor changes; v3: refs added, minor change
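
    As background for the temperature behavior described above (standard black-hole thermodynamics, not a result of the paper), the Schwarzschild-Tangherlini relations in D spacetime dimensions show why small black holes heat up as they shrink, in contrast with the large localized IR black holes, whose temperature saturates:

    ```latex
    % Schwarzschild--Tangherlini black hole in D spacetime dimensions
    % (standard background relations):
    \[
      T_H \;=\; \frac{D-3}{4\pi\, r_h},
      \qquad
      M \;\propto\; r_h^{\,D-3},
    \]
    % so for D = 10 the temperature T_H = 7/(4 pi r_h) grows without bound
    % as the horizon radius r_h shrinks -- consistent with small plasma-balls
    % mapping to hot ten-dimensional Schwarzschild black holes, while the
    % large localized IR black holes instead approach a constant temperature
    % set by the deconfinement scale.
    ```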

    Part decomposition of 3D surfaces

    This dissertation describes a general algorithm that automatically decomposes real-world scenes and objects into visual parts. The input to the algorithm is a 3D triangle mesh that approximates the surfaces of a scene or object. This geometric mesh completely specifies the shape of interest. The output of the algorithm is a set of boundary contours that dissect the mesh into parts where these parts agree with human perception. In this algorithm, shape alone defines the location of a boundary contour for a part. The algorithm leverages a human vision theory known as the minima rule that states that human visual perception tends to decompose shapes into parts along lines of negative curvature minima. Specifically, the minima rule governs the location of part boundaries, and as a result the algorithm is known as the Minima Rule Algorithm. Previous computer vision methods have attempted to implement this rule but have used pseudo measures of surface curvature. Thus, these prior methods are not true implementations of the rule. The Minima Rule Algorithm is a three-step process that consists of curvature estimation, mesh segmentation, and quality evaluation. These steps have led to three novel algorithms known as Normal Vector Voting, Fast Marching Watersheds, and Part Saliency Metric, respectively. For each algorithm, this dissertation presents both the supporting theory and experimental results. The results demonstrate the effectiveness of the algorithm using both synthetic and real data and include comparisons with previous methods from the research literature. Finally, the dissertation concludes with a summary of the contributions to the state of the art.
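
    To illustrate the minima rule itself, here is a toy sketch under my own naming, far simpler than the dissertation's Normal Vector Voting and Fast Marching Watersheds algorithms: flag mesh vertices whose estimated minimum principal curvature is a sufficiently negative local minimum.

    ```python
    # Toy illustration of the minima rule: candidate part-boundary points
    # are vertices lying on local minima of negative principal curvature.
    import numpy as np

    def boundary_candidates(kappa_min: np.ndarray,
                            neighbors: list[list[int]],
                            threshold: float = -0.1) -> list[int]:
        """Flag vertices at negative minima of principal curvature.

        kappa_min[i] -- estimated minimum principal curvature at vertex i
                        (assumed precomputed, e.g. by tensor voting or
                        local quadric fitting)
        neighbors[i] -- indices of vertices adjacent to vertex i in the mesh
        threshold    -- how negative the curvature must be (tunable)
        """
        flagged = []
        for i, k in enumerate(kappa_min):
            if k < threshold and all(k <= kappa_min[j] for j in neighbors[i]):
                flagged.append(i)  # local minimum of negative curvature
        return flagged
    ```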

    Fault-tolerance in two-dimensional topological systems

    This thesis is a collection of ideas with the general goal of building, at least in the abstract, a local fault-tolerant quantum computer. The connection between quantum information and topology has proven to be an active area of research in several fields. The introduction of the toric code by Alexei Kitaev demonstrated the usefulness of topology for quantum memory and quantum computation. Many quantum codes used for quantum memory are modeled by spin systems on a lattice, with operators that extract syndrome information placed on vertices or faces of the lattice. It is natural to wonder whether the useful codes in such systems can be classified. This thesis presents work that leverages ideas from topology and graph theory to explore the space of such codes. Homological stabilizer codes are introduced and it is shown that, under a set of reasonable assumptions, any qubit homological stabilizer code is equivalent to either a toric code or a color code. Additionally, the toric code and the color code correspond to distinct classes of graphs.

    Many systems have been proposed as candidate quantum computers. It is very desirable to design quantum computing architectures with two-dimensional layouts and low complexity in parity-checking circuitry. Kitaev's surface codes provided the first example of codes satisfying this property. They provided a new route to fault tolerance with more modest overheads and thresholds approaching 1%. The recently discovered color codes share many properties with the surface codes, such as the ability to perform syndrome extraction locally in two dimensions. Some families of color codes admit a transversal implementation of the entire Clifford group. This work investigates color codes on the 4.8.8 lattice known as triangular codes. I develop a fault-tolerant error-correction strategy for these codes in which repeated syndrome measurements on this lattice generate a three-dimensional space-time combinatorial structure. I then develop an integer program that analyzes this structure and determines the most likely set of errors consistent with the observed syndrome values. I implement this integer program to find the threshold for depolarizing noise on small versions of these triangular codes. Because the threshold for magic-state distillation is likely to be higher than this value and because logical CNOT gates can be performed by code deformation in a single block instead of between pairs of blocks, the threshold for fault-tolerant quantum memory for these codes is also the threshold for fault-tolerant quantum computation with them.

    Since the advent of a threshold theorem for quantum computers, much has been improved upon. Thresholds have increased, architectures have become more local, and gate sets have been simplified. The overhead for magic-state distillation has been studied, but not nearly to the extent of the aforementioned topics. A method for greatly reducing this overhead, known as reusable magic states, is studied here. While examples of reusable magic states exist for Clifford gates, I give strong reasons to believe they do not exist for non-Clifford gates.
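
    For concreteness, the toric-code stabilizers referenced above can be written down directly. The following is the standard construction on a square lattice (the thesis's homological classification and its 4.8.8 color-code decoder are not reproduced here):

    ```python
    # Standard toric-code bookkeeping: qubits live on the edges of an
    # L x L square lattice with periodic boundaries. Edge (x, y, d) has
    # d = 0 for the horizontal edge from vertex (x, y) to (x+1, y) and
    # d = 1 for the vertical edge from (x, y) to (x, y+1).

    def edge_index(x: int, y: int, d: int, L: int) -> int:
        return 2 * ((x % L) * L + (y % L)) + d

    def star(x: int, y: int, L: int) -> list[int]:
        """Edges touching vertex (x, y): support of the X-type stabilizer."""
        return [edge_index(x, y, 0, L),      # edge going east
                edge_index(x - 1, y, 0, L),  # edge coming from the west
                edge_index(x, y, 1, L),      # edge going north
                edge_index(x, y - 1, 1, L)]  # edge coming from the south

    def plaquette(x: int, y: int, L: int) -> list[int]:
        """Edges bounding the face with corner (x, y): Z-type stabilizer."""
        return [edge_index(x, y, 0, L),      # bottom edge
                edge_index(x, y + 1, 0, L),  # top edge
                edge_index(x, y, 1, L),      # left edge
                edge_index(x + 1, y, 1, L)]  # right edge
    ```

    On this encoding, a bit-flip error on a single edge violates exactly the two plaquette (Z-type) checks containing that edge; repeated rounds of such syndrome measurements are the raw data that a space-time decoder, like the integer program developed in the thesis for the 4.8.8 lattice, consumes.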

    Approximation Methods for Non-linear Gravitational Clustering

    We discuss various analytical approximation methods for following the evolution of cosmological density perturbations into the strong (i.e. nonlinear) clustering regime. These methods can be classified into five types: (i) simple extrapolations from linear theory, such as the high-peak model and the lognormal model; (ii) dynamical approximations, including the Zel'dovich approximation and its extensions; (iii) non-linear models based on purely geometric considerations, of which the main example is the Voronoi model; (iv) statistical solutions involving scaling arguments, such as the hierarchical closure ansatz for BBGKY, fractal models and the thermodynamic model of Saslaw; (v) numerical techniques based on particles and/or hydrodynamics. We compare the results of full dynamical evolution using particle codes and the various other approximation schemes. To put the models we discuss into perspective, we give a brief review of the observed properties of galaxy clustering and the statistical methods used to quantify it, such as correlation functions, power spectra, topology and spanning trees.
    Comment: 175 pages, 20 figures. To appear in Phys. Rep. 1995. Hard copies of figures/Manuscript available upon request from: [email protected]
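
    For reference, the Zel'dovich approximation of category (ii) can be stated in two lines (standard form, included only for orientation):

    ```latex
    % Zel'dovich approximation: particles move ballistically along their
    % initial displacement field,
    \[
      \mathbf{x}(\mathbf{q},t) \;=\; \mathbf{q} - D(t)\,\nabla_{\mathbf{q}}\Phi(\mathbf{q}),
    \]
    % where q is the Lagrangian coordinate, D(t) the linear growth factor,
    % and Phi the displacement potential. Mass conservation then gives
    \[
      \rho(\mathbf{x},t) \;=\;
      \frac{\bar\rho}{\prod_{i=1}^{3}\bigl[1 - D(t)\,\lambda_i(\mathbf{q})\bigr]},
    \]
    % with lambda_i the eigenvalues of the deformation tensor
    % \partial^2\Phi/\partial q_i \partial q_j; collapse into "pancakes"
    % occurs where D(t) lambda_1 -> 1.
    ```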

    Geometric, Algebraic, and Topological Combinatorics

    The 2019 Oberwolfach meeting "Geometric, Algebraic and Topological Combinatorics" was organized by Gil Kalai (Jerusalem), Isabella Novik (Seattle), Francisco Santos (Santander), and Volkmar Welker (Marburg). It covered a wide variety of aspects of Discrete Geometry, Algebraic Combinatorics with geometric flavor, and Topological Combinatorics. Some of the highlights of the conference included (1) Karim Adiprasito presented his very recent proof of the $g$-conjecture for spheres (as a talk and as a "Q&A" evening session) and (2) Federico Ardila gave an overview on "The geometry of matroids", including his recent extension with Denham and Huh of previous work of Adiprasito, Huh and Katz.