
    Holographic View on Quantum Correlations and Mutual Information between Disjoint Blocks of a Quantum Critical System

    In (d+1)-dimensional Multiscale Entanglement Renormalization Ansatz (MERA) networks, tensors are connected so as to reproduce the discrete, (d+2)-dimensional holographic geometry of anti-de Sitter space (AdS_{d+2}), with the original system lying at the boundary. We analyze the MERA renormalization flow that arises when computing the quantum correlations between two disjoint blocks of a quantum critical system, and show that the structure of the causal cones characteristic of MERA requires a transition between two different regimes, attainable by changing the ratio between the size and the separation of the two disjoint blocks. We argue that this transition in the MERA causal developments of the blocks may be easily accounted for by an AdS_{d+2} black hole geometry when the mutual information is computed using the Ryu-Takayanagi formula. As an explicit example, we use a BTZ AdS_3 black hole to compute the mutual information and the quantum correlations between two disjoint intervals of a one-dimensional boundary critical system. Our results for this low-dimensional system not only show the existence of a phase transition emerging when the conformal four-point ratio reaches a critical value, but also provide an intuitive entropic argument accounting for the source of this instability. We discuss the robustness of this transition when finite-temperature and finite-size effects are taken into account. Comment: 21 pages, 5 figures. Abstract and Figure 1 have been modified. Minor modifications in Section 1 and Section
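    For orientation, a minimal sketch of the standard holographic relations the abstract invokes (standard notation, not taken from the paper): the Ryu-Takayanagi entropy formula and the two-interval mutual information, whose connected-versus-disconnected minimal-surface competition is what produces the transition at a critical cross-ratio.

```latex
% Ryu-Takayanagi formula: entanglement entropy of a boundary region A from the
% minimal bulk surface gamma_A homologous to A.
S(A) = \frac{\operatorname{Area}(\gamma_A)}{4 G_N}

% Mutual information of two disjoint blocks A and B.
I(A:B) = S(A) + S(B) - S(A \cup B)

% At leading order, S(A \cup B) is given by the smaller of two competing surface
% configurations (connected vs. disconnected), so I(A:B) drops to zero once the
% disconnected configuration wins; the switch is controlled by the conformal
% four-point (cross) ratio of the interval endpoints.
S(A \cup B) = \min\bigl( S_{\mathrm{conn}},\; S(A) + S(B) \bigr)
```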

    Majorana dimers and holographic quantum error-correcting codes

    Holographic quantum error-correcting codes have been proposed as toy models that describe key aspects of the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. In this work, we introduce a versatile framework of Majorana dimers capturing the intersection of stabilizer and Gaussian Majorana states. This picture allows for an efficient contraction with a simple diagrammatic interpretation and is amenable to analytical study of holographic quantum error-correcting codes. Equipped with this framework, we revisit the recently proposed hyperbolic pentagon code (HyPeC). Relating its logical code basis to Majorana dimers, we efficiently compute boundary-state properties even for the non-Gaussian case of generic logical input. The dimers characterizing these boundary states coincide with discrete bulk geodesics, leading to a geometric picture from which properties of entanglement, quantum error correction, and bulk/boundary operator mapping immediately follow. We also elaborate upon the emergence of the Ryu-Takayanagi formula from our model, which realizes many of the properties of the recent bit thread proposal. Our work thus elucidates the connection among bulk geometry, entanglement, and quantum error correction in AdS/CFT and lays the foundation for new models of holography.
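    As background for the dimer picture (standard fermionic conventions, not specific to the paper): Majorana operators arise from splitting each fermionic mode in two, and a dimer pairs two of them into a parity constraint stabilizing the state.

```latex
% Majorana operators built from fermionic modes c_j (standard convention).
\gamma_{2j-1} = c_j + c_j^{\dagger}, \qquad
\gamma_{2j}   = i \left( c_j - c_j^{\dagger} \right), \qquad
\{ \gamma_a, \gamma_b \} = 2\, \delta_{ab}

% A Majorana dimer pairs two such operators: the state satisfies the parity
% constraint below, so each dimer fixes one fermionic parity degree of freedom.
i\, \gamma_a \gamma_b \, \lvert \psi \rangle = \pm \lvert \psi \rangle
```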

    Efficient Methods for Continuous and Discrete Shape Analysis

    When interpreting an image of a given object, humans are able to abstract from the presented color information in order to really see the presented object. This abstraction is also known as shape. The concept of shape is not defined exactly in Computer Vision, and in this work we use three different definitions of it in order to acquire and analyze shapes. This work is devoted to improving the efficiency of methods that solve important applications of shape analysis. The most important problem in shape analysis is shape acquisition. To simplify this very challenging problem, numerous researchers have incorporated prior knowledge into the acquisition of shapes. We present the first approach to acquire shapes given certain shape knowledge that always computes the global minimum of the involved functional, which combines a Mumford-Shah-like functional with a certain class of shape priors, including statistical and dynamical shape priors. To analyze shapes, it is important not only to acquire shapes but also to classify them. In this work, we follow the concept of defining a distance function that measures the dissimilarity of two given shapes. There are two different ways of obtaining such a distance function that we address in this work. Firstly, we model the set of all shapes as a metric space induced by the shortest path on an orbifold. The shortest path provides us with a shape morphing, i.e., a continuous transformation from one shape into another. Secondly, we address the problem of shape matching, which finds corresponding points on two shapes with respect to a preselected feature. Our main contribution to the problem of shape morphing lies in the immense acceleration of the morphing computation: instead of solving partial or ordinary differential equations, we solve the problem via a gradient descent approach that successively shortens the length of a path on the given manifold (see the sketch below). In our run-time tests, we observed an acceleration of up to a factor of 1000. Shape matching is a classical discrete problem. If each shape is discretized by N shape points, most Computer Vision methods need a cubic run-time. We provide two approaches that reduce this worst-case complexity to O(N² log N). One approach exploits the planarity of the involved graph to efficiently compute N shortest paths in a graph with O(N²) vertices. The other approach computes a minimal cut in a planar graph in O(N log N). To make this approach applicable to shape matching, we improved the run-time of a recently developed graph cut approach by an empirical factor of 2–4.
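    To make the path-shortening idea concrete, here is a hypothetical minimal sketch in plain Euclidean space: a discretized path between two fixed endpoints is shortened by gradient descent on its length. The function names, the Euclidean setting, and all parameter values are illustrative assumptions; the thesis applies the idea on a shape manifold/orbifold rather than R^d.

```python
import numpy as np

def path_length(p):
    """Total length of a polyline given as an (N, d) array of points."""
    return np.linalg.norm(np.diff(p, axis=0), axis=1).sum()

def shorten_path(points, steps=2000, lr=0.002, eps=1e-12):
    """Shorten a discretized path by gradient descent on its length.

    The endpoints are kept fixed; each interior point is moved against the
    gradient of the total length, i.e. pulled toward its two neighbours.
    Hypothetical Euclidean sketch only (the thesis works on a shape manifold).
    """
    p = np.asarray(points, dtype=float).copy()
    for _ in range(steps):
        d_prev = p[1:-1] - p[:-2]   # p_i - p_{i-1}
        d_next = p[2:] - p[1:-1]    # p_{i+1} - p_i
        grad = d_prev / (np.linalg.norm(d_prev, axis=1, keepdims=True) + eps) \
             - d_next / (np.linalg.norm(d_next, axis=1, keepdims=True) + eps)
        p[1:-1] -= lr * grad
    return p

# Usage: straightening a noisy sine-shaped polyline between fixed endpoints.
t = np.linspace(0.0, 1.0, 50)[:, None]
wiggly = np.hstack([t, 0.3 * np.sin(6 * np.pi * t)])
print(path_length(wiggly), path_length(shorten_path(wiggly)))  # length decreases
```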

    Layered Fields for Natural Tessellations on Surfaces

    Mimicking natural tessellation patterns is a fascinating multi-disciplinary problem. Geometric methods aiming at reproducing such partitions on surface meshes are commonly based on the Voronoi model and its variants, and are often faced with challenging issues such as metric estimation, geometric and topological complications, and, most critically, parallelization. In this paper, we introduce an alternative model which may be of value for resolving these issues. We drop the assumption that regions need to be separated by lines. Instead, we regard region boundaries as narrow bands and model the partition as a set of smooth functions layered over the surface. Given an initial set of seeds or regions, the partition emerges as the solution of a time-dependent set of partial differential equations describing concurrently evolving fronts on the surface. Our solution does not require geodesic estimation, elaborate numerical solvers, or complicated bookkeeping data structures; the cost per time-iteration is dominated by the multiplication and addition of two sparse matrices (see the sketch below). An extension of our approach in a Lloyd's algorithm fashion can be easily achieved, and the extraction of the dual mesh can be conveniently performed in parallel through matrix algebra. As our approach relies mainly on basic linear algebra kernels, it lends itself to efficient implementation on modern graphics hardware. Comment: Natural tessellations, surface fields, Voronoi diagrams, Lloyd's algorithm
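    As an illustration of that cost structure, here is a hypothetical minimal sketch using SciPy sparse matrices: each layer (one column per seed/region) is advanced by a diffusion-plus-growth step whose dominant cost is one sparse matrix product and an addition. The Laplacian, the growth term, and all parameter names are illustrative assumptions, not the paper's actual PDE.

```python
import numpy as np
import scipy.sparse as sp

def evolve_layers(L, U, dt=0.1, alpha=1.0, steps=200):
    """Advance all layers by an explicit diffusion + logistic-growth step.

    L : sparse (n x n) Laplacian of the surface mesh (assumed given).
    U : dense (n x k) array, one column ("layer") per seed/region.
    Each iteration costs roughly one sparse-matrix product plus an addition,
    mirroring the per-iteration cost described in the abstract; the actual
    front-evolution PDE in the paper may differ.
    """
    for _ in range(steps):
        U = U + dt * (L @ U) + dt * alpha * U * (1.0 - U)
        U = np.clip(U, 0.0, 1.0)  # keep each layer in [0, 1]
    return U

# Usage on a toy 1D "surface": a path of n vertices with two competing seeds.
n, k = 200, 2
off = np.ones(n - 1)
L = sp.diags([off, -2.0 * np.ones(n), off], [-1, 0, 1], format="csr")
U0 = np.zeros((n, k))
U0[10, 0] = 1.0
U0[150, 1] = 1.0
labels = np.argmax(evolve_layers(L, U0), axis=1)  # vertex -> dominant layer
```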

    A Pseudopolynomial Algorithm for Alexandrov's Theorem

    Alexandrov's Theorem states that every metric with the global topology and local geometry required of a convex polyhedron is in fact the intrinsic metric of a unique convex polyhedron. Recent work by Bobenko and Izmestiev describes a differential equation whose solution leads to the polyhedron corresponding to a given metric. We describe an algorithm based on this differential equation to compute the polyhedron to arbitrary precision given the metric, and prove a pseudopolynomial bound on its running time. Along the way, we develop pseudopolynomial algorithms for computing shortest paths and weighted Delaunay triangulations on a polyhedral surface, even when the surface edges are not shortest paths. Comment: 25 pages; new Delaunay triangulation algorithm, minor other changes; an abbreviated v2 was at WADS 200
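    For context, the standard form of the statement being referenced (a paraphrase of the classical theorem, not text from the paper):

```latex
% Alexandrov's theorem (classical statement, paraphrased): let (S^2, d) be a
% metric space homeomorphic to the sphere whose metric is polyhedral, i.e.
% flat away from finitely many cone points v_1, ..., v_n, and suppose the cone
% angle at every cone point satisfies
\theta(v_i) < 2\pi \qquad \text{for } i = 1, \dots, n.
% Then (S^2, d) is the intrinsic metric of the boundary of a convex polyhedron
% in R^3, and this polyhedron is unique up to rigid motion.
```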