52 research outputs found
Mapping Fusion and Synchronized Hyperedge Replacement into Logic Programming
In this paper we compare three different formalisms that can be used in the
area of models for distributed, concurrent and mobile systems. In particular we
analyze the relationships between a process calculus, the Fusion Calculus,
graph transformations in the Synchronized Hyperedge Replacement with Hoare
synchronization (HSHR) approach and logic programming. We present a translation
from the Fusion Calculus into HSHR (note that the Fusion Calculus relies on
Milner synchronization, whereas HSHR uses Hoare synchronization) and prove a
correspondence between the reduction semantics of
Fusion Calculus and HSHR transitions. We also present a mapping from HSHR into
a transactional version of logic programming and prove that there is a full
correspondence between the two formalisms. The resulting mapping from Fusion
Calculus to logic programming is interesting since it shows the tight analogies
between the two formalisms, in particular for handling name generation and
mobility. The intermediate step in terms of HSHR is convenient since graph
transformations allow for multiple, remote synchronizations, as required by
Fusion Calculus semantics.
Comment: 44 pages, 8 figures, to appear in a special issue of Theory and Practice of Logic Programming, minor revision
Turán and Ramsey problems for alternating multilinear maps
Guided by the connections between hypergraphs and exterior algebras, we study
Turán and Ramsey type problems for alternating multilinear maps. This study
lies at the intersection of combinatorics, group theory, and algebraic
geometry, and has origins in the works of Lovász (Proc. Sixth British
Combinatorial Conf., 1977), Buhler, Gupta, and Harris (J. Algebra, 1987), and
Feldman and Propp (Adv. Math., 1992).
Our main result is a Ramsey theorem for alternating bilinear maps: given target
dimensions s and t, every alternating bilinear map on a space of sufficiently
large dimension admits either a dimension-s subspace on which the map vanishes
identically, or a dimension-t subspace on which it is non-degenerate. This
result has natural group-theoretic (for finite p-groups) and geometric (for
Grassmannians)
implications, and leads to new Ramsey-type questions for varieties of groups
and Grassmannians.
Comment: 20 pages. v3: rewritten introduction
On the Communication Complexity of High-Dimensional Permutations
We study the multiparty communication complexity of high-dimensional permutations in the Number On the Forehead (NOF) model. This model is due to Chandra, Furst and Lipton (CFL), who also gave a nontrivial protocol for the Exactly-n problem, where three players receive integer inputs and need to decide if their inputs sum to a given integer n. There is a considerable body of literature dealing with the same problem where (N, +) is replaced by some other abelian group. Our work can be viewed as a far-reaching extension of this line of research. We show that the known lower bounds for that group-theoretic problem apply to all high-dimensional permutations. We introduce new proof techniques that reveal new and unexpected connections between the NOF communication complexity of permutations and a variety of well-known problems in combinatorics. We also give a direct algorithmic protocol for Exactly-n. In contrast, all previous constructions relied on large sets of integers without a 3-term arithmetic progression.
Local Decoders for the 2D and 4D Toric Code
We analyze the performance of decoders for the 2D and 4D toric code which are
local by construction. The 2D decoder is a cellular automaton decoder
formulated by Harrington which explicitly has a finite speed of communication
and computation. For a model of independent and errors and faulty
syndrome measurements with identical probability we report a threshold of
for this Harrington decoder. We implement a decoder for the 4D toric
code which is based on a decoder by Hastings (arXiv:1312.2546). Incorporating a
method for handling faulty syndromes, we estimate a threshold for
the same noise model as in the 2D case. We compare the performance of this
decoder with a decoder based on a 4D version of Toom's cellular automaton rule
as well as the decoding method suggested by Dennis et al.
arXiv:quant-ph/0110143.
Comment: 22 pages, 21 figures; fixed typos, updated Figures 6, 7, 8
Suppressing quantum errors by scaling a surface code logical qubit
Practical quantum computing will require error rates that are well below what
is achievable with physical qubits. Quantum error correction offers a path to
algorithmically-relevant error rates by encoding logical qubits within many
physical qubits, where increasing the number of physical qubits enhances
protection against physical errors. However, introducing more qubits also
increases the number of error sources, so the density of errors must be
sufficiently low in order for logical performance to improve with increasing
code size. Here, we report the measurement of logical qubit performance scaling
across multiple code sizes, and demonstrate that our system of superconducting
qubits has sufficient performance to overcome the additional errors from
increasing qubit number. We find our distance-5 surface code logical qubit
modestly outperforms an ensemble of distance-3 logical qubits on average, both
in terms of logical error probability over 25 cycles and logical error per
cycle (2.914% compared to 3.028%). To investigate
damaging, low-probability error sources, we run a distance-25 repetition code
and observe a 1.7×10^-6 logical error per round floor set by a single
high-energy event (1.6×10^-7 when excluding this event). We are able
to accurately model our experiment, and from this model we can extract error
budgets that highlight the biggest challenges for future systems. These results
mark the first experimental demonstration where quantum error correction begins
to improve performance with increasing qubit number, illuminating the path to
reaching the logical error rates required for computation.
Comment: Main text: 6 pages, 4 figures. v2: Update author list, references, Fig. S12, Table I
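The scaling behaviour described above is easiest to see in the simplest code family, the repetition code: below threshold, the logical error rate falls as the code distance grows. The following Monte Carlo sketch illustrates this under i.i.d. bit-flip noise with majority-vote decoding; it is an illustration of the principle, not the experiment's decoder or noise model.

```python
import random

def logical_error_rate(d, p, trials=100_000, seed=0):
    """Monte Carlo estimate of the logical error rate of a distance-d
    repetition code under i.i.d. bit-flip noise with physical rate p,
    decoded by majority vote (fails when more than half the bits flip)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p for _ in range(d))
        if flips > d // 2:          # majority vote decodes incorrectly
            failures += 1
    return failures / trials

p = 0.05                            # physical error rate below threshold
rates = {d: logical_error_rate(d, p) for d in (3, 5, 25)}
```

At p = 5%, the estimated logical error rate drops sharply from d = 3 to d = 5 and is essentially zero at d = 25, which is the qualitative effect the experiment demonstrates: adding qubits suppresses logical errors once the physical error density is low enough.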
Variational Optimization of finite Projected Entangled Pair States
This dissertation deals with quantum many-body theory. In particular, it examines the Hubbard model, which has served as a testing ground for strongly correlated electron systems since the 1960s and is still not completely understood despite decades of intense research. Here, the focus is on a strong, repulsive electron-electron interaction and doping slightly below half-filling. The repulsion between electrons favors antiferromagnetic order, while the presence of holes leads to a frustrated configuration, which usually cannot be characterized using perturbative approaches. The reason for examining this particular point in the phase diagram is the conjecture that it is a simplified model of the cuprate superconductors, whose pairing mechanism is not entirely understood despite
their discovery in 1986.
Countless analytical and numerical methods have been developed to calculate the ground state of this parameter regime and other complicated models. The method of this thesis uses a tensor network representation, which can be viewed as a means of data compression for quantum mechanics. The most prominent algorithm in this area is the Density-Matrix Renormalization Group (DMRG), which is a reliable method for the ground state calculation of one-dimensional quantum systems. In this context, the present thesis introduces a prototype for the generalization of the DMRG to two dimensions. This is done by representing the electronic wavefunction as a Projected Entangled Pair State (PEPS), whose quantum mechanical entanglement is tailored to the structure of a two-dimensional lattice. The ground state can
then be determined through local, variational optimization, which scales linearly with system size.
The thesis is structured as follows: First, the iterative diagonalization is outlined (Sec. 2.1), which is used to determine extremal eigenvalues. It is followed by a detailed description of symmetries within the Hubbard model (Sec. 2.2), since their exploitation is essential for an efficient implementation of tensor networks. Afterwards, the Wigner-Eckart theorem is derived, which is needed for non-abelian symmetries.
Chapter 3 concerns itself with quantum mechanical entanglement and how it can be utilized in many body physics. Sec. 3.1 presents the AKLT model, which serves as a motivation for tensor network representations of ground states. Subsequently, the von Neumann entropy is elucidated (Sec. 3.2), which quantifies the entanglement inside of wavefunctions. Sec. 3.3 makes a connection to physical systems by describing several models and their scaling of the entropy.
Chapter 4 explains elementary tensor operations that take both abelian and non-abelian symmetries into account. The emphasis is less on mathematical rigor than on intelligibility and pragmatism. Mechanisms are often explained using examples, assuming the general case is self-explanatory. First, tensors are defined in general (Sec. 4.1), in particular how their symmetries are taken advantage of and how they are stored. This is followed by an explanation of the permutation of indices (Sec. 4.2), the pairwise contraction of tensors (Sec. 4.3), the fusion and splitting of indices (Sec. 4.4), and the factorization of a tensor into two (Sec. 4.5). Finally, we present an efficient method for contracting multiple tensors, which usually poses the main bottleneck in tensor-network algorithms.
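Ignoring the symmetry bookkeeping, the elementary operations listed for Chapter 4 can be sketched in a few lines of plain NumPy; the tensor shapes and names below are illustrative only, not the thesis's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A rank-3 tensor A[i, j, k] and a rank-2 tensor B[k, l].
A = rng.normal(size=(2, 3, 4))
B = rng.normal(size=(4, 5))

# Permutation of indices (cf. Sec. 4.2): A[i, j, k] -> A[k, i, j].
A_perm = np.transpose(A, (2, 0, 1))

# Pairwise contraction (cf. Sec. 4.3): sum over the shared index k.
C = np.tensordot(A, B, axes=([2], [0]))       # C[i, j, l]

# Fusing indices (cf. Sec. 4.4): combine (i, j) into one index of size 6.
C_mat = C.reshape(2 * 3, 5)

# Factorization into two tensors (cf. Sec. 4.5) via SVD.
U, S, Vh = np.linalg.svd(C_mat, full_matrices=False)
left, right = U * S, Vh                       # C_mat == left @ right
```

Truncating the singular values in `S` is what turns this factorization into the compression step that tensor-network methods rely on.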
Chapter 5 delivers a compact explanation of the DMRG in the language of matrix product states. Although the DMRG itself is not the goal of this research project, it is worthwhile to describe its general principles, before moving on to PEPS. Multiple concepts can then be used as a stepping stone to treating two dimensions.
Sec. 6.1 then turns to PEPS itself. Since only open boundary conditions are considered, we have to work with finite PEPS (fPEPS), as opposed to iPEPS, which is fundamentally based on translational invariance. Subsequently, a scheme that adapts the representation of a local Hamiltonian to the topology of a PEPS is presented (Sec. 6.2). This is followed by a detailed explanation of how to determine expectation values approximately (Sec. 6.3), which is one of the central difficulties of the algorithm. Sec. 6.4 finally puts all of the pieces together to define the overarching algorithm for the variational optimization of fPEPS as used to determine ground states of two-dimensional quantum systems.
In Chapter 7, the fPEPS algorithm is applied to the two-dimensional Hubbard model. First, the influence of the approximate calculation of expectation values is investigated and the error is quantified. Afterwards, a few test simulations are conducted on 3x3 and 8x8 lattices. The algorithm yields stable convergence of the energy and of the local charge and spin densities. The local observables qualitatively resemble those of previous publications. However, our version of fPEPS is not yet able to reproduce ground state energies to more than a couple of significant figures due to some technical subtleties. Finally, Chapter 8 discusses the development status of the optimization in detail, what improvements are pending, and what physical phenomena could be analyzed in the future.
Random hypergraphs for hashing-based data structures
This thesis concerns dictionaries and related data structures that rely on providing several random possibilities for storing each key. Imagine information on a set S of m = |S| keys should be stored in n memory locations, indexed by [n] = {1, …, n}. Each object x ∈ S is assigned a small set e(x) ⊆ [n] of locations by a random hash function, independent of other objects. Information on x must then be stored in the locations from e(x) only. It is possible that too many objects compete for the same locations, in particular if the load c = m/n is high. Successfully storing all information may then be impossible. For most distributions of e(x), however, success or failure can be predicted very reliably, since the success probability is close to 1 for loads c less than a certain load threshold c* and close to 0 for loads greater than this threshold. We mainly consider two types of data structures:
• A cuckoo hash table is a dictionary data structure where each key x ∈ S is stored together with an associated value f(x) in one of the memory locations with an index from e(x). The distribution of e(x) is controlled by the hashing scheme. We analyse three known hashing schemes and determine their exact load thresholds: unaligned blocks, double hashing, and a scheme for dynamically growing key sets.
• A retrieval data structure also stores a value f(x) for each x ∈ S. This time, the values stored in the memory locations from e(x) must satisfy a linear equation that characterises the value f(x). The resulting data structure is extremely compact, but unusual: it cannot answer questions of the form "is y ∈ S?". Given a key y, it returns a value z. If y ∈ S, then z = f(y) is guaranteed; otherwise z may be an arbitrary value. We consider two new hashing schemes, where the elements of e(x) are contained in one or two contiguous blocks.
This yields good access times on a word RAM and high cache efficiency. An important question is whether these types of data structures can be constructed in linear time. The success probability of a natural linear-time greedy algorithm exhibits, once again, threshold behaviour with respect to the load c. We identify a hashing scheme that leads to a particularly high threshold value in this regard. In the mathematical model, the memory locations [n] correspond to vertices, and the sets e(x) for x ∈ S correspond to hyperedges. Three properties of the resulting hypergraphs turn out to be important: peelability, solvability and orientability. Large parts of this thesis therefore examine how the hyperedge distribution and the load affect the probabilities with which these properties hold, and derive corresponding thresholds. Translated back into the world of data structures, we achieve low access times, high memory efficiency and low construction times. We complement and support the theoretical results by experiments.
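The cuckoo-hashing mechanism described above can be sketched in a few lines. This toy version uses two fixed, hand-picked hash functions and single-slot locations for illustration only; the thesis's schemes instead draw e(x) from random hash families such as unaligned blocks or double hashing.

```python
class CuckooHashTable:
    """Minimal cuckoo hash table sketch: each key x may live in one of two
    slots e(x) = {h1(x), h2(x)}; inserting into a full slot evicts the
    occupant and relocates it to its other allowed slot."""

    def __init__(self, n, h1, h2):
        self.n, self.h1, self.h2 = n, h1, h2
        self.table = [None] * n                  # slots hold (key, value)

    def insert(self, key, value, max_kicks=50):
        entry, pos = (key, value), self.h1(key)
        for _ in range(max_kicks):
            if self.table[pos] is None:
                self.table[pos] = entry
                return True
            entry, self.table[pos] = self.table[pos], entry   # evict occupant
            l1, l2 = self.h1(entry[0]), self.h2(entry[0])
            pos = l2 if pos == l1 else l1        # send it to its other slot
        return False        # e(x) cannot be oriented; a real table rehashes

    def lookup(self, key):
        for pos in (self.h1(key), self.h2(key)):
            slot = self.table[pos]
            if slot is not None and slot[0] == key:
                return slot[1]
        return None

# Toy deterministic hash functions on integer keys (illustrative only).
t = CuckooHashTable(7, h1=lambda k: k % 7, h2=lambda k: (3 * k + 1) % 7)
for k in (2, 9, 5):
    assert t.insert(k, k * 10)   # inserting 9 evicts 2 to its h2 slot
```

An insert that runs out of kicks means the current sets e(x) admit no valid placement; how the load c controls the probability of that event is exactly what the thesis's orientability thresholds quantify.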
- …