
    Error-tolerant Graph Matching on Huge Graphs and Learning Strategies on the Edit Costs

    Graphs are abstract data structures used to model real problems with two basic entities: nodes and edges. Each node or vertex represents a relevant point of interest of a problem, and each edge represents the relationship between these points. Nodes and edges can carry attributes to increase the accuracy of the modelled problem; these attributes range from feature vectors to description labels. Due to this versatility, graphs have found many applications in fields such as computer vision, biomedicine, and network analysis. Graph Edit Distance (GED) has become an important tool in structural pattern recognition, since it allows the dissimilarity of attributed graphs to be measured. The first part of this thesis presents a method to generate pairs of graphs together with an upper- and lower-bound distance and a correspondence, at a linear computational cost. With this method, the behaviour of known, or new, sub-optimal error-tolerant graph matching algorithms can be tested against lower and upper bounds on the GED of large graphs, even though the true distance is not available. Next, the thesis focuses on how to measure the dissimilarity between two huge graphs (more than 10,000 nodes), using a new error-tolerant graph matching algorithm called Belief Propagation, which has an O(d^3.5 n) computational cost. The thesis also presents a general framework to learn the edit costs involved in GED calculations automatically, and concretises this framework in two different models based on neural networks and probability density functions. An exhaustive practical validation on 14 public databases shows that accuracy is higher with the learned edit costs than with manually imposed costs or costs learned automatically by previous methods. Finally, we propose an application of the Belief Propagation algorithm to the simulation of muscle mechanics.
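
    The edit operations behind the GED are easy to state concretely. As a rough illustration of what a hand-set or learned cost model quantifies, the sketch below evaluates the cost induced by a node correspondence between two attributed graphs; any such cost upper-bounds the GED. This is a minimal sketch assuming uniform costs, and the names (correspondence_cost, c_sub, c_del, c_ins, c_edge) are hypothetical, not taken from the thesis.

```python
# Minimal sketch: edit cost induced by a partial node mapping between two
# attributed graphs. Any such cost is an upper bound on the GED.
import networkx as nx

def correspondence_cost(g1, g2, mapping, c_sub=1.0, c_del=1.0, c_ins=1.0, c_edge=1.0):
    """mapping: dict from g1 nodes to g2 nodes; unmapped g1 nodes are
    deletions, unmatched g2 nodes are insertions."""
    cost = 0.0
    for u in g1.nodes:  # node substitutions and deletions
        if u in mapping:
            if g1.nodes[u].get("label") != g2.nodes[mapping[u]].get("label"):
                cost += c_sub
        else:
            cost += c_del
    cost += c_ins * (g2.number_of_nodes() - len(mapping))  # node insertions
    for u, v in g1.edges:  # g1 edges with no image in g2 are edge deletions
        if u not in mapping or v not in mapping or not g2.has_edge(mapping[u], mapping[v]):
            cost += c_edge
    inv = {w: u for u, w in mapping.items()}
    for x, y in g2.edges:  # g2 edges with no preimage are edge insertions
        if x not in inv or y not in inv or not g1.has_edge(inv[x], inv[y]):
            cost += c_edge
    return cost
```

    Learning the edit costs, as the thesis proposes, then amounts to fitting parameters such as c_sub and c_del so that the resulting distances separate the graph classes of interest.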

    Fragile boundaries of tailored surface codes

    Biased noise is common in physical qubits, and tailoring a quantum code to the bias by locally modifying stabilizers or changing boundary conditions has been shown to greatly increase error correction thresholds. In this work, we explore the challenges of using a specific tailored code, the XY surface code, for fault-tolerant quantum computation. We introduce efficient and fault-tolerant decoders, belief-matching and belief-find, which exploit correlated hyperedge fault mechanisms present in circuit-level noise. Using belief-matching, we find that the XY surface code has a higher threshold and lower overhead than the square CSS surface code for moderately biased noise. However, the rectangular CSS surface code has a lower qubit overhead than the XY surface code when below threshold. We identify a contributor to the reduced performance that we call fragile boundary errors. These are string-like errors that can occur along spatial or temporal boundaries in planar architectures or during logical state preparation and measurement. While we make partial progress towards mitigating these errors by deforming the boundaries of the XY surface code, our work suggests that fragility could remain a significant obstacle, even for other tailored codes. We expect that our decoders will have other uses; belief-find has an almost-linear running time, and we show that it increases the threshold of the surface code to 0.937(2)% in the presence of circuit-level depolarising noise, compared to 0.817(5)% for the more computationally expensive minimum-weight perfect matching decoder.
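
    The almost-linear running time quoted for belief-find comes from the union-find (disjoint-set) data structure that underlies union-find style decoders, whose amortised cost per operation grows only with the inverse Ackermann function. Below is a generic sketch of that data structure with union by rank and path compression; it illustrates the source of the complexity bound, not the paper's decoder itself.

```python
# Union-find (disjoint-set) with union by rank and path compression: the
# classic structure behind union-find style decoders such as belief-find.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:          # path compression: point every
            self.parent[x], x = root, self.parent[x]  # visited node at the root
        return root

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra                   # union by rank keeps trees shallow
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# In a union-find decoder, clusters of syndrome defects grow and merge via
# union(); find() answers "which cluster?" in nearly O(1) amortized time.
ds = DisjointSet(6)
ds.union(0, 1); ds.union(1, 2)
assert ds.find(0) == ds.find(2)
```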

    Tailoring surface codes: Improvements in quantum error correction with biased noise

    For quantum computers to reach their full potential, they will require error correction. We study the surface code, one of the most promising quantum error correcting codes, in the context of predominantly dephasing (Z-biased) noise, as found in many quantum architectures. We find that the surface code is highly resilient to Y-biased noise, and tailor it to Z-biased noise whilst retaining its practical features. We demonstrate ultrahigh thresholds for the tailored surface code: ~39% with a realistic bias of η = 100, and ~50% with pure Z noise, far exceeding known thresholds for the standard surface code: ~11% with pure Z noise, and ~19% with depolarizing noise. Furthermore, we provide strong evidence that the threshold of the tailored surface code tracks the hashing bound for all biases. We reveal the hidden structure of the tailored surface code with pure Z noise that is responsible for these ultrahigh thresholds. As a consequence, we prove that its threshold with pure Z noise is 50%, and we show that its distance to Z errors, and the number of failure modes, can be tuned by modifying its boundary. For codes with appropriately modified boundaries, the distance to Z errors is O(n) compared to O(√n) for square codes, where n is the number of physical qubits. We demonstrate that these characteristics yield a significant improvement in logical error rate with pure Z and Z-biased noise. Finally, we introduce an efficient approach to decoding that exploits code symmetries with respect to a given noise model and extends readily to the fault-tolerant context, where measurements are unreliable. We use this approach to define a decoder for the tailored surface code with Z-biased noise. Although the decoder is suboptimal, we observe exceptionally high fault-tolerant thresholds of ~5% with bias η = 100 and exceeding 6% with pure Z noise. Our results open up many avenues of research and, together with recent developments in bias-preserving gates, highlight their direct relevance to experiment.
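
    The hashing bound referenced above is straightforward to evaluate numerically: for a Pauli channel with bias η = p_Z/(p_X + p_Y) and p_X = p_Y, the zero-rate hashing threshold is the total error rate p at which the channel's Shannon entropy reaches one bit. A small standard-library sketch (the function names are ours):

```python
# Zero-rate hashing bound for biased Pauli noise, solved by bisection.
from math import log2

def channel_entropy(p, eta):
    """Shannon entropy (bits) of the Pauli distribution at total error rate p."""
    pz = p * eta / (eta + 1)
    px = py = p / (2 * (eta + 1))
    return -sum(q * log2(q) for q in (1 - p, px, py, pz) if q > 0)

def hashing_threshold(eta, lo=1e-9, hi=0.5, tol=1e-10):
    """Solve channel_entropy(p, eta) == 1; entropy increases in p on [0, 0.5]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if channel_entropy(mid, eta) < 1 else (lo, mid)
    return (lo + hi) / 2

print(hashing_threshold(100))   # ~0.39, matching the ~39% threshold at bias 100
print(hashing_threshold(1e12))  # approaches 0.5 in the pure-Z limit (the proven 50%)
```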

    Building Multiple Classifier Systems Using Linear Combinations of Reduced Graphs.

    Despite great research efforts over the last decades, the classification of general graphs, i.e., graphs with unconstrained labeling and structure, remains a challenging task. Due to the inherent relational structure of graphs, it is difficult, or even impossible, to apply standard pattern recognition methods directly to graphs and achieve high recognition accuracies. Common methods to solve the non-trivial problem of graph classification employ graph matching in conjunction with a distance-based classifier or a kernel machine. In the present paper, we address the task of graph classification by means of a novel framework that uses information acquired from a broad range of reduced graph subspaces. Our approach can be roughly divided into three successive steps. In the first step, differently reduced graphs are created from the original graphs using node centrality measures. In the second step, we compute the graph edit distance between each reduced graph and all the other graphs of the corresponding graph subspace. In the third step, we linearly combine the distances and feed them into a distance-based classifier to obtain the final classification result. On six graph data sets, we empirically confirm that the proposed multiple classifier system directly benefits from the combined distances computed in the various graph subspaces.
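
    The three steps above translate almost directly into code. The sketch below uses networkx; the reduction ratios, combination weights, and the 1-NN decision rule are illustrative placeholders rather than the paper's tuned configuration.

```python
# Sketch of the three-step pipeline: centrality-based reduction, per-subspace
# GED, and a linear combination fed to a distance-based classifier.
import networkx as nx

def reduce_graph(g, keep_ratio):
    """Step 1: keep only the top fraction of nodes by degree centrality."""
    c = nx.degree_centrality(g)
    k = max(1, round(keep_ratio * g.number_of_nodes()))
    keep = sorted(c, key=c.get, reverse=True)[:k]
    return g.subgraph(keep).copy()

def combined_distance(g1, g2, ratios=(1.0, 0.7, 0.4), weights=(0.5, 0.3, 0.2)):
    """Steps 2-3: GED in each reduced subspace, then a linear combination."""
    return sum(w * nx.graph_edit_distance(reduce_graph(g1, r), reduce_graph(g2, r))
               for r, w in zip(ratios, weights))

def classify_1nn(g, training_set):
    """Distance-based classifier: label of the nearest training graph.
    training_set is a list of (graph, label) pairs."""
    return min(training_set, key=lambda pair: combined_distance(g, pair[0]))[1]

# Note: nx.graph_edit_distance is exact and exponential in the worst case,
# so this sketch is only practical for small graphs.
```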

    Decoding algorithms for surface codes

    Quantum technologies have the potential to solve computationally hard problems that are intractable via classical means. Unfortunately, the unstable nature of quantum information makes it prone to errors. For this reason, quantum error correction is an invaluable tool to make quantum information reliable and enable the ultimate goal of fault-tolerant quantum computing. Surface codes currently stand as the most promising candidates to build error-corrected qubits, given their two-dimensional architecture, requirement of only local operations, and high tolerance to quantum noise. Decoding algorithms are an integral component of any error correction scheme, as they are tasked with producing accurate estimates of the errors that affect quantum information so that they can subsequently be corrected. A critical aspect of decoding algorithms is their speed, since the quantum state will suffer additional errors with the passage of time. This poses a conundrum-like tradeoff, where decoding performance is improved at the expense of complexity and vice versa. In this review, a thorough discussion of state-of-the-art surface code decoding algorithms is provided. The core operation of these methods is described, along with existing variants that show promise for improved results. In addition, both the decoding performance, in terms of error correction capability, and the decoding complexity are compared. A review of existing software tools for surface code decoding is also provided.
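
    As a concrete point of reference for the decoders surveyed here, minimum-weight perfect matching is available off the shelf in the open-source PyMatching library (assuming `pip install pymatching`). The toy example below decodes a 5-qubit repetition code rather than a full surface code to keep the check matrix readable; the check matrix and the two-flip error are our own choices.

```python
# Matching-based syndrome decoding with PyMatching on a repetition code.
import numpy as np
import pymatching

# Parity checks of the length-5 repetition code: each row compares neighbours.
# Columns with a single 1 (qubits 0 and 4) become boundary edges automatically.
H = np.array([[1, 1, 0, 0, 0],
              [0, 1, 1, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 0, 0, 1, 1]])
matching = pymatching.Matching(H)

error = np.array([0, 0, 1, 1, 0])          # two adjacent bit flips
syndrome = (H @ error) % 2                  # what the stabilizer measurements see
correction = matching.decode(syndrome)      # MWPM estimate of the error
print(correction)                           # [0 0 1 1 0]
assert np.array_equal(correction, error)
```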

    Trapping Sets of Quantum LDPC Codes

    Iterative decoders for finite-length quantum low-density parity-check (QLDPC) codes are attractive because their hardware complexity scales only linearly with the number of physical qubits. However, they are impacted by short cycles and other detrimental graphical configurations, known as trapping sets (TSs), present in a code graph, as well as by the symmetric degeneracy of errors. These factors significantly degrade decoding performance and cause a so-called error floor. In this paper, we establish a systematic methodology by which one can identify and classify quantum trapping sets (QTSs) according to their topological structure and the decoder used. The conventional definition of a TS from classical error correction is generalized to address the syndrome decoding scenario for QLDPC codes. We show that knowledge of QTSs can be used to design better QLDPC codes and decoders. Frame error rate improvements of two orders of magnitude in the error floor regime are demonstrated for some practical finite-length QLDPC codes without requiring any post-processing.
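
    To make the failure mechanism concrete, the sketch below implements a toy bit-flip iterative decoder of the kind whose trapping-set behaviour the paper analyses: when flipping the most-suspect bit stops clearing the syndrome, the decoder loops and an error floor appears. The check matrix and iteration limit are illustrative, and this is plain classical syndrome decoding, not the degenerate quantum setting.

```python
# Toy bit-flip iterative decoder over a Tanner graph.
import numpy as np

def bit_flip_decode(H, syndrome, max_iters=50):
    """Repeatedly flip the bit involved in the most unsatisfied checks;
    returns (estimated_error, converged). Non-convergence is the signature
    of the decoder being stuck, e.g. on a trapping set."""
    e = np.zeros(H.shape[1], dtype=int)
    for _ in range(max_iters):
        unsat = (H @ e + syndrome) % 2        # checks still violated
        if not unsat.any():
            return e, True
        votes = H.T @ unsat                   # unsatisfied checks per bit
        e[np.argmax(votes)] ^= 1              # flip the worst offender
    return e, False

H = np.array([[1, 1, 0, 1, 0],
              [0, 1, 1, 0, 1],
              [1, 0, 1, 1, 0]])
true_error = np.array([0, 1, 0, 0, 0])
e, ok = bit_flip_decode(H, (H @ true_error) % 2)
print(e, ok)                                  # [0 1 0 0 0] True
```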

    Metabolic Network Alignments and their Applications

    The accumulation of high-throughput genomic and proteomic data allows for the reconstruction of increasingly large and complex metabolic networks. In order to analyze the accumulated data and reconstructed networks, it is critical to identify network patterns and evolutionary relations between metabolic networks, but even finding similar networks is computationally challenging. The dissertation addresses these challenges with discrete optimization and the corresponding algorithmic techniques. Based on the properties of gene duplication and function sharing in biological networks, we formulate the network alignment problem, which asks for an optimal vertex-to-vertex mapping allowing path contraction, vertex deletion, and vertex insertion. We propose the first polynomial-time algorithm for aligning an acyclic metabolic pattern pathway with an arbitrary metabolic network. We also propose a polynomial-time algorithm for patterns with small treewidth and implement it for series-parallel patterns, which are commonly found among metabolic networks. We have developed a metabolic network alignment tool for free public use. We have performed pairwise mapping of all pathways among five organisms and found a set of statistically significant pathway similarities. We have also applied network alignment to identifying inconsistencies, inferring missing enzymes, and finding potential candidates.
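
    For the special case of a linear pattern pathway, the dynamic-programming idea can be shown compactly: one pattern edge is allowed to map onto a short network path, so intermediate network vertices act as insertions, and the best-scoring embedding is built prefix by prefix. The sketch below is a simplification under our own score() and hop-limit assumptions; the dissertation's algorithms additionally handle path contraction, vertex deletion, and more general series-parallel patterns.

```python
# DP sketch: align a linear pattern pathway to an arbitrary network,
# letting one pattern edge span up to max_hops network edges.
import networkx as nx

def align_path_pattern(pattern, network, score, max_hops=2):
    """pattern: list of pattern vertices; score(p, v): similarity between
    pattern vertex p and network vertex v. Returns the best alignment score."""
    # reach[u]: network vertices reachable from u in 1..max_hops edges
    reach = {u: set(nx.single_source_shortest_path_length(network, u, cutoff=max_hops)) - {u}
             for u in network}
    # best[v]: best score of aligning the current pattern prefix ending at v
    best = {v: score(pattern[0], v) for v in network}
    for p in pattern[1:]:
        best = {v: score(p, v) + max((best[u] for u in network if v in reach[u]),
                                     default=float("-inf"))
                for v in network}
    return max(best.values())
```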