3,460 research outputs found
LIMDD: A Decision Diagram for Simulation of Quantum Computing Including Stabilizer States
Efficient methods for the representation and simulation of quantum states and
quantum operations are crucial for the optimization of quantum circuits.
Decision diagrams (DDs), a well-studied data structure originally used to
represent Boolean functions, have proven capable of capturing relevant aspects
of quantum systems, but their limits are not well understood. In this work, we
investigate and bridge the gap between existing DD-based structures and the
stabilizer formalism, an important tool for simulating quantum circuits in the
tractable regime. We first show that although DDs were suggested to succinctly
represent important quantum states, they actually require exponential space for
certain stabilizer states. To remedy this, we introduce a more powerful
decision diagram variant, called Local Invertible Map-DD (LIMDD). We prove that
the set of quantum states represented by poly-sized LIMDDs strictly contains
the union of stabilizer states and other decision diagram variants. Finally,
we show that there exist circuits which LIMDDs can efficiently simulate, but
which cannot be efficiently simulated by two state-of-the-art simulation
paradigms: the Clifford + T simulator and Matrix-Product States. By uniting two
successful approaches, LIMDDs thus pave the way for fundamentally more powerful
solutions for the simulation and analysis of quantum computing.
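The core idea the abstract builds on is that a decision diagram compresses a 2^n-entry vector by merging identical subtrees. A minimal sketch of that mechanism (a multi-terminal diagram with a unique table, not the LIMDD construction itself; all names are illustrative):

```python
# Minimal multi-terminal decision diagram over a 2^n amplitude vector.
# Structural sharing comes from a unique table keyed on (variable, children).
class DD:
    _unique = {}  # unique table: key -> node

    def __init__(self, var, low, high):
        self.var, self.low, self.high = var, low, high

    @classmethod
    def make(cls, var, low, high):
        if low is high:                       # reduction rule: skip redundant test
            return low
        key = (var, id(low), id(high))
        if key not in cls._unique:
            cls._unique[key] = cls(var, low, high)
        return cls._unique[key]

def build(amplitudes, var=0):
    """Build a DD from a length-2^n amplitude vector (terminals are amplitudes)."""
    if len(amplitudes) == 1:
        return amplitudes[0]
    half = len(amplitudes) // 2
    return DD.make(var,
                   build(amplitudes[:half], var + 1),
                   build(amplitudes[half:], var + 1))

def count_nodes(node, seen=None):
    """Count distinct internal nodes reachable from `node`."""
    seen = set() if seen is None else seen
    if not isinstance(node, DD) or id(node) in seen:
        return 0
    seen.add(id(node))
    return 1 + count_nodes(node.low, seen) + count_nodes(node.high, seen)

# The uniform superposition on 3 qubits collapses to a single shared terminal:
uniform = build([2 ** -1.5] * 8)
print(count_nodes(uniform))  # 0 internal nodes: all subtrees merged
```

The paper's point is precisely that this kind of merging fails for certain stabilizer states, which is what the LIM (local invertible map) labels on edges are introduced to repair.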
Model-Checking with Edge-Valued Decision Diagrams
We describe an algebra of Edge-Valued Decision Diagrams (EVMDDs) for encoding arithmetic functions, and its implementation in a model-checking library along with state-of-the-art algorithms for building the transition relation and the state space of discrete-state systems. We provide efficient algorithms for manipulating EVMDDs and give upper bounds on the theoretical time complexity of these algorithms for all basic arithmetic and relational operators. We also demonstrate that the time complexity of the generic recursive algorithm for applying a binary operator to EVMDDs is no worse than that of Multi-Terminal Decision Diagrams. We have implemented a new symbolic model checker with the intention of combining in one formalism the best techniques currently available across a spectrum of existing tools: EVMDDs for encoding arithmetic expressions, identity-reduced MDDs for representing the transition relation, and the saturation algorithm for reachability analysis. We compare our new symbolic model-checking EVMDD library with the widely used CUDD package and show that, in many cases, our tool is several orders of magnitude faster than CUDD.
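The defining trait of an edge-valued diagram is that a function value is the sum of the values on the edges along the path selected by the inputs, so shared subgraphs can encode different offsets. A small illustrative sketch of that evaluation rule (node layout and names are ours, not the paper's library):

```python
# Evaluate an edge-valued decision diagram: f(inputs) is the running sum of
# edge values along the path chosen by the variable assignment.
class EVNode:
    def __init__(self, var, children):
        # children: one (edge_value, child) pair per possible variable value
        self.var, self.children = var, children

TERMINAL = None  # single shared terminal, as in canonical EVMDD form

def evaluate(root_value, node, assignment):
    """Sum the dangling-edge value plus edge values along the selected path."""
    total = root_value
    while node is not TERMINAL:
        edge_value, node = node.children[assignment[node.var]]
        total += edge_value
    return total

# Encode f(x, y) = 2*x + 3*y with x, y in {0, 1, 2}; note the shared y-node:
y_node = EVNode('y', [(0, TERMINAL), (3, TERMINAL), (6, TERMINAL)])
x_node = EVNode('x', [(0, y_node), (2, y_node), (4, y_node)])
print(evaluate(0, x_node, {'x': 2, 'y': 1}))  # 2*2 + 3*1 = 7
```

The sharing of `y_node` across all three x-edges is what lets EVMDDs stay linear-sized for functions like weighted sums, where a multi-terminal diagram would need a distinct terminal per function value.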
Computation of Elementary Functions on FPGA
Master's thesis in Electronic and Telecommunications Engineering.
Since C. Y. Lee first proposed the idea of representing switching circuits as
decision diagrams, there has been some interest in developing these diagrams
in order to make them more compact and effective. One of the main applications
of this technique is to represent circuits that perform elementary
functions, such as cosine, sine, square root, etc. In this thesis, we try to
prove that by choosing the right polarity for an Arithmetic Decision Diagram
we can compactly and effectively represent a switching function and
implement it in hardware. This thesis proposes algorithms that can compactly
implement a given elementary function in hardware by finding the
best possible polarity for the respective Arithmetic Decision Diagram.
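The polarity search the thesis describes can be illustrated with the fixed-polarity arithmetic spectrum: each variable appears either as x or as (1 - x), and a good polarity minimizes the number of nonzero spectral coefficients, a common proxy for diagram compactness. A hedged sketch under those assumptions (the matrix recurrences are the standard arithmetic-transform butterflies; function names are ours):

```python
# Fixed-polarity arithmetic transform of a switching function, plus an
# exhaustive search for the polarity with the fewest nonzero coefficients.
def arithmetic_spectrum(truth_table, polarity):
    """Arithmetic spectrum of a Boolean truth table under a polarity bitmask."""
    n = len(truth_table).bit_length() - 1
    coeffs = list(truth_table)
    for var in range(n):
        step = 1 << var
        for i in range(0, len(coeffs), 2 * step):
            for j in range(i, i + step):
                a, b = coeffs[j], coeffs[j + step]
                if (polarity >> var) & 1:      # negative literal (1 - x)
                    coeffs[j], coeffs[j + step] = b, a - b
                else:                          # positive literal x
                    coeffs[j], coeffs[j + step] = a, b - a
    return coeffs

def best_polarity(truth_table):
    """Polarity bitmask minimizing the count of nonzero coefficients."""
    n = len(truth_table).bit_length() - 1
    return min(range(1 << n),
               key=lambda p: sum(c != 0 for c in arithmetic_spectrum(truth_table, p)))

# 2-input NAND: f = 1 - x1*x2, so positive polarity already needs only 2 terms.
nand = [1, 1, 1, 0]
print(best_polarity(nand), arithmetic_spectrum(nand, best_polarity(nand)))  # 0 [1, 0, 0, -1]
```

Exhaustive search is exponential in the number of variables; the thesis's contribution is precisely heuristics for finding good polarities without enumerating all of them.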
Volume-Enclosing Surface Extraction
In this paper we present a new method, which allows for the construction of
triangular isosurfaces from three-dimensional data sets, such as 3D image data
and/or numerical simulation data that are based on regularly shaped, cubic
lattices. This novel volume-enclosing surface extraction technique, which has
been named VESTA, can produce up to six different results due to the nature of
the discretized 3D space under consideration. VESTA is neither template-based
nor is it necessarily required to operate only on 2x2x2 voxel cell
neighborhoods. The surface tiles are determined with a very fast and robust construction
technique while potential ambiguities are detected and resolved. Here, we
provide an in-depth comparison between VESTA and various versions of the
well-known and very popular Marching Cubes algorithm for the very first time.
In an application section, we demonstrate the extraction of VESTA isosurfaces
for various data sets, ranging from computed tomography scan data to simulation
data of relativistic hydrodynamic fireball expansions.
Comment: 24 pages, 33 figures, 4 tables, final version
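The first step that VESTA shares with Marching Cubes is classifying each voxel cell by which corners lie above the isovalue and interpolating where the surface crosses each cell edge. A minimal sketch of that shared step (names are illustrative; VESTA's actual tiling and ambiguity-resolution rules are more involved):

```python
# Classify a 2x2x2 voxel cell against an isovalue and locate edge crossings.
from itertools import product

CORNERS = list(product((0, 1), repeat=3))          # 8 corners of a unit cell
EDGES = [(a, b) for i, a in enumerate(CORNERS)     # 12 edges: corner pairs that
         for b in CORNERS[i + 1:]                  # differ in one coordinate
         if sum(x != y for x, y in zip(a, b)) == 1]

def cell_case(values, iso):
    """8-bit case index: bit i is set iff corner i lies above the isovalue."""
    return sum(1 << i for i, c in enumerate(CORNERS) if values[c] > iso)

def crossing_points(values, iso):
    """Linearly interpolated surface crossings on the cell's edges."""
    pts = []
    for a, b in EDGES:
        va, vb = values[a], values[b]
        if (va > iso) != (vb > iso):               # edge straddles the surface
            t = (iso - va) / (vb - va)
            pts.append(tuple(pa + t * (pb - pa) for pa, pb in zip(a, b)))
    return pts

# A cell with one corner above the isovalue yields three edge crossings:
vals = {c: 0.0 for c in CORNERS}
vals[(0, 0, 0)] = 1.0
print(cell_case(vals, 0.5), len(crossing_points(vals, 0.5)))  # 1 3
```

Where template-based Marching Cubes maps the 256 case indices to fixed triangle patterns, VESTA derives the surface tiles constructively, which is why it can operate beyond fixed 2x2x2 neighborhoods.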
Formal Concept Analysis Applications in Bioinformatics
Bioinformatics is an important field that seeks to solve biological problems with the help of computation. One specific field within bioinformatics is genomics, the study of genes and their functions. Genomics can provide valuable insight into how genes interact with their environment. One way to measure this interaction is through gene expression data, which records whether (and how strongly) a certain gene is activated in a given situation. Analyzing this data can be critical for predicting diseases or other biological reactions. One method used for such analysis is Formal Concept Analysis (FCA), a computing technique based on partial orders that allows the user to examine the structural properties of binary data according to which subsets of the data set depend on each other. This thesis surveys, in breadth and depth, the current literature on the use of FCA in bioinformatics, with particular focus on gene expression data. This includes descriptions of current data management techniques specific to FCA, such as lattice reduction, discretization, and variations of FCA that account for different data types. Advantages and shortcomings of using FCA for genomic investigations, as well as the feasibility of using FCA for this application, are addressed. Finally, several areas for future doctoral research are proposed.
Adviser: Jitender S. Deogun
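The FCA machinery the survey builds on is a pair of derivation operators over a binary object-attribute context: a formal concept is a set of objects and a set of attributes that are fixed points of each other's derivations. A small sketch with an invented toy gene-expression context (all names and data are illustrative):

```python
# Derivation operators and concept closure over a binary formal context.
CONTEXT = {                      # object -> attributes ("gene active in sample")
    'gene1': {'s1', 's2'},
    'gene2': {'s1', 's2', 's3'},
    'gene3': {'s2', 's3'},
}

def common_attributes(objects):
    """Attributes shared by every object in the set (the ' operator)."""
    sets = [CONTEXT[o] for o in objects]
    return set.intersection(*sets) if sets else set.union(*CONTEXT.values())

def common_objects(attributes):
    """Objects possessing every attribute in the set (the dual ' operator)."""
    return {o for o, attrs in CONTEXT.items() if attributes <= attrs}

def concept_from(objects):
    """Close a set of objects into a formal concept (extent, intent)."""
    intent = common_attributes(objects)
    extent = common_objects(intent)
    return sorted(extent), sorted(intent)

# gene1 alone closes to the concept {gene1, gene2} x {s1, s2}:
print(concept_from({'gene1'}))   # (['gene1', 'gene2'], ['s1', 's2'])
```

Ordering all such concepts by extent inclusion yields the concept lattice; the lattice-reduction and discretization techniques the thesis surveys exist because real expression matrices produce lattices far too large to inspect directly.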
Function Verification of Combinational Arithmetic Circuits
Hardware design verification is the most challenging part of the overall hardware design process, because design size and complexity are growing very fast while performance requirements keep rising. Conventional simulation-based verification methods cannot keep up with the rapid increase in design size, since it is impossible to exhaustively test all input vectors of a complex design. An important part of hardware verification is combinational arithmetic circuit verification. It draws a lot of attention because flattening the design into bit level, known as the bit-blasting problem, hinders the efficiency of many current formal techniques. The goal of this thesis is to introduce a robust and efficient formal verification method for combinational integer arithmetic circuits, based on an in-depth analysis of recent advances in computer algebra. The method proposed here solves the verification problem at the bit level while avoiding the bit-blasting problem. It also avoids the expensive Groebner basis computation typically employed by symbolic computer algebra methods. The proposed method verifies the gate-level implementation of the design by representing the design components (logic gates and arithmetic modules) as polynomials in Z_2^n. It then transforms the polynomial representing the output bits (called the "output signature") into a unique polynomial in the input signals (called the "input signature") using gate-level information of the design. The computed input signature is then compared with the reference input signature (golden model) to determine whether the circuit behaves as anticipated. If the reference input signature is not given, our method can be used to compute (or extract) the arithmetic function of the design by computing its input signature. Additional tools, based on canonical word-level design representations (such as TED or BMD), can be used to determine the function that the computed input signature represents.
We demonstrate the applicability of the proposed method to arithmetic circuit verification on a large number of designs.
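The backward-rewriting step described above can be illustrated on a half adder: gates are modeled as polynomials over {0, 1} variables (with x*x = x), and the output signature is rewritten, gate by gate, into a polynomial in the primary inputs. A hedged sketch under those assumptions (the netlist and helper names are ours, not the thesis's tool):

```python
# Polynomials are dicts mapping a frozenset of variables (a monomial, since
# x*x = x over {0,1}) to an integer coefficient.
def pmul(p, q):
    out = {}
    for m1, c1 in p.items():
        for m2, c2 in q.items():
            m = m1 | m2                       # idempotence: x*x = x
            out[m] = out.get(m, 0) + c1 * c2
    return {m: c for m, c in out.items() if c}

def padd(p, q):
    out = dict(p)
    for m, c in q.items():
        out[m] = out.get(m, 0) + c
    return {m: c for m, c in out.items() if c}

def scale(p, k):
    return {m: c * k for m, c in p.items() if c * k}

def var(name):
    return {frozenset([name]): 1}

def substitute(p, name, repl):
    """Replace every occurrence of variable `name` in p by polynomial `repl`."""
    out = {}
    for m, c in p.items():
        term = {frozenset(m - {name}): c}
        if name in m:
            term = pmul(term, repl)
        out = padd(out, term)
    return out

# Half adder: s = a XOR b = a + b - 2ab,  cout = a AND b = ab.
xor_poly = padd(padd(var('a'), var('b')), scale(pmul(var('a'), var('b')), -2))
and_poly = pmul(var('a'), var('b'))

# Output signature 2*cout + s, rewritten backward into the primary inputs:
sig = padd(scale(var('cout'), 2), var('s'))
sig = substitute(sig, 'cout', and_poly)
sig = substitute(sig, 's', xor_poly)
print(sig)  # equals a + b: the a*b terms cancel, confirming the adder
```

The cancellation of the nonlinear a*b terms is exactly the effect the thesis relies on: for a correct arithmetic circuit, the rewritten output signature collapses to the word-level input signature without ever invoking a Groebner basis computation.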