Design Automation and Design Space Exploration for Quantum Computers
A major hurdle to the deployment of quantum linear systems algorithms and recent quantum simulation algorithms lies in the difficulty of finding inexpensive reversible circuits for arithmetic using existing hand-coded methods. Motivated by recent advances in reversible logic synthesis, we synthesize arithmetic circuits using classical design automation flows and tools. The combination of classical and reversible logic synthesis enables the automatic design of large components in reversible logic starting from well-known hardware description languages such as Verilog. As a prototype example for our approach, we automatically generate high-quality networks for the reciprocal, which is necessary for quantum linear systems algorithms.
Comment: 6 pages, 1 figure, in 2017 Design, Automation & Test in Europe Conference & Exhibition, DATE 2017, Lausanne, Switzerland, March 27-31, 2017
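Reciprocal networks of this kind are commonly built from Newton iteration, x_{k+1} = x_k(2 - d*x_k); a minimal classical sketch of the arithmetic that such a synthesis flow would map to reversible logic (the function name and initial-guess constants are illustrative, not the paper's construction):

```python
def reciprocal_newton(d: float, iterations: int = 6) -> float:
    """Approximate 1/d for d in [0.5, 1) via Newton-Raphson iteration."""
    x = 2.88 - 2.0 * d          # cheap linear initial guess on [0.5, 1)
    for _ in range(iterations):
        x = x * (2.0 - d * x)   # quadratically convergent Newton update
    return x
```

Because each update uses only multiplication and subtraction, the whole loop unrolls into a fixed arithmetic network, which is exactly the kind of datapath a classical design automation flow can emit for reversible synthesis.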
NOVEL RESOURCE EFFICIENT CIRCUIT DESIGNS FOR REBOOTING COMPUTING
CMOS-based computing is reaching its limits. Taking computation beyond Moore's law (the observation that the number of transistors, and hence the processing power, on a chip doubles every 18 months to 3 years) requires research explorations in (i) new materials, devices, and processes, (ii) new architectures and algorithms, and (iii) new paradigms of logic bit representation. The focus is on fundamentally new ways to compute under the umbrella of rebooting computing, such as spintronics, quantum computing, and adiabatic and reversible computing. This thesis therefore highlights quantum computing and adiabatic logic, two new computing paradigms that fall under the umbrella of rebooting computing. Quantum computing is investigated for its promising applications in high-performance computing. The first contribution of this thesis is two resource-efficient designs for quantum integer division: the first is based on the non-restoring division algorithm and the second on the restoring division algorithm. Both designs are compared with and shown to be superior to existing work in terms of T-count and T-depth. The proliferation of low-power IoT devices has also drawn interest to rebooting computing. Hence, the second contribution of this thesis is showing that adiabatic logic is a promising candidate for implementation in IoT devices. The adiabatic logic family called Symmetric Pass Gate Adiabatic Logic (SPGAL) is used to implement the PRESENT-80 lightweight cipher. Adiabatic logic is further extended to emerging transistor devices.
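The restoring division algorithm mentioned above follows a simple shift, subtract, restore loop; a minimal classical sketch of that control flow (the quantum designs implement this reversibly; the function name is hypothetical):

```python
def restoring_divide(dividend: int, divisor: int, n: int):
    """Classical restoring division on n-bit operands."""
    remainder = 0
    quotient = 0
    for i in reversed(range(n)):
        remainder = (remainder << 1) | ((dividend >> i) & 1)  # shift in next bit
        remainder -= divisor                                  # trial subtraction
        if remainder < 0:
            remainder += divisor              # restore on underflow
            quotient = quotient << 1          # quotient bit is 0
        else:
            quotient = (quotient << 1) | 1    # quotient bit is 1
    return quotient, remainder
```

The non-restoring variant avoids the conditional add-back by alternating additions and subtractions, which is one source of the resource differences the thesis compares.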
RESOURCE EFFICIENT DESIGN OF QUANTUM CIRCUITS FOR CRYPTANALYSIS AND SCIENTIFIC COMPUTING APPLICATIONS
Quantum computers offer the potential to extend our abilities to tackle computational problems in fields such as number theory, encryption, search and scientific computation. Up to a superpolynomial speedup has been reported for quantum algorithms in these areas. Motivated by the promise of faster computations, the development of quantum machines has caught the attention of both academics and industry researchers. Quantum machines are now at sizes where implementations of quantum algorithms or their components are becoming possible. In order to implement quantum algorithms on quantum machines, resource-efficient circuits and functional blocks must be designed. In this work, we propose quantum circuits for Galois and integer arithmetic. These quantum circuits are necessary building blocks to realize quantum algorithms.
The design of resource-efficient quantum circuits requires that the designer take into account the gate cost, quantum bit (qubit) cost, depth and garbage outputs of a quantum circuit. Existing quantum machines do not have many qubits, meaning that circuits with high qubit cost cannot be implemented. In addition, quantum circuits are prone to errors, and garbage output removal adds to overall cost. As more gates are used, a quantum circuit sees an increased rate of failure. Failures and error rates can be countered by using quantum error correcting codes and fault-tolerant implementations of universal gate sets (such as Clifford+T gates). However, Clifford+T gates are costly to implement, with the T gate being significantly more costly than the Clifford gates. As a result, designers working with Clifford+T gates seek to minimize the number of T gates (T-count) and the depth of T gates (T-depth). In this work, we propose quantum circuits for Galois and integer arithmetic with lower T-count, T-depth and qubit cost than existing work.
This work presents novel quantum circuits for squaring and exponentiation over binary extension fields (Galois fields of the form GF(2^m)). The proposed circuits are shown to have lower depth, qubit cost and gate cost than existing work. We also present quantum circuits for the core operations of multiplication and division, which enjoy lower T-count, T-depth and qubit costs compared to existing work. This work also illustrates a T-count- and qubit-efficient design for the square root. This work concludes with an illustration of how the arithmetic circuits can be combined into a functional block to implement quantum image processing algorithms.
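Arithmetic over GF(2^m) is carry-less: addition is XOR, and multiplication is reduced modulo an irreducible polynomial. A minimal classical sketch of the operations the circuits above implement (illustrative only, not the proposed quantum designs; function names are hypothetical):

```python
def gf2m_mul(a: int, b: int, poly: int, m: int) -> int:
    """Multiply in GF(2^m); `poly` is the irreducible polynomial
    as a bitmask including the x^m term."""
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a << i              # XOR-accumulate shifted copies (no carries)
    for i in range(2 * m - 2, m - 1, -1):
        if (r >> i) & 1:
            r ^= poly << (i - m)     # reduce high-degree bits modulo poly
    return r

def gf2m_square(a: int, poly: int, m: int) -> int:
    """Squaring over GF(2) is linear: cross terms cancel, so bits of `a`
    simply spread to even positions before reduction."""
    return gf2m_mul(a, a, poly, m)
```

The linearity of squaring is why dedicated squaring circuits can be much cheaper than general multipliers, which is the observation the proposed designs exploit.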
T-COUNT OPTIMIZATION OF QUANTUM CARRY LOOK-AHEAD ADDER
With the emergence of quantum physics and computer science in the 20th century, a new era was born in which very difficult problems can be solved at a much faster rate, along with problems that classical computing simply can't solve. In the 21st century, quantum computing needs to be used to solve tough problems in engineering, business, medicine, and other fields that require results not today but yesterday. To make this dream come true, engineers in the semiconductor industry need to make quantum circuits a reality.
To realize quantum circuits and make them scalable, they need to be fault-tolerant; therefore, Clifford+T gates need to be implemented in those circuits. The main issue is that, in the Clifford+T gate set, T gates are expensive to implement.
Carry Look-Ahead addition circuits have caught the interest of researchers because the number of gate layers encountered by a given qubit in the circuit (the circuit's depth) is logarithmic in the input size n. Therefore, this thesis focuses on optimizing previous designs of out-of-place and in-place Carry Look-Ahead Adders to decrease the T-count, the total number of T and T-Hermitian-transpose (T†) gates in a quantum circuit.
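The carry look-ahead idea computes per-bit generate and propagate signals whose combining step is associative, which is what permits a logarithmic-depth prefix tree. A minimal classical sketch (a sequential scan is shown for clarity; the function name is hypothetical):

```python
def cla_add(a: int, b: int, n: int) -> int:
    """Carry look-ahead addition of two n-bit integers via
    generate/propagate signals."""
    g = a & b            # generate: this position produces a carry
    p = a ^ b            # propagate: this position passes a carry along
    carries = 0
    c = 0
    for i in range(n):   # sequential here; hardware uses an O(log n) prefix tree
        carries |= c << i
        c = ((g >> i) & 1) | (((p >> i) & 1) & c)
    return p ^ carries   # sum bit i is p_i XOR carry-in_i
```

Because c_{i+1} = g_i OR (p_i AND c_i) is associative in (g, p), all carries can be produced by a parallel prefix computation in logarithmic depth, the property the quantum adders above inherit.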
Another Concrete Quantum Cryptanalysis of Binary Elliptic Curves
This paper presents concrete quantum cryptanalysis of binary elliptic curves from a time-efficient implementation perspective (i.e., reducing the circuit depth), complementing the previous research by Banegas et al., which focuses on the space-efficiency perspective (i.e., reducing the circuit width). To achieve the depth optimization, we propose improvements to the existing circuit implementations of the Karatsuba multiplier and FLT-based inversion, then construct and analyze the resources in the Qiskit quantum computer simulator. The proposed multiplier architecture, improving on the quantum Karatsuba multiplier by Van Hoof et al., reduces the depth and yields a lower number of CNOT gates, bounded by O(n^log2(3)), while maintaining a similar number of Toffoli gates and qubits. Furthermore, our improved FLT-based inversion reduces the CNOT count and overall depth, at the cost of a larger qubit count. Finally, we employ the proposed multiplier and FLT-based inversion to perform quantum cryptanalysis of binary point addition as well as the complete Shor's algorithm for the elliptic curve discrete logarithm problem (ECDLP). As a result, apart from the depth reduction, we are also able to reduce the Toffoli gate count in a single-step point addition by up to 90% compared to prior work, leading to significant improvements and giving new insights on quantum cryptanalysis for depth-optimized implementations.
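The O(n^log2(3)) bound comes from Karatsuba's trick of trading four half-size products for three. A classical carry-less (GF(2)[x]) sketch of the recursion, not the paper's quantum circuit (function names hypothetical):

```python
def clmul(a: int, b: int) -> int:
    """Schoolbook carry-less multiplication over GF(2)[x]; the base case."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_clmul(a: int, b: int, n: int) -> int:
    """Karatsuba over GF(2)[x]: three half-size products instead of four,
    giving the O(n^log2(3)) operation count."""
    if n <= 8:
        return clmul(a, b)
    h = n // 2
    mask = (1 << h) - 1
    a0, a1 = a & mask, a >> h
    b0, b1 = b & mask, b >> h
    lo = karatsuba_clmul(a0, b0, h)
    hi = karatsuba_clmul(a1, b1, n - h)
    # Over GF(2), addition and subtraction are both XOR.
    mid = karatsuba_clmul(a0 ^ a1, b0 ^ b1, n - h) ^ lo ^ hi
    return lo ^ (mid << h) ^ (hi << (2 * h))
```

In the quantum setting the XOR accumulations become CNOT gates and the partial products become Toffoli gates, which is why the recursion bounds the CNOT count while leaving the Toffoli count essentially unchanged.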
Improving Quantum Circuit Synthesis with Machine Learning
In the Noisy Intermediate-Scale Quantum (NISQ) era, finding implementations of quantum algorithms that minimize the number of expensive and error-prone multi-qubit gates is vital to ensure computations produce meaningful outputs. Unitary synthesis, the process of finding a quantum circuit that implements some target unitary matrix, is able to solve this problem optimally in many cases. However, current bottom-up unitary synthesis algorithms are limited by their exponentially growing run times. We show how applying machine learning to unitary datasets permits drastic speedups for synthesis algorithms. This paper presents QSeed, a seeded synthesis algorithm that employs a learned model to quickly propose resource-efficient circuit implementations of unitaries. QSeed maintains low gate counts and offers a speedup in synthesis time over the state of the art for a 64-qubit modular exponentiation circuit, a core component in Shor's factoring algorithm. QSeed's performance improvements also generalize to families of circuits not seen during the training process.
Comment: 11 pages, 10 figures
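Unitary synthesis accepts a candidate circuit once its matrix is close enough to the target. A minimal sketch of such an acceptance test, using a phase-invariant normalized Hilbert-Schmidt distance (illustrative only, not QSeed's actual code):

```python
def hs_distance(u, v):
    """Normalized Hilbert-Schmidt distance between two d x d matrices:
    1 - |tr(u† v)| / d, which is 0 when u and v agree up to global phase."""
    d = len(u)
    tr = sum(u[i][j].conjugate() * v[i][j] for i in range(d) for j in range(d))
    return 1 - abs(tr) / d

# 2 x 2 examples: the Hadamard gate versus itself and versus the identity.
s = 1 / 2 ** 0.5
H = [[s, s], [s, -s]]
I = [[1, 0], [0, 1]]
```

A synthesis loop would keep extending a candidate circuit until this distance falls below a tolerance; seeding that search from a learned model is what lets QSeed skip most of the exponential exploration.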