15,949 research outputs found
Optimized Surface Code Communication in Superconducting Quantum Computers
Quantum computing (QC) is at the cusp of a revolution. Machines with 100
quantum bits (qubits) are anticipated to be operational by 2020
[googlemachine,gambetta2015building], and several-hundred-qubit machines are
around the corner. Machines of this scale have the capacity to demonstrate
quantum supremacy, the tipping point where QC is faster than the fastest
classical alternative for a particular problem. Because error correction
techniques will be central to QC and will be the most expensive component of
quantum computation, choosing the lowest-overhead error correction scheme is
critical to overall QC success. This paper evaluates two established quantum
error correction codes---planar and double-defect surface codes---using a set
of compilation, scheduling, and network simulation tools. We consider scalable
methods for optimizing both codes in the context of a full microarchitectural
and compiler analysis. Contrary to previous predictions, we
find that the simpler planar codes are sometimes more favorable for
implementation on superconducting quantum computers, especially under
conditions of high communication congestion.
Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture
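The overhead argument this abstract turns on can be made concrete with the standard surface-code scaling model. The sketch below is not from the paper; it assumes the commonly used logical-error heuristic p_L ≈ A(p/p_th)^((d+1)/2) and a rough planar-code cost of about 2d^2 physical qubits per logical qubit, both illustrative assumptions.

```python
# Back-of-the-envelope surface-code overhead estimate.
# Illustrative parameters only; not taken from the paper.
P_TH = 1e-2  # assumed threshold error rate of the surface code
A = 0.1      # assumed prefactor in the logical-error heuristic

def logical_error_rate(p_phys: float, d: int) -> float:
    """Common heuristic: p_L ~ A * (p/p_th)^((d+1)/2)."""
    return A * (p_phys / P_TH) ** ((d + 1) // 2)

def distance_for_target(p_phys: float, p_target: float) -> int:
    """Smallest odd code distance whose estimated p_L meets the target."""
    d = 3
    while logical_error_rate(p_phys, d) > p_target:
        d += 2
    return d

d = distance_for_target(p_phys=1e-3, p_target=1e-12)
# Planar code: roughly 2*d^2 physical qubits per logical qubit.
print(f"d = {d}, ~{2 * d * d} physical qubits per logical qubit")
```

Under these assumptions a single logical qubit already costs hundreds of physical qubits, which is why the choice between planar and double-defect layouts matters so much for overall machine size.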
Limits on Fundamental Limits to Computation
An indispensable part of our lives, computing has also become essential to
industries and governments. Steady improvements in computer hardware have been
supported by periodic doubling of transistor densities in integrated circuits
over the last fifty years. Such Moore scaling now requires increasingly heroic
efforts, stimulating research in alternative hardware and stirring controversy.
To help evaluate emerging technologies and enrich our understanding of
integrated-circuit scaling, we review fundamental limits to computation: in
manufacturing, energy, physical space, design and verification effort, and
algorithms. To outline what is achievable in principle and in practice, we
recall how some limits were circumvented and compare loose and tight limits. We
also point out that engineering difficulties encountered by emerging
technologies may indicate yet-unknown limits.
Comment: 15 pages, 4 figures, 1 table
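Among the energy limits such surveys cover, Landauer's bound is the easiest to state quantitatively: erasing one bit dissipates at least kT ln 2. A quick check at room temperature (a standard textbook calculation, not a figure from the article):

```python
import math

k_B = 1.380649e-23  # Boltzmann constant, J/K
T = 300.0           # room temperature, K

# Landauer's principle: minimum energy to erase one bit.
e_min = k_B * T * math.log(2)
print(f"Landauer limit at {T:.0f} K: {e_min:.3e} J per bit")
# ~2.9e-21 J, many orders of magnitude below the switching energy
# of today's transistors, so this particular limit is loose in practice.
```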
Noise-Adaptive Compiler Mappings for Noisy Intermediate-Scale Quantum Computers
A massive gap exists between current quantum computing (QC) prototypes and
the size and scale required for many proposed QC algorithms. Current QC
implementations are prone to noise and variability, which affect their
reliability, and yet, with fewer than 80 quantum bits (qubits) in total, they
are too resource-constrained to implement error correction. The term Noisy
Intermediate-Scale Quantum (NISQ) refers to these current and near-term systems
of 1000 qubits or fewer. Given NISQ's severe resource constraints, low
reliability, and high variability in physical characteristics such as coherence
time or error rates, it is of pressing importance to map computations onto them
in ways that use resources efficiently and maximize the likelihood of
successful runs.
This paper proposes and evaluates backend compiler approaches to map and
optimize high-level QC programs to execute with high reliability on NISQ
systems with diverse hardware characteristics. Our techniques all start from an
LLVM intermediate representation of the quantum program (such as would be
generated from high-level QC languages like Scaffold) and generate QC
executables runnable on the IBM Q public QC machine. We then use this framework
to implement and evaluate several optimal and heuristic mapping methods. These
methods vary in how they account for the availability of dynamic machine
calibration data, the relative importance of various noise parameters, the
different possible routing strategies, and the relative importance of
compile-time scalability versus runtime success. Using real-system
measurements, we show that fine-grained spatial and temporal variations in
hardware parameters can be exploited to obtain an average 2.9x (and up to
18x) improvement in program success rate over the industry-standard IBM
Qiskit compiler.
Comment: To appear in ASPLOS'19
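As a simplified illustration of noise-adaptive mapping (a sketch of the general idea, not the paper's algorithm), per-edge gate error rates from a machine's daily calibration data can drive a reliability-aware router: running Dijkstra on edge weights of -log(1 - error) makes the shortest path the one with the highest estimated success probability.

```python
import heapq
import math

# Hypothetical calibration snapshot: (qubit_a, qubit_b) -> two-qubit
# gate error rate. Real systems publish such data; these values are made up.
cnot_error = {
    (0, 1): 0.012, (1, 2): 0.034, (2, 3): 0.009,
    (0, 4): 0.021, (4, 3): 0.015,
}

def most_reliable_route(src: int, dst: int):
    """Dijkstra on -log(1 - error): minimizing the sum of weights
    maximizes the product of per-edge success probabilities."""
    adj = {}
    for (a, b), e in cnot_error.items():
        w = -math.log(1.0 - e)
        adj.setdefault(a, []).append((b, w))
        adj.setdefault(b, []).append((a, w))
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, w in adj.get(u, []):
            if d + w < dist.get(v, math.inf):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1], math.exp(-dist[dst])

path, p = most_reliable_route(0, 3)
print(path, f"estimated success ~ {p:.3f}")  # prefers the low-error route
```

The same weighting idea extends from routing to initial placement: put the busiest program qubits on the physical qubits and links with the best calibration numbers.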
A survey of carbon nanotube interconnects for energy efficient integrated circuits
This article reviews the state of the art in carbon nanotube interconnects for silicon applications, with respect to the recent literature. Among all the research on carbon nanotube interconnects, the work discussed here covers 1) challenges with current copper interconnects, 2) process and growth of carbon nanotube interconnects compatible with back-end-of-line integration, and 3) modeling and simulation for circuit-level benchmarking and performance prediction. The focus is on the evolution of carbon nanotube interconnects from process, theoretical modeling, and experimental characterization to on-chip interconnect applications. We provide an overview of current advancements in carbon nanotube interconnects and of the prospects for designing energy-efficient integrated circuits. Each selected category is presented in an accessible manner, aiming to serve as a survey and informative cornerstone on carbon nanotube interconnects for students and scientists across a range of fields, from physics and processing to circuit design.
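For a flavor of the circuit-level modeling such surveys discuss, the sketch below uses the widely cited ballistic-plus-scattering resistance model for a metallic single-walled nanotube, R ≈ (h/4e²)(1 + L/λ), divided across the metallic tubes of a bundle. All parameter values are illustrative assumptions, not numbers from this article.

```python
# Simple CNT-bundle resistance estimate (illustrative parameters only).
R_Q = 6.45e3   # quantum resistance h/4e^2 of one SWCNT, ~6.45 kOhm
MFP = 1.0e-6   # assumed electron mean free path, ~1 um

def swcnt_resistance(length_m: float) -> float:
    """Ballistic quantum resistance plus a diffusive scattering term."""
    return R_Q * (1.0 + length_m / MFP)

def bundle_resistance(length_m: float, n_metallic: int) -> float:
    """Parallel combination of n identical metallic tubes in a bundle."""
    return swcnt_resistance(length_m) / n_metallic

for L_um in (1, 10, 100):
    r = bundle_resistance(L_um * 1e-6, n_metallic=100)
    print(f"{L_um:>3} um bundle: ~{r / 1e3:.2f} kOhm")
```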
Smart Asset Management for Electric Utilities: Big Data and Future
This paper discusses future challenges in terms of big data and new
technologies. Utilities have been collecting data in large amounts, but the
data are rarely utilized because of their sheer volume and the uncertainty
associated with them. Condition monitoring of assets collects large amounts of
data during daily operations. The question arises: how can information be
extracted from such large chunks of data? The notion of "rich data and poor
information" is being challenged by big data analytics and the advent of
machine learning techniques. Along with technological advancements such as the
Internet of Things (IoT), big data analytics will play an important role for
electric utilities. In this paper, these challenges are answered with pathways
and guidelines to make current asset management practices smarter for the future.
Comment: 13 pages, 3 figures, Proceedings of the 12th World Congress on Engineering Asset Management (WCEAM) 2017
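As one hedged example of the machine-learning analytics the paper gestures at (not a method from the paper itself), condition-monitoring streams can be screened for anomalies with off-the-shelf tools such as scikit-learn's IsolationForest:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Synthetic condition-monitoring data: say, transformer oil temperature
# (deg C) and per-unit load, sampled hourly. Values are made up.
rng = np.random.default_rng(0)
normal = rng.normal(loc=[65.0, 0.70], scale=[2.0, 0.05], size=(1000, 2))
faults = rng.normal(loc=[90.0, 0.95], scale=[3.0, 0.02], size=(10, 2))
readings = np.vstack([normal, faults])

# Isolation forests flag points that are easy to isolate, i.e., outliers.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(readings)  # -1 = anomaly, 1 = normal
print(f"flagged {int(np.sum(labels == -1))} of {len(readings)} readings")
```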
ARM2GC: Succinct Garbled Processor for Secure Computation
We present ARM2GC, a novel secure computation framework based on Yao's
Garbled Circuit (GC) protocol and the ARM processor. It allows users to develop
privacy-preserving applications using standard high-level programming languages
(e.g., C) and compile them using off-the-shelf ARM compilers (e.g., gcc-arm).
The main enabler of this framework is the introduction of SkipGate, an
algorithm that dynamically omits the communication and encryption cost of the
gates whose outputs are independent of the private data. SkipGate greatly
enhances the performance of ARM2GC by omitting costs of the gates associated
with the instructions of the compiled binary, which is known by both parties
involved in the computation. Our evaluation on benchmark functions demonstrates
that ARM2GC not only outperforms current GC frameworks that support
high-level languages but also achieves efficiency comparable to the best prior
solutions based on hardware description languages. Moreover, in contrast to
previous high-level frameworks with domain-specific languages and customized
compilers, ARM2GC relies on the standard ARM compiler, which is rigorously
verified and supports programs written in standard syntax.
Comment: 13 pages
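To make the SkipGate idea concrete, here is a heavily simplified sketch (the modeling choices are assumptions for illustration, not the paper's code): wires carry either a publicly known constant or a secret label, and any gate whose output is already determined by public values is resolved locally for free, so only the remaining gates incur garbling and communication cost.

```python
# Simplified gate skipping in a garbled-circuit evaluator (illustrative).
# A wire value is a public constant 0/1, or None meaning "secret".
PUBLIC_0, PUBLIC_1, SECRET = 0, 1, None

def and_gate(a, b):
    """Return the public output if it is forced by public inputs;
    otherwise None, meaning the gate must actually be garbled."""
    if a == PUBLIC_0 or b == PUBLIC_0:
        return PUBLIC_0           # AND with a public 0 is always 0
    if a == PUBLIC_1 and b == PUBLIC_1:
        return PUBLIC_1
    return SECRET                 # output depends on private data

def count_garbled(gates, wires):
    """Propagate public values through (out, in1, in2) AND gates and
    count how many still need garbling and communication."""
    garbled = 0
    for out, in1, in2 in gates:
        wires[out] = and_gate(wires[in1], wires[in2])
        if wires[out] is SECRET:
            garbled += 1
    return garbled

# Toy circuit: w2 = w0 AND w1, w3 = w2 AND w1. With w0 publicly 0
# (e.g., a bit fixed by the known binary), both gates can be skipped.
wires = {0: PUBLIC_0, 1: SECRET}
print(count_garbled([(2, 0, 1), (3, 2, 1)], wires))  # -> 0 garbled gates
```

Because both parties know the compiled binary, many control-path bits are public in exactly this way, which is where the reported savings come from.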