Optimized Surface Code Communication in Superconducting Quantum Computers
Quantum computing (QC) is at the cusp of a revolution. Machines with 100
quantum bits (qubits) are anticipated to be operational by 2020
[googlemachine,gambetta2015building], and several-hundred-qubit machines are
around the corner. Machines of this scale have the capacity to demonstrate
quantum supremacy, the tipping point where QC is faster than the fastest
classical alternative for a particular problem. Because error correction
techniques will be central to QC and will be the most expensive component of
quantum computation, choosing the lowest-overhead error correction scheme is
critical to overall QC success. This paper evaluates two established quantum
error correction codes---planar and double-defect surface codes---using a set
of compilation, scheduling, and network-simulation tools. We consider scalable
methods for optimizing both codes in the context of a full microarchitectural
and compiler analysis. Contrary to previous predictions, we
find that the simpler planar codes are sometimes more favorable for
implementation on superconducting quantum computers, especially under
conditions of high communication congestion.
Comment: 14 pages, 9 figures, The 50th Annual IEEE/ACM International Symposium on Microarchitecture
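As a rough illustration of why the choice of error-correction scheme dominates overall cost, the following sketch uses the standard textbook scaling of surface-code logical error rates; the threshold, prefactor, and qubit-count formula are illustrative assumptions, not figures from the paper.

```python
# Illustrative surface-code overhead estimate. The threshold p_th, the
# prefactor A, and the patch qubit-count formula are textbook-style
# assumptions, not numbers taken from the paper.

def logical_error_rate(p, p_th, d, A=0.1):
    """Approximate logical error rate of a distance-d surface code."""
    return A * (p / p_th) ** ((d + 1) / 2)

def min_distance(p, p_th, target):
    """Smallest odd code distance whose logical error rate meets `target`."""
    d = 3
    while logical_error_rate(p, p_th, d) > target:
        d += 2
    return d

def planar_patch_qubits(d):
    """Physical qubits for one planar patch: d*d data + d*d - 1 syndrome."""
    return 2 * d * d - 1

d = min_distance(p=1e-3, p_th=1e-2, target=1e-12)
print(f"distance {d}: {planar_patch_qubits(d)} physical qubits per logical qubit")
```

The hundreds of physical qubits per logical qubit that fall out of even this crude estimate are what makes the paper's overhead comparison between planar and double-defect layouts consequential.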
PopCORN: Hunting down the differences between binary population synthesis codes
Binary population synthesis (BPS) modelling is a very effective tool to study
the evolution and properties of close binary systems. The uncertainty in the
parameters of the model and their effect on a population can be tested in a
statistical way, which then leads to a deeper understanding of the underlying
physical processes involved. To understand the predictive power of BPS codes,
we study the similarities and differences in the predicted populations of four
different BPS codes for low- and intermediate-mass binaries. We investigate
whether the differences are caused by different assumptions made in the BPS
codes or by numerical effects. To simplify the complex problem of comparing BPS
codes, we equalise the inherent assumptions as much as possible. We find that
the simulated populations are similar between the codes. Regarding the
population of binaries with one WD, there is very good agreement between the
physical characteristics, the evolutionary channels that lead to the birth of
these systems, and their birthrates. Regarding the double WD population, there
is a good agreement on which evolutionary channels exist to create double WDs
and a rough agreement on the characteristics of the double WD population.
Regarding which progenitor systems lead to a single and double WD system and
which systems do not, the four codes agree well. Most importantly, we find that
for these two populations, the differences in the predictions from the four
codes are not due to numerical differences, but because of different inherent
assumptions. We identify critical assumptions for BPS studies that need to be
studied in more detail.
Comment: 13 pages + 21 pages appendix, 35 figures, accepted for publication in A&A. Minor changes to match the published version, most importantly the added link to the website http://www.astro.ru.nl/~silviato/popcorn for more detailed figures and information
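The comparison methodology, one shared initial population with assumptions equalised and then varied, can be caricatured in a few lines; the sampling distributions, the common-envelope efficiency parameter, and the selection cuts below are invented for illustration and bear no relation to the four codes' actual physics.

```python
import random

def sample_population(n, seed):
    """Crude initial binary population: primary mass (power-law, IMF-like)
    and orbital separation (log-uniform), in arbitrary units."""
    rng = random.Random(seed)
    return [(rng.paretovariate(1.35) * 0.8, 10 ** rng.uniform(0, 4))
            for _ in range(n)]

def count_double_wd(pop, ce_efficiency):
    """Count systems a toy 'code' predicts to become double white dwarfs;
    the common-envelope efficiency is the one varied inherent assumption."""
    return sum(1 for mass, sep in pop
               if mass < 8.0 and 1.0 < sep * ce_efficiency < 1000.0)

pop = sample_population(10_000, seed=42)
# Same population, same numerics -- only the inherent assumption differs.
print(count_double_wd(pop, ce_efficiency=1.0),
      count_double_wd(pop, ce_efficiency=0.25))
```

Running the same "code" twice on the same population is bit-identical, which mirrors the paper's finding that the prediction spread comes from inherent assumptions rather than numerical effects.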
Single integrated device for optical CDMA code processing in dual-code environment
We report on the design, fabrication and performance of a matching integrated optical CDMA encoder-decoder pair based on holographic Bragg reflector technology. Simultaneous encoding/decoding operation of two multiple wavelength-hopping time-spreading codes was successfully demonstrated and shown to support two error-free OCDMA links at OC-24. A double-pass scheme was employed in the devices to enable the use of longer code lengths.
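The dual-code operation rests on the correlation properties of 2-D wavelength-hopping time-spreading codes, which a few lines can illustrate; the chip patterns below are made up, since the actual codes are fixed in the holographic Bragg gratings.

```python
def correlate(encoder_code, decoder_code):
    """Number of (wavelength, chip-slot) positions the two codes share:
    the decoder's output peak for that encoder."""
    return len(set(encoder_code) & set(decoder_code))

# Two illustrative wavelength-hopping time-spreading codes:
# each chip is a (wavelength index, time-chip slot) pair.
code_a = {(0, 1), (1, 4), (2, 6), (3, 2)}
code_b = {(0, 3), (1, 0), (2, 5), (3, 7)}

print(correlate(code_a, code_a))  # matched pair: full autocorrelation peak
print(correlate(code_a, code_b))  # the other user's code: no shared chips
```

A matched decoder realigns all of its own chips into one peak, while the second user's code contributes only low cross-correlation, which is what lets two such links share the fiber in the dual-code environment.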
ELSI: A Unified Software Interface for Kohn-Sham Electronic Structure Solvers
Solving the electronic structure from a generalized or standard eigenproblem
is often the bottleneck in large scale calculations based on Kohn-Sham
density-functional theory. This problem must be addressed by essentially all
current electronic structure codes, which are based on similar matrix
expressions and rely on high-performance computation. Here we present a unified software interface,
ELSI, to access different strategies that address the Kohn-Sham eigenvalue
problem. Currently supported algorithms include the dense generalized
eigensolver library ELPA, the orbital minimization method implemented in
libOMM, and the pole expansion and selected inversion (PEXSI) approach with
lower computational complexity for semilocal density functionals. The ELSI
interface aims to simplify the implementation and optimal use of the different
strategies, by offering (a) a unified software framework designed for the
electronic structure solvers in Kohn-Sham density-functional theory; (b)
reasonable default parameters for a chosen solver; (c) automatic conversion
between input and internal working matrix formats, and in the future (d)
recommendation of the optimal solver depending on the specific problem.
Comparative benchmarks are shown for system sizes up to 11,520 atoms (172,800
basis functions) on distributed-memory supercomputing architectures.
Comment: 55 pages, 14 figures, 2 tables
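The interface idea, one entry point with solver selection, sensible defaults, and automatic matrix-format conversion behind it, can be sketched in a few lines of Python; the function name and dispatch are illustrative, as ELSI's actual API is a Fortran/C library interface.

```python
import numpy as np

def solve_kohn_sham(H, S=None, solver="dense"):
    """Return the eigenvalues of H c = e S c (or H c = e c if S is None),
    dispatching to a chosen back-end with sensible defaults."""
    if S is not None:
        # Automatic conversion: Cholesky-reduce the generalized problem
        # to standard form; the eigenvalues are unchanged.
        L = np.linalg.cholesky(S)
        Linv = np.linalg.inv(L)
        H = Linv @ H @ Linv.T.conj()
    if solver == "dense":
        # Dense diagonalization, standing in for an ELPA-style solver.
        return np.linalg.eigvalsh(H)
    raise NotImplementedError(f"solver '{solver}' is not part of this sketch")
```

Point (d), recommending the optimal solver automatically, would hang off the same entry point, for instance by inspecting matrix size and sparsity before dispatching.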
Parafermions in a Kagome lattice of qubits for topological quantum computation
Engineering complex non-Abelian anyon models with simple physical systems is
crucial for topological quantum computation. Unfortunately, the simplest
systems are typically restricted to Majorana zero modes (Ising anyons). Here we
go beyond this barrier, showing that the parafermion model of
non-Abelian anyons can be realized on a qubit lattice. Our system additionally
contains Abelian anyons as low-energy excitations. We
show that braiding these parafermions with each other and with the Abelian
anyons allows the entire Clifford group to be
generated. The error correction problem for our model is also studied in
detail, guaranteeing fault-tolerance of the topological operations. Crucially,
since the non-Abelian anyons are engineered through defect lines rather than as
excitations, non-Abelian error correction is not required. Instead the error
correction problem is performed on the underlying Abelian model, allowing high
noise thresholds to be realized.
Comment: 11+10 pages, 14 figures; v2: accepted for publication in Phys. Rev. X; 4 new figures, performance of phase gate explained in more detail
A portable platform for accelerated PIC codes and its application to GPUs using OpenACC
We present a portable platform, called PIC_ENGINE, for accelerating
Particle-In-Cell (PIC) codes on heterogeneous many-core architectures such as
Graphic Processing Units (GPUs). The aim of this development is to enable
efficient simulations on future exascale systems by allowing different
parallelization strategies depending on the application problem and the specific architecture.
To this end, this platform contains the basic steps of the PIC algorithm and
has been designed as a test bed for different algorithmic options and data
structures. Among the architectures that this engine can explore, particular
attention is given here to systems equipped with GPUs. The study demonstrates
that our portable PIC implementation based on the OpenACC programming model can
achieve performance closely matching theoretical predictions. Using the Cray
XC30 system, Piz Daint, at the Swiss National Supercomputing Centre (CSCS), we
show that PIC_ENGINE running on an NVIDIA Kepler K20X GPU outperforms the same
code on an 8-core Intel Sandy Bridge CPU by a factor of 3.4.
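The "basic steps of the PIC algorithm" that the platform encapsulates are deposit, field solve, gather, and push. A 1-D periodic toy version, using nearest-grid-point weighting and an Euler push and therefore far simpler than PIC_ENGINE's actual kernels, looks like:

```python
import numpy as np

def pic_step(x, v, q_over_m, ngrid, length, dt):
    """One toy 1-D electrostatic PIC step on a periodic domain."""
    dx = length / ngrid
    cell = (x / dx).astype(int) % ngrid
    # 1) Deposit: nearest-grid-point charge deposition onto the grid.
    rho = np.bincount(cell, minlength=ngrid).astype(float)
    rho -= rho.mean()  # neutralizing background charge
    # 2) Field solve: E_k = rho_k / (i k) via FFT (Gauss's law, periodic).
    k = 2 * np.pi * np.fft.fftfreq(ngrid, d=dx)
    k[0] = 1.0  # dummy value; the k = 0 mode carries no field
    E = np.fft.ifft(np.fft.fft(rho) / (1j * k)).real
    E -= E.mean()
    # 3) Gather: interpolate the grid field back to the particles.
    Ep = E[cell]
    # 4) Push: update velocities and positions (Euler here; real codes leapfrog).
    v = v + q_over_m * Ep * dt
    x = (x + v * dt) % length
    return x, v
```

Each of the four steps stresses memory differently (scatter in deposit, gather in interpolation), which is precisely why a test bed for alternative data structures and parallelization strategies is useful on GPUs.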
Transformations of High-Level Synthesis Codes for High-Performance Computing
Specialized hardware architectures promise a major step in performance and
energy efficiency over the traditional load/store devices currently employed in
large scale computing systems. The adoption of high-level synthesis (HLS) from
languages such as C/C++ and OpenCL has greatly increased programmer
productivity when designing for such platforms. While this has enabled a wider
audience to target specialized hardware, the optimization principles known from
traditional software design are no longer sufficient to implement
high-performance codes. Fast and efficient codes for reconfigurable platforms
are thus still challenging to design. To alleviate this, we present a set of
optimizing transformations for HLS, targeting scalable and efficient
architectures for high-performance computing (HPC) applications. Our work
provides a toolbox for developers, where we systematically identify classes of
transformations, the characteristics of their effect on the HLS code and the
resulting hardware (e.g., increases data reuse or resource consumption), and
the objectives that each transformation can target (e.g., resolve interface
contention, or increase parallelism). We show how these can be used to
efficiently exploit pipelining, on-chip distributed fast memory, and on-chip
streaming dataflow, allowing for massively parallel architectures. To quantify
the effect of our transformations, we use them to optimize a set of
throughput-oriented FPGA kernels, demonstrating that our enhancements are
sufficient to scale up parallelism within the hardware constraints. With the
transformations covered, we hope to establish a common framework for
performance engineers, compiler developers, and hardware developers, to tap
into the performance potential offered by specialized hardware architectures
using HLS.
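One of the transformation classes above, increasing on-chip data reuse so a loop can pipeline, has a language-independent core, sketched here in Python; HLS would express the same rewrite in C/C++ with pragmas, and the stencil itself is an invented example.

```python
def stencil_naive(a):
    """Three overlapping reads of `a` per output: redundant memory traffic."""
    return [a[i - 1] + a[i] + a[i + 1] for i in range(1, len(a) - 1)]

def stencil_reuse(a):
    """One read per element; a 3-element shift register supplies the reuse,
    so each pipeline iteration needs a single new memory access."""
    out, win = [], [0, 0, 0]
    for i, x in enumerate(a):
        win = [win[1], win[2], x]  # shift in the single new read
        if i >= 2:
            out.append(sum(win))
    return out

print(stencil_naive([1, 2, 3, 4, 5]))
print(stencil_reuse([1, 2, 3, 4, 5]))
```

The rewrite trades a small amount of on-chip buffer (the shift register) for a threefold reduction in reads, the kind of reuse-versus-resource trade-off the transformation toolbox is meant to make systematic.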
Algebraic Hybrid Satellite-Terrestrial Space-Time Codes for Digital Broadcasting in SFN
Lately, different methods for broadcasting future digital TV in a single
frequency network (SFN) have been under intensive study. To improve the
transmission to also cover suburban and rural areas, a hybrid scheme may be
used. In hybrid transmission, the signal is transmitted both from a satellite
and from a terrestrial site. In 2008, Y. Nasser et al. proposed to use a double
layer 3D space-time (ST) code in the hybrid 4 x 2 MIMO transmission of digital
TV. In this paper, alternative codes with simpler structure are proposed for
the 4 x 2 hybrid system, and new codes are constructed for the 3 x 2 system.
The performance of the proposed codes is analyzed through computer simulations,
showing a significant improvement over simple repetition schemes. The proposed
codes prove in addition to be very robust in the presence of power imbalance
between the two sites.
Comment: 14 pages, 2 figures, submitted to ISIT 201
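For readers unfamiliar with ST codes, the classic 2x1 Alamouti scheme, a standard textbook code and not one of the paper's proposed 4x2 or 3x2 constructions, shows the basic idea: symbols are spread across antennas and time slots so that a simple linear combiner recovers them with full transmit diversity.

```python
import numpy as np

def alamouti_encode(s1, s2):
    """2x2 codeword: rows are time slots, columns are transmit antennas."""
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

def alamouti_decode(y, h):
    """Linear combining at a single receive antenna with channel gains h."""
    h1, h2 = h
    s1 = np.conj(h1) * y[0] + h2 * np.conj(y[1])
    s2 = np.conj(h2) * y[0] - h1 * np.conj(y[1])
    return np.array([s1, s2]) / (abs(h1) ** 2 + abs(h2) ** 2)

h = np.array([0.8 + 0.3j, -0.5 + 0.9j])   # flat-fading channel gains
s = np.array([1 + 1j, -1 + 0j])           # two transmitted symbols
y = alamouti_encode(*s) @ h               # noiseless reception over two slots
print(alamouti_decode(y, h))
```

Because the combiner output scales with |h1|^2 + |h2|^2, a deep fade on one path (for example, the satellite-terrestrial power imbalance mentioned above) does not by itself destroy the link, which is the robustness property structured ST codes provide over simple repetition.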