Optimized Surface Code Communication in Superconducting Quantum Computers
Quantum computing (QC) is at the cusp of a revolution. Machines with 100
quantum bits (qubits) are anticipated to be operational by 2020
[googlemachine,gambetta2015building], and several-hundred-qubit machines are
around the corner. Machines of this scale have the capacity to demonstrate
quantum supremacy, the tipping point where QC is faster than the fastest
classical alternative for a particular problem. Because error correction
techniques will be central to QC and will be the most expensive component of
quantum computation, choosing the lowest-overhead error correction scheme is
critical to overall QC success. This paper evaluates two established quantum
error correction codes---planar and double-defect surface codes---using a set
of compilation, scheduling and network simulation tools. In considering
scalable methods for optimizing both codes, we do so in the context of a full
microarchitectural and compiler analysis. Contrary to previous predictions, we
find that the simpler planar codes are sometimes more favorable for
implementation on superconducting quantum computers, especially under
conditions of high communication congestion.
Comment: 14 pages, 9 figures. The 50th Annual IEEE/ACM International Symposium on Microarchitecture
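The overhead comparison above hinges on how the required code distance scales with the target logical error rate. A minimal sketch of that scaling, assuming the common heuristic p_L ≈ prefactor · (p/p_th)^((d+1)/2) and a rough 2d² physical-qubit count for a planar patch (illustrative numbers, not the paper's models):

```python
def min_code_distance(p_phys, p_target, p_th=1e-2, prefactor=0.1):
    """Smallest odd code distance d with
    prefactor * (p_phys / p_th) ** ((d + 1) / 2) <= p_target."""
    d = 3
    while prefactor * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def planar_qubits(d):
    # A distance-d planar surface-code patch needs roughly 2 * d**2 physical
    # qubits (data plus measurement qubits); exact counts depend on the layout.
    return 2 * d * d

# Hypothetical operating point: physical error rate 1e-3, threshold 1e-2.
d = min_code_distance(p_phys=1e-3, p_target=5e-11)
print(d, planar_qubits(d))
```

The point of the sketch is the steep payoff: each two-step increase in distance multiplies the logical error suppression by another factor of p/p_th, while the qubit cost grows only quadratically.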
Radiation safety based on the sky shine effect in reactor
In reactor operation, neutrons and gamma rays are the dominant types of radiation.
As protection, lead and concrete shields are built around the reactor. However, radiation
can still escape through the water shielding inside the reactor pool. This gives rise to
sky shine, a physical phenomenon in which radiation from a nuclear source is scattered by
the atmosphere and spread over the surrounding environment. This scattered radiation falls
out into the surrounding area and increases the local radiation dose. High
doses of exposure can cause stochastic or deterministic effects in a person. Therefore,
this study was conducted to measure the radiation dose from the sky shine effect scattered
around the reactor at different distances and at different heights above the reactor platform. In this
paper, the radiation dose from the sky shine effect was measured using the
experimental method
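The distance dependence of such measurements can be sanity-checked against a simple point-source model. A minimal sketch, assuming inverse-square falloff from an effective scatter source plus optional exponential attenuation in air (a simplification for illustration, not the paper's experimental method):

```python
import math

def skyshine_dose_rate(d0, r0, r, mu_air=None):
    """Scale a dose rate d0 measured at reference distance r0 (metres) to
    distance r, assuming a point-like effective scatter source
    (inverse-square law). Optionally apply exponential attenuation in air
    with linear coefficient mu_air (1/m)."""
    rate = d0 * (r0 / r) ** 2
    if mu_air is not None:
        rate *= math.exp(-mu_air * (r - r0))
    return rate

# Hypothetical reading: 2.0 uSv/h at 10 m, extrapolated to 50 m.
print(skyshine_dose_rate(2.0, 10.0, 50.0))
```

Real sky shine geometry is more complex (an extended overhead scatter volume rather than a point), which is why measurements at several distances and heights, as in the study, are needed to fit an effective model.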
APEnet+: high bandwidth 3D torus direct network for petaflops scale commodity clusters
We describe herein the APElink+ board, a PCIe interconnect adapter featuring
the latest advances in wire speed and interface technology plus hardware
support for a RDMA programming model and experimental acceleration of GPU
networking; this design allows us to build a low latency, high bandwidth PC
cluster, the APEnet+ network, the new generation of our cost-effective,
tens-of-thousands-scalable cluster network architecture. Some test results and
characterization of data transmission of a complete testbench, based on a
commercial development card mounting an Altera FPGA, are provided.
Comment: 6 pages, 7 figures, proceedings of CHEP 2010, Taiwan, October 18-2
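Basic topology figures for a 3D torus follow directly from its dimensions. A small sketch, using a hypothetical 8x8x8 (512-node) configuration rather than any specific APEnet+ deployment:

```python
def torus_max_hops(dims):
    """Worst-case hop count in a torus: sum over dimensions of floor(k/2),
    since wrap-around links halve the longest path in each dimension."""
    return sum(k // 2 for k in dims)

def torus_bisection_links(dims):
    """Links crossing a bisection of the first (even-sized) dimension:
    two cut planes, each of size equal to the product of the other dims."""
    other = 1
    for k in dims[1:]:
        other *= k
    return 2 * other

dims = (8, 8, 8)  # hypothetical 512-node machine, not a measured APEnet+ system
print(torus_max_hops(dims), torus_bisection_links(dims))
```

The wrap-around links are what keep the network diameter low as the node count grows, which is central to the "tens-of-thousands-scalable" claim.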
Self-stabilising Byzantine Clock Synchronisation is Almost as Easy as Consensus
We give fault-tolerant algorithms for establishing synchrony in distributed
systems in which each of the n nodes has its own clock. Our algorithms
operate in a very strong fault model: we require self-stabilisation, i.e., the
initial state of the system may be arbitrary, and there can be up to f < n/3
ongoing Byzantine faults, i.e., nodes that deviate from the protocol in an
arbitrary manner. Furthermore, we assume that the local clocks of the nodes may
progress at different speeds (clock drift) and communication has bounded delay.
In this model, we study the pulse synchronisation problem, where the task is to
guarantee that eventually all correct nodes generate well-separated local pulse
events (i.e., unlabelled logical clock ticks) in a synchronised manner.
Compared to prior work, we achieve exponential improvements in stabilisation
time and the number of communicated bits, and give the first sublinear-time
algorithm for the problem:
- In the deterministic setting, the state-of-the-art solutions stabilise in
Theta(f) time and have each node broadcast Theta(f log f) bits per time
unit. We exponentially reduce the number of bits broadcast per time unit to
Theta(log f) while retaining the same stabilisation time.
- In the randomised setting, the state-of-the-art solutions stabilise in time
Theta(f) and have each node broadcast O(1) bits per time unit. We
exponentially reduce the stabilisation time to polylog(n) while each node
broadcasts polylog(n) bits per time unit.
These results are obtained by means of a recursive approach reducing the
above task of self-stabilising pulse synchronisation in the bounded-delay model
to non-self-stabilising binary consensus in the synchronous model. In general,
our approach introduces at most logarithmic overheads in terms of stabilisation
time and broadcast bits over the underlying consensus routine.
Comment: 54 pages. To appear in JACM; a preliminary version of this work appeared in DISC 2017
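The role of clock drift in this model can be illustrated with a back-of-envelope bound: two correct clocks with rates in [1 - rho, 1 + rho] diverge at up to 2*rho per time unit, which limits how long the system can run between synchronisation events. A sketch with illustrative numbers (not the paper's algorithm or parameters):

```python
def max_resync_interval(skew_bound, initial_skew, drift):
    """Hardware clocks with rates in [1 - drift, 1 + drift] diverge at up to
    2 * drift per time unit, so two correct clocks that start initial_skew
    apart may violate skew_bound after this many time units."""
    assert skew_bound > initial_skew
    return (skew_bound - initial_skew) / (2 * drift)

# Illustrative: 1 ms skew budget, 0.1 ms residual skew, drift rate 1e-5.
print(max_resync_interval(skew_bound=1e-3, initial_skew=1e-4, drift=1e-5))
```

This is why pulse events must recur often enough: without periodic resynchronisation, even fault-free clocks drift apart linearly in time.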
Freely Scalable Quantum Technologies using Cells of 5-to-50 Qubits with Very Lossy and Noisy Photonic Links
Exquisite quantum control has now been achieved in small ion traps, in
nitrogen-vacancy centres and in superconducting qubit clusters. We can regard
such a system as a universal cell with diverse technological uses from
communication to large-scale computing, provided that the cell is able to
network with others and overcome any noise in the interlinks. Here we show that
loss-tolerant entanglement purification makes quantum computing feasible with
the noisy and lossy links that are realistic today: With a modestly complex
cell design, and using a surface code protocol with a network noise threshold
of 13.3%, we find that interlinks which attempt entanglement at a rate of 2 MHz
but suffer 98% photon loss can result in kilohertz computer clock speeds (i.e.
rate of high fidelity stabilizer measurements). Improved links would
dramatically increase the clock speed. Our simulations employed local gates of
a fidelity already achieved in ion trap devices.
Comment: corrected typos, additional references, additional figures
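The quoted kilohertz clock speed can be reproduced as a simple rate pipeline: entanglement attempts, thinned by photon loss, consumed first by purification and then by stabilizer measurements. The overhead factors below are illustrative placeholders, not the protocol parameters from the paper:

```python
def clock_rate_hz(attempt_rate_hz, photon_loss, raw_per_purified,
                  pairs_per_stabilizer):
    """Raw Bell-pair rate surviving photon loss, divided by the purification
    overhead, then by the purified pairs each stabilizer measurement
    consumes."""
    raw_rate = attempt_rate_hz * (1.0 - photon_loss)
    purified_rate = raw_rate / raw_per_purified
    return purified_rate / pairs_per_stabilizer

# 2 MHz attempts with 98% photon loss; overhead factors are made-up examples.
print(clock_rate_hz(2e6, 0.98, raw_per_purified=5, pairs_per_stabilizer=4))
```

With these placeholder overheads, 2 MHz of attempts at 98% loss yields a stabilizer-measurement rate in the low kilohertz, consistent with the abstract's claim that improved links would directly raise the clock speed.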
Fault-tolerant vertical link design for effective 3D stacking
Recently, 3D stacking has been proposed to alleviate the memory bandwidth limitation arising in chip multiprocessors
(CMPs). As the number of integrated cores in the chip increases, the access to external memory becomes the bottleneck, thus
demanding larger memory amounts inside the chip. The most accepted solution to implement vertical links between stacked dies
is by using Through Silicon Vias (TSVs). However, TSVs are exposed to misalignment and random defects compromising the yield of
the manufactured 3D chip. A common solution to this problem is over-provisioning, which impacts area and cost. In this paper,
we propose a fault-tolerant vertical link design. With its adoption, fault-tolerant vertical links can be implemented in a 3D chip design
at low cost without the need of adding redundant TSVs (no over-provision). Preliminary results are very promising as the fault-tolerant
vertical link design increases switch area by only 6.69% while the achieved interconnect yield tends to 100%.
This work was supported by the Spanish MEC and MICINN, as well as European Commission FEDER funds, under Grants CSD2006-00046 and TIN2009-14475-C04. It was also partly supported by the project NaNoC (project label 248972), funded by the European Commission within Research Programme FP7.
Hernández Luz, C.; Roca Pérez, A.; Flich Cardo, J.; Silla Jiménez, F.; Duato Marín, J. F. (2011). Fault-tolerant vertical link design for effective 3D stacking. IEEE Computer Architecture Letters, 10(2):41-44. https://doi.org/10.1109/L-CA.2011.17
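The yield argument can be illustrated with a simple binomial defect model: without redundancy every TSV in a vertical link must work, while a fault-tolerant link survives a few defective TSVs. The defect probability and TSV count below are illustrative assumptions, not figures from the paper:

```python
from math import comb

def yield_no_redundancy(p_defect, n_tsv):
    """All n_tsv TSVs of a vertical link must be defect-free."""
    return (1 - p_defect) ** n_tsv

def yield_tolerating(p_defect, n_tsv, max_faulty):
    """The link still functions with up to max_faulty defective TSVs
    (binomial model with independent defects)."""
    return sum(comb(n_tsv, k) * p_defect ** k * (1 - p_defect) ** (n_tsv - k)
               for k in range(max_faulty + 1))

# Illustrative: 64 TSVs per link, 1% independent defect probability.
print(yield_no_redundancy(0.01, 64), yield_tolerating(0.01, 64, 2))
```

Even tolerating just two faulty TSVs moves the link yield from roughly one half toward one, which is why built-in fault tolerance can replace redundant TSVs at much lower area cost.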