Energy-Consumption Advantage of Quantum Computation
Energy consumption in solving computational problems has been gaining growing
attention as a part of the performance measures of computers. Quantum
computation is known to offer advantages over classical computation in terms of
various computational resources; however, its advantage in energy consumption
has been challenging to analyze due to the lack of a theoretical foundation to
relate the physical notion of energy and the computer-scientific notion of
complexity for quantum computation with finite computational resources. To
bridge this gap, we introduce a general framework for studying energy
consumption of quantum and classical computation based on a computational model
with a black-box oracle, as conventionally used for studying query complexity
in computational complexity theory. With this framework, we derive an upper bound on the energy consumption of quantum computation that covers all costs,
including those of initialization, control, and quantum error correction; in
particular, our analysis shows an energy-consumption bound for a finite-step
Landauer-erasure protocol, progressing beyond the existing asymptotic bound. We
also develop techniques for proving a lower bound on the energy consumption of classical computation based on the energy-conservation law and the Landauer-erasure bound; significantly, our lower bound remains bounded away from zero no matter how energy-efficiently the computation is implemented, and it requires no computational hardness assumptions. Based on these general bounds, we
rigorously prove that quantum computation achieves an exponential
energy-consumption advantage over classical computation for Simon's problem.
These results provide a fundamental framework and techniques to explore the
physical meaning of quantum advantage in the query-complexity setting based on
energy consumption, opening an alternative way to study the advantages of
quantum computation. Comment: 36 pages, 3 figures
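For context, the Landauer-erasure bound invoked above is the standard asymptotic statement (a known result, not a contribution of this paper) that erasing one bit of information in contact with a heat bath at temperature $T$ dissipates at least

    $E_{\mathrm{erase}} \ge k_{B} T \ln 2$

per erased bit, where $k_B$ is the Boltzmann constant; the finite-step analysis mentioned in the abstract replaces this idealized asymptotic limit with a bound that holds for erasure protocols with finitely many steps.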
Learning to Decode the Surface Code with a Recurrent, Transformer-Based Neural Network
Quantum error-correction is a prerequisite for reliable quantum computation.
Towards this goal, we present a recurrent, transformer-based neural network
which learns to decode the surface code, the leading quantum error-correction
code. Our decoder outperforms state-of-the-art algorithmic decoders on
real-world data from Google's Sycamore quantum processor for distance-3 and distance-5 surface codes. For distances up to 11, the decoder maintains its advantage on
simulated data with realistic noise including cross-talk, leakage, and analog
readout signals, and sustains its accuracy far beyond the 25 cycles it was
trained on. Our work illustrates the ability of machine learning to go beyond
human-designed algorithms by learning from data directly, highlighting machine
learning as a strong contender for decoding in quantum computers.
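A minimal sketch of the general idea described in this abstract, not the authors' architecture: a small recurrent, transformer-based classifier in PyTorch that consumes one round of syndrome bits at a time and predicts a logical error class. The dimensions, the pooling, and the GRU-based recurrent state are illustrative assumptions.

    import torch
    import torch.nn as nn

    class RecurrentTransformerDecoder(nn.Module):
        def __init__(self, num_stabilizers: int, d_model: int = 64, n_layers: int = 2):
            super().__init__()
            self.embed = nn.Linear(1, d_model)                  # embed each syndrome bit
            self.pos = nn.Parameter(torch.randn(num_stabilizers, d_model))
            layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
            self.mix = nn.GRUCell(d_model, d_model)             # recurrent state across rounds
            self.head = nn.Linear(d_model, 2)                   # logical error class logits

        def forward(self, syndromes: torch.Tensor) -> torch.Tensor:
            # syndromes: (batch, rounds, num_stabilizers) of 0/1 detection events
            b, rounds, n = syndromes.shape
            state = syndromes.new_zeros(b, self.pos.shape[1])
            for t in range(rounds):
                x = self.embed(syndromes[:, t, :].unsqueeze(-1)) + self.pos  # (b, n, d)
                x = self.encoder(x).mean(dim=1)                              # pool over stabilizers
                state = self.mix(x, state)                                   # carry memory over rounds
            return self.head(state)

    # Usage with toy shapes: 8 shots, 25 rounds, 24 stabilizers.
    decoder = RecurrentTransformerDecoder(num_stabilizers=24)
    logits = decoder(torch.randint(0, 2, (8, 25, 24)).float())

Because the rounds are processed sequentially through a recurrent state, such a decoder can in principle be run for more error-correction cycles than it was trained on, which is the property the abstract highlights.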
Partial Syndrome Measurement for Hypergraph Product Codes
Hypergraph product codes are a promising avenue to achieving fault-tolerant
quantum computation with constant overhead. When embedding these and other
constant-rate qLDPC codes into 2D, a significant number of nonlocal connections
are required, posing difficulties for some quantum computing architectures. In
this work, we introduce a fault-tolerance scheme that aims to alleviate the
effects of implementing this nonlocality by measuring generators that act on spatially distant qubits less frequently than those that do not. We
investigate the performance of a simplified version of this scheme, where the
measured generators are randomly selected. When this scheme is applied to hypergraph product codes with a modified small-set-flip decoding algorithm, we prove that a threshold still exists provided a sufficiently high fraction of the generators is measured. We also find numerical evidence that the logical error rate is
exponentially suppressed even when a large constant fraction of generators are
not measured. Comment: 10 pages, 4 figures
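A minimal sketch of the simplified partial-syndrome idea described above: in each extraction round only a random fraction p of the stabilizer generators is measured, and unmeasured checks report no outcome that round. The parity-check matrix H, the error vector, and the fraction p below are illustrative placeholders, not the codes studied in the paper.

    import numpy as np

    def partial_syndrome(H: np.ndarray, error: np.ndarray, p: float, rng=None):
        """Return (measured_mask, syndrome) for one round of partial measurement."""
        rng = np.random.default_rng(rng)
        measured = rng.random(H.shape[0]) < p          # which generators are measured this round
        syndrome = (H @ error) % 2                     # full syndrome over GF(2)
        syndrome = np.where(measured, syndrome, 0)     # unmeasured checks give no information
        return measured, syndrome

    # Example: a toy 3-check / 5-bit parity-check matrix, 80% of checks measured.
    H = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 1, 0],
                  [1, 0, 0, 1, 1]])
    error = np.array([0, 1, 0, 0, 0])
    mask, s = partial_syndrome(H, error, p=0.8, rng=0)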
Research Philosophy of Modern Cryptography
Proposing novel cryptography schemes (e.g., encryption, signatures, and protocols) is one of the main research goals in modern cryptography. In this paper, based on more than 800 research papers since 1976 that we have surveyed, we introduce the research philosophy of cryptography behind these papers. We use "benefits" and "novelty" as the keywords to introduce the research philosophy of proposing new schemes, assuming that at least one scheme has already been proposed for a given cryptographic notion. Next, we describe how benefits have been explored in the literature, categorizing the methodology into 3 ways of obtaining benefits, 6 types of benefits, and 17 benefit areas. As examples, we present 40 research strategies within these benefit areas that were invented in the literature. The introduced research strategies cover most cryptography schemes published in top-tier cryptography conferences.
LIPIcs, Volume 261, ICALP 2023, Complete Volume
LIPIcs, Volume 261, ICALP 2023, Complete Volume
Spatially-Coupled QLDPC Codes
Spatially-coupled (SC) codes are a class of convolutional LDPC codes that have been well investigated in classical coding theory thanks to their high
performance and compatibility with low-latency decoders. We describe toric
codes as quantum counterparts of classical two-dimensional spatially-coupled
(2D-SC) codes, and introduce spatially-coupled quantum LDPC (SC-QLDPC) codes as
a generalization. We use the convolutional structure to represent the parity
check matrix of a 2D-SC code as a polynomial in two indeterminates, and derive
an algebraic condition that is both necessary and sufficient for a 2D-SC code
to be a stabilizer code. This algebraic framework facilitates the construction
of new code families. While not the focus of this paper, we note that small
memory facilitates physical connectivity of qubits, and it enables local
encoding and low-latency windowed decoding. In this paper, we use the algebraic
framework to optimize short cycles in the Tanner graph of 2D-SC HGP codes that
arise from short cycles in either component code. While prior work focuses on
QLDPC codes with rate less than 1/10, we construct 2D-SC HGP codes with small
memory, higher rates (about 1/3), and superior thresholds. Comment: 25 pages, 7 figures
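As a point of reference for the stabilizer condition discussed above, the sketch below checks the standard CSS commutation requirement H_X H_Z^T = 0 over GF(2) for the hypergraph product of two classical parity-check matrices. The toy component code is a placeholder; the paper's polynomial (two-indeterminate) formulation of this condition is not reproduced here.

    import numpy as np

    def hypergraph_product(H1: np.ndarray, H2: np.ndarray):
        """Standard HGP construction: H_X = [H1 x I | I x H2^T], H_Z = [I x H2 | H1^T x I]."""
        r1, n1 = H1.shape
        r2, n2 = H2.shape
        HX = np.hstack([np.kron(H1, np.eye(n2, dtype=int)),
                        np.kron(np.eye(r1, dtype=int), H2.T)])
        HZ = np.hstack([np.kron(np.eye(n1, dtype=int), H2),
                        np.kron(H1.T, np.eye(r2, dtype=int))])
        return HX % 2, HZ % 2

    # Toy component code: parity-check matrix of the [3,1] repetition code.
    H_rep = np.array([[1, 1, 0],
                      [0, 1, 1]])
    HX, HZ = hypergraph_product(H_rep, H_rep)
    assert np.all((HX @ HZ.T) % 2 == 0)   # CSS/stabilizer commutation condition holds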
Spherical and Hyperbolic Toric Topology-Based Codes On Graph Embedding for Ising MRF Models: Classical and Quantum Topology Machine Learning
The paper introduces the application of information geometry to describe the
ground states of Ising models by utilizing parity-check matrices of cyclic and
quasi-cyclic codes on toric and spherical topologies. The approach establishes
a connection between machine learning and error-correcting coding. This
proposed approach has implications for the development of new embedding methods
based on trapping sets. Statistical physics and number geometry are applied to optimize error-correcting codes, leading to these embedding and sparse factorization methods. The paper establishes a direct connection between DNN
architecture and error-correcting coding by demonstrating how state-of-the-art
architectures (ChordMixer, Mega, Mega-chunk, CDIL, ...) from the long-range arena can be equivalent to block and convolutional LDPC codes (Cage-graph, Repeat Accumulate). QC codes correspond to certain types of chemical elements,
with the carbon element being represented by the mixed automorphism
Shu-Lin-Fossorier QC-LDPC code. The connections between Belief Propagation and
the Permanent, Bethe-Permanent, Nishimori Temperature, and Bethe-Hessian Matrix
are elaborated upon in detail. The Quantum Approximate Optimization Algorithm
(QAOA) used in the Sherrington-Kirkpatrick Ising model can be seen as analogous
to the back-propagation loss function landscape in training DNNs. This
similarity creates a comparable problem with trapping-set (TS) pseudo-codewords, resembling the belief propagation method. Additionally, the layer depth in QAOA correlates with the number of belief propagation decoding iterations in the Wiberg decoding
tree. Overall, this work has the potential to advance multiple fields, from
Information Theory, DNN architecture design (sparse and structured prior graph
topology), and efficient hardware design for Quantum and Classical DPU/TPU (graph, quantization, and shift-register architectures) to Materials Science and beyond. Comment: 71 pages, 42 Figures, 1 Table, 1 Appendix. arXiv admin note: text overlap with arXiv:2109.08184 by other authors
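A minimal sketch of the standard code-to-Ising correspondence this abstract builds on (the well-known mapping in which each parity check defines a multi-spin interaction and codewords are ground states); the matrix, spin assignment, and coupling J below are illustrative only, not the paper's constructions.

    import numpy as np

    def ising_energy_from_code(H: np.ndarray, spins: np.ndarray, J: float = 1.0) -> float:
        """Energy of an Ising MRF whose factors are the parity checks of H (spins in {-1,+1})."""
        energy = 0.0
        for row in H:
            support = np.flatnonzero(row)           # bits participating in this check
            energy -= J * np.prod(spins[support])   # a satisfied check contributes -J
        return energy

    # Ground states (minimum energy) correspond to codewords of H
    # under the convention bit 0 -> spin +1, bit 1 -> spin -1.
    H = np.array([[1, 1, 0, 0],
                  [0, 1, 1, 1]])
    codeword_spins = np.array([1, 1, -1, -1])       # satisfies both checks
    print(ising_energy_from_code(H, codeword_spins))  # -2.0, the ground-state energy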
Radiation Tolerant Electronics, Volume II
Research on radiation-tolerant electronics has increased rapidly over the last few years, resulting in many interesting approaches to model radiation effects and design radiation-hardened integrated circuits and embedded systems. This research is strongly driven by the growing need for radiation-hardened electronics for space applications, high-energy physics experiments such as those on the Large Hadron Collider at CERN, and many terrestrial nuclear applications, including nuclear energy and safety management. With the progressive scaling of integrated circuit technologies and the growing complexity of electronic systems, their ionizing radiation susceptibility has raised many exciting challenges, which are expected to drive research in the coming decade. After the success of the first Special Issue on Radiation Tolerant Electronics, the current Special Issue features thirteen articles highlighting recent breakthroughs in radiation-tolerant integrated circuit design, fault tolerance in FPGAs, radiation effects in semiconductor materials and advanced IC technologies, and modelling of radiation effects.
Data-driven decoding of quantum error correcting codes using graph neural networks
To leverage the full potential of quantum error-correcting stabilizer codes
it is crucial to have an efficient and accurate decoder. Accurate maximum-likelihood decoders are computationally very expensive, whereas decoders based on more efficient algorithms give sub-optimal performance. In addition, the accuracy will depend on the quality of the models and the estimates of error rates for idling qubits, gates, measurements, and resets, and such decoders typically assume symmetric error channels. In this work, we instead explore a model-free, data-driven approach to decoding, using a graph neural network (GNN). The
decoding problem is formulated as a graph classification task in which a set of
stabilizer measurements is mapped to an annotated detector graph for which the
neural network predicts the most likely logical error class. We show that the
GNN-based decoder can outperform a matching decoder for circuit level noise on
the surface code given only simulated experimental data, even if the matching
decoder is given full information of the underlying error model. Although
training is computationally demanding, inference is fast and scales
approximately linearly with the space-time volume of the code. We also find
that we can use large, but more limited, datasets of real experimental data
[Google Quantum AI, Nature 614, 676 (2023)] for the repetition code,
giving decoding accuracies that are on par with minimum weight perfect
matching. The results show that a purely data-driven approach to decoding may
be a viable future option for practical quantum error correction, which is
competitive in terms of speed, accuracy, and versatility. Comment: 15 pages, 12 figures
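A minimal sketch of the graph-construction step described above, not the authors' pipeline: detection events (syndrome flips between consecutive rounds) become nodes annotated with their space-time coordinates, and edges connect events within a cutoff distance. The cutoff and the feature choice are illustrative assumptions; the resulting graph would then be passed to a graph classifier that predicts the logical error class.

    import numpy as np

    def detector_graph(syndromes: np.ndarray, coords: np.ndarray, cutoff: float = 2.0):
        """syndromes: (rounds, n_stabilizers) 0/1 outcomes; coords: (n_stabilizers, 2) positions."""
        flips = np.diff(syndromes, axis=0,
                        prepend=np.zeros((1, syndromes.shape[1]), int)) % 2
        t_idx, s_idx = np.nonzero(flips)                   # detection events in space-time
        nodes = np.column_stack([coords[s_idx], t_idx])    # node features: (x, y, round)
        edges = []
        for i in range(len(nodes)):
            for j in range(i + 1, len(nodes)):
                if np.linalg.norm(nodes[i] - nodes[j]) <= cutoff:
                    edges.append((i, j))                   # connect nearby events
        return nodes, np.array(edges)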
Machine-learning based noise characterization and correction on neutral atoms NISQ devices
Neutral-atom devices represent a promising technology that uses optical tweezers to geometrically arrange atoms and modulated laser pulses to control the quantum states. A neutral-atom Noisy Intermediate Scale Quantum (NISQ) device developed by Pasqal with rubidium atoms will allow working with up to 100 qubits. All NISQ devices are affected by noise that has an impact on the computation results. Therefore, it is important to better understand and characterize the noise sources and, where possible, to correct them. Here, two
approaches are proposed to characterize and correct noise parameters on neutral-atom NISQ devices. In particular, the focus is on Pasqal devices, and Machine Learning (ML) techniques are adopted to pursue those objectives. To characterize the noise parameters, several ML models are trained, using as input only the measurements of the final quantum state of the atoms, to predict the laser intensity fluctuation and waist, the temperature, and the false-positive and false-negative measurement rates. Moreover, an analysis is provided of the scaling with the number of atoms in the system and with the number of measurements used as input. Also, on real data, we compare the values predicted by ML with the a priori estimated parameters. Finally, a Reinforcement Learning (RL) framework
is employed to design a pulse in order to correct the effect of the noise in
the measurements. It is expected that the analysis performed in this work will be useful for a better understanding of the quantum dynamics in neutral-atom devices and for the widespread adoption of this class of NISQ devices. Comment: 11 pages, 5 figures, 3 tables
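A minimal sketch of the characterization idea described above, not Pasqal's pipeline: regress a single noise parameter from summary statistics of measured bitstrings of the final atomic state. The synthetic data generator, the chosen features, and the regressor are illustrative assumptions standing in for real device data.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    def synthetic_run(false_neg_rate: float, n_atoms: int = 10, shots: int = 500):
        """Fake measurement statistics whose excitation counts depend on the noise parameter."""
        ideal = rng.random((shots, n_atoms)) < 0.5                # "true" excitations
        missed = rng.random((shots, n_atoms)) < false_neg_rate    # false-negative detections
        observed = ideal & ~missed
        counts = observed.sum(axis=1)
        return np.array([counts.mean(), counts.var()])            # summary features per run

    # Build a training set of (features, noise parameter) pairs and fit a regressor.
    rates = rng.uniform(0.0, 0.2, size=300)
    X = np.array([synthetic_run(r) for r in rates])
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, rates)
    print(model.predict([synthetic_run(0.1)]))   # estimate of the false-negative rate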