Cryogenic Memory Technologies
The surging interest in quantum computing, space electronics, and
superconducting circuits has led to new developments in cryogenic data storage
technology. Quantum computers promise to vastly extend our processing capabilities
and may allow us to solve currently intractable computational problems. Even with
the advent of the quantum computing era, ultra-fast and energy-efficient
classical computing systems are still in high demand. One of the classical
platforms that can achieve this dream combination is superconducting single
flux quantum (SFQ) electronics. A major roadblock towards implementing scalable
quantum computers and practical SFQ circuits is the lack of suitable and
compatible cryogenic memory that can operate at 4 Kelvin (or lower)
temperature. Cryogenic memory is also critically important in space-based
applications. A multitude of device technologies have already been explored to
find suitable candidates for cryogenic data storage. Here, we review the
existing and emerging variants of cryogenic memory technologies. To ensure an
organized discussion, we categorize the family of cryogenic memory platforms
into three types: superconducting, non-superconducting, and hybrid. We
scrutinize the challenges associated with these technologies and discuss their
future prospects.
Comment: 21 pages, 6 figures, 1 table
End-to-end complexity for simulating the Schwinger model on quantum computers
The Schwinger model is one of the simplest gauge theories. It is known that a
topological term of the model leads to the infamous sign problem in the
classical Monte Carlo method. In contrast to this, recently, quantum computing
in Hamiltonian formalism has gained attention. In this work, we estimate the
resources needed for quantum computers to compute physical quantities that are
challenging to compute on classical computers. Specifically, we propose an
efficient implementation of block-encoding of the Schwinger model Hamiltonian.
Considering the structure of the Hamiltonian, this block-encoding, together with
its normalization factor, can be implemented with an explicit T-gate count. As an
end-to-end application, we compute the vacuum persistence amplitude. We find
that, for a given system size and additive error, with an evolution time and a
lattice spacing satisfying a suitable condition, the vacuum persistence
amplitude can be calculated with an explicitly bounded number of T gates. Our results provide
insights into predictions about the performance of quantum computers in the
FTQC and early FTQC era, clarifying the challenges in solving meaningful
problems within a realistic timeframe.
Comment: 29 pages, 16 figures
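The vacuum persistence amplitude mentioned above can be computed classically by exact diagonalization, but only for very small systems, which is what motivates the quantum resource estimates. The sketch below illustrates the quantity itself using a stand-in transverse-field Ising chain rather than the paper's Schwinger model encoding; the size N, couplings J and h, and evolution time t are illustrative assumptions:

```python
import numpy as np

# Pauli matrices
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def site_op(op, n, N):
    """Embed a single-site operator at site n of an N-site chain."""
    out = np.array([[1.0 + 0j]])
    for k in range(N):
        out = np.kron(out, op if k == n else I2)
    return out

# Stand-in Hamiltonian H = H0 + V (transverse-field Ising chain)
N, J, h = 4, 1.0, 0.5
H0 = sum(-J * site_op(Z, n, N) @ site_op(Z, n + 1, N) for n in range(N - 1))
V = sum(-h * site_op(X, n, N) for n in range(N))
H = H0 + V

# "Vacuum": ground state of the unperturbed H0, evolved under the full H
e0, v0 = np.linalg.eigh(H0)
psi0 = v0[:, 0]
evals, evecs = np.linalg.eigh(H)
t = 1.0
# G(t) = <psi0| exp(-iHt) |psi0>, via the eigendecomposition of H
amp = psi0.conj() @ (evecs @ (np.exp(-1j * evals * t) * (evecs.conj().T @ psi0)))
print(abs(amp))  # below 1, since psi0 is not an eigenstate of the full H
```

The cost of this approach grows as 2^N, which is exactly why a fault-tolerant quantum computer with an efficient block-encoding becomes attractive at larger system sizes.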
Factorization in Cybersecurity: a Dual Role of Defense and Vulnerability in the Age of Quantum Computing
One of the most critical components of modern cryptography, and thus of cybersecurity, is the hardness of factoring large integers. RSA encryption, one of the most widely used schemes, rests largely on the assumption that factoring large numbers is computationally infeasible. With quantum computers, however, an algorithm such as Shor’s can perform the same task exponentially faster than any known classical method. This investigation examines both the strength and the vulnerability of RSA encryption through the lens of factorization in the age of quantum computers. We start by reviewing the foundations of classical and quantum factoring, with particular attention to the number field sieve (NFS) and Shor’s algorithm. We examine the mathematical background of each and the associated algorithms. We then present theoretical analysis and experimental simulations that address the difficulty and cryptographic implications of these algorithms. Finally, we discuss the present state of quantum computing and how it could threaten the cryptographic systems we use every day, motivating post-quantum cryptography and the ongoing development of algorithms designed to resist attacks even by large-scale quantum computers. This investigation highlights the changing dynamics of cybersecurity in the quantum era and the need to innovate current cryptographic systems.
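To make the classical side of the factoring story concrete, here is a minimal sketch of one classical method, Pollard's rho (much simpler than the NFS discussed in the abstract, and nowhere near Shor's speedup); the modulus n = 10403 = 101 * 103 is a toy example, not a realistic RSA key:

```python
import math
import random

def pollard_rho(n: int) -> int:
    """Return a nontrivial factor of the composite number n."""
    if n % 2 == 0:
        return 2
    while True:
        c = random.randrange(1, n)
        x = y = random.randrange(2, n)
        d = 1
        while d == 1:
            x = (x * x + c) % n            # tortoise: one pseudo-random step
            y = (y * y + c) % n            # hare: two steps
            y = (y * y + c) % n
            d = math.gcd(abs(x - y), n)
        if d != n:                         # d == n is a rare failure; retry
            return d

# Toy RSA-style modulus: product of two primes
n = 10403  # 101 * 103
p = pollard_rho(n)
q = n // p
print(sorted([p, q]))  # [101, 103]
```

Pollard's rho runs in roughly n^(1/4) steps, which is hopeless for 2048-bit RSA moduli; the point of contrast with Shor's algorithm is that the latter factors in time polynomial in the bit length of n.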
The NISQ Complexity of Collision Finding
Collision-resistant hashing, a fundamental primitive in modern cryptography,
ensures that there is no efficient way to find distinct inputs that produce the
same hash value. This property underpins the security of various cryptographic
applications, making it crucial to understand its complexity. The complexity of
this problem is well understood in the classical setting, where
$\Theta(N^{1/2})$ queries (the birthday bound) are needed to find a collision in
a random function with codomain size $N$. However, the advent of quantum
computing has introduced new challenges, since quantum adversaries –
equipped with the power of quantum queries – can find collisions much more
efficiently. Brassard, Høyer and Tapp and Aaronson and Shi established that
full-scale quantum adversaries require $\Theta(N^{1/3})$ queries to find a
collision, prompting a need for longer hash outputs, which impacts efficiency
in terms of the key lengths needed for security.
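The classical birthday bound is easy to demonstrate: brute-force search over a hash with range size N finds a collision after roughly sqrt(N) queries. A minimal sketch, where the 24-bit truncation of SHA-256 is an illustrative toy rather than a real hash choice:

```python
import hashlib

NBITS = 24  # toy range size N = 2**24, so sqrt(N) = 4096 expected queries

def toy_hash(msg: bytes) -> int:
    """Truncate SHA-256 to NBITS bits to obtain a small-range hash."""
    return int.from_bytes(hashlib.sha256(msg).digest()[:4], "big") >> (32 - NBITS)

def birthday_collision():
    """Query distinct inputs until two share a hash value (~sqrt(N) queries)."""
    seen = {}
    i = 0
    while True:
        h = toy_hash(i.to_bytes(8, "big"))
        if h in seen:
            return seen[h], i, i + 1   # colliding pair and total query count
        seen[h] = i
        i += 1

a, b, queries = birthday_collision()
assert a != b and toy_hash(a.to_bytes(8, "big")) == toy_hash(b.to_bytes(8, "big"))
print(queries)  # on the order of a few thousand for a 24-bit range
```

A quantum adversary would need only about N^(1/3) (here roughly 256) queries for the same task, which is the gap the NISQ models in this paper interpolate across.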
This paper explores the implications of quantum attacks in the
Noisy-Intermediate Scale Quantum (NISQ) era. In this work, we investigate three
different models for NISQ algorithms and achieve tight bounds for all of them:
(1) A hybrid algorithm making adaptive quantum or classical queries but with
a limited quantum query budget, or
(2) A quantum algorithm with access to a noisy oracle, subject to a dephasing
or depolarizing channel, or
(3) A hybrid algorithm with an upper bound on its maximum quantum depth;
i.e., a classical algorithm aided by low-depth quantum circuits.
In fact, our results handle all regimes between NISQ and full-scale quantum
computers. Previously, only results for the pre-image search problem were known
for these models by Sun and Zheng, Rosmanis, Chen, Cotler, Huang and Li while
nothing was known about the collision finding problem.
Comment: 40 pages; v2: title changed, major extension to other complexity models
Realizing Stabilized Landing for Computation-Limited Reusable Rockets: A Quantum Reinforcement Learning Approach
The advent of reusable rockets has heralded a new era in space exploration,
reducing the costs of launching satellites by a significant factor. Traditional
rockets were disposable, but the design of reusable rockets for repeated use
has revolutionized the financial dynamics of space missions. The most critical
phase of reusable rockets is the landing stage, which involves managing the
tremendous speed and attitude for safe recovery. The complexity of this task
presents new challenges for control systems, specifically in terms of precision
and adaptability. Classical control systems like the
proportional-integral-derivative (PID) controller lack the flexibility to adapt
to dynamic system changes, making them costly and time-consuming to redesign of
controller. This paper explores the integration of quantum reinforcement
learning into the control systems of reusable rockets as a promising
alternative. Unlike classical reinforcement learning, quantum reinforcement
learning uses quantum bits that can exist in superposition, allowing for more
efficient information encoding and reducing the number of parameters required.
This leads to increased computational efficiency, reduced memory requirements,
and more stable and predictable performance. Because reusable rockets must be
kept light, they cannot carry heavy onboard computers. Quantum reinforcement
learning, with its reduced memory footprint from fewer parameters, is
therefore well suited to this scenario.
Comment: 5 pages, 5 figures
Network Community Detection On Small Quantum Computers
In recent years a number of quantum computing devices with small numbers of
qubits became available. We present a hybrid quantum local search (QLS)
approach that combines a classical machine and a small quantum device to solve
problems of practical size. The proposed approach is applied to the network
community detection problem. QLS is hardware-agnostic and easily extendable to
new quantum computing devices as they become available. We demonstrate it on
the 2-community detection problem on graphs of up to 410 vertices using the
16-qubit IBM quantum computer and the D-Wave 2000Q, and compare the results
with the optimal solutions. Our results demonstrate that QLS performs
similarly on both types of quantum computers in terms of solution quality and
the number of iterations to convergence, and that it can match
state-of-the-art solvers in solution quality, including reaching the optimal
solutions.
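The classical half of such a hybrid scheme can be illustrated without quantum hardware: 2-community detection maximizes Newman modularity, and a greedy local search over single-vertex moves (a stand-in for the quantum-assisted subproblem solver in QLS; the toy graph and starting partition below are illustrative assumptions, not the paper's benchmarks) already separates an obvious two-cluster graph:

```python
def modularity(adj, part):
    """Newman modularity Q of a two-way partition of an undirected graph."""
    m = sum(len(nbrs) for nbrs in adj.values()) / 2   # number of edges
    q = 0.0
    for u in adj:
        for v in adj:
            if part[u] == part[v]:
                a_uv = 1.0 if v in adj[u] else 0.0
                q += a_uv - len(adj[u]) * len(adj[v]) / (2 * m)
    return q / (2 * m)

def local_search(adj, part):
    """Greedily flip single vertices between communities until Q stops improving."""
    improved = True
    while improved:
        improved = False
        for v in adj:
            q_before = modularity(adj, part)
            part[v] ^= 1                  # tentatively move v to the other community
            if modularity(adj, part) > q_before + 1e-12:
                improved = True           # keep the improving move
            else:
                part[v] ^= 1              # revert
    return part

# Two triangles joined by a single bridge edge (2-3)
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
part = {0: 0, 1: 1, 2: 0, 3: 1, 4: 0, 5: 1}  # poor starting partition
part = local_search(adj, part)
print(part)  # the two triangles end up in different communities
```

In the hybrid QLS setting, the expensive inner step (choosing which subset of vertices to move) is the piece offloaded to a small quantum device, while a classical machine drives the outer loop as above.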
Quantum Computing in the NISQ era and beyond
Noisy Intermediate-Scale Quantum (NISQ) technology will be available in the
near future. Quantum computers with 50-100 qubits may be able to perform tasks
which surpass the capabilities of today's classical digital computers, but
noise in quantum gates will limit the size of quantum circuits that can be
executed reliably. NISQ devices will be useful tools for exploring many-body
quantum physics, and may have other useful applications, but the 100-qubit
quantum computer will not change the world right away --- we should regard it
as a significant step toward the more powerful quantum technologies of the
future. Quantum technologists should continue to strive for more accurate
quantum gates and, eventually, fully fault-tolerant quantum computing.
Comment: 20 pages. Based on a Keynote Address at Quantum Computing for
Business, 5 December 2017. (v3) Formatted for publication in Quantum, minor
revisions
Practical cryptographic strategies in the post-quantum era
We review new frontiers in information security technologies in
communications and distributed storage technologies with the use of classical,
quantum, hybrid classical-quantum, and post-quantum cryptography. We analyze
the current state-of-the-art, critical characteristics, development trends, and
limitations of these techniques for application in enterprise information
protection systems. An approach concerning the selection of practical
encryption technologies for enterprises with branched communication networks is
introduced.
Comment: 5 pages, 2 figures; review paper