Phase-Sensitive Quantum Measurement without Controlled Operations
Many quantum algorithms rely on the measurement of complex quantum
amplitudes. Standard approaches to obtain the phase information, such as the
Hadamard test, give rise to large overheads due to the need for global
controlled-unitary operations. We introduce a quantum algorithm based on
complex analysis that overcomes this problem for amplitudes that are a
continuous function of time. Our method only requires the implementation of
real-time evolution and a shallow circuit that approximates a short
imaginary-time evolution. We show that the method outperforms the Hadamard test
in terms of circuit depth and that it is suitable for current noisy quantum
computers when combined with a simple error-mitigation strategy.
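For reference, the Hadamard-test baseline that this work improves on can be sketched classically: an ancilla prepared in |+⟩ controls the unitary U, and after a final Hadamard the probability of measuring the ancilla in |0⟩ is (1 + Re⟨ψ|U|ψ⟩)/2. A minimal numpy simulation of that sampling statistics (the single-qubit state and phase unitary below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-qubit state |psi> and a phase unitary U (illustrative choices).
psi = np.array([1.0, 1.0]) / np.sqrt(2)
theta = 0.3
U = np.diag([np.exp(-1j * theta), np.exp(1j * theta)])

# Exact overlap whose real part the Hadamard test estimates.
overlap = np.vdot(psi, U @ psi)

# Hadamard test statistics: P(ancilla = 0) = (1 + Re<psi|U|psi>) / 2.
p0 = (1.0 + overlap.real) / 2.0
shots = 100_000
outcomes = rng.random(shots) < p0          # True = ancilla measured in |0>
re_estimate = 2.0 * outcomes.mean() - 1.0  # invert the affine relation
```

The controlled-U is exactly the global controlled operation whose depth overhead the abstract's method avoids; here it appears only implicitly, through the outcome probability.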
Randomized semi-quantum matrix processing
Quantum computers have the potential for game-changing runtime speed-ups for
important matrix-arithmetic problems. A prominent toolbox for that is the
quantum singular-value transformation (QSVT) formalism in the setting of
coherent access to the input matrix via a unitary block encoding and Chebyshev
approximations to a target matrix function. Nonetheless, physical
implementations for useful end-user applications require large-scale
fault-tolerant quantum computers. Here, we present a hybrid quantum-classical
framework for Monte Carlo simulation of generic matrix functions tailored to
early fault-tolerant quantum hardware. Our algorithms randomize over the
Chebyshev polynomials but keep the matrix oracle quantum, and are assisted by a
variant of the Hadamard test that removes the need for post-selection. As a
result, they feature a similar statistical overhead to the fully-quantum case
of standard QSVT and do not incur any degradation in circuit depth. On the
contrary, the average circuit depth is significantly smaller. We apply our
technique to four specific use cases: partition-function estimation via quantum
Markov-chain Monte Carlo and via imaginary-time evolution; end-to-end linear
system solvers; and ground-state energy estimation. For these cases, we prove
significant advantages of average over maximal depths, including quadratic
speed-ups on costly parameters and even the removal of an approximation-error
dependence. These translate into equivalent reductions of noise sensitivity,
because the detrimental effect of noise scales with the average (and not the
maximal) query depth, as we explicitly show for depolarizing noise and coherent
errors. All in all, our framework provides a practical pathway towards early
fault-tolerant quantum linear-algebra applications.
Comment: 10 pages of main text, 10 pages of appendices; the appendices are in a preliminary version; comments are welcome
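The randomization idea can be illustrated with a purely classical analogue: expand the target function in Chebyshev polynomials, sample a degree k with probability proportional to |c_k|, and average sign-reweighted single-term estimates instead of summing the whole series. In the sketch below the matrix, state, and target function f = exp are illustrative assumptions; in the paper's setting each T_k(A) application would be performed by the quantum block-encoding oracle, so the circuit depth is set by the sampled degree rather than the maximal one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Hermitian matrix with spectrum in [-1, 1] and a normalized state.
A = np.diag([-0.8, -0.2, 0.3, 0.7])
v = np.ones(4) / 2.0
f = np.exp  # target matrix function (illustrative)

# Chebyshev coefficients of f on [-1, 1] via Chebyshev-Gauss quadrature.
K = 20
nodes = np.cos(np.pi * (np.arange(K) + 0.5) / K)
c = 2.0 / K * np.array([f(nodes) @ np.cos(k * np.arccos(nodes))
                        for k in range(K)])
c[0] /= 2.0

def cheb_apply(k, x):
    """T_k(A) x via the three-term recurrence (the oracle's role)."""
    t0, t1 = x, A @ x
    for _ in range(k - 1):
        t0, t1 = t1, 2 * A @ t1 - t0
    return x if k == 0 else t1

# Monte Carlo over Chebyshev terms: sample k with prob. |c_k| / ||c||_1,
# reweight each single-term estimate by sign(c_k) * ||c||_1, and average.
l1 = np.abs(c).sum()
ks = rng.choice(K, size=4000, p=np.abs(c) / l1)
est = np.mean([np.sign(c[k]) * l1 * (v @ cheb_apply(k, v)) for k in ks])

exact = v @ (np.exp(np.diagonal(A)) * v)  # v^T f(A) v for diagonal A
```

Because low degrees carry most of the coefficient weight for smooth functions, the average sampled degree (hence average circuit depth, in the quantum setting) is far below the truncation order K, which is the depth advantage the abstract describes.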
Towards quantum 3D imaging devices
We review the advancement of research toward the design and implementation of quantum plenoptic cameras, radically novel 3D imaging devices that exploit both momentum–position entanglement and photon-number correlations to provide the typical refocusing and ultra-fast, scanning-free 3D imaging capability of plenoptic devices, along with dramatically enhanced performance, unattainable in standard plenoptic cameras: diffraction-limited resolution, large depth of focus, and ultra-low noise. To further increase the volumetric resolution beyond the Rayleigh diffraction limit, and achieve the quantum limit, we are also developing dedicated protocols based on quantum Fisher information. However, for the quantum advantages of the proposed devices to be effective and appealing to end-users, two main challenges need to be tackled. First, due to the large number of frames required for correlation measurements to provide an acceptable signal-to-noise ratio, quantum plenoptic imaging (QPI) would require, if implemented with commercially available high-resolution cameras, acquisition times ranging from tens of seconds to a few minutes. Second, processing this large amount of data, in order to retrieve 3D images or refocused 2D images, requires high-performance and time-consuming computation. To address these challenges, we are developing high-resolution single-photon avalanche photodiode (SPAD) arrays and high-performance low-level programming of ultra-fast electronics, combined with compressive sensing and quantum tomography algorithms, with the aim of reducing both the acquisition and the processing time by two orders of magnitude. Routes toward exploitation of QPI devices will also be discussed.
A Hybrid Quantum-Classical Paradigm to Mitigate Embedding Costs in Quantum Annealing
Despite rapid recent progress towards the development of quantum computers
capable of providing computational advantages over classical computers, it
seems likely that such computers will, initially at least, be required to run
in a hybrid quantum-classical regime. This realisation has led to interest in
hybrid quantum-classical algorithms allowing, for example, quantum computers to
solve large problems despite having very limited numbers of qubits. Here we
propose a hybrid paradigm for quantum annealers with the goal of mitigating a
different limitation of such devices: the need to embed problem instances
within the (often highly restricted) connectivity graph of the annealer. This
embedding process can be costly to perform and may destroy any computational
speedup. In order to solve many practical problems, it is moreover necessary to
perform many, often related, such embeddings. We will show how, for such
problems, a raw speedup that is negated by the embedding time can nonetheless
be exploited to give a real speedup. As a proof-of-concept example we present
an in-depth case study of a simple problem based on the maximum weight
independent set problem. Although we do not observe a quantum speedup
experimentally, the advantage of the hybrid approach is robustly verified,
showing how a potential quantum speedup may be exploited and encouraging
further efforts to apply the approach to problems of more practical interest.
Comment: 30 pages, 6 figures
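For concreteness, the maximum weight independent set (MWIS) problem named above can be illustrated on a toy instance. The graph and weights below are hypothetical, not the paper's case study; an exhaustive classical search plays the role that the annealer would play after embedding the equivalent QUBO, minimize −Σᵢ wᵢxᵢ + P·Σ₍ᵢ,ⱼ₎∈E xᵢxⱼ with a large penalty P, into its hardware graph:

```python
from itertools import combinations

# Hypothetical weighted 5-cycle: vertex -> weight, plus the edge set.
weights = {0: 3, 1: 5, 2: 4, 3: 2, 4: 6}
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)}

def is_independent(subset):
    """True iff no two chosen vertices share an edge."""
    return all((u, v) not in edges and (v, u) not in edges
               for u, v in combinations(subset, 2))

# Exhaustive classical baseline over all vertex subsets.
best = max(
    (s for r in range(len(weights) + 1)
       for s in combinations(weights, r)
       if is_independent(s)),
    key=lambda s: sum(weights[v] for v in s),
)
best_weight = sum(weights[v] for v in best)
```

In the hybrid paradigm the abstract proposes, the expensive step is not solving one such instance but embedding many related instances into the annealer's restricted connectivity graph, which is what the paper's scheme amortizes.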