Efficient variational quantum simulator incorporating active error minimisation
One of the key applications for quantum computers will be the simulation of
other quantum systems that arise in chemistry, materials science, etc, in order
to accelerate the process of discovery. It is important to ask: Can this be
achieved using near future quantum processors, of modest size and under
imperfect control, or must it await the more distant era of large-scale
fault-tolerant quantum computing? Here we propose a variational method
involving closely integrated classical and quantum coprocessors. We presume
that all operations in the quantum coprocessor are prone to error. The impact
of such errors is minimised by boosting them artificially and then
extrapolating to the zero-error case. In comparison to a more conventional
optimised Trotterisation technique, we find that our protocol is efficient and
appears to be fundamentally more robust against error accumulation.
Comment: 13 pages, 5 figures; typos fixed and small update
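The boost-and-extrapolate idea can be illustrated numerically. The sketch below is a toy model, not the paper's simulation: the exponential decay of the signal with error rate and the target observable are assumptions. Expectation values measured at artificially amplified error rates are fitted with a low-order polynomial and extrapolated to the zero-error limit.

```python
# Toy sketch of active error minimisation by boosting and extrapolation.
# Assumed noise model: the measured observable decays exponentially with
# the (artificially boosted) error rate, plus small shot noise.
import numpy as np

rng = np.random.default_rng(0)

def noisy_expectation(boost, exact=1.0, error_rate=0.02):
    """Toy model of a noisy quantum measurement at a boosted error rate."""
    damped = exact * np.exp(-10 * boost * error_rate)
    return damped + rng.normal(0.0, 1e-4)  # small shot noise

boosts = np.array([1.0, 1.5, 2.0, 2.5])            # error amplification factors
values = np.array([noisy_expectation(b) for b in boosts])

# Richardson-style extrapolation: fit in the boost factor, evaluate at zero.
coeffs = np.polyfit(boosts, values, deg=2)
estimate = np.polyval(coeffs, 0.0)
print(f"raw value at boost=1: {values[0]:.4f}, extrapolated: {estimate:.4f}")
```

The extrapolated estimate recovers the noiseless value far more accurately than any single noisy measurement, which is the essence of the protocol.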
Hierarchical surface code for network quantum computing with modules of arbitrary size
The network paradigm for quantum computing involves interconnecting many
modules to form a scalable machine. Typically it is assumed that the links
between modules are prone to noise while operations within modules have
significantly higher fidelity. To optimise fault tolerance in such
architectures we introduce a hierarchical generalisation of the surface code: a
small `patch' of the code exists within each module, and constitutes a single
effective qubit of the logic-level surface code. Errors primarily occur in a
two-dimensional subspace, i.e. patch perimeters extruded over time, and the
resulting noise threshold for inter-module links can exceed ~ 10% even in the
absence of purification. Increasing the number of qubits within each module
decreases the number of qubits necessary for encoding a logical qubit. But this
advantage is relatively modest, and broadly speaking a `fine grained' network
of small modules containing only ~ 8 qubits is competitive in total qubit count
versus a `coarse' network with modules containing many hundreds of qubits.
Comment: 12 pages, 11 figures
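The construction rests on mutually commuting parity checks. As an illustrative sketch only (a minimal [[4,1,2]] patch, not the hierarchical code itself), the snippet below verifies that X- and Z-type checks commute, and that a chosen pair of logical operators commutes with every check while anticommuting with each other.

```python
# Sketch: commutation test for Pauli parity checks of a tiny code patch.
# Two Pauli strings commute iff they anticommute on an even number of sites.
def commutes(p, q):
    """True iff Pauli strings p and q commute."""
    anti = sum(1 for a, b in zip(p, q)
               if a != 'I' and b != 'I' and a != b)
    return anti % 2 == 0

stabilizers = ["XXXX", "ZZII", "IIZZ"]    # [[4,1,2]] toy patch checks
assert all(commutes(s, t) for s in stabilizers for t in stabilizers)

X_L, Z_L = "XXII", "ZIZI"                 # one valid choice of logicals
assert all(commutes(X_L, s) and commutes(Z_L, s) for s in stabilizers)
assert not commutes(X_L, Z_L)             # logicals anticommute pairwise
print("all parity checks commute; logical operators behave as expected")
```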
A Quasi-Bayesian Perspective to Online Clustering
When faced with high frequency streams of data, clustering raises theoretical
and algorithmic pitfalls. We introduce a new and adaptive online clustering
algorithm relying on a quasi-Bayesian approach, with a dynamic (i.e.,
time-dependent) estimation of the (unknown and changing) number of clusters. We
prove that our approach is supported by minimax regret bounds. We also provide
an RJMCMC-flavored implementation (called PACBO, see
https://cran.r-project.org/web/packages/PACBO/index.html) for which we give a
convergence guarantee. Finally, numerical experiments illustrate the potential
of our procedure.
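The RJMCMC machinery of PACBO is beyond a short example, but the core idea of online clustering with a dynamically adapted number of clusters can be sketched with a much simpler threshold rule. This is purely illustrative and is not the paper's algorithm: a point joins its nearest centre if it is close enough, otherwise it opens a new cluster.

```python
# Illustrative online clustering with a data-driven number of clusters:
# assign each arriving point to the nearest centre within `radius`,
# updating that centre as a running mean, or open a new cluster.
import math

def online_cluster(stream, radius=1.0):
    centres, counts = [], []
    for x in stream:
        if centres:
            j = min(range(len(centres)),
                    key=lambda i: math.dist(centres[i], x))
            if math.dist(centres[j], x) <= radius:
                counts[j] += 1
                n = counts[j]
                centres[j] = tuple(c + (xi - c) / n
                                   for c, xi in zip(centres[j], x))
                continue
        centres.append(tuple(x))  # point too far from all centres: new cluster
        counts.append(1)
    return centres

points = [(0, 0), (0.2, 0.1), (5, 5), (5.1, 4.9), (0.1, -0.1)]
print(online_cluster(points))  # two centres emerge, near (0,0) and (5,5)
```

The number of clusters here grows with the data rather than being fixed in advance, which is the behaviour the quasi-Bayesian approach formalises with regret guarantees.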
Stabilisers as a design tool for new forms of Lechner-Hauke-Zoller Annealer
In a recent paper Lechner, Hauke and Zoller (LHZ) described a means to
translate a Hamiltonian of spin-1/2 particles with 'all-to-all'
interactions into a larger physical lattice with only on-site energies and
local parity constraints. LHZ used this mapping to propose a novel form of
quantum annealing. Here we provide a stabiliser-based formulation within which
we can describe both this prior approach and a wide variety of variants.
Examples include a triangular array supporting all-to-all connectivity, and
moreover arrangements requiring fewer than order N^2 spins but providing
interesting bespoke connectivities. Further examples show that arbitrarily high
order logical terms can be efficiently realised, even in a strictly 2D layout.
Our stabilisers can correspond to either even-parity constraints, as in the LHZ
proposal, or as odd-parity constraints. Considering the latter option applied
to the original LHZ layout, we note it may simplify the physical realisation
since the required ancillas are only spin-1/2 systems (i.e. qubits,
rather than qutrits) and moreover the interactions are very simple. We make a
preliminary assessment of the impact of this design choice by simulating small
(few-qubit) systems; we find some indications that the new variant may maintain
a larger minimum energy gap during the annealing process.
Comment: A dramatically expanded revision: we now show how to use our
stabiliser formulation to construct a wide variety of new physical layouts,
including ones with fewer than Order N^2 spins but custom connectivities, and
a means to achieve higher order coupling even in 2D
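The parity constraints at the heart of the LHZ mapping can be checked directly: each physical spin encodes the relative alignment of one logical pair, so around any square plaquette every logical index appears twice and the four-spin product is forced to +1. A minimal sketch (the interior-plaquette indexing is a simplification, and the three-body boundary constraints of the original layout are omitted):

```python
# Sketch of the LHZ parity structure: physical spin s[i,j] = sigma_i * sigma_j
# encodes the relative alignment of logical pair (i, j). Around a square
# plaquette each logical index appears twice, so the product is always +1.
import itertools, random

random.seed(1)
N = 6
sigma = [random.choice([-1, +1]) for _ in range(N)]          # logical spins
s = {(i, j): sigma[i] * sigma[j]
     for i, j in itertools.combinations(range(N), 2)}        # physical spins

violations = 0
for i in range(N):
    for j in range(i + 2, N - 1):                            # interior plaquettes
        plaquette = s[i, j] * s[i, j + 1] * s[i + 1, j] * s[i + 1, j + 1]
        if plaquette != +1:
            violations += 1
print("plaquette parity violations:", violations)            # prints 0
```

Any physical configuration violating one of these even-parity constraints lies outside the logical subspace, which is exactly what the stabiliser formulation makes explicit.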
High threshold distributed quantum computing with three-qubit nodes
In the distributed quantum computing paradigm, well-controlled few-qubit
`nodes' are networked together by connections which are relatively noisy and
failure prone. A practical scheme must offer high tolerance to errors while
requiring only simple (i.e. few-qubit) nodes. Here we show that relatively
modest, three-qubit nodes can support advanced purification techniques and so
offer robust scalability: the infidelity in the entanglement channel may be
permitted to approach 10% if the infidelity in local operations is of order
0.1%. Our tolerance of network noise is therefore an order of magnitude beyond
prior schemes, and our architecture remains robust even in the presence of
considerable decoherence rates (memory errors). We compare the performance with
that of schemes involving nodes of lower and higher complexity. Ion traps, and
NV- centres in diamond, are two highly relevant emerging technologies.
Comment: 5 figures, 12 pages in single column format. Revision has more
detailed comparison with prior scheme
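The purification these nodes perform can be illustrated with the textbook two-to-one recurrence under a simplified binary-error model, in which two noisy pairs of fidelity F yield, on success, one pair of fidelity F' = F^2 / (F^2 + (1 - F)^2). This standard formula is a stand-in for intuition, not the specific protocol analysed in the paper.

```python
# Sketch: iterated two-to-one entanglement purification under a
# simplified binary-error model (success-conditioned output fidelity).
def purify(F):
    """Output fidelity after one round of the basic recurrence protocol."""
    return F**2 / (F**2 + (1 - F)**2)

F = 0.90  # initial entanglement fidelity (10% channel infidelity)
for step in range(3):
    F = purify(F)
    print(f"after round {step + 1}: F = {F:.6f}")
```

Even starting from a 10% channel infidelity, a few rounds push the fidelity close to unity, at the cost of consuming raw pairs, which is why a handful of node qubits suffices to support such schemes.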