The Bose-Hubbard model is QMA-complete
The Bose-Hubbard model is a system of interacting bosons that live on the
vertices of a graph. The particles can move between adjacent vertices and
experience a repulsive on-site interaction. The Hamiltonian is determined by a
choice of graph that specifies the geometry in which the particles move and
interact. We prove that approximating the ground energy of the Bose-Hubbard
model on a graph at fixed particle number is QMA-complete. In our QMA-hardness
proof, we encode the history of an n-qubit computation in the subspace with at
most one particle per site (i.e., hard-core bosons). This feature, along with
the well-known mapping between hard-core bosons and spin systems, lets us prove
a related result for a class of 2-local Hamiltonians defined by graphs that
generalizes the XY model. By avoiding the use of perturbation theory in our
analysis, we circumvent the need to multiply terms in the Hamiltonian by large
coefficients.
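For readers unfamiliar with the model, the Bose-Hubbard Hamiltonian on a graph G = (V, E) is conventionally written as follows (a textbook form with assumed notation, not necessarily the paper's exact normalization; t is the hopping strength and U > 0 the on-site repulsion):

```latex
H \;=\; t \sum_{\{u,v\} \in E} \bigl( a_u^\dagger a_v + a_v^\dagger a_u \bigr)
      \;+\; U \sum_{v \in V} n_v \bigl( n_v - 1 \bigr),
\qquad n_v = a_v^\dagger a_v ,
```

and the computational problem is to approximate the smallest eigenvalue of H within the sector of fixed total particle number N = \sum_v n_v.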
A new construction for a QMA complete 3-local Hamiltonian
We present a new way of encoding a quantum computation into a 3-local
Hamiltonian. Our construction is novel in that it does not include any terms
that induce legal-illegal clock transitions. Therefore, the weights of the
terms in the Hamiltonian do not scale with the size of the problem as in
previous constructions. This improves the construction by Kempe and Regev, who
were the first to prove that 3-local Hamiltonian is complete for the complexity
class QMA, the quantum analogue of NP.
Quantum k-SAT, a restricted version of the local Hamiltonian problem using
only projector terms, was introduced by Bravyi as an analogue of the classical
k-SAT problem. Bravyi proved that quantum 4-SAT is complete for the class QMA
with one-sided error (QMA_1) and that quantum 2-SAT is in P. We give an
encoding of a quantum circuit into a quantum 4-SAT Hamiltonian using only
3-local terms. As an intermediate step to this 3-local construction, we show
that quantum 3-SAT for particles with dimensions 3x2x2 (a qutrit and two
qubits) is QMA_1 complete. The complexity of quantum 3-SAT with qubits remains
an open question.
Comment: 11 pages, 4 figures.
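For context, an instance of quantum k-SAT consists of k-local projectors \Pi_1, \dots, \Pi_m acting on n qudits, and the promise problem (as formulated by Bravyi) asks which of the following holds:

```latex
\exists\, |\psi\rangle : \ \Pi_i |\psi\rangle = 0 \ \text{for all } i
\qquad \bigl(\text{equivalently, } H = \textstyle\sum_i \Pi_i
\text{ has a zero-energy ground state}\bigr),
```

or every state |\psi\rangle satisfies \langle\psi| H |\psi\rangle \ge 1/\mathrm{poly}(n). The one-sided error defining QMA_1 refers to yes-instances being accepted with certainty.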
Encoded Universality for Generalized Anisotropic Exchange Hamiltonians
We derive an encoded universality representation for a generalized
anisotropic exchange Hamiltonian that contains cross-product terms in addition
to the usual two-particle exchange terms. The recently developed algebraic
approach is used to show that the minimal universality-generating encodings of
one logical qubit are based on three physical qubits. We show how to generate
both single- and two-qubit operations on the logical qubits, using suitably
timed conjugating operations derived from analysis of the commutator algebra.
The timing of the operations is seen to be crucial in allowing simplification
of the gate sequences for the generalized Hamiltonian to forms similar to that
derived previously for the symmetric (XY) anisotropic exchange Hamiltonian. The
total number of operations needed for a controlled-Z gate up to local
transformations is five. A scalable architecture is proposed.
Comment: 11 pages, 4 figures.
Universal Leakage Elimination
"Leakage" errors are particularly serious errors: they couple states within
a code subspace to states outside that subspace, thus destroying the
error-protection benefit afforded by an encoded state. We generalize an earlier
method for producing leakage elimination decoupling operations and examine the
effects of the leakage eliminating operations on decoherence-free or noiseless
subsystems which encode one logical, or protected qubit into three or four
qubits. We find that by eliminating this large class of leakage errors we can,
under some circumstances, create the conditions for decoherence-free evolution.
In other cases we identify a combined decoherence-free and quantum error
correcting code that could eliminate errors in solid-state qubits with
anisotropic exchange interaction Hamiltonians and enable universal quantum
computing with only these interactions.
Comment: 14 pages, no figures; new version has references updated/fixed.
Overview of Quantum Error Prevention and Leakage Elimination
Quantum error prevention strategies will be required to produce a scalable
quantum computing device and are of central importance in this regard. Progress
in this area has been quite rapid in the past few years. In order to provide an
overview of the achievements in this area, we discuss the three major classes
of error prevention strategies, the abilities of these methods and the
shortcomings. We then discuss the combinations of these strategies which have
recently been proposed in the literature. Finally we present recent results in
reducing errors on encoded subspaces using decoupling controls. We show how to
generally remove mixing of an encoded subspace with external states (termed
leakage errors) using decoupling controls. Such controls are known as "leakage
elimination operations" or "LEOs."
Comment: 8 pages, no figures, submitted to the proceedings of the Physics of
Quantum Electronics, 200
Continuous-time quantum walks on one-dimensional regular networks
In this paper, we consider continuous-time quantum walks (CTQWs) on a
one-dimensional ring lattice of N nodes in which every node is connected to its
2m nearest neighbors (m on either side). In the framework of the Bloch function
ansatz, we calculate the space-time transition probabilities between two nodes
of the lattice. We find that transport by CTQWs between two different nodes is
faster than by the classical continuous-time random walk (CTRW). The transport
speed, defined as the ratio of the shortest path length to the propagation
time, increases with the connectivity parameter m for both CTQWs and CTRWs.
For fixed m, the transport of CTRWs slows as the shortest distance increases,
while the transport speed of CTQWs remains constant. In the long-time limit,
depending on the network size N and connectivity parameter m, the limiting
probability distributions of CTQWs show various patterns. When the network
size N is even, the probability of being at the original node differs from
that of being at the opposite node, and also depends on the precise value of m.
Comment: Typos corrected and Phys. Rev. E comments considered in this version.
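The transition probabilities discussed above can be reproduced numerically without the Bloch ansatz by direct diagonalization. This is an illustrative sketch, assuming the common convention of the adjacency matrix as the CTQW generator and the graph Laplacian as the CTRW generator; the paper's normalization may differ:

```python
import numpy as np

def ring_adjacency(N, m):
    """Ring of N nodes, each linked to its 2m nearest neighbors."""
    A = np.zeros((N, N))
    for j in range(N):
        for d in range(1, m + 1):
            A[j, (j + d) % N] = 1.0
            A[j, (j - d) % N] = 1.0
    return A

def ctqw_probs(N, m, t, start=0):
    """Quantum probabilities |<j| exp(-iHt) |start>|^2, H = adjacency."""
    w, V = np.linalg.eigh(ring_adjacency(N, m))
    U = V @ np.diag(np.exp(-1j * w * t)) @ V.conj().T
    return np.abs(U[:, start]) ** 2

def ctrw_probs(N, m, t, start=0):
    """Classical probabilities exp(-tL), L = graph Laplacian."""
    A = ring_adjacency(N, m)
    L = np.diag(A.sum(axis=1)) - A
    w, V = np.linalg.eigh(L)
    T = V @ np.diag(np.exp(-t * w)) @ V.T
    return T[:, start]

p_q = ctqw_probs(51, 2, 3.0)   # quantum walk: ballistic spreading
p_c = ctrw_probs(51, 2, 3.0)   # classical walk: diffusive spreading
```

Both distributions are normalized at all times, which makes a convenient sanity check on the diagonalization.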
How to Spread a Rumor: Call Your Neighbors or Take a Walk?
We study the problem of randomized information dissemination in networks. We
compare the now standard PUSH-PULL protocol, with agent-based alternatives
where information is disseminated by a collection of agents performing
independent random walks. In the VISIT-EXCHANGE protocol, both nodes and agents
store information, and each time an agent visits a node, the two exchange all
the information they have. In the MEET-EXCHANGE protocol, only the agents store
information, and exchange their information with each agent they meet.
We consider the broadcast time of a single piece of information in an
n-node graph for the above three protocols, assuming a linear number of
agents that start from the stationary distribution. We observe that there are
graphs on which the agent-based protocols are significantly faster than
PUSH-PULL, and graphs where the converse is true. We attribute the good
performance of agent-based algorithms to their inherently fair bandwidth
utilization, and conclude that, in certain settings, agent-based information
dissemination, separately or in combination with PUSH-PULL, can significantly
improve the broadcast time.
The graphs considered above are highly non-regular. Our main technical result
is that on any regular graph of at least logarithmic degree, PUSH-PULL and
VISIT-EXCHANGE have the same asymptotic broadcast time. The proof uses a novel
coupling argument which relates the random choices of vertices in PUSH-PULL
with the random walks in VISIT-EXCHANGE. Further, we show that the broadcast
time of MEET-EXCHANGE is asymptotically at least as large as the other two's on
all regular graphs, and strictly larger on some regular graphs.
As far as we know, this is the first systematic and thorough comparison of
the running times of these very natural information dissemination protocols.
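As a concrete reference point for the protocol definitions above, here is a minimal round-based PUSH-PULL simulation on a graph given as adjacency lists. The complete-graph test topology and all names are illustrative, not taken from the paper:

```python
import random

def push_pull_rounds(adj, src, rng):
    """Rounds of synchronous PUSH-PULL until every node is informed.
    Each round, every node contacts one uniformly random neighbor;
    informed nodes push the rumor, uninformed nodes pull it."""
    n = len(adj)
    informed = [False] * n
    informed[src] = True
    count, rounds = 1, 0
    while count < n:
        newly = []   # collect updates so the round is synchronous
        for v in range(n):
            u = rng.choice(adj[v])
            if informed[v] and not informed[u]:      # PUSH v -> u
                newly.append(u)
            elif informed[u] and not informed[v]:    # PULL u -> v
                newly.append(v)
        for w in newly:
            if not informed[w]:
                informed[w] = True
                count += 1
        rounds += 1
    return rounds

# hypothetical test topology: the complete graph on n nodes
n = 64
adj = [[u for u in range(n) if u != v] for v in range(n)]
r = push_pull_rounds(adj, 0, random.Random(1))
```

On the complete graph the broadcast time is logarithmic in n, so r stays small; the agent-based VISIT-EXCHANGE and MEET-EXCHANGE protocols would replace the per-node contact step with independent random-walk moves.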
Asymptotic entanglement in a two-dimensional quantum walk
The evolution operator of a discrete-time quantum walk involves a conditional
shift in position space which entangles the coin and position degrees of
freedom of the walker. After several steps, the coin-position entanglement
(CPE) converges to a well defined value which depends on the initial state. In
this work we provide an analytical method which allows for the exact
calculation of the asymptotic reduced density operator and the corresponding
CPE for a discrete-time quantum walk on a two-dimensional lattice. We use the
von Neumann entropy of the reduced density operator as an entanglement measure.
The method is applied to the case of a Hadamard walk for which the dependence
of the resulting CPE on initial conditions is obtained. Initial states leading
to maximum or minimum CPE are identified and the relation between the coin or
position entanglement present in the initial state of the walker and the final
level of CPE is discussed. The CPE obtained from separable initial states
satisfies an additivity property in terms of CPE of the corresponding
one-dimensional cases. Non-local initial conditions are also considered and we
find that the extreme case of an initial uniform position distribution leads to
the largest CPE variation.
Comment: Major revision. Improved structure. Theoretical results are now
separated from specific examples. Most figures have been replaced by new
versions. The paper is now significantly reduced in size: 11 pages, 7 figures.
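The quantities involved are straightforward to reproduce numerically in the one-dimensional case, which enters the paper through the additivity property for separable initial states. A minimal sketch (the function name and step count are hypothetical):

```python
import numpy as np

def hadamard_walk_cpe(steps, coin0):
    """Coin-position entanglement (von Neumann entropy of the reduced
    coin state) after `steps` steps of a 1D Hadamard walk started at
    the origin with initial coin state `coin0`."""
    size = 2 * steps + 1
    psi = np.zeros((size, 2), dtype=complex)        # psi[x, c]
    psi[steps] = np.asarray(coin0, dtype=complex) / np.linalg.norm(coin0)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    for _ in range(steps):
        psi = psi @ H.T                             # Hadamard coin at every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                    # coin |0> shifts right
        new[:-1, 1] = psi[1:, 1]                    # coin |1> shifts left
        psi = new
    rho = psi.T @ psi.conj()                        # trace out position
    lam = np.linalg.eigvalsh(rho)
    lam = lam[lam > 1e-12]
    return float(-(lam * np.log2(lam)).sum())

cpe = hadamard_walk_cpe(100, [1, 1j])
```

For a two-qubit-size coin space the entropy lies in [0, 1]; in two dimensions the same construction applies with a four-dimensional coin and a reduced density operator traced over both lattice directions.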
A reliability-based approach for influence maximization using the evidence theory
Influence maximization is the problem of finding a set of social network
users, called influencers, who can trigger a large cascade of propagation.
Influencers are very useful for making a marketing campaign go viral through
social networks, for example. In this paper, we propose an influence measure
that combines several influence indicators. Moreover, we take into account the
reliability of each influence indicator and present a distance-based process
for estimating it. The proposed measure is defined within the framework of the
theory of belief functions. Furthermore, the reliability-based influence
measure is used with an influence maximization model to select a set of users
able to maximize the influence in the network. Finally, we present a set of
experiments on a dataset collected from Twitter. These experiments show the
performance of the proposed solution in detecting social influencers of good
quality.
Comment: 14 pages, 8 figures, DaWaK 2017 conference.
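The belief-function machinery mentioned above can be illustrated with a minimal sketch on a two-element frame (influencer I vs. passive P); the indicator names, mass values, and reliability discounts are purely hypothetical, not taken from the paper:

```python
from itertools import product

OMEGA = frozenset({"I", "P"})   # frame of discernment

def discount(m, alpha):
    """Shafer discounting: scale masses by reliability 1 - alpha and
    move the discounted weight onto total ignorance (the full frame)."""
    out = {A: (1 - alpha) * v for A, v in m.items()}
    out[OMEGA] = out.get(OMEGA, 0.0) + alpha
    return out

def dempster(m1, m2):
    """Dempster's rule of combination on a common frame."""
    combined, conflict = {}, 0.0
    for (A, v1), (B, v2) in product(m1.items(), m2.items()):
        C = A & B
        if C:
            combined[C] = combined.get(C, 0.0) + v1 * v2
        else:
            conflict += v1 * v2
    k = 1.0 - conflict          # normalization by non-conflicting mass
    return {A: v / k for A, v in combined.items()}

# hypothetical per-indicator mass functions for one candidate user
m_followers = {frozenset({"I"}): 0.6, OMEGA: 0.4}
m_retweets  = {frozenset({"I"}): 0.3, frozenset({"P"}): 0.4, OMEGA: 0.3}
m = dempster(discount(m_followers, 0.1), discount(m_retweets, 0.2))
```

The combined mass function remains normalized, and discounting a less reliable indicator shifts more of its weight onto ignorance before combination.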
Improved Error-Scaling for Adiabatic Quantum State Transfer
We present a technique that dramatically improves the accuracy of adiabatic
state transfer for a broad class of realistic Hamiltonians. For some systems,
the total error scaling can be quadratically reduced at a fixed maximum
transfer rate. These improvements rely only on the judicious choice of the
total evolution time. Our technique is error-robust, and hence applicable to
existing experiments utilizing adiabatic passage. We give two examples as
proofs-of-principle, showing quadratic error reductions for an adiabatic search
algorithm and a tunable two-qubit quantum logic gate.
Comment: 10 pages, 4 figures. Comments are welcome. Version substantially
revised to generalize results to cases where several derivatives of the
Hamiltonian are zero on the boundary.
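For context (a standard boundary-cancellation argument, not the paper's specific technique, which instead tunes the total evolution time T; notation assumed, with \Delta the spectral gap and |0(s)\rangle, |1(s)\rangle the relevant eigenstates), the leading adiabatic error after traversing H(s), s = t/T, is dominated by boundary terms:

```latex
\epsilon(T) \;\approx\; \frac{1}{T}
\left|\, \frac{\langle 1(s) |\, \partial_s H \,| 0(s) \rangle}{\Delta(s)^2} \,\right|_{s=0}^{s=1}
\;+\; O\!\left(T^{-2}\right),
```

so a schedule whose derivative vanishes at the endpoints, e.g. s(t) = \sin^2(\pi t / 2T), removes the O(1/T) boundary contribution and yields a quadratic improvement, \epsilon = O(1/T^2); schedules with more vanishing boundary derivatives give correspondingly higher powers.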