Efficient solvability of Hamiltonians and limits on the power of some quantum computational models
We consider quantum computational models defined via a Lie-algebraic theory.
In these models, specified initial states are acted on by Lie-algebraic quantum
gates and the expectation values of Lie algebra elements are measured at the
end. We show that these models can be efficiently simulated on a classical
computer in time polynomial in the dimension of the algebra, regardless of the
dimension of the Hilbert space where the algebra acts. Similar results hold for
the computation of the expectation value of operators implemented by a
gate-sequence. We introduce a Lie-algebraic notion of generalized mean-field
Hamiltonians and show that they are efficiently ("exactly") solvable by means
of a Jacobi-like diagonalization method. Our results generalize earlier ones on
fermionic linear optics computation and provide insight into the source of the
power of the conventional model of quantum computation.
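A minimal sketch (mine, not the paper's construction) of the mechanism behind this kind of result, for the d = 3 algebra su(2): conjugating an algebra element by a gate U = exp(A), with A in the algebra, acts linearly on the algebra through the exponential of the d x d adjoint matrix, so expectation values of basis elements can be updated at a cost that depends on d rather than on the Hilbert-space dimension. The result is checked against direct state-vector simulation.

```python
import numpy as np

# Pauli basis of (i times) su(2); the algebra has dimension d = 3,
# while an n-qubit Hilbert space would have dimension 2^n.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
basis = [X, Y, Z]

def expm_taylor(M, terms=40):
    """Matrix exponential by Taylor series (adequate for small matrices)."""
    out = np.eye(len(M), dtype=complex)
    term = np.eye(len(M), dtype=complex)
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def adjoint_rep(A):
    """D[i, j] with [-A, B_i] = sum_j D[i, j] B_j, so that the Heisenberg
    update U^dag B_i U (with U = exp(A)) has coefficient matrix exp(D)."""
    D = np.zeros((3, 3), dtype=complex)
    for i, Bi in enumerate(basis):
        comm = Bi @ A - A @ Bi          # equals [-A, B_i]
        for j, Bj in enumerate(basis):
            D[i, j] = np.trace(Bj @ comm) / 2   # Paulis: tr(B_j B_i) = 2 delta_ij
    return D

theta = 0.73
A = -0.5j * theta * Y                   # gate U = exp(A): rotation about Y
C = expm_taylor(adjoint_rep(A))         # d x d, independent of Hilbert-space size

psi = np.array([1, 0], dtype=complex)                        # state |0>
v = np.array([np.vdot(psi, B @ psi) for B in basis]).real    # expectation values
v_new = (C @ v).real                    # Lie-algebraic (efficient) update

# direct Hilbert-space check: U = cos(theta/2) I - i sin(theta/2) Y
U = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y
phi = U @ psi
v_direct = np.array([np.vdot(phi, B @ phi) for B in basis]).real
```

For su(2) the saving is trivial, but the same update applies verbatim when the algebra is represented on an exponentially large space.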
Tripartite to Bipartite Entanglement Transformations and Polynomial Identity Testing
We consider the problem of deciding if a given three-party entangled pure
state can be converted, with a non-zero success probability, into a given
two-party pure state through local quantum operations and classical
communication. We show that this question is equivalent to the well-known
computational problem of deciding if a multivariate polynomial is identically
zero. Efficient randomized algorithms developed to study the latter can thus be
applied to the question of tripartite to bipartite entanglement
transformations.
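The "efficient randomized algorithms" for polynomial identity testing referred to here are based on the Schwartz-Zippel lemma; a generic black-box sketch (the polynomials `p` and `q` below are illustrative placeholders, not the ones arising from the entanglement reduction):

```python
import random

def probably_zero(poly, num_vars, degree, trials=20):
    """Schwartz-Zippel test: a nonzero polynomial of total degree d vanishes
    at a uniform random point of S^n with probability at most d / |S|."""
    S = range(100 * degree + 1)          # |S| much larger than the degree
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(point) != 0:
            return False                 # a witness: definitely not zero
    return True                          # zero with overwhelming probability

# (x + y)^2 - x^2 - 2xy - y^2 is identically zero; xy - 1 is not
p = lambda v: (v[0] + v[1]) ** 2 - v[0] ** 2 - 2 * v[0] * v[1] - v[1] ** 2
q = lambda v: v[0] * v[1] - 1
```

A one-sided error remains: a "zero" verdict is only correct with high probability, which matches the non-zero-success-probability flavor of the transformation question.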
Classical simulation of noninteracting-fermion quantum circuits
We show that a class of quantum computations that was recently shown to be
efficiently simulatable on a classical computer by Valiant corresponds to a
physical model of noninteracting fermions in one dimension. We give an
alternative proof of his result using the language of fermions and extend the
result to noninteracting fermions with arbitrary pairwise interactions, where
gates can be conditioned on outcomes of complete von Neumann measurements in
the computational basis on other fermionic modes in the circuit. This last
result is in remarkable contrast with the case of noninteracting bosons where
universal quantum computation can be achieved by allowing gates to be
conditioned on classical bits (quant-ph/0006088).
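To illustrate why such circuits are classically tractable, here is a sketch (my own, covering only the number-conserving special case of the fermionic formalism): with U = exp(-itH) for a quadratic Hamiltonian H = sum_jk h[j,k] a_j^dag a_k, the modes transform as a_j -> sum_k V[j,k] a_k with V = exp(-ith), so the n x n correlation matrix M[j,k] = <a_j^dag a_k> updates as M -> conj(V) M V^T, at cost polynomial in the number of modes n rather than in the 2^n-dimensional Fock space.

```python
import numpy as np

def evolve_correlations(M, h, t):
    """Heisenberg update of the correlation matrix M[j,k] = <a_j^dag a_k>
    under number-conserving free-fermion evolution generated by the
    Hermitian single-particle matrix h: M -> conj(V) M V^T, V = exp(-i t h)."""
    w, P = np.linalg.eigh(h)                         # diagonalize h
    V = P @ np.diag(np.exp(-1j * t * w)) @ P.conj().T
    return V.conj() @ M @ V.T

# two modes coupled by a hopping term; at t = pi/2 the particle
# transfers completely from mode 0 to mode 1
h = np.array([[0.0, 1.0], [1.0, 0.0]])
M0 = np.diag([1.0, 0.0]).astype(complex)             # one fermion in mode 0
M1 = evolve_correlations(M0, h, np.pi / 2)
```

The general Gaussian setting (pairing terms, measurements) enlarges the matrices but keeps the same polynomial scaling.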
Improved Simulation of Stabilizer Circuits
The Gottesman-Knill theorem says that a stabilizer circuit -- that is, a
quantum circuit consisting solely of CNOT, Hadamard, and phase gates -- can be
simulated efficiently on a classical computer. This paper improves that theorem
in several directions. First, by removing the need for Gaussian elimination, we
make the simulation algorithm much faster at the cost of a factor-2 increase in
the number of bits needed to represent a state. We have implemented the
improved algorithm in a freely-available program called CHP
(CNOT-Hadamard-Phase), which can handle thousands of qubits easily. Second, we
show that the problem of simulating stabilizer circuits is complete for the
classical complexity class ParityL, which means that stabilizer circuits are
probably not even universal for classical computation. Third, we give efficient
algorithms for computing the inner product between two stabilizer states,
putting any n-qubit stabilizer circuit into a "canonical form" that requires at
most O(n^2/log n) gates, and other useful tasks. Fourth, we extend our
simulation algorithm to circuits acting on mixed states, circuits containing a
limited number of non-stabilizer gates, and circuits acting on general
tensor-product initial states but containing only a limited number of
measurements.
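A toy fragment of the tableau technique this paper builds on (stabilizer rows only, no destabilizers and no measurement, so it is far from the full CHP algorithm): each stabilizer generator is stored as x/z bit vectors plus a sign bit, and CNOT, Hadamard, and phase gates update each row in O(n) bit operations.

```python
class StabilizerTableau:
    """n stabilizer rows; row i starts as Z_i, stabilizing the all-zeros
    state. Gate updates follow the Aaronson-Gottesman rules."""
    def __init__(self, n):
        self.n = n
        self.x = [[0] * n for _ in range(n)]
        self.z = [[0] * n for _ in range(n)]
        self.r = [0] * n                 # sign bit per row
        for i in range(n):
            self.z[i][i] = 1

    def hadamard(self, a):
        for i in range(self.n):
            self.r[i] ^= self.x[i][a] & self.z[i][a]
            self.x[i][a], self.z[i][a] = self.z[i][a], self.x[i][a]

    def phase(self, a):                  # the S gate
        for i in range(self.n):
            self.r[i] ^= self.x[i][a] & self.z[i][a]
            self.z[i][a] ^= self.x[i][a]

    def cnot(self, a, b):                # control a, target b
        for i in range(self.n):
            self.r[i] ^= (self.x[i][a] & self.z[i][b]
                          & (self.x[i][b] ^ self.z[i][a] ^ 1))
            self.x[i][b] ^= self.x[i][a]
            self.z[i][a] ^= self.z[i][b]

    def row_str(self, i):
        pauli = {(0, 0): 'I', (1, 0): 'X', (0, 1): 'Z', (1, 1): 'Y'}
        sign = '-' if self.r[i] else '+'
        return sign + ''.join(pauli[(self.x[i][q], self.z[i][q])]
                              for q in range(self.n))

t = StabilizerTableau(2)
t.hadamard(0)
t.cnot(0, 1)     # prepares a Bell state; stabilizers become +XX and +ZZ
```

The paper's speedup concerns measurement: keeping destabilizer rows alongside these removes the Gaussian elimination otherwise needed there.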
Set Similarity Search for Skewed Data
Set similarity join, as well as the corresponding indexing problem set
similarity search, are fundamental primitives for managing noisy or uncertain
data. For example, these primitives can be used in data cleaning to identify
different representations of the same object. In many cases one can represent
an object as a sparse 0-1 vector, or equivalently as the set of nonzero entries
in such a vector. A set similarity join can then be used to identify those
pairs that have an exceptionally large dot product (or intersection, when
viewed as sets). We choose to focus on identifying vectors with large Pearson
correlation, but results extend to other similarity measures. In particular, we
consider the indexing problem of identifying correlated vectors in a set S of
vectors sampled from {0,1}^d. Given a query vector y and a parameter alpha in
(0,1), we need to search for an alpha-correlated vector x in a data structure
representing the vectors of S. This kind of similarity search has been
intensely studied in worst-case (non-random data) settings.
Existing theoretically well-founded methods for set similarity search are
often inferior to heuristics that take advantage of skew in the data
distribution, i.e., widely differing frequencies of 1s across the d dimensions.
The main contribution of this paper is to analyze the set similarity problem
under a random data model that reflects the kind of skewed data distributions
seen in practice, allowing theoretical results much stronger than what is
possible in worst-case settings. Our indexing data structure is a recursive,
data-dependent partitioning of vectors inspired by recent advances in set
similarity search. Previous data-dependent methods do not seem to allow us to
exploit skew in item frequencies, so we believe that our work sheds further
light on the power of data dependence.
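A toy sketch of the skew-exploiting idea (in the spirit of prefix-filtering heuristics, not the paper's recursive partitioning; the parameter k and the Jaccard verification step are my own choices): index each set under its k globally rarest tokens, so that in skewed data the posting lists probed at query time stay short.

```python
from collections import Counter, defaultdict

def build_index(sets, k=2):
    """Index each set under its k globally rarest tokens; in skewed data
    these rare tokens have short posting lists."""
    freq = Counter(t for s in sets for t in s)
    index = defaultdict(list)
    for i, s in enumerate(sets):
        for t in sorted(s, key=lambda t: (freq[t], t))[:k]:
            index[t].append(i)
    return index, freq

def search(index, freq, sets, q, threshold):
    """Probe buckets for q's tokens, rarest first, and verify candidates
    exactly. As with prefix filtering, recall can be lost when a near
    neighbor is indexed only under tokens absent from q."""
    hits, seen = [], set()
    for t in sorted(q, key=lambda t: (freq[t], t)):
        for i in index.get(t, ()):
            if i not in seen:
                seen.add(i)
                s = sets[i]
                if len(q & s) / len(q | s) >= threshold:  # Jaccard similarity
                    hits.append(i)
    return sorted(hits)

data = [{1, 2, 3}, {2, 3, 4}, {5, 6}]
idx, freq = build_index(data)
result = search(idx, freq, data, {1, 2, 3}, 0.5)
```

The paper's contribution is to make guarantees for this kind of frequency-aware indexing rigorous under a random skewed-data model.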
On the Usability of Probably Approximately Correct Implication Bases
We revisit the notion of probably approximately correct implication bases
from the literature and present a first formulation in the language of formal
concept analysis, with the goal to investigate whether such bases represent a
suitable substitute for exact implication bases in practical use-cases. To this
end, we quantitatively examine the behavior of probably approximately correct
implication bases on artificial and real-world data sets and compare their
precision and recall with respect to their corresponding exact implication
bases. Using a small example, we also provide qualitative insight that
implications from probably approximately correct bases can still represent
meaningful knowledge from a given data set.
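The precision/recall comparison between bases can be phrased via semantic entailment: an implication follows from a base when its conclusion lies in the closure of its premise under the base. A small sketch (my own formulation of that measurement, with a toy pair of bases; the naive fixpoint closure stands in for optimized algorithms such as LinClosure):

```python
def closure(attrs, impls):
    """Close an attribute set under implications (naive fixpoint)."""
    closed = set(attrs)
    changed = True
    while changed:
        changed = False
        for prem, concl in impls:
            if prem <= closed and not concl <= closed:
                closed |= concl
                changed = True
    return closed

def entailed(impl, base):
    prem, concl = impl
    return concl <= closure(prem, base)

def precision_recall(approx, exact):
    """Precision: fraction of approximate implications entailed by the
    exact base; recall: fraction of exact implications entailed by the
    approximate base."""
    prec = sum(entailed(i, exact) for i in approx) / len(approx)
    rec = sum(entailed(i, approx) for i in exact) / len(exact)
    return prec, rec

# toy bases: the approximate base keeps a -> c but loses the two
# implications that generate it
exact = [({'a'}, {'b'}), ({'b'}, {'c'})]
approx = [({'a'}, {'c'})]
prec, rec = precision_recall(approx, exact)
```

Here every approximate implication is sound (precision 1.0) while most exact knowledge is lost (recall 0.0), the kind of trade-off the paper quantifies empirically.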
Balancing Bounded Treewidth Circuits
Algorithmic tools for graphs of small treewidth are used to address questions
in complexity theory. For both arithmetic and Boolean circuits, it is shown
that any circuit of size n^{O(1)} and treewidth O(log^i n) can be simulated by
a circuit of width O(log^{i+1} n) and size n^c, where c = O(1) if i = 0, and
c = n^{o(1)} otherwise. For our main construction, we prove that
multiplicatively disjoint arithmetic circuits of size n^{O(1)} and treewidth k
can be simulated by bounded fan-in arithmetic formulas of depth O(k^2 log n).
From this we derive the analogous statement for syntactically multilinear
arithmetic circuits, which strengthens a theorem of Mahajan and Rao. As
another application, we derive that constant width arithmetic circuits of size
n^{O(1)} can be balanced to depth O(log n), provided certain restrictions are
made on the use of iterated multiplication. Also from our main construction,
we derive that Boolean bounded fan-in circuits of size n^{O(1)} and treewidth
k can be simulated by bounded fan-in formulas of depth O(k^2 log n). This
strengthens in the non-uniform setting the known inclusion that SC^0 ⊆ NC^1.
Finally, we apply our construction to show that reachability for directed
graphs of bounded treewidth is in LogDCFL.
Thermodynamics of Mesoscopic Vortex Systems in 1+1 Dimensions
The thermodynamics of a disordered planar vortex array is studied numerically
using a new polynomial algorithm which circumvents slow glassy dynamics. Close
to the glass transition, the anomalous vortex displacement is found to agree
well with the prediction of the renormalization-group theory. Interesting
behaviors such as the universal statistics of magnetic susceptibility
variations are observed in both the dense and dilute regimes of this mesoscopic
vortex system.
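The abstract does not spell out the polynomial algorithm; the standard polynomial-time primitive in such 1+1 dimensional studies is a transfer-matrix dynamic program for the ground state of a directed line in a random potential (interacting many-line systems are typically handled with minimum-cost-flow generalizations of this idea). An illustrative sketch, not the paper's method:

```python
def polymer_ground_state(V):
    """Exact minimum energy of a single directed line crossing the sample:
    the line advances one row per time step and moves at most one column
    sideways, so one dynamic-programming sweep costs O(T * W) for a
    T x W potential landscape V."""
    T, W = len(V), len(V[0])
    E = list(V[0])                        # best energy ending at each column
    for t in range(1, T):
        E = [V[t][x] + min(E[max(x - 1, 0):min(x + 2, W)])
             for x in range(W)]
    return min(E)
```

Because the sweep is exact, it sidesteps the slow glassy equilibration that plagues Monte Carlo sampling of the same system.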