(Quantum) complexity of testing signed graph clusterability
This study examines clusterability testing for a signed graph in the
bounded-degree model. Our contributions are two-fold. First, we provide a
quantum algorithm for testing clusterability whose query complexity yields a
polynomial speedup over the best known classical clusterability tester
[arXiv:2102.07587]. Second, we prove a classical query lower bound for testing
clusterability that nearly matches the upper bound from [arXiv:2102.07587].
This settles the classical query complexity of clusterability testing, and it
shows that our quantum algorithm has an advantage over any classical algorithm.
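For background (a standard structural fact, not the sublinear tester from the abstract): a signed graph is clusterable iff it has no cycle containing exactly one negative edge, which is equivalent to saying that no negative edge joins two vertices in the same connected component of the positive subgraph. A direct, exact (non-sublinear) check with union-find, as a minimal sketch:

```python
def clusterable(n, pos_edges, neg_edges):
    """Exact clusterability check for a signed graph on vertices 0..n-1:
    cluster by connected components of the positive subgraph, then verify
    that no negative edge stays inside a single component. Linear-time,
    unlike the sublinear property testers discussed in the abstract."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in pos_edges:  # union along positive edges
        parent[find(u)] = find(v)
    # Clusterable iff every negative edge crosses two positive components.
    return all(find(u) != find(v) for u, v in neg_edges)

# Triangle with exactly one negative edge: not clusterable.
print(clusterable(3, [(0, 1), (1, 2)], [(0, 2)]))           # False
# Two positive pairs joined only by negative edges: clusterable.
print(clusterable(4, [(0, 1), (2, 3)], [(0, 2), (1, 3)]))   # True
```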
Testing identity of collections of quantum states: sample complexity analysis
We study the problem of testing identity of a collection of unknown quantum
states, given sample access to this collection in which each state appears
with some known probability. For a collection of quantum states of fixed
dimension and cardinality, we determine the sample complexity, which is
optimal up to a constant factor. The test is obtained by estimating the mean
squared Hilbert-Schmidt distance between the states, thanks to a suitable
generalization of the estimator of the Hilbert-Schmidt distance between two
unknown states by Bădescu, O'Donnell, and Wright
(https://dl.acm.org/doi/10.1145/3313276.3316344).

Comment: 20+6 pages, 0 figures. Typos corrected, improved presentation.
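For context (this is the distance itself, not the paper's measurement-based estimator, which works from samples of unknown states): the squared Hilbert-Schmidt distance between two known density matrices is Tr[(ρ − σ)²]. A minimal numpy sketch:

```python
import numpy as np

def hs_dist_sq(rho, sigma):
    """Squared Hilbert-Schmidt distance Tr[(rho - sigma)^2]."""
    diff = rho - sigma
    return float(np.trace(diff @ diff).real)

# Example: a pure state vs. the maximally mixed state on one qubit.
rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0|
sigma = np.array([[0.5, 0.0], [0.0, 0.5]])  # I/2
print(hs_dist_sq(rho, sigma))               # Tr[diag(0.5, -0.5)^2] = 0.5
```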
Improved quantum data analysis
We provide more sample-efficient versions of some basic routines in quantum
data analysis, along with simpler proofs. In particular, we give a quantum
"Threshold Search" algorithm that requires only a small number of samples of a
finite-dimensional state: given a list of observables, at least one of which
has expectation value on the state above a threshold, the algorithm finds an
observable whose expectation value is nearly as large. As a consequence, we
obtain a Shadow Tomography algorithm whose sample complexity simultaneously
achieves the best known dependence on each parameter: the number of
observables, the dimension, and the accuracy. This yields the same sample
complexity for quantum Hypothesis Selection among states; we also give an
alternative Hypothesis Selection method.
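The task being solved can be illustrated with a toy classical analogue (the names and numbers below are made up for illustration; this is the naive per-observable strategy, not the paper's far more sample-efficient algorithm). Model each two-outcome observable as a coin whose unknown bias is its expectation on the state:

```python
import numpy as np

rng = np.random.default_rng(0)

def naive_threshold_search(probs, theta, eps, shots=2000):
    """Toy classical analogue of Threshold Search: observable i is modeled
    as a coin with unknown bias probs[i]. Estimate each bias with `shots`
    fresh samples and return the first index whose estimate clears
    theta - eps, or None. This spends samples per observable, which is
    exactly what the quantum algorithm in the abstract avoids."""
    for i, p in enumerate(probs):
        est = rng.binomial(shots, p) / shots
        if est > theta - eps:
            return i
    return None

# Only observable 2 has expectation well above the threshold 0.75.
print(naive_threshold_search([0.1, 0.2, 0.9, 0.3], theta=0.75, eps=0.1))
```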
Learning Quantum States Without Entangled Measurements
How many samples of a quantum state are required to learn a complete description of it? As we will see in this thesis, the fine-grained answer depends on the measurements available to the learner, but in general it is at least Ω(d^2/ϵ^2), where d is the dimension of the state and ϵ the trace distance accuracy. Optimal algorithms for this task -- known as quantum state tomography -- make use of powerful, yet highly impractical entangled measurements, where some joint measurement is performed on all copies of the state. What can be accomplished without such measurements, where one must perform measurements on individual copies of the state?
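The trace distance accuracy ϵ above refers to half the trace norm of the difference, (1/2)||ρ − σ||₁, i.e. half the sum of the absolute eigenvalues of the Hermitian matrix ρ − σ. A quick numpy illustration:

```python
import numpy as np

def trace_distance(rho, sigma):
    """Trace distance (1/2)||rho - sigma||_1 between density matrices."""
    eigs = np.linalg.eigvalsh(rho - sigma)  # rho - sigma is Hermitian
    return 0.5 * float(np.sum(np.abs(eigs)))

rho = np.array([[1.0, 0.0], [0.0, 0.0]])    # |0><0|
sigma = np.array([[0.0, 0.0], [0.0, 1.0]])  # |1><1|
print(trace_distance(rho, sigma))           # orthogonal pure states: 1.0
```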
In Chapter 2 we show a relationship between the recently proposed quantum online learning framework and quantum state tomography. Specifically, we show that tomography can be accomplished using online learning algorithms in a black-box manner and O(d^4/ϵ^4) two-outcome measurements on separate copies of the state. The interpretation of this approach is that the experimentalist uses informative measurements to teach the learner by helping it make "mistakes" on measurements as early as possible.
We move on to proving lower bounds on tomography in Chapter 3. First, we review a known lower bound for entangled measurements as well as a Ω(d^3/ϵ^2) lower bound in the setting where non-entangled measurements are made non-adaptively, both due to Ref. [18]. We then derive a novel bound of Ω(d^4/ϵ^2) samples when the learner is further restricted to observing a constant number of outcomes (e.g., two-outcome measurements). This implies that the folklore "Pauli tomography" algorithm is optimal in this setting.
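For a single qubit, the folklore "Pauli tomography" algorithm estimates each Pauli expectation from separate two-outcome measurements and reconstructs ρ = (I + ⟨X⟩X + ⟨Y⟩Y + ⟨Z⟩Z)/2. A simulation sketch (the ±1 outcome statistics are simulated from a known state, which a real experiment would not have):

```python
import numpy as np

rng = np.random.default_rng(1)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_tomography(rho, shots=100_000):
    """Single-qubit Pauli tomography: estimate <X>, <Y>, <Z> from simulated
    two-outcome (+1/-1) measurements on separate copies, then rebuild
    rho = (I + <X>X + <Y>Y + <Z>Z) / 2."""
    est = {}
    for name, P in (("X", X), ("Y", Y), ("Z", Z)):
        p_plus = (1 + np.trace(P @ rho).real) / 2     # Pr[outcome +1]
        p_plus = min(1.0, max(0.0, p_plus))           # guard float error
        outcomes = rng.choice([1, -1], size=shots, p=[p_plus, 1 - p_plus])
        est[name] = outcomes.mean()
    return (I + est["X"] * X + est["Y"] * Y + est["Z"] * Z) / 2

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)  # |+><+|
print(np.round(pauli_tomography(rho).real, 2))
```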
Understanding the power of adaptive measurements, where measurement choices can depend on previous outcomes, is currently an open problem. In Chapter 4 we present two scenarios in which adapting on previous outcomes makes no difference to the number of samples required. In the first, the learner is limited to adapting on at most o(d^2/ϵ^2) of the previous outcomes. In the second, measurements are drawn from some set of at most exp(O(d)) measurements. In particular, this second lower bound implies that adaptivity makes no difference in the regime of efficiently implementable measurements, in the context of quantum computing.
Finally, we apply the above technique to the problems of classical shadows and shadow tomography to obtain similar lower bounds. Here, one is interested only in determining the expectations of some fixed set of observables. We once again find that, for the worst-case input of observables, adaptivity makes no difference to the sample complexity when considering efficient, non-entangled measurements. As a corollary, we find that a straightforward algorithm for shadow tomography is optimal in this setting.
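As an illustration of the classical-shadows idea referenced here (the standard single-qubit random-Pauli-basis version, not the thesis's lower-bound construction): measuring a random Pauli basis and forming the "snapshot" 3|b⟩⟨b| − I gives a matrix whose average over runs reproduces ρ, so Tr(Oρ) can be estimated by averaging Tr(O · snapshot). A simulation sketch:

```python
import numpy as np

rng = np.random.default_rng(2)
I2 = np.eye(2, dtype=complex)
PAULIS = {
    "X": np.array([[0, 1], [1, 0]], dtype=complex),
    "Y": np.array([[0, -1j], [1j, 0]]),
    "Z": np.array([[1, 0], [0, -1]], dtype=complex),
}

def shadow_estimate(rho, obs, snapshots=20_000):
    """Single-qubit classical shadows with random Pauli-basis measurements:
    average Tr(obs @ (3|b><b| - I)) over simulated snapshots."""
    total = 0.0
    for _ in range(snapshots):
        P = PAULIS[rng.choice(list(PAULIS))]          # random Pauli basis
        _, vecs = np.linalg.eigh(P)                   # its eigenbasis
        pr = np.array([(vecs[:, k].conj() @ rho @ vecs[:, k]).real
                       for k in range(2)])
        pr = np.clip(pr, 0, None); pr /= pr.sum()     # Born probabilities
        b = vecs[:, rng.choice(2, p=pr)].reshape(2, 1)
        snap = 3 * (b @ b.conj().T) - I2              # inverted snapshot
        total += np.trace(obs @ snap).real
    return total / snapshots

rho = np.array([[0.9, 0.0], [0.0, 0.1]], dtype=complex)
print(round(shadow_estimate(rho, PAULIS["Z"]), 1))    # true <Z> = 0.8
```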