7 research outputs found

    (Quantum) complexity of testing signed graph clusterability

    This study examines clusterability testing for a signed graph in the bounded-degree model. Our contributions are two-fold. First, we provide a quantum algorithm with query complexity $\tilde{O}(N^{1/3})$ for testing clusterability, which yields a polynomial speedup over the best classical clusterability tester known [arXiv:2102.07587]. Second, we prove an $\tilde{\Omega}(\sqrt{N})$ classical query lower bound for testing clusterability, which nearly matches the upper bound from [arXiv:2102.07587]. This settles the classical query complexity of clusterability testing, and it shows that our quantum algorithm has an advantage over any classical algorithm.
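    As background for what "clusterability" means here: a signed graph is clusterable iff its vertices can be partitioned so that positive edges stay within clusters and negative edges cross between them, which by Davis's criterion holds iff no negative edge lies inside a positive-edge connected component. The sketch below is an exact linear-time check of that criterion (not the paper's sublinear property tester); the function and edge-list names are illustrative.

```python
# Exact clusterability check for a signed graph via Davis's criterion:
# merge positive-edge components with union-find, then verify that no
# negative edge has both endpoints in the same component.

def find(parent, x):
    # Path-compressing find for union-find.
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def is_clusterable(n, pos_edges, neg_edges):
    """n nodes labeled 0..n-1; pos_edges/neg_edges are lists of (u, v) pairs."""
    parent = list(range(n))
    for u, v in pos_edges:  # merge positive-edge components
        parent[find(parent, u)] = find(parent, v)
    # Clusterable iff every negative edge crosses two distinct components.
    return all(find(parent, u) != find(parent, v) for u, v in neg_edges)

# A triangle with exactly one negative edge is the canonical non-clusterable case.
print(is_clusterable(3, [(0, 1), (1, 2)], [(0, 2)]))  # False
print(is_clusterable(3, [(0, 1)], [(0, 2), (1, 2)]))  # True
```

    The property-testing question in the abstract is how few queries suffice to distinguish clusterable graphs from those far from clusterable, rather than to decide it exactly as above.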

    Testing identity of collections of quantum states: sample complexity analysis

    We study the problem of testing identity of a collection of unknown quantum states given sample access to this collection, each state appearing with some known probability. We show that for a collection of $d$-dimensional quantum states of cardinality $N$, the sample complexity is $O(\sqrt{N}d/\epsilon^2)$, which is optimal up to a constant. The test is obtained by estimating the mean squared Hilbert-Schmidt distance between the states, thanks to a suitable generalization of the estimator of the Hilbert-Schmidt distance between two unknown states by Bădescu, O'Donnell, and Wright (https://dl.acm.org/doi/10.1145/3313276.3316344).
    Comment: 20+6 pages, 0 figures. Typos corrected, improved presentation.
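    For intuition about the quantity being estimated: the squared Hilbert-Schmidt distance between density matrices is $\|\rho-\sigma\|_{HS}^2 = \mathrm{tr}(\rho^2) + \mathrm{tr}(\sigma^2) - 2\,\mathrm{tr}(\rho\sigma)$. The sketch below computes it exactly from known matrices; the point of the paper's estimator is to approximate it from samples of unknown states, which this toy code does not attempt.

```python
# Squared Hilbert-Schmidt distance between two density matrices,
# ||rho - sigma||_HS^2 = tr((rho - sigma)^dagger (rho - sigma)).
import numpy as np

def hs_dist_sq(rho, sigma):
    diff = rho - sigma
    return float(np.real(np.trace(diff.conj().T @ diff)))

# Two orthogonal pure qubit states have squared HS distance 2.
rho = np.array([[1, 0], [0, 0]], dtype=complex)    # |0><0|
sigma = np.array([[0, 0], [0, 1]], dtype=complex)  # |1><1|
print(hs_dist_sq(rho, sigma))  # 2.0
```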

    Improved quantum data analysis

    We provide more sample-efficient versions of some basic routines in quantum data analysis, along with simpler proofs. In particular, we give a quantum "Threshold Search" algorithm that requires only $O((\log^2 m)/\epsilon^2)$ samples of a $d$-dimensional state $\rho$. That is, given observables $0 \le A_1, A_2, \ldots, A_m \le 1$ such that $\mathrm{tr}(\rho A_i) \ge 1/2$ for at least one $i$, the algorithm finds $j$ with $\mathrm{tr}(\rho A_j) \ge 1/2 - \epsilon$. As a consequence, we obtain a Shadow Tomography algorithm requiring only $\tilde{O}((\log^2 m)(\log d)/\epsilon^4)$ samples, which simultaneously achieves the best known dependence on each parameter $m$, $d$, $\epsilon$. This yields the same sample complexity for quantum Hypothesis Selection among $m$ states; we also give an alternative Hypothesis Selection method using $\tilde{O}((\log^3 m)/\epsilon^2)$ samples.
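    To make the Threshold Search specification concrete, here is a classical analogue in which we can evaluate each $\mathrm{tr}(\rho A_i)$ exactly from known matrices; the quantum algorithm's whole difficulty is meeting the same guarantee from only $O((\log^2 m)/\epsilon^2)$ samples of an unknown $\rho$. All names below are illustrative, not from the paper.

```python
# Classical analogue of Threshold Search: given rho and observables A_1..A_m,
# return some j with tr(rho A_j) >= 1/2 - eps (guaranteed to exist whenever
# some i has tr(rho A_i) >= 1/2).
import numpy as np

def threshold_search(rho, observables, eps):
    for j, A in enumerate(observables):
        if np.real(np.trace(rho @ A)) >= 0.5 - eps:
            return j
    return None

rho = np.array([[1, 0], [0, 0]], dtype=complex)  # |0><0|
A1 = np.array([[0, 0], [0, 1]], dtype=complex)   # tr(rho A1) = 0
A2 = np.array([[1, 0], [0, 0]], dtype=complex)   # tr(rho A2) = 1
print(threshold_search(rho, [A1, A2], 0.1))  # 1
```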

    Learning Quantum States Without Entangled Measurements

    How many samples of a quantum state are required to learn a complete description of it? As we will see in this thesis, the fine-grained answer depends on the measurements available to the learner, but in general it is at least $\Omega(d^2/\epsilon^2)$, where $d$ is the dimension of the state and $\epsilon$ the trace-distance accuracy. Optimal algorithms for this task -- known as quantum state tomography -- make use of powerful, yet highly impractical, entangled measurements, where some joint measurement is performed on all copies of the state. What can be accomplished without such measurements, where one must measure individual copies of the state?
    In Chapter 2 we show a relationship between the recently proposed quantum online learning framework and quantum state tomography. Specifically, we show that tomography can be accomplished using online learning algorithms in a black-box manner and $O(d^4/\epsilon^4)$ two-outcome measurements on separate copies of the state. The interpretation of this approach is that the experimentalist uses informative measurements to teach the learner by helping it make "mistakes" on measurements as early as possible.
    We move on to proving lower bounds on tomography in Chapter 3. First, we review a known lower bound for entangled measurements as well as an $\Omega(d^3/\epsilon^2)$ lower bound in the setting where non-entangled measurements are made non-adaptively, both due to Ref. [18]. We then derive a novel bound of $\Omega(d^4/\epsilon^2)$ samples when the learner is further restricted to observing a constant number of outcomes (e.g., two-outcome measurements). This implies that the folklore "Pauli tomography" algorithm is optimal in this setting.
    Understanding the power of adaptive measurements, where measurement choices can depend on previous outcomes, is currently an open problem. In Chapter 4 we present two scenarios in which adapting on previous outcomes makes no difference to the number of samples required. In the first, the learner is limited to adapting on at most $o(d^2/\epsilon^2)$ of the previous outcomes. In the second, measurements are drawn from some set of at most $\exp(O(d))$ measurements. In particular, this second lower bound implies that adaptivity makes no difference in the regime of efficiently implementable measurements, in the context of quantum computing.
    Finally, we apply the above technique to the problems of classical shadows and shadow tomography to obtain similar lower bounds. Here, one is interested only in determining the expectations of some fixed set of observables. We once again find that, for the worst-case input of observables, adaptivity makes no difference to the sample complexity when considering efficient, non-entangled measurements. As a corollary, we find that a straightforward algorithm for shadow tomography is optimal in this setting.
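    The "Pauli tomography" algorithm mentioned above is, for a single qubit, just the standard Bloch decomposition: estimate the three Pauli expectations with two-outcome measurements on separate copies and rebuild $\rho = (I + \langle X\rangle X + \langle Y\rangle Y + \langle Z\rangle Z)/2$. The sketch below takes the expectations exactly rather than estimating each from $O(1/\epsilon^2)$ copies, which is where the sample complexity in the lower bound comes from; the function name is illustrative.

```python
# Single-qubit Pauli tomography: reconstruct rho from its Pauli expectations
# tr(rho P) for P in {X, Y, Z}, via rho = (I + <X>X + <Y>Y + <Z>Z)/2.
import numpy as np

I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def pauli_reconstruct(rho):
    """Rebuild rho from its three Pauli expectations."""
    coeffs = [np.real(np.trace(rho @ P)) for P in (X, Y, Z)]
    return (I + sum(c * P for c, P in zip(coeffs, (X, Y, Z)))) / 2

# Any valid qubit density matrix is recovered exactly from exact expectations.
rho = np.array([[0.75, 0.25], [0.25, 0.25]], dtype=complex)
print(np.allclose(pauli_reconstruct(rho), rho))  # True
```

    In $d$ dimensions the same idea expands $\rho$ over the $d^2$ generalized Pauli operators, one expectation per operator, which is the sense in which the algorithm is "folklore".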