LIPIcs - Leibniz International Proceedings in Informatics. 16th Conference on the Theory of Quantum Computation, Communication and Cryptography (TQC 2021)
Motivated by estimation of quantum noise models, we study the problem of learning a Pauli channel, or more generally the Pauli error rates of an arbitrary channel. By employing a novel reduction to the "Population Recovery" problem, we give an extremely simple algorithm that learns the Pauli error rates of an n-qubit channel to precision ϵ in ℓ∞ distance using just O(1/ϵ²)·log(n/ϵ) applications of the channel. This is optimal up to logarithmic factors. Our algorithm uses only unentangled state preparation and measurements, and the post-measurement classical runtime is just an O(1/ϵ) factor larger than the measurement data size. It is also impervious to a limited model of measurement noise where heralded measurement failures occur independently with probability ≤ 1/4.
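As a toy classical sketch (not the paper's algorithm), the ℓ∞ sample-complexity claim can be illustrated with a distribution over the 4^n Pauli strings: by a Hoeffding bound plus a union bound, O((1/ϵ²)·log(N/δ)) samples make every empirical frequency ϵ-accurate. The sketch below assumes direct access to samples of the error string, which a physical channel does not provide; the point of the paper is achieving the same scaling with only unentangled state preparations and measurements. The specific distribution `p` is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 3                      # number of qubits; N = 4**n Pauli strings
N = 4 ** n
eps = 0.05                 # target ell_infinity precision

# Hypothetical Pauli error-rate distribution: mostly the identity error,
# with the remaining mass spread uniformly over the other Pauli strings.
p = np.full(N, 0.3 / (N - 1))
p[0] = 0.7

# Hoeffding + union bound over the N coordinates: m = O(log(N/delta)/eps^2)
# samples give ell_infinity error <= eps with probability >= 1 - delta.
delta = 0.01
m = int(np.ceil(2 * np.log(2 * N / delta) / eps ** 2))

# Draw m error strings and form the empirical distribution.
samples = rng.choice(N, size=m, p=p)
p_hat = np.bincount(samples, minlength=N) / m

linf_error = np.max(np.abs(p_hat - p))
print(m, linf_error, linf_error <= eps)
```

Note the 1/ϵ² dependence dominates: the log(N/δ) = O(n) factor is what the union bound over all 4^n Pauli strings contributes.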
We then consider the case where the noise channel is close to the identity, meaning that the no-error outcome occurs with probability 1−η. In the regime of small η, we extend our algorithm to achieve multiplicative precision 1±ϵ (i.e., additive precision ϵη) using just O(1/(ϵ²η))·log(n/ϵ) applications of the channel.
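A matching toy sketch for the near-identity regime, under the same simplifying assumption of direct sample access: each sample reveals a nontrivial error only with probability η, so roughly a 1/η factor more samples are needed to pin the error rates down to additive precision ϵη, consistent with the O((1/(ϵ²η))·log(n/ϵ)) scaling. The channel below is a hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 3
N = 4 ** n
eps = 0.1                  # target multiplicative precision 1 +- eps
eta = 0.02                 # total error probability (channel close to identity)

# Hypothetical near-identity Pauli channel: the no-error outcome has
# probability 1 - eta; the error mass eta is spread over the rest.
p = np.zeros(N)
p[0] = 1 - eta
p[1:] = eta / (N - 1)

# Heuristic sample count: a nontrivial error shows up only with probability
# eta per sample, so we scale the Hoeffding bound by 1/eta, mirroring the
# O((1/(eps^2 * eta)) * log(n/eps)) complexity from the abstract.
delta = 0.01
m = int(np.ceil(2 * np.log(2 * N / delta) / (eps ** 2 * eta)))

samples = rng.choice(N, size=m, p=p)
p_hat = np.bincount(samples, minlength=N) / m

# Check additive precision eps*eta on the nontrivial error rates.
add_error = np.max(np.abs(p_hat[1:] - p[1:]))
print(m, add_error, add_error <= eps * eta)
```

The design point this illustrates: additive precision ϵη is the natural target here, since the nontrivial error rates themselves sum to only η.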