Quantum information in the real world: Diagnosing and correcting errors in practical quantum devices
Quantum computers promise to be a revolutionary new technology. However, in order to realise this promise many hurdles must first be overcome. In this thesis we investigate two such hurdles: the presence of noise in quantum computers, and limitations on the connectivity and control in large-scale quantum computing architectures. In order to combat noise in quantum devices we must first characterize it. To do this, several diagnostic tools have been developed over the last two decades. The current industry standard for such a diagnostic tool is randomized benchmarking. Randomized benchmarking does not give a full characterization of the noise afflicting the quantum device, but rather attempts to give some indication of the device's average behavior, captured in a quantity called the average fidelity. Because it does not endeavor to characterize every small detail of the noise, it can be applied efficiently even to very large quantum devices. However, with this power comes increased complexity. Randomized benchmarking has many moving parts, and some fairly strong assumptions must be made in order to guarantee its correctness. In this thesis we attempt to justify these assumptions and, where possible, remove or weaken them, making randomized benchmarking a more robust and general tool. In chapter 6 of this thesis we investigate the finite statistics of randomized benchmarking and prove strong bounds on the number of samples needed to perform it rigorously. To do this we make use of tools from representation theory; in particular, we use a characterization of certain representations of the Clifford group, which we develop in chapter 5. In chapter 7 we reuse these tools to bound the number of samples needed to perform rigorous unitarity randomized benchmarking, a newer variant of randomized benchmarking that is quickly gaining popularity.
These results retroactively justify the use of randomized benchmarking in experimental settings and also provide guidance on optimal statistical practices in this context. In chapter 8 we expand upon the standard randomized benchmarking protocol and formulate a new class of protocols which we call character randomized benchmarking. This new class of protocols removes a critical assumption made in standard randomized benchmarking, making character randomized benchmarking much more generally applicable. To show the advantages of character randomized benchmarking we implement it in an experiment characterizing the noise in a Si/SiGe quantum dot device; this experiment is detailed in chapter 9. Finally, we deal with the second main topic of this thesis in chapter 10. Large-scale quantum computers will, like classical computers, face limitations on the connectivity between different parts of the computer. This is due to an empirical regularity in computer design known as Rent's rule, which observes that the number of wires connecting a (quantum) computer chip to the outside world is much smaller than the number of components in that chip. This means the individual components of the chip cannot all be controlled in parallel. Given that parallelism is critical for the functioning of quantum computers, this is a serious problem for the development of large-scale quantum computers. Fortunately, it is possible to organize quantum computing devices in such a way that they can be controlled using a relatively small number of input wires. One example of such an organization is called a crossbar architecture. Recently a proposal was made for a crossbar-architecture quantum computer in quantum dots, and in chapter 10 of this thesis we investigate in detail the advantages and disadvantages of such an architecture.
We focus in particular on its effect on standard quantum error correction procedures, a key part of a functioning quantum computer, and one where parallel control of all parts of the quantum device is essential.
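At its core, standard randomized benchmarking reduces to fitting an exponential decay of the average sequence survival probability and converting the decay parameter into an average fidelity. A minimal sketch on synthetic data (the parameter values and helper names are illustrative, not the thesis's protocol):

```python
import numpy as np

def rb_model(m, A, B, p):
    # Zeroth-order RB model: survival probability A * p^m + B,
    # with SPAM errors absorbed into A and B.
    return A * p ** m + B

def fit_rb(lengths, data):
    # Grid over the decay parameter p; for each p, (A, B) enter linearly
    # and are found by ordinary least squares.
    best = None
    for p in np.linspace(0.5, 0.999, 2000):
        X = np.column_stack([p ** lengths, np.ones_like(lengths)])
        coef, *_ = np.linalg.lstsq(X, data, rcond=None)
        sse = ((X @ coef - data) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], p)
    return best[1:]  # A, B, p

# Synthetic survival probabilities (illustrative values, not real data)
rng = np.random.default_rng(0)
lengths = np.arange(1, 200, 10).astype(float)
data = rb_model(lengths, 0.45, 0.50, 0.97) + rng.normal(0, 0.002, lengths.size)

A_fit, B_fit, p_fit = fit_rb(lengths, data)
avg_fidelity = p_fit + (1 - p_fit) / 2  # single qubit, d = 2
```

The grid-plus-linear-solve fit avoids a nonlinear optimizer; the question of how many random sequences are needed for `data` to be trustworthy is exactly the finite-statistics problem studied in chapters 6 and 7.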
Representations of the multi-qubit Clifford group
The q-qubit Clifford group, that is, the normalizer of the q-qubit Pauli group in U(2^q), is a fundamental structure in quantum information with a wide variety of applications. We characterize all irreducible subrepresentations of the two-copy representation φ^{⊗2} of the Clifford group on the two-fold tensor product M_{2^q}^{⊗2} of the space of linear operators. In the companion paper [Helsen et al., e-print arXiv:1701.04299 (2017)], we apply this result to improve the statistics of randomized benchmarking, a method for characterizing quantum systems.
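The role this kind of representation-theoretic structure plays in benchmarking can be illustrated in the smallest case: because the single-copy (adjoint) action of the single-qubit Clifford group on traceless operators is irreducible, twirling any channel over the group yields a depolarizing channel. A self-contained numerical sketch, assuming an arbitrary amplitude-damping test channel (all helper names are our own):

```python
import numpy as np

# Build the 24 single-qubit Clifford unitaries (modulo global phase)
# by closing the generating set {H, S} under multiplication.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]], dtype=complex)

def canon(U):
    # Remove the global phase so group elements can be hashed and compared.
    v = U.ravel()
    k = np.flatnonzero(np.abs(v) > 1e-9)[0]
    return tuple(np.round(v * np.conj(v[k]) / np.abs(v[k]), 8))

group = {canon(np.eye(2, dtype=complex)): np.eye(2, dtype=complex)}
frontier = list(group.values())
while frontier:
    nxt = []
    for U in frontier:
        for G in (H, S):
            V = G @ U
            key = canon(V)
            if key not in group:
                group[key] = V
                nxt.append(V)
    frontier = nxt
cliffords = list(group.values())  # 24 elements

# Test channel: amplitude damping with decay probability g (arbitrary choice)
g = 0.1
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
         np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]

def channel(rho):
    return sum(K @ rho @ K.conj().T for K in kraus)

def twirled(rho):
    # Average U† E(U rho U†) U over the Clifford group.
    acc = np.zeros((2, 2), dtype=complex)
    for U in cliffords:
        acc += U.conj().T @ channel(U @ rho @ U.conj().T) @ U
    return acc / len(cliffords)

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def transfer_matrix(chan):
    # Pauli transfer matrix R[i, j] = Tr(P_i chan(P_j)) / 2
    R = np.zeros((4, 4))
    for j, Pj in enumerate(paulis):
        out = chan(Pj)
        for i, Pi in enumerate(paulis):
            R[i, j] = np.real(np.trace(Pi @ out)) / 2
    return R

R = transfer_matrix(twirled)  # depolarizing form diag(1, p, p, p)
```

The two-copy representation studied in the paper is the analogous object for products of two expectation values, which is what governs the variance, and hence the sampling statistics, of randomized benchmarking.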
Efficient unitarity randomized benchmarking of few-qubit Clifford gates
Unitarity randomized benchmarking (URB) is an experimental procedure for estimating the coherence of implemented quantum gates independently of state preparation and measurement errors; this coherence is quantified by a measure called the unitarity. A central problem in this experiment is relating the number of data points to rigorous confidence intervals. In this work we provide a bound on the required number of data points for Clifford URB as a function of confidence and experimental parameters. This bound has favorable scaling in the regime of near-unitary noise and is asymptotically independent of the length of the gate sequences used. We also show that, in contrast to standard randomized benchmarking, a nontrivial number of data points is always required to overcome the randomness introduced by state preparation and measurement errors, even in the limit of perfect gates. Our bound is sufficiently sharp to benchmark small-dimensional systems in realistic parameter regimes using a modest number of data points. For example, we show that the unitarity of single-qubit Clifford gates can be rigorously estimated using a few hundred data points under the assumption of gate-independent noise. This is a reduction of orders of magnitude compared to previously known bounds.
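For a concrete sense of the quantity being estimated: for a trace-preserving single-qubit channel, the unitarity can be computed from the 3×3 block of the Pauli transfer matrix acting on traceless operators, as u = Tr(E_u^T E_u)/(d² − 1). A sketch on an arbitrary amplitude-damping channel (a direct calculation from the channel, not a URB simulation; names are our own):

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def unital_block(kraus):
    # 3x3 block of the Pauli transfer matrix on traceless operators
    P = [X, Y, Z]
    E = np.zeros((3, 3))
    for j, Pj in enumerate(P):
        out = sum(K @ Pj @ K.conj().T for K in kraus)
        for i, Pi in enumerate(P):
            E[i, j] = np.real(np.trace(Pi @ out)) / 2
    return E

def unitarity(kraus):
    Eu = unital_block(kraus)
    return np.trace(Eu.T @ Eu) / 3  # d^2 - 1 = 3 for a single qubit

# Amplitude damping with decay probability g (arbitrary test channel)
g = 0.1
kraus = [np.array([[1, 0], [0, np.sqrt(1 - g)]], dtype=complex),
         np.array([[0, np.sqrt(g)], [0, 0]], dtype=complex)]
u = unitarity(kraus)  # (2*(1-g) + (1-g)**2) / 3 = 0.87 for g = 0.1
```

URB estimates this same quantity from purity decays of random gate sequences; the paper's contribution is bounding how many such sequences suffice.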
Spectral estimation for Hamiltonians: A comparison between classical imaginary-time evolution and quantum real-time evolution
We consider the task of spectral estimation of local quantum Hamiltonians, performed by estimating the oscillation frequencies or decay rates of signals representing the time evolution of states. We present a classical Monte Carlo (MC) scheme which efficiently estimates an imaginary-time, decaying signal for stoquastic (i.e. sign-problem-free) local Hamiltonians. The decay rates in this signal correspond to Hamiltonian eigenvalues (with associated eigenstates present in an input state) and can be extracted using a classical signal-processing method such as ESPRIT. We compare the efficiency of this MC scheme to its quantum counterpart, in which one extracts eigenvalues of a general local Hamiltonian from a real-time, oscillatory signal obtained through quantum phase estimation (QPE) circuits, again using the ESPRIT method. We prove that the ESPRIT method can resolve S = poly(n) eigenvalues, assuming a 1/poly(n) gap between them, with poly(n) quantum and classical effort through the QPE circuits, assuming efficient preparation of the input state. We prove that our MC scheme plus the ESPRIT method can resolve S = O(1) eigenvalues, again assuming a 1/poly(n) gap, with poly(n) purely classical effort for stoquastic Hamiltonians, requiring some access structure to the input state. However, we also show that under the same assumptions, i.e. S = O(1) eigenvalues separated by a 1/poly(n) gap and some access structure to the input state, this can be achieved with poly(n) purely classical effort even for general local Hamiltonians. These results thus quantify some opportunities and limitations of MC methods for spectral estimation of Hamiltonians. We numerically compare the MC eigenvalue estimation scheme (for stoquastic Hamiltonians) and the QPE-based eigenvalue estimation scheme by implementing them for an archetypal stoquastic Hamiltonian system: the transverse-field Ising chain.
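The signal-processing step common to both schemes can be sketched in a few lines. Below is a minimal ESPRIT implementation run on a noiseless synthetic imaginary-time signal with two decay poles (the pole values, amplitudes, and function names are our own illustrative choices):

```python
import numpy as np

def esprit(signal, S, L=None):
    # Estimate S poles lambda_s from samples g(k) = sum_s a_s * lambda_s**k.
    N = len(signal)
    L = L or N // 2
    # Hankel matrix of the signal; its column space is the signal subspace.
    hankel = np.array([signal[i:i + L] for i in range(N - L + 1)])
    U, _, _ = np.linalg.svd(hankel)
    Us = U[:, :S]
    # Shift invariance: Us[1:] ≈ Us[:-1] @ Phi; eigenvalues of Phi are the poles.
    Phi = np.linalg.pinv(Us[:-1]) @ Us[1:]
    return np.linalg.eigvals(Phi)

# Noiseless imaginary-time signal with two decay eigenvalues (toy values)
poles = np.array([0.2, 0.8])
amps = np.array([0.6, 0.4])
ks = np.arange(40)
g = (amps[None, :] * poles[None, :] ** ks[:, None]).sum(axis=1)

est = np.sort(esprit(g, S=2).real)  # recovers [0.2, 0.8]
```

In the real-time (QPE) setting the poles lie on the unit circle, e^{-iE_s t}, and the same subspace trick extracts the oscillation frequencies instead of decay rates.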
The complexity of the vertex-minor problem
A graph H is a vertex-minor of a graph G if it can be reached from G by the successive application of local complementations and vertex deletions. Vertex-minors have been the subject of intense study in graph theory over the last few decades and have found applications in other fields such as quantum information theory. It is therefore natural to consider the computational complexity of deciding whether a given graph G has a vertex-minor isomorphic to another graph H. Here we prove that this decision problem is NP-complete, even when restricting H and G to be circle graphs, a class of graphs with a natural relation to vertex-minors.
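The two graph operations defining vertex-minors are easy to state computationally; the hardness lies in the search over sequences of them. A small sketch using an adjacency-set representation (the representation and helper names are our own):

```python
def local_complement(adj, v):
    # Toggle every edge between pairs of neighbours of v.
    adj = {u: set(nb) for u, nb in adj.items()}
    nbrs = sorted(adj[v])
    for i, a in enumerate(nbrs):
        for b in nbrs[i + 1:]:
            if b in adj[a]:
                adj[a].remove(b)
                adj[b].remove(a)
            else:
                adj[a].add(b)
                adj[b].add(a)
    return adj

def delete_vertex(adj, v):
    # Remove v and all edges incident to it.
    return {u: set(nb) - {v} for u, nb in adj.items() if u != v}

# Path 0-1-2-3: local complementation at 1 toggles the edge {0, 2} ...
path = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
g1 = local_complement(path, 1)   # 0 and 2 are now adjacent
g2 = delete_vertex(g1, 3)        # ... and deleting 3 leaves a triangle
```

So the triangle is a vertex-minor of the path on four vertices; the NP-completeness result says that deciding such reachability in general admits no efficient algorithm unless P = NP.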
Spectral quantum tomography
We introduce spectral quantum tomography, a simple method to extract the eigenvalues of a noisy few-qubit gate, represented by a trace-preserving superoperator, in a SPAM-resistant fashion, using low resources in terms of gate sequence length. The eigenvalues provide detailed gate information, supplementary to known gate-quality measures such as the gate fidelity, and can be used as a gate diagnostic tool. We apply our method to one- and two-qubit gates on two different superconducting systems available in the cloud, namely the QuTech Quantum Infinity and the IBM Quantum Experience. We discuss how cross-talk, leakage and non-Markovian errors affect the eigenvalue data.
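To illustrate what these superoperator eigenvalues look like, consider a Z-rotation followed by dephasing, a toy noise model we assume here (not one of the paper's devices): the Pauli transfer matrix has spectrum {1, 1, p e^{±iθ}}, so both the implemented rotation angle and the loss of coherence can be read off directly.

```python
import numpy as np

theta, p = np.pi / 4, 0.95  # assumed rotation angle and dephasing factor
Z = np.array([[1, 0], [0, -1]], dtype=complex)
U = np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

# Dephasing composed with the rotation, as Kraus operators
lam = (1 + p) / 2
kraus = [np.sqrt(lam) * U, np.sqrt(1 - lam) * Z @ U]

paulis = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]]),
          Z]

# Pauli transfer matrix R[i, j] = Tr(P_i E(P_j)) / 2 and its spectrum
R = np.zeros((4, 4), dtype=complex)
for j, Pj in enumerate(paulis):
    out = sum(K @ Pj @ K.conj().T for K in kraus)
    for i, Pi in enumerate(paulis):
        R[i, j] = np.trace(Pi @ out) / 2

eig = np.linalg.eigvals(R)  # {1, 1, p*exp(i*theta), p*exp(-i*theta)}
```

Spectral quantum tomography recovers this spectrum from measured decay signals rather than from a known Kraus description, which is what makes it SPAM-resistant.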
Counting single-qubit Clifford equivalent graph states is #P-complete
Graph states, which include Bell states, Greenberger-Horne-Zeilinger (GHZ) states, and cluster states, form a well-known class of quantum states with applications ranging from quantum networks to error correction. Whether two graph states are equivalent up to single-qubit Clifford operations is known to be decidable in polynomial time and has been studied both in the context of producing required states in a quantum network and in relation to stabilizer codes; the reason for the latter is that single-qubit Clifford equivalent graph states exactly correspond to equivalent stabilizer codes. Here we consider the computational complexity of, given a graph state |G⟩, counting the number of graph states single-qubit Clifford equivalent to |G⟩. We show that this problem is #P-complete. To prove our main result, we make use of the notion of isotropic systems in graph theory. We review the definition of isotropic systems and point out their strong relation to graph states, and we believe these isotropic systems can be useful beyond the results presented in this paper.
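For tiny graphs the count can be obtained by brute force, since single-qubit Clifford equivalence of graph states corresponds to local complementation orbits of the underlying graphs; the #P-completeness result says precisely that this does not scale. A sketch (the edge-set representation and names are our own):

```python
def local_complement(edges, v):
    # edges: frozenset of 2-element frozensets; toggle all edges inside N(v).
    nbrs = {w for e in edges if v in e for w in e if w != v}
    toggles = {frozenset((a, b)) for a in nbrs for b in nbrs if a < b}
    return edges ^ toggles  # symmetric difference performs the toggling

def lc_orbit(edges, n):
    # Breadth-first closure under local complementation at every vertex.
    seen, frontier = {edges}, [edges]
    while frontier:
        nxt = []
        for E in frontier:
            for v in range(n):
                F = local_complement(E, v)
                if F not in seen:
                    seen.add(F)
                    nxt.append(F)
        frontier = nxt
    return seen

# Path on three vertices: its orbit contains the three labeled paths
# and the triangle, i.e. 4 graphs.
P3 = frozenset({frozenset({0, 1}), frozenset({1, 2})})
orbit = lc_orbit(P3, 3)
```

The number of distinct graphs in the orbit is (up to bookkeeping about which single-qubit Cliffords act trivially) the count the paper proves #P-complete to compute in general.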
Quantum error correction in crossbar architectures
A central challenge for the scaling of quantum computing systems is the need to control all qubits in the system without a large overhead. A solution for this problem in classical computing comes in the form of so-called crossbar architectures. Recently we made a proposal for a large-scale quantum processor (Li et al., arXiv:1711.03807 (2017)) to be implemented in silicon quantum dots. This system features a crossbar control architecture which limits parallel single-qubit control, but allows the scheme to overcome control scaling issues that form a major hurdle to large-scale quantum computing systems. In this work, we develop a language that makes it possible to easily map quantum circuits to crossbar systems, taking into account their architecture and control limitations. Using this language we show how to map well-known quantum error correction codes such as the planar surface and color codes in this limited control setting with only a small overhead in time. We analyze the logical error behavior of this surface code mapping for estimated experimental parameters of the crossbar system and conclude that logical error suppression to a level useful for real quantum computation is feasible.
A new class of efficient randomized benchmarking protocols
Randomized benchmarking is a technique for estimating the average fidelity of a set of quantum gates. However, if this gateset is not the multi-qubit Clifford group, robustly extracting the average fidelity is difficult. Here, we propose a new method based on representation theory that has little experimental overhead and robustly extracts the average fidelity for a broad class of gatesets. We apply our method to a multi-qubit gateset that includes the T-gate, and propose a new interleaved benchmarking protocol that extracts the average fidelity of a two-qubit Clifford gate using only single-qubit Clifford gates as a reference.
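For context, interleaved benchmarking extracts the fidelity of a target gate from the ratio of two fitted decay parameters. A sketch of the standard zeroth-order point estimate with hypothetical fit values (the numbers are illustrative, and this is the generic estimator, not the paper's new protocol):

```python
# Two-qubit example: Hilbert space dimension d = 2**2 = 4
d = 4
p_ref = 0.96  # decay parameter of the reference experiment (hypothetical)
p_int = 0.93  # decay parameter with the target gate interleaved (hypothetical)

# Standard interleaved-RB point estimate of the target gate's infidelity
r_gate = (d - 1) * (1 - p_int / p_ref) / d
F_gate = 1 - r_gate  # estimated average fidelity of the interleaved gate
```

The character-benchmarking machinery in the paper is what makes decay parameters like these robustly extractable when the reference gates are not the full multi-qubit Clifford group.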
Multiqubit randomized benchmarking using few samples
Randomized benchmarking (RB) is an efficient and robust method to characterize gate errors in quantum circuits. Averaging over random sequences of gates leads to estimates of gate errors in terms of the average fidelity. These estimates are isolated from the state preparation and measurement errors that plague other methods such as channel tomography and direct fidelity estimation. A decisive factor in the feasibility of randomized benchmarking is the number of sampled sequences required to obtain rigorous confidence intervals. Previous bounds were either prohibitively loose or required the number of sampled sequences to scale exponentially with the number of qubits in order to obtain a fixed confidence interval at a fixed error rate. Here, we show that, with a small adaptation to the randomized benchmarking procedure, the number of sampled sequences required for a fixed confidence interval is dramatically smaller than could previously be justified. In particular, we show that the number of sampled sequences required is essentially independent of the number of qubits and scales favorably with the average error rate of the system under investigation. We also investigate the fitting procedure inherent to randomized benchmarking in light of our results and find that standard methods such as ordinary least squares optimization can give misleading results. We therefore recommend moving to more sophisticated fitting methods such as iteratively reweighted least squares optimization. Our results bring rigorous randomized benchmarking on systems with many qubits into the realm of experimental feasibility.
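The fitting recommendation can be made concrete: start from an ordinary least-squares fit of the RB decay, then reweight each point by its residual and refit. A minimal iteratively-reweighted scheme on synthetic data (the model, weight rule, and parameter values are illustrative assumptions, not the paper's exact procedure):

```python
import numpy as np

def rb_model(m, A, B, p):
    return A * p ** m + B

def weighted_fit(lengths, data, weights):
    # Grid over p; for each p the weighted problem in (A, B) is linear.
    best = None
    w = np.sqrt(weights)
    for p in np.linspace(0.80, 0.999, 1000):
        X = np.column_stack([p ** lengths, np.ones_like(lengths)])
        coef, *_ = np.linalg.lstsq(X * w[:, None], data * w, rcond=None)
        sse = (weights * (X @ coef - data) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], p)
    return best[1:]  # A, B, p

rng = np.random.default_rng(1)
lengths = np.arange(1, 100, 5).astype(float)
data = rb_model(lengths, 0.45, 0.50, 0.97) + rng.normal(0, 0.005, lengths.size)

# Iteratively reweighted least squares: weights 1 / max(|residual|, floor)^2,
# which steers the fit toward a least-absolute-deviations solution.
weights = np.ones_like(data)
for _ in range(4):
    A, B, p = weighted_fit(lengths, data, weights)
    resid = np.abs(data - rb_model(lengths, A, B, p))
    weights = 1.0 / np.maximum(resid, 1e-3) ** 2
```

Reweighting downweights outlying sequence averages, which matters because RB survival probabilities are heteroskedastic across sequence lengths: exactly the effect that can mislead a plain ordinary-least-squares fit.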