
    Quantifying the Expressive Capacity of Quantum Systems: Fundamental Limits and Eigentasks

    The expressive capacity of quantum systems for machine learning is limited by quantum sampling noise incurred during measurement. Although it is generally believed that noise limits the resolvable capacity of quantum systems, the precise impact of noise on learning is not yet fully understood. We present a mathematical framework for evaluating the available expressive capacity of general quantum systems from a finite number of measurements, and provide a methodology for extracting the extrema of this capacity, its eigentasks. Eigentasks are a native set of functions that a given quantum system can approximate with minimal error. We show that extracting low-noise eigentasks leads to improved performance for machine learning tasks such as classification, displaying robustness to overfitting. We obtain a tight bound on the expressive capacity, and present analyses suggesting that correlations in the measured quantum system enhance learning capacity by reducing noise in eigentasks. These results are supported by experiments on superconducting quantum processors. Our findings have broad implications for quantum machine learning and sensing applications. Comment: 7 + 21 pages, 4 + 12 figures, 1 table.
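
    The finite-shot setting described above can be illustrated with a small numerical sketch. Assuming K measurement outcomes whose probabilities are estimated from S shots per input, low-noise combinations of the measured features can be found (in one common formulation, assumed here purely for illustration) by solving a generalized eigenvalue problem between a signal Gram matrix and the shot-noise covariance. All names, the toy feature map, and the specific matrices below are assumptions of this sketch, not the paper's actual definitions or code.

```python
# Illustrative sketch (not the paper's code): estimate readout features from a
# finite number of shots S and extract low-noise linear combinations of them via
# a generalized eigenvalue problem between a signal Gram matrix and the
# shot-noise covariance. The "quantum" feature map here is a hypothetical toy.
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

N_inputs, K, S = 200, 8, 100          # inputs, measured outcomes, shots per input
u = rng.uniform(-1, 1, N_inputs)      # scalar inputs

# Hypothetical noiseless feature map: probabilities of K measurement outcomes.
logits = np.stack([np.cos((k + 1) * np.pi * u) for k in range(K)], axis=1)
p = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)   # (N_inputs, K)

# Finite-shot estimates: empirical frequencies from S multinomial samples.
X = np.stack([rng.multinomial(S, pi) / S for pi in p])           # noisy features

G = p.T @ p / N_inputs                       # signal Gram matrix (noiseless)
# Multinomial shot-noise covariance, averaged over inputs, scaled by 1/S.
V = (np.diag(p.mean(axis=0)) - p.T @ p / N_inputs) / S

# Generalized eigenproblem: directions with the smallest noise-to-signal ratio.
noise_to_signal, R = eigh(V, G + 1e-9 * np.eye(K))
eigentask_estimates = X @ R                  # low-index columns are least noisy
print("noise-to-signal ratios:", np.round(noise_to_signal, 4))
```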

    Reservoir Computing and Quantum Systems

    The current era of quantum computing is characterized by scale and noise: we are able to engineer quantum systems with an unprecedented number of degrees of freedom which can be controlled and read out. However, their application to practical computation is hindered by unavoidable noise in these platforms, our inability to correct the resulting errors in the current computational paradigm, and the limited precision with which quantum states can be measured. In this thesis we explore the intersection of engineered quantum systems and reservoir computing. Reservoir computing is a machine learning framework which uses a physical dynamical system to perform computation, in a manner agnostic to noise or errors and without detailed optimization. We will show, both through theoretical analysis and experiments on cloud quantum computers, that operating current quantum platforms as reservoir computers is a powerful computational paradigm which offers solutions to the above issues, while avoiding the training difficulties typically associated with quantum machine learning.

    We study both gate-based and continuously evolving qubit networks as reservoir computers, with an emphasis on their performance in the presence of noise and limited measurement resources. We develop an intuitive analysis which allows for the construction of measured observables that are maximally robust to this noise, optimizing the performance of a given quantum reservoir computer. We naturally obtain a metric, the expressive capacity, which quantifies how much information can be extracted from a quantum system in practice with limited measurement shots. This metric encompasses the input, algorithm, physical device, and measurement, making it well suited to a full-stack analysis of current quantum computers and to the critical unsolved problem of informing ansatz design in quantum machine learning. Finally, instead of quantum systems as reservoirs, we consider the application of reservoir computing to the problem of quantum measurement. We show that a small oscillator network sharing the same chip as a quantum computer can process quantum measurement signals with greater accuracy, lower latency, and less calibration overhead than conventional approaches. This reservoir-computer co-processor is naturally realizable using components already present in the measurement chain of superconducting circuits, and is adaptable to practical tasks such as parity monitoring and tomography.
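
    As a generic illustration of the reservoir-computing workflow this thesis builds on (not its specific quantum-reservoir experiments), the sketch below drives a fixed random dynamical system with an input sequence and trains only a linear readout by ridge regression. The echo-state-style update rule, the memory task, and all parameters are assumptions made for illustration only.

```python
# Minimal reservoir-computing sketch (generic echo-state style, not the thesis's
# quantum reservoir): a fixed, untrained dynamical system is driven by the input
# and only a linear readout layer is trained, here by ridge regression.
import numpy as np

rng = np.random.default_rng(1)
N, T, washout = 50, 2000, 200             # reservoir nodes, time steps, discarded steps

u = rng.uniform(-1, 1, T)                 # input sequence
y = np.roll(u, 3) * np.roll(u, 1)         # example target: a nonlinear memory task

W_in = rng.uniform(-0.5, 0.5, N)
W = rng.normal(0, 1, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # keep spectral radius below 1

x = np.zeros(N)
states = np.zeros((T, N))
for t in range(T):                        # fixed reservoir dynamics, never trained
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

X, Y = states[washout:], y[washout:]
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N), X.T @ Y)   # trained readout
print("training NRMSE:", np.sqrt(np.mean((X @ W_out - Y) ** 2) / np.var(Y)))
```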

    Reservoir Computing Approach to Quantum State Measurement

    Efficient quantum state measurement is important for maximizing the extracted information from a quantum system. For multi-qubit quantum processors in particular, the development of a scalable architecture for rapid and high-fidelity readout remains a critical unresolved problem. Here we propose reservoir computing as a resource-efficient solution to quantum measurement of superconducting multi-qubit systems. We consider a small network of Josephson parametric oscillators, which can be implemented with minimal device overhead and in the same platform as the measured quantum system. We theoretically analyze the operation of this Kerr network as a reservoir computer to classify stochastic time-dependent signals subject to quantum statistical features. We apply this reservoir computer to the task of multinomial classification of measurement trajectories from joint multi-qubit readout. For a two-qubit dispersive measurement under realistic conditions we demonstrate a classification fidelity reliably exceeding that of an optimal linear filter using only two to five reservoir nodes, while simultaneously requiring far less calibration data, as little as a single measurement per state. We understand this remarkable performance through an analysis of the network dynamics and develop an intuitive picture of reservoir processing generally. Finally, we demonstrate how to operate this device to perform two-qubit state tomography and continuous parity monitoring with equal effectiveness and ease of calibration. This reservoir processor avoids the computationally intensive training common to other deep learning frameworks and can be implemented as an integrated cryogenic superconducting device for low-latency processing of quantum signals on the computational edge. Comment: 17 pages, 9 figures, and 57 references.
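
    The linear-filter baseline against which the reservoir is benchmarked can be sketched with a deliberately simplified trajectory model: each qubit state produces a noisy record around a state-dependent mean signal, and classification applies a matched filter built from a few calibration shots. The Gaussian-noise model, mean records, and all parameters below are simplifying assumptions, not the paper's realistic two-qubit dispersive-measurement model or its Kerr-network reservoir.

```python
# Sketch of a matched (linear) filter baseline for readout classification under
# a simplified model: state-dependent mean records plus Gaussian noise. Purely
# illustrative; not the paper's reservoir processor or measurement model.
import numpy as np

rng = np.random.default_rng(2)
T, sigma, n_traj = 400, 2.0, 2000         # samples per record, noise level, test trajectories/state

t = np.linspace(0, 1, T)
means = {0: 0.3 * (1 - np.exp(-5 * t)),   # hypothetical mean records for |0> and |1>
         1: -0.3 * (1 - np.exp(-5 * t))}

def simulate(state, n):
    return means[state] + sigma * rng.normal(size=(n, T))

# "Calibration": estimate mean records from a handful of labelled shots per state.
cal0, cal1 = simulate(0, 5).mean(axis=0), simulate(1, 5).mean(axis=0)
kernel = cal0 - cal1                       # matched-filter weights
threshold = 0.5 * (cal0 + cal1) @ kernel   # midpoint decision threshold

test = np.concatenate([simulate(0, n_traj), simulate(1, n_traj)])
labels = np.concatenate([np.zeros(n_traj), np.ones(n_traj)])
pred = (test @ kernel < threshold).astype(int)
print("assignment fidelity:", 1 - np.mean(pred != labels))
```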