
    Basic protocols in quantum reinforcement learning with superconducting circuits

    Superconducting circuit technologies have recently achieved quantum protocols involving closed feedback loops. Quantum artificial intelligence and quantum machine learning are emerging fields inside quantum technologies which may enable quantum devices to acquire information from the outer world and improve themselves via a learning process. Here we propose the implementation of basic protocols in quantum reinforcement learning with superconducting circuits employing feedback-loop control. We introduce diverse scenarios for proof-of-principle experiments with state-of-the-art superconducting circuit technologies and analyze their feasibility in the presence of imperfections. The field of quantum artificial intelligence implemented with superconducting circuits paves the way for enhanced quantum control and quantum computation protocols.
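The reward-driven feedback loop at the heart of such protocols can be illustrated with a purely classical toy model. The sketch below is an assumption-laden simplification (single-qubit states on one meridian of the Bloch sphere, a hypothetical `train_agent` hill-climbing rule standing in for coherent feedback control); it only shows the loop structure, not the actual superconducting-circuit implementation.

```python
import math
import random

def fidelity(theta_a, theta_e):
    """Overlap |<agent|env>|^2 for single-qubit states parameterized by one angle."""
    return math.cos((theta_a - theta_e) / 2) ** 2

def train_agent(theta_env, epochs=200, step=0.3, seed=1):
    """Toy feedback loop: try a random rotation, keep it only if the
    measured overlap with the environment state improves (the 'reward')."""
    rng = random.Random(seed)
    theta = 0.0
    for _ in range(epochs):
        delta = rng.uniform(-step, step)
        if fidelity(theta + delta, theta_env) > fidelity(theta, theta_env):
            theta += delta  # rewarded exploration is retained
    return theta

theta = train_agent(theta_env=1.0)
print(fidelity(theta, 1.0))  # approaches 1 as the agent aligns with the environment
```

In the proposed experiments the "keep if rewarded" step would be realized by measurement-conditioned gates inside the circuit's feedback loop rather than by classical post-selection.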

    Integrated Information in Discrete Dynamical Systems: Motivation and Theoretical Framework

    This paper introduces a time- and state-dependent measure of integrated information, φ, which captures the repertoire of causal states available to a system as a whole. Specifically, φ quantifies how much information is generated (uncertainty is reduced) when a system enters a particular state through causal interactions among its elements, above and beyond the information generated independently by its parts. Such mathematical characterization is motivated by the observation that integrated information captures two key phenomenological properties of consciousness: (i) there is a large repertoire of conscious experiences so that, when one particular experience occurs, it generates a large amount of information by ruling out all the others; and (ii) this information is integrated, in that each experience appears as a whole that cannot be decomposed into independent parts. This paper extends previous work on stationary systems and applies integrated information to discrete networks as a function of their dynamics and causal architecture. An analysis of basic examples indicates the following: (i) φ varies depending on the state entered by a network, being higher if active and inactive elements are balanced and lower if the network is inactive or hyperactive. (ii) φ varies for systems with identical or similar surface dynamics depending on the underlying causal architecture, being low for systems that merely copy or replay activity states. (iii) φ varies as a function of network architecture. High φ values can be obtained by architectures that conjoin functional specialization with functional integration. Strictly modular and homogeneous systems cannot generate high φ because the former lack integration, whereas the latter lack information. Feedforward and lattice architectures are capable of generating high φ but are inefficient. (iv) In Hopfield networks, φ is low for attractor states and neutral states, but increases if the networks are optimized to achieve tension between local and global interactions. These basic examples appear to match well against neurobiological evidence concerning the neural substrates of consciousness. More generally, φ appears to be a useful metric to characterize the capacity of any physical system to integrate information.
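The "information generated by entering a state" can be made concrete for small deterministic networks: with a uniform prior over all 2^n prior states, entering state x rules out every state that could not have caused x, yielding n − log2(size of the preimage of x) bits. The sketch below computes only this whole-system effective information; the full φ additionally subtracts what the parts generate independently under a minimum-information partition, which is omitted here. The function and example networks are illustrative, not from the paper.

```python
import math
from itertools import product

def effective_information(update, n_bits, state):
    """Bits generated by entering `state`: how many prior states are
    ruled out, given a uniform prior over all 2^n_bits states."""
    preimage = [s for s in product((0, 1), repeat=n_bits) if update(s) == state]
    if not preimage:
        return 0.0  # unreachable state generates no causal information
    return n_bits - math.log2(len(preimage))

# Toy 2-node networks: a COPY system (each node copies the other) and an
# AND/OR system with cross interactions.
copy = lambda s: (s[1], s[0])
andor = lambda s: (s[0] and s[1], s[0] or s[1])

print(effective_information(copy, 2, (1, 0)))   # 2.0 bits: the cause is unique
print(effective_information(andor, 2, (0, 1)))  # 1.0 bit: two prior states map there
```

This also shows why φ is state-dependent: the same AND/OR network generates 2 bits when entering (1, 1) (only (1, 1) causes it) but 1 bit when entering (0, 1).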

    Size, Depth and Energy of Threshold Circuits Computing Parity Function

    We investigate relations among the size, depth and energy of threshold circuits computing the n-variable parity function PAR_n, where the energy is a complexity measure for sparsity on computation of threshold circuits, and is defined to be the maximum number of gates outputting "1" over all the input assignments. We show that PAR_n is hard for threshold circuits of small size, depth and energy: - If a depth-2 threshold circuit C of size s and energy e computes PAR_n, it holds that 2^{n/(e log^e n)} ≤ s; and - if a threshold circuit C of size s, depth d and energy e computes PAR_n, it holds that 2^{n/(e 2^{e+d} log^e n)} ≤ s. We then provide several upper bounds: - PAR_n is computable by a depth-2 threshold circuit of size O(2^{n-2e}) and energy e; - PAR_n is computable by a depth-3 threshold circuit of size O(2^{n/(e-1)} + 2^{e-2}) and energy e; and - PAR_n is computable by a threshold circuit of size O((e+d)2^{n-m}), depth d + O(1) and energy e + O(1), where m = max(((e-1)/(d-1))^{d-1}, ((d-1)/(e-1))^{e-1}). Our lower and upper bounds imply that threshold circuits need exponential size if both depth and energy are constant, which contrasts with the fact that PAR_n is computable by a threshold circuit of size O(n) and depth 2 if there is no restriction on the energy. Our results also suggest that any threshold circuit computing the parity function needs depth in order to be sparse if its size is bounded.
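The size-O(n), depth-2 circuit mentioned at the end (with unrestricted energy) has a standard construction: a first layer of "at least k ones" gates whose outputs are combined with alternating ±1 weights. The sketch below simulates it and counts the energy on a given input; note this illustrates the energy measure, not the low-energy constructions of the paper.

```python
def threshold_gate(weights, inputs, threshold):
    """A threshold gate outputs 1 iff the weighted input sum reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

def parity_circuit(x):
    """Depth-2, size-(n+1) threshold circuit for PAR_n; returns (output, energy),
    where energy = number of gates outputting 1 on this input assignment."""
    n = len(x)
    layer1 = [threshold_gate([1] * n, x, k) for k in range(1, n + 1)]  # 'sum >= k' gates
    signs = [1 if k % 2 == 1 else -1 for k in range(1, n + 1)]         # alternating +1/-1
    out = threshold_gate(signs, layer1, 1)
    return out, sum(layer1) + out

print(parity_circuit([1, 0, 1, 1]))  # (1, 4): odd weight, 3 first-layer gates + output fire
```

On an input with S ones, exactly S first-layer gates fire, so the energy grows with the input weight, which is precisely what the paper's energy-e constructions avoid at the cost of size or depth.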

    Neural network decoder for topological color codes with circuit level noise

    A quantum computer needs the assistance of a classical algorithm to detect and identify errors that affect encoded quantum information. At this interface of classical and quantum computing, the technique of machine learning has appeared as a way to tailor such an algorithm to the specific error processes of an experiment, without the need for a priori knowledge of the error model. Here, we apply this technique to topological color codes. We demonstrate that a recurrent neural network with long short-term memory cells can be trained to reduce the error rate ϵ_L of the encoded logical qubit to values much below the error rate ϵ_phys of the physical qubits, fitting the expected power-law scaling ϵ_L ∝ ϵ_phys^{(d+1)/2}, with d the code distance. The neural network incorporates the information from "flag qubits" to avoid reduction in the effective code distance caused by the circuit. As a test, we apply the neural network decoder to a density-matrix-based simulation of a superconducting quantum computer, demonstrating that the logical qubit has a longer lifetime than the constituting physical qubits with near-term experimental parameters.
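The power-law scaling ϵ_L ∝ ϵ_phys^{(d+1)/2} follows from a counting argument: a distance-d code corrects any ⌊(d−1)/2⌋ faults, so the leading failure mode requires (d+1)/2 simultaneous physical errors. A minimal numeric sketch (the prefactor c is an unspecified, code-dependent constant, set to 1 here for illustration):

```python
def logical_error_rate(eps_phys, d, c=1.0):
    """Expected scaling eps_L ~ c * eps_phys^((d+1)/2) for a distance-d code:
    the dominant failure needs (d+1)/2 simultaneous physical errors."""
    return c * eps_phys ** ((d + 1) / 2)

# Below threshold, increasing the code distance suppresses the logical error rate:
for d in (3, 5, 7):
    print(d, logical_error_rate(1e-3, d))
```

A decoder that fails to use the flag-qubit information effectively operates at a reduced distance d' < d, flattening this exponent, which is why the network's exponent fitting the ideal (d+1)/2 is the relevant benchmark.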

    Center for Aeronautics and Space Information Sciences

    This report summarizes the research done during 1991/92 under the Center for Aeronautics and Space Information Sciences (CASIS) program. The topics covered are computer architecture, networking, and neural nets.

    Single-Input Signature Register-Based Time Delay Reservoir

    Machine learning continues to play a critical role in our society. The ability to automatically identify intricate relationships in large volumes of data has proven incredibly useful for problems such as automatic speech recognition and image processing. In particular, neural networks have become increasingly popular in a wide set of application domains, given their ability to solve complex problems and process high-dimensional data. However, the impressive performance of state-of-the-art neural networks comes at the cost of large area and power consumption for the computation resources used in training and inference. As a result, a growing area of research concerns hardware implementations of neural networks. This work proposes a hardware-friendly design for a time-delay reservoir (TDR), a type of recurrent neural network. TDRs represent one class of reservoir computing neural network topologies, which employ random spatio-temporal feature extraction from time series data in order to produce a linearly separable set of features. Reservoir computing topologies differ from traditional recurrent neural networks because their recurrent weights are fixed, and only the feedforward output weights need to be trained, usually with linear regression. Previous work on TDRs includes photonic, software, and both digital and analog electronic implementations. This work adds to the body of previous research by exploring the design space of a novel TDR based on single-input signature registers (SISRs), which are common digital circuits used for built-in self-test. The work is motivated by the structural similarity (delayed feedback loop) between TDRs and SISRs, and the possibility of dual-purposing SISRs for conventional testing as well as machine learning within a single chip. The proposed designs can perform classification on multivariate datasets and perform better than a traditional TDR with quantized reservoir states for parity check, MNIST classification, and temperature prediction tasks. Classification accuracies of up to 100% were observed for some configurations of the SISR for the parity check task, and accuracies of up to 85% were observed for MNIST classification. We also observe overfitting on a temperature prediction task with longer data sequences and provide analyses of the results based on the reservoir dynamics, as measured by the rate of divergence between SISR states and the SISR period.
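An SISR is essentially a linear-feedback shift register whose feedback additionally XORs in a serial input bit, so driving it with a data stream makes its state trace a spatio-temporal feature of that stream. The sketch below is a simplified illustration of this reservoir idea (hypothetical tap positions, linear readout training omitted), not the specific designs evaluated in the work.

```python
def sisr_step(state, bit, taps):
    """One SISR update: the XOR of the tap bits and the serial input shifts in."""
    fb = bit
    for t in taps:
        fb ^= state[t]
    return (fb,) + state[:-1]

def reservoir_states(bits, n=8):
    """Drive an n-bit SISR with an input sequence; the visited states serve as
    reservoir features that a linear readout would be trained on."""
    state = (0,) * n
    trace = []
    for b in bits:
        state = sisr_step(state, b, taps=(0, n - 3, n - 1))
        trace.append(state)
    return trace

t1 = reservoir_states([1, 0, 1, 1, 0, 0, 1, 0])
t2 = reservoir_states([1, 0, 1, 1, 0, 0, 1, 1])  # same stream, last bit flipped
print(t1[-1] != t2[-1])  # the final states diverge, separating the two inputs
```

The rate at which such traces diverge for nearby inputs, and the register's natural period, are exactly the reservoir-dynamics quantities the analysis above refers to.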