
    Configured Quantum Reservoir Computing for Multi-Task Machine Learning

    Amidst rapid advances in experimental technology, noisy intermediate-scale quantum (NISQ) devices have become increasingly programmable, offering versatile opportunities to leverage quantum computational advantage. Here we explore the intricate dynamics of programmable NISQ devices for quantum reservoir computing. Using a genetic algorithm to configure the quantum reservoir dynamics, we systematically enhance the learning performance. Remarkably, a single configured quantum reservoir can simultaneously learn multiple tasks, including a synthetic oscillatory network of transcriptional regulators, chaotic motifs in gene regulatory networks, and the fractional-order Chua's circuit. Our configured quantum reservoir computing yields highly precise predictions for these learning tasks, outperforming classical reservoir computing. We also test the configured quantum reservoir computing in foreign exchange (FX) market applications and demonstrate its capability to capture the stochastic evolution of exchange rates with significantly greater accuracy than classical reservoir computing approaches. Through comparison with classical reservoir computing, we highlight the unique role of quantum coherence in the quantum reservoir, which underpins its exceptional learning performance. Our findings suggest the exciting potential of configured quantum reservoir computing for exploiting the quantum computational power of NISQ devices in developing artificial general intelligence.
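For reference, the classical reservoir computing baseline that this and the following abstracts compare against can be sketched as a minimal echo state network: a fixed random recurrent network whose states are read out by a trained linear layer. The network size, spectral radius, leak rate, and toy sine-prediction task below are illustrative assumptions, not settings from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_reservoir(n_res=100, spectral_radius=0.9, input_dim=1):
    # Fixed random recurrent weights, rescaled to the target spectral radius
    W = rng.standard_normal((n_res, n_res))
    W *= spectral_radius / max(abs(np.linalg.eigvals(W)))
    W_in = rng.uniform(-0.5, 0.5, (n_res, input_dim))
    return W, W_in

def run_reservoir(W, W_in, inputs, leak=0.3):
    # Leaky-integrator update: x <- (1 - a) x + a tanh(W x + W_in u)
    states = np.zeros((len(inputs), W.shape[0]))
    x = np.zeros(W.shape[0])
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states[t] = x
    return states

def train_readout(states, targets, ridge=1e-6):
    # Linear readout fitted by ridge regression (the only trained part)
    A = states.T @ states + ridge * np.eye(states.shape[1])
    return np.linalg.solve(A, states.T @ targets)

# One-step-ahead prediction of a sine wave as a toy task
u = np.sin(0.2 * np.arange(1000))
W, W_in = make_reservoir()
X = run_reservoir(W, W_in, u[:-1])
w_out = train_readout(X[200:], u[1:][200:])   # discard a washout period
pred = X[200:] @ w_out
```

Only the readout weights are trained; the recurrent part stays fixed, which is what makes physical (quantum, optical, memristive) reservoirs practical.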

    Reduced-order modeling of two-dimensional turbulent Rayleigh-Bénard flow by hybrid quantum-classical reservoir computing

    Two hybrid quantum-classical reservoir computing models are presented to reproduce low-order statistical properties of a two-dimensional turbulent Rayleigh-Bénard convection flow at a Rayleigh number Ra = 1e5 and a Prandtl number Pr = 10. The two quantum algorithms differ in the arrangement of the circuit layers in the quantum reservoir, in particular the entanglement layers. The second of the two architectures, denoted H2, enables a complete execution of the reservoir update inside the quantum circuit. Their performance is compared with that of a classical reservoir computing model. All three models have to learn the nonlinear and chaotic dynamics of the flow in a lower-dimensional latent data space spanned by the time series of the 16 most energetic Proper Orthogonal Decomposition (POD) modes. These training data are generated by a POD snapshot analysis of the turbulent flow. All reservoir computing models are operated in the reconstruction or open-loop mode, i.e., they receive 3 POD modes as input at each step and reconstruct the missing 13. We analyse the reconstruction error as a function of the hyperparameters that are either specific to the quantum cases or shared with the classical counterpart, such as the reservoir size and the leaking rate. We show that both quantum algorithms are able to successfully reconstruct essential statistical properties of the turbulent convection flow with a small number of qubits, n <= 9. These properties comprise the velocity and temperature fluctuation profiles and, in particular, the turbulent convective heat flux, which quantifies the turbulent heat transfer across the layer and manifests in coherent hot rising and cold falling thermal plumes. (Comment: 11 pages, 7 figures)
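The open-loop (reconstruction) mode described above — driving a reservoir with 3 observed POD modes and training a linear readout to recover the remaining 13 — can be sketched with a classical reservoir. The synthetic sinusoidal "modes", reservoir size, and regularization below are placeholder assumptions: the paper's data come from a POD of the turbulent flow, and its quantum circuits are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
n_modes, n_in = 16, 3          # 16 POD modes; 3 observed, 13 reconstructed
T = 2000

# Synthetic stand-in for POD mode time series (coupled sinusoids; the
# paper's series come from snapshots of the turbulent convection flow)
t = np.arange(T)
modes = np.stack([np.sin(0.01 * (k + 1) * t + k) for k in range(n_modes)], axis=1)
obs, missing = modes[:, :n_in], modes[:, n_in:]

# Classical reservoir driven by the 3 observed modes (open-loop mode)
n_res, leak = 300, 0.5
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))
W_in = rng.uniform(-1, 1, (n_res, n_in))

x = np.zeros(n_res)
X = np.zeros((T, n_res))
for i in range(T):
    x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ obs[i])
    X[i] = x

# Ridge-regression readout maps reservoir states to the 13 missing modes
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ missing)
recon = X @ w_out
```

Hyperparameters such as the reservoir size `n_res` and the leaking rate `leak` are exactly the quantities whose influence on the reconstruction error the abstract says is analysed.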

    Hierarchical Composition of Memristive Networks for Real-Time Computing

    Advances in materials science have led to physical instantiations of self-assembled networks of memristive devices and demonstrations of their computational capability through reservoir computing. Reservoir computing is an approach that takes advantage of collective system dynamics for real-time computing: a dynamical system, called a reservoir, is excited with a time-varying signal, and observations of its states are used to reconstruct a desired output signal. However, such a monolithic assembly limits the computational power due to signal interdependency and the resulting correlated readouts. Here, we introduce an approach that hierarchically composes a set of interconnected memristive networks into a larger reservoir. We use signal amplification and restoration to reduce reservoir state correlation, which improves the feature extraction from the input signals. Using the same number of output signals, such a hierarchical composition of heterogeneous small networks outperforms monolithic memristive networks by at least 20% on waveform generation tasks. On the NARMA-10 task, we reduce the error by up to a factor of 2 compared to homogeneous reservoirs with sigmoidal neurons, whereas single memristive networks are unable to produce the correct result. Hierarchical composition is key to solving more complex tasks with such novel nanoscale hardware.
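For context, NARMA-10 is a standard tenth-order nonlinear autoregressive benchmark for reservoir computers. A commonly used formulation is sketched below; the exact variant and drive distribution used by the paper are not stated here, so treat the constants as the conventional ones rather than the paper's.

```python
import numpy as np

def narma10(u):
    # Common NARMA-10 recurrence:
    # y(t+1) = 0.3 y(t) + 0.05 y(t) sum_{i=0}^{9} y(t-i) + 1.5 u(t) u(t-9) + 0.1
    y = np.zeros(len(u))
    for t in range(9, len(u) - 1):
        y[t + 1] = (0.3 * y[t]
                    + 0.05 * y[t] * y[t - 9:t + 1].sum()
                    + 1.5 * u[t] * u[t - 9]
                    + 0.1)
    return y

rng = np.random.default_rng(7)
u = rng.uniform(0.0, 0.5, 1000)   # i.i.d. drive in [0, 0.5], as is conventional
y = narma10(u)
```

A reservoir is then trained to map the drive u(t) to the target y(t); the task requires both nonlinearity and roughly ten steps of memory, which is why monolithic memristive networks struggle with it.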

    Analog readout for optical reservoir computers

    Reservoir computing is a new, powerful and flexible machine learning technique that is easily implemented in hardware. Recently, by using a time-multiplexed architecture, hardware reservoir computers have reached performance comparable to digital implementations. Operating speeds allowing for real-time information processing have been reached using optoelectronic systems. At present the main performance bottleneck is the readout layer, which uses slow digital postprocessing. We have designed an analog readout suitable for time-multiplexed optoelectronic reservoir computers, capable of working in real time. The readout has been built and tested experimentally on a standard benchmark task. Its performance is better than that of non-reservoir methods, with ample room for further improvement. The present work thereby overcomes one of the major limitations for the future development of hardware reservoir computers. (Comment: to appear in NIPS 201)
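The time-multiplexed architecture referred to above replaces a spatial network with a single nonlinear node and a delay loop: the "virtual nodes" are samples taken within one delay period, and the readout is a weighted sum of those samples. A software sketch is given below; the sin² nonlinearity mimics an optoelectronic intensity modulator, and all parameter values are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

N = 50                      # virtual nodes per delay loop
eta, gamma = 0.5, 0.05      # feedback and input scaling (assumed values)
rng = np.random.default_rng(0)
mask = rng.choice([-1.0, 1.0], N)   # input mask multiplexing u onto the nodes

def run_delay_reservoir(u):
    # Each virtual node is driven by its own value one delay loop earlier,
    # plus the masked input; sin^2 models the intensity-modulator response.
    states = np.zeros((len(u), N))
    x = np.zeros(N)         # one delay loop of virtual-node states
    for t, ut in enumerate(u):
        x = np.sin(eta * x + gamma * mask * ut) ** 2
        states[t] = x
    return states

u = rng.uniform(-1, 1, 500)
X = run_delay_reservoir(u)
# An analog readout forms the weighted sum of these states directly in
# hardware; in software it would be a linear layer trained by regression.
```

The appeal of the analog readout is that this final weighted sum, normally done by slow digital postprocessing, is computed at the speed of the optoelectronic hardware itself.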