
    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. Over the last decades the question has therefore been raised of whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, such as ion channels, synapses, sensors, and enzymes. This might influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but it would hardly give the network any dramatically enhanced functionality. The conclusion, then, is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate. Comment: 38 pages.

    Neural-network solutions to stochastic reaction networks

    The stochastic reaction network is widely used to model stochastic processes in physics, chemistry and biology. However, the size of the state space increases exponentially with the number of species, making it challenging to investigate the time evolution of the chemical master equation for the reaction network. Here, we propose a machine-learning approach that uses a variational autoregressive network to solve the chemical master equation. The approach is based on the reinforcement-learning framework and does not require any data simulated beforehand by another method. Unlike methods that simulate single trajectories, the proposed approach tracks the time evolution of the joint probability distribution in the state space of species counts, and supports direct sampling of configurations and computation of their normalized joint probabilities. We apply the approach to various systems in physics and biology, and demonstrate that it accurately generates the probability distribution over time in the genetic toggle switch, the early-life self-replicator, the epidemic model and the intracellular signaling cascade. The variational autoregressive network exhibits plasticity in representing the multi-modal distributions produced by feedback regulation, accommodates conservation laws, enables time-dependent reaction rates, and is efficient for high-dimensional reaction networks while allowing a flexible upper count limit. The results suggest a general approach to investigating stochastic reaction networks based on modern machine learning.
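
    To illustrate the sampling interface such a network provides, here is a minimal sketch of the autoregressive idea, not the authors' architecture: the toy PyTorch model AutoregressiveCounts, the constants N_SPECIES and MAX_COUNT, and the per-species conditional layers are all hypothetical choices. Each species count is drawn from a categorical distribution conditioned on the counts already sampled, so every sampled configuration comes with an exactly normalized joint probability.

        # Minimal sketch (assumed toy model): autoregressive sampling of species
        # counts with exact, normalized joint probabilities.
        import torch
        import torch.nn as nn

        N_SPECIES = 3    # number of chemical species (illustrative)
        MAX_COUNT = 20   # flexible upper count limit per species (illustrative)

        class AutoregressiveCounts(nn.Module):
            def __init__(self, n_species=N_SPECIES, max_count=MAX_COUNT, hidden=64):
                super().__init__()
                self.n, self.m = n_species, max_count + 1
                # one small conditional network per species; species i sees counts 0..i-1
                self.conds = nn.ModuleList(
                    nn.Sequential(nn.Linear(max(i, 1), hidden), nn.ReLU(),
                                  nn.Linear(hidden, self.m))
                    for i in range(n_species)
                )

            def sample_with_logprob(self, batch):
                counts = torch.zeros(batch, self.n)
                logp = torch.zeros(batch)
                for i in range(self.n):
                    context = counts[:, :i] if i > 0 else torch.zeros(batch, 1)
                    logits = self.conds[i](context / self.m)   # rescale inputs
                    dist = torch.distributions.Categorical(logits=logits)
                    c = dist.sample()
                    logp = logp + dist.log_prob(c)             # normalized by construction
                    counts[:, i] = c.float()
                return counts, logp

        model = AutoregressiveCounts()
        configs, log_probs = model.sample_with_logprob(batch=8)
        print(configs, log_probs.exp())   # sampled count vectors and their probabilities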

    Adiabatic evolution on a spatial-photonic Ising machine

    Combinatorial optimization problems are crucial for widespread applications but remain difficult to solve at a large scale with conventional hardware. Novel optical platforms, known as coherent or photonic Ising machines, are attracting considerable attention as accelerators for optimization tasks that can be formulated as Ising models. Annealing is a well-known technique based on adiabatic evolution for finding optimal solutions in classical and quantum systems made of atoms, electrons, or photons. Although various Ising machines employ annealing in some form, adiabatic computing in optical settings has been only partially investigated. Here, we realize the adiabatic evolution of frustrated Ising models with 100 spins programmed by spatial light modulation. We use holographic and optical control to change the spin couplings adiabatically, and exploit experimental noise to explore the energy landscape. Annealing enhances convergence to the Ising ground state and allows the problem solution to be found with probability close to unity. Our results demonstrate a photonic scheme for combinatorial optimization in analogy with adiabatic quantum algorithms, enabled by optical vector-matrix multiplications and scalable photonic technology. Comment: 9 pages, 4 figures.
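
    For intuition about how an adiabatic coupling ramp combines with noise-driven exploration, the following is a purely classical sketch, not the optical implementation: the random Gaussian couplings, the Metropolis-style single-spin update, and the schedule variables steps and noise are illustrative assumptions. The coupling matrix is interpolated from a trivial problem to the target frustrated one while noisy updates let the spins track low-energy states.

        # Minimal classical sketch (assumed): adiabatic ramp of Ising couplings
        # with noisy single-spin updates; not the photonic hardware.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 100                                   # number of spins, as in the experiment
        J_target = rng.normal(size=(n, n))
        J_target = (J_target + J_target.T) / 2    # symmetric frustrated couplings
        np.fill_diagonal(J_target, 0.0)
        J_easy = np.zeros((n, n))                 # trivial starting problem

        spins = rng.choice([-1, 1], size=n)
        steps, noise = 2000, 0.5
        for t in range(steps):
            s = t / (steps - 1)                   # adiabatic schedule from 0 to 1
            J = (1 - s) * J_easy + s * J_target
            for i in rng.permutation(n):
                local_field = J[i] @ spins        # the vector-matrix multiplication step
                dE = 2 * spins[i] * local_field   # energy change of flipping spin i
                # accept flips that lower the energy, plus noise-driven exploration
                if dE < 0 or rng.random() < np.exp(-dE / max(noise * (1 - s), 1e-3)):
                    spins[i] *= -1

        energy = -0.5 * spins @ J_target @ spins
        print("final Ising energy:", energy)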

    Superior memory efficiency of quantum devices for the simulation of continuous-time stochastic processes

    Continuous-time stochastic processes pervade everyday experience, and the simulation of models of these processes is of great utility. Classical models of systems operating in continuous time must typically track an unbounded amount of information about past behaviour, even for relatively simple models, enforcing limits on precision due to the finite memory of the machine. However, quantum machines can require less information about the past than even their optimal classical counterparts to simulate the future of discrete-time processes, and we demonstrate that this advantage extends to the continuous-time regime. Moreover, we show that this reduction in the memory requirement can be unboundedly large, allowing for arbitrary precision even with a finite quantum memory. We provide a systematic method for finding superior quantum constructions, and a protocol for analogue simulation of continuous-time renewal processes with a quantum machine. Comment: 13 pages, 8 figures; title changed from the original version.
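
    A rough numerical illustration of why the classical memory requirement diverges, under assumptions that are not the paper's quantum construction: the Erlang-2 renewal process and the helper classical_memory_entropy below are hypothetical examples. A classical simulator of a renewal process must store the time since the last event, and the entropy of the discretized age distribution grows without bound as the resolution dt shrinks, whereas the paper's point is that a quantum memory can remain finite.

        # Illustrative sketch (assumed example process): classical memory of a
        # renewal-process simulator grows without bound as the time resolution shrinks.
        import numpy as np

        def classical_memory_entropy(dt, t_max=30.0):
            # ages since the last event, discretized with resolution dt
            t = np.arange(0.0, t_max, dt)
            # survival function of an Erlang-2 (non-memoryless) inter-event time
            survival = (1.0 + t) * np.exp(-t)
            p = survival / survival.sum()             # stationary age distribution
            return float(-np.sum(p * np.log2(p)))     # classical memory in bits

        for dt in (1.0, 0.1, 0.01, 0.001):
            print(f"dt = {dt:7}: classical memory ~ {classical_memory_entropy(dt):6.2f} bits")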

    Density Matrix Emulation of Quantum Recurrent Neural Networks for Multivariate Time Series Prediction

    Quantum Recurrent Neural Networks (QRNNs) are robust candidates for modeling and predicting future values in multivariate time series. However, the effective implementation of some QRNN models is limited by the need for mid-circuit measurements. These increase the requirements on quantum hardware, which in the current NISQ era does not allow reliable computation. Emulation arises as the main near-term alternative for exploring the potential of QRNNs, but existing quantum emulators are not dedicated to circuits with multiple intermediate measurements. In this context, we design a specific emulation method that relies on the density matrix formalism. The mathematical development is provided explicitly as a compact formulation in tensor notation. It allows us to show how present and past information from a time series is transmitted through the circuit, and how to reduce the computational cost in every time step of the emulated network. In addition, we derive the analytical gradient and the Hessian of the network outputs with respect to its trainable parameters, with an eye on gradient-based training and the noisy outputs that would appear when using real quantum processors. We finally test the presented methods using a novel hardware-efficient ansatz and three diverse datasets that include univariate and multivariate time series. Our results show how QRNNs can make accurate predictions of future values by capturing non-trivial patterns of input series with different complexities. Comment: 16 pages, 6 figures.
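
    To make the density-matrix viewpoint concrete, here is a minimal sketch of one recurrent step, not the paper's hardware-efficient ansatz or its tensor formulation: the two-qubit toy cell, the rotation angle theta, and the helper qrnn_step are hypothetical. The mid-circuit measurement of the input qubit is emulated as a measurement channel followed by a partial trace, so the memory qubit's reduced density matrix carries information to the next time step.

        # Minimal sketch (assumed toy cell): density-matrix emulation of one QRNN
        # step with a mid-circuit measurement on the input qubit.
        import numpy as np

        I2 = np.eye(2)
        Z = np.diag([1.0, -1.0])

        def ry(theta):
            c, s = np.cos(theta / 2), np.sin(theta / 2)
            return np.array([[c, -s], [s, c]])

        def cnot():
            return np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                             [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

        def qrnn_step(rho_mem, x, theta):
            """One recurrent step on the qubit pair (memory, input)."""
            # load the classical input x into a fresh input qubit
            rho_in = ry(x) @ np.diag([1.0, 0.0]) @ ry(x).T
            rho = np.kron(rho_mem, rho_in)
            # parametrized unitary: local rotations followed by an entangling CNOT
            U = cnot() @ np.kron(ry(theta), ry(theta))
            rho = U @ rho @ U.conj().T
            # mid-circuit measurement of the input qubit: Z expectation as output,
            # then the corresponding dephasing channel
            y = np.real(np.trace(rho @ np.kron(I2, Z)))
            P0, P1 = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
            rho = sum(np.kron(I2, P) @ rho @ np.kron(I2, P) for P in (P0, P1))
            # partial trace over the measured input qubit -> next memory state
            rho_mem_next = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
            return y, rho_mem_next

        rho_mem = np.diag([1.0, 0.0])          # memory qubit starts in |0><0|
        for x in [0.1, 0.5, 0.9]:              # a toy univariate series
            y, rho_mem = qrnn_step(rho_mem, x, theta=0.7)
            print("prediction:", y)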