
    Information Scrambling in Quantum Neural Networks

    The quantum neural network is one of the promising applications of near-term noisy intermediate-scale quantum computers. A quantum neural network distills information from the input wave function into the output qubits. In this Letter, we show that this process can also be viewed from the opposite direction: the quantum information in the output qubits is scrambled into the input. This observation motivates us to use the tripartite information, a quantity recently developed to characterize information scrambling, to diagnose the training dynamics of quantum neural networks. We empirically find a strong correlation between the dynamical behavior of the tripartite information and the loss function during training, from which we identify two stages in the training of randomly initialized networks. In the early stage, the network performance improves rapidly and the tripartite information increases linearly with a universal slope, meaning that the neural network becomes less scrambled than a random unitary. In the later stage, the network performance improves slowly while the tripartite information decreases. We present evidence that the network constructs local correlations in the early stage and learns large-scale structures in the later stage. We believe this two-stage training dynamics is universal and applicable to a wide range of problems. Our work builds a bridge between two research subjects, quantum neural networks and information scrambling, which opens up a new perspective on quantum neural networks.
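    For concreteness, here is a minimal numpy sketch (an illustration, not the Letter's code) of the diagnostic itself: the tripartite information I3(A:C:D) = I(A:C) + I(A:D) - I(A:CD) of a unitary U, evaluated on the dual state |U> given by channel-state duality. The 2+2 qubit partition and the Haar-random test unitary are assumptions chosen for the example.

    import numpy as np
    from scipy.stats import unitary_group

    def entropy(rho):
        # Von Neumann entropy in bits.
        evals = np.linalg.eigvalsh(rho)
        evals = evals[evals > 1e-12]
        return -np.sum(evals * np.log2(evals))

    def reduced_state(psi, keep, dims):
        # Reduced density matrix of pure state psi on the subsystems in `keep`.
        psi = np.moveaxis(psi.reshape(dims), keep, list(range(len(keep))))
        d_keep = int(np.prod([dims[i] for i in keep]))
        m = psi.reshape(d_keep, -1)
        return m @ m.conj().T

    def tripartite_information(U, n_a, n_c):
        # I3(A:C:D) = I(A:C) + I(A:D) - I(A:CD) of the dual state
        # |U> = (I x U)|EPR>; A = first n_a inputs, C = first n_c outputs.
        n = int(np.log2(U.shape[0]))
        epr = np.eye(2**n).reshape(-1) / np.sqrt(2**n)
        psi = np.kron(np.eye(2**n), U) @ epr
        dims = [2] * (2 * n)
        A = list(range(n_a))
        C = list(range(n, n + n_c))
        D = list(range(n + n_c, 2 * n))
        def mi(X, Y):  # mutual information I(X:Y) = S(X) + S(Y) - S(XY)
            return (entropy(reduced_state(psi, X, dims))
                    + entropy(reduced_state(psi, Y, dims))
                    - entropy(reduced_state(psi, X + Y, dims)))
        return mi(A, C) + mi(A, D) - mi(A, C + D)

    # A Haar-random 4-qubit unitary is strongly scrambling, so I3 is negative;
    # the identity does not scramble at all, so I3 is zero.
    print(tripartite_information(unitary_group.rvs(2**4), n_a=2, n_c=2))
    print(tripartite_information(np.eye(2**4), n_a=2, n_c=2))

    The identity returns I3 = 0 while a Haar-random unitary gives a markedly negative I3, matching the sense in which a network whose tripartite information increases is "less scrambled than a random unitary".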

    Superpositional Quantum Network Topologies

    We introduce superposition-based quantum networks that combine (i) the classical perceptron model of multilayered, feedforward neural networks and (ii) the algebraic model of evolving reticular quantum structures described in quantum gravity. The main feature of this model is the move from particular neural topologies to a quantum metastructure that embodies many differing topological patterns. Using quantum parallelism, training can be performed on superpositions of different network topologies; as a result, not only the classical transition functions but also the topology itself becomes a subject of training. In this model, particular neural networks with different topologies are quantum states. We consider high-dimensional dissipative quantum structures as candidates for implementations of the model.
    Comment: 10 pages, LaTeX2e
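    To make the idea tangible, the following toy sketch (a classical caricature under assumptions of ours, not the paper's formalism) treats each candidate topology as a basis state, carries a normalized amplitude vector over topologies, scores the "metastructure" by the Born-weighted expected loss, and then reweights the amplitudes, so that the topology itself is trained. The candidate widths and the reweighting rule are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical candidate topologies: hidden-layer widths of small MLPs.
    topologies = [(4,), (8,), (4, 4)]
    amps = np.ones(len(topologies), dtype=complex) / np.sqrt(len(topologies))

    def topology_loss(widths, X, y):
        # Mean-squared error of one randomly initialized tanh network.
        h = X
        for w in widths + (1,):
            h = np.tanh(h @ rng.normal(size=(h.shape[1], w)))
        return np.mean((h.ravel() - y) ** 2)

    X = rng.normal(size=(32, 3))
    y = rng.normal(size=32)
    losses = np.array([topology_loss(t, X, y) for t in topologies])
    expected_loss = np.abs(amps) ** 2 @ losses   # Born-weighted loss of the superposition

    # "Training the topology": shift amplitude toward lower-loss basis states.
    amps *= np.exp(-losses)
    amps /= np.linalg.norm(amps)
    print(expected_loss, np.abs(amps) ** 2)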

    Backpropagation training in adaptive quantum networks

    We introduce a robust, error-tolerant adaptive training algorithm for generalized learning paradigms in high-dimensional superposed quantum networks, or \emph{adaptive quantum networks}. The formalized procedure applies standard backpropagation training across a coherent ensemble of discrete topological configurations of individual neural networks, each formally merged into an appropriate linear superposition within a predefined, decoherence-free subspace. Quantum parallelism facilitates simultaneous training and revision of the system within this coherent state space, yielding accelerated convergence to a stable network attractor under subsequent iterations of the backpropagation algorithm. Parallel evolution of linearly superposed networks under backpropagation training provides quantitative, numerical indications for optimizing both single-neuron activation functions and the reconfiguration of the whole-network quantum structure.
    Comment: Talk presented at "Quantum Structures - 2008", Gdansk, Poland
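    As a point of reference, the sketch below (a purely classical stand-in under assumed details, not the quantum procedure itself) applies one and the same backpropagation rule to an ensemble of networks with different hidden widths, the classical counterpart of training every topological configuration in parallel. The widths, learning rate, and synthetic data are hypothetical.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(64, 5))
    y = rng.normal(size=(64, 1))

    def backprop_step(params, lr=0.1):
        # One gradient step on mean-squared error for a one-hidden-layer
        # tanh network (the constant factor 2 is absorbed into lr).
        W1, W2 = params
        h = np.tanh(X @ W1)                # forward pass
        pred = h @ W2
        err = pred - y
        gW2 = h.T @ err / len(X)           # backprop to output weights
        gh = (err @ W2.T) * (1 - h**2)     # backprop through tanh
        gW1 = X.T @ gh / len(X)
        return (W1 - lr * gW1, W2 - lr * gW2), np.mean(err**2)

    # Ensemble of topologies: hypothetical hidden widths 4, 8, 16.
    ensemble = [(0.5 * rng.normal(size=(5, w)), 0.5 * rng.normal(size=(w, 1)))
                for w in (4, 8, 16)]
    for step in range(100):
        ensemble, losses = zip(*(backprop_step(p) for p in ensemble))
    print(["%.3f" % l for l in losses])

    Each ensemble member receives the identical update rule; only the topology differs, which is the aspect the paper proposes to handle in superposition rather than member by member.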