Neural Networks Architecture Evaluation in a Quantum Computer
In this work, we propose a quantum algorithm to evaluate neural network
architectures, named Quantum Neural Network Architecture Evaluation (QNNAE).
The proposed algorithm is based on a quantum associative memory and the
learning algorithm for artificial neural networks. Unlike conventional
algorithms for evaluating neural network architectures, QNNAE does not depend
on the initialization of weights. The proposed algorithm has a binary output
and returns 0 with probability proportional to the performance of the network,
and its computational cost equals that of training a neural network.
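The binary-output behaviour described above can be sketched classically. The
simulation below is a hypothetical illustration, not the QNNAE algorithm
itself: it assumes the measurement returns 0 with probability equal to the
network's performance score (a simple stand-in for "proportional to"), and the
function name `qnnae_outcome` is invented for the sketch.

```python
import random

def qnnae_outcome(performance, rng):
    # Hypothetical stand-in: return 0 with probability equal to the
    # network's performance score, 1 otherwise.
    return 0 if rng.random() < performance else 1

rng = random.Random(42)
trials = 20_000
# Estimate P(outcome == 0) for a network with performance 0.8.
zero_rate = sum(qnnae_outcome(0.8, rng) == 0 for _ in range(trials)) / trials
print(round(zero_rate, 2))  # close to 0.8
```

Repeated measurements thus estimate the network's quality without ever fixing
a particular weight initialization.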
Continuous-variable quantum neural networks
We introduce a general method for building neural networks on quantum
computers. The quantum neural network is a variational quantum circuit built in
the continuous-variable (CV) architecture, which encodes quantum information in
continuous degrees of freedom such as the amplitudes of the electromagnetic
field. This circuit contains a layered structure of continuously parameterized
gates which is universal for CV quantum computation. Affine transformations and
nonlinear activation functions, two key elements in neural networks, are
enacted in the quantum network using Gaussian and non-Gaussian gates,
respectively. The non-Gaussian gates provide both the nonlinearity and the
universality of the model. Due to the structure of the CV model, the CV quantum
neural network can encode highly nonlinear transformations while remaining
completely unitary. We show how a classical network can be embedded into the
quantum formalism and propose quantum versions of various specialized models
such as convolutional, recurrent, and residual networks. Finally, we present
numerous modeling experiments built with the Strawberry Fields software
library. These experiments, including a classifier for fraud detection, a
network which generates Tetris images, and a hybrid classical-quantum
autoencoder, demonstrate the capability and adaptability of CV quantum neural
networks.
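A classical analogue of one CV layer makes the affine-plus-nonlinearity
structure concrete. The NumPy sketch below is an assumption-laden stand-in,
not a quantum simulation: it treats each mode as a single real quadrature
value, uses an ordinary matrix for the Gaussian (affine) part, and `tanh` as a
placeholder for the non-Gaussian nonlinearity.

```python
import numpy as np

rng = np.random.default_rng(0)

def cv_layer(x, W, b, phi=np.tanh):
    # Gaussian gates enact the affine map W @ x + b on the quadratures;
    # a non-Gaussian gate supplies the elementwise nonlinearity phi.
    return phi(W @ x + b)

# Stack layers exactly as in a classical feed-forward network.
x = rng.normal(size=4)                            # 4 modes, one quadrature each
W1, b1 = rng.normal(size=(4, 4)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)
out = cv_layer(cv_layer(x, W1, b1), W2, b2)
print(out.shape)  # (2,)
```

In the actual CV circuit both the affine map and the nonlinearity act on
quantum states unitarily; this sketch only mirrors the layer structure.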
Information Scrambling in Quantum Neural Networks
The quantum neural network is one of the promising applications for near-term
noisy intermediate-scale quantum computers. A quantum neural network distills
the information from the input wave function into the output qubits. In this
Letter, we show that this process can also be viewed from the opposite
direction: the quantum information in the output qubits is scrambled into the
input. This observation motivates us to use the tripartite information, a
quantity recently developed to characterize information scrambling, to
diagnose the training dynamics of quantum neural networks. We empirically find
a strong correlation between the dynamical behavior of the tripartite
information and the loss function during training, from which we identify two
stages of training for randomly initialized networks. In the early stage, the
network performance improves rapidly and the tripartite information increases
linearly with a universal slope, meaning that the neural network becomes less
scrambled than a random unitary. In the latter stage, the network performance
improves slowly while the tripartite information decreases. We present
evidence that the network constructs local correlations in the early stage and
learns large-scale structures in the latter stage. We believe this two-stage
training dynamics is universal and applicable to a wide range of problems. Our
work builds a bridge between two research subjects, quantum neural networks
and information scrambling, which opens up a new perspective on quantum neural
networks.
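The tripartite-information diagnostic can be computed directly for small
systems. The sketch below is illustrative only: it evaluates
I3(A:C:D) = I(A:C) + I(A:D) - I(A:CD) for a random four-qubit pure state via
partial traces, rather than for a trained network's channel as in the Letter;
all function names are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)

def reduced_density(psi, keep, n):
    # Trace out every qubit not in `keep` from the pure state psi
    # (a vector of 2**n amplitudes); returns the reduced density matrix.
    psi = psi.reshape([2] * n)
    traced = [q for q in range(n) if q not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    # Von Neumann entropy in bits.
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

def mutual_info(psi, X, Y, n):
    return (entropy(reduced_density(psi, X, n))
            + entropy(reduced_density(psi, Y, n))
            - entropy(reduced_density(psi, sorted(X + Y), n)))

def tripartite_info(psi, A, C, D, n):
    # I3(A:C:D) = I(A:C) + I(A:D) - I(A:CD); strongly negative values
    # are the standard signature of information scrambling.
    return (mutual_info(psi, A, C, n) + mutual_info(psi, A, D, n)
            - mutual_info(psi, A, sorted(C + D), n))

n = 4
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)
i3 = tripartite_info(psi, [0], [1], [2, 3], n)
print(round(i3, 3))
```

Tracking this quantity over training steps, as the Letter does, reveals the
two-stage dynamics described above.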
Quantum Hopfield neural network
Quantum computing allows for the potential of significant advancements in
both the speed and the capacity of widely used machine learning techniques.
Here we employ quantum algorithms for the Hopfield network, which can be used
for pattern recognition, reconstruction, and optimization as a realization of a
content-addressable memory system. We show that an exponentially large network
can be stored in a polynomial number of quantum bits by encoding the network
into the amplitudes of quantum states. By introducing a classical technique for
operating the Hopfield network, we can leverage quantum algorithms to obtain a
quantum computational complexity that is logarithmic in the dimension of the
data. We also present an application of our method as a genetic sequence
recognizer.
Comment: 13 pages, 3 figures, final version
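The exponential storage claim rests on amplitude encoding: a d-dimensional
pattern occupies only ceil(log2 d) qubits once written into state amplitudes.
A minimal classical sketch, with `amplitude_encode` as a hypothetical helper
name:

```python
import numpy as np

def amplitude_encode(pattern):
    # Pad the pattern to the next power of two and normalise it, so it
    # can serve as the amplitude vector of an n-qubit state.
    x = np.asarray(pattern, dtype=float)
    d = 1 << (len(x) - 1).bit_length()   # next power of two >= len(x)
    state = np.zeros(d)
    state[:len(x)] = x
    state /= np.linalg.norm(state)
    n_qubits = d.bit_length() - 1        # log2(d)
    return state, n_qubits

state, n = amplitude_encode([1, -1, 1, 1, -1, 1])
print(n)  # 3 qubits suffice for a 6-dimensional pattern
```

A network over d neurons thus needs only O(log d) qubits, which is what makes
the logarithmic complexity in the data dimension possible.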
Neural Network Operations and Suzuki-Trotter evolution of Neural Network States
It was recently proposed to leverage the representational power of artificial
neural networks, in particular Restricted Boltzmann Machines, in order to model
complex quantum states of many-body systems [Science, 355(6325), 2017]. States
represented in this way, called Neural Network States (NNSs), were shown to
display interesting properties like the ability to efficiently capture
long-range quantum correlations. However, identifying an optimal neural network
representation of a given state might be challenging, and so far this problem
has been addressed with stochastic optimization techniques. In this work we
explore a different direction. We study how the action of elementary quantum
operations modifies NNSs. We parametrize a family of many-body quantum
operations that can be directly applied to states represented by Unrestricted
Boltzmann Machines, by just adding hidden nodes and updating the network
parameters. We show that this parametrization contains a set of universal
quantum gates, from which it follows that the state prepared by any quantum
circuit can be expressed as a Neural Network State with a number of hidden
nodes that grows linearly with the number of elementary operations in the
circuit. This is a powerful representation theorem (which was recently
obtained with different methods), but it is not directly useful, since there
is no
general and efficient way to extract information from this unrestricted
description of quantum states. To circumvent this problem, we propose a
step-wise procedure based on the projection of Unrestricted quantum states to
Restricted quantum states. In turn, two approximate methods to perform this
projection are discussed. In this way, we show that it is in principle possible
to approximately optimize or evolve Neural Network States without relying on
stochastic methods such as Variational Monte Carlo, which are computationally
expensive.
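The Boltzmann-machine representation referred to above has a closed form once
the hidden units are traced out analytically. The sketch below evaluates
unnormalised Restricted Boltzmann Machine amplitudes in the standard form
psi(s) = exp(a . s) * prod_j 2 cosh(b_j + (W s)_j); the system sizes and
random parameter values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def rbm_amplitude(s, a, b, W):
    # Visible-bias factor times a product over hidden nodes; the hidden
    # units {0, 1} have already been summed out analytically.
    theta = b + W @ s
    return np.exp(a @ s) * np.prod(2 * np.cosh(theta))

n_vis, n_hid = 3, 2
shape = lambda *d: rng.normal(scale=0.1, size=d) + 1j * rng.normal(scale=0.1, size=d)
a, b, W = shape(n_vis), shape(n_hid), shape(n_hid, n_vis)

# Amplitudes over all spin configurations s in {-1, +1}^3, then normalise.
configs = [np.array([(k >> i & 1) * 2 - 1 for i in range(n_vis)])
           for k in range(2 ** n_vis)]
psi = np.array([rbm_amplitude(s, a, b, W) for s in configs])
psi /= np.linalg.norm(psi)
print(psi.shape)  # (8,)
```

Adding a hidden node, as in the parametrization of quantum operations above,
amounts to appending one entry to `b` and one row to `W`, which is why the
hidden-node count grows linearly with the circuit length.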
Weightless neural network parameters and architecture selection in a quantum computer
Training artificial neural networks requires a tedious empirical evaluation
to determine a suitable neural network architecture. To avoid this empirical
process several techniques have been proposed to automatise the architecture
selection process. In this paper, we propose a method to perform parameter and
architecture selection for a quantum weightless neural network (qWNN). The
architecture selection is performed through the learning procedure of a qWNN
with a learning algorithm that uses the principle of quantum superposition and
a non-linear quantum operator. The main advantage of the proposed method is
that it performs a global search in the space of qWNN architectures and
parameters rather than a local search.
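Weightless neural networks replace weighted sums with RAM lookups, which is
what makes superposing their parameters attractive. The classical sketch below
shows a single RAM node (the `RAMNode` class is an invented illustration); the
qWNN of the paper would hold such lookup-table contents, and the architecture
choices themselves, in quantum superposition.

```python
from collections import defaultdict

class RAMNode:
    # A weightless neuron: a lookup table addressed by a binary input
    # tuple. "Training" simply writes a 1 at the addressed cell, so there
    # are no real-valued weights to optimise.
    def __init__(self):
        self.table = defaultdict(int)

    def train(self, bits):
        self.table[tuple(bits)] = 1

    def respond(self, bits):
        return self.table[tuple(bits)]

node = RAMNode()
node.train([1, 0, 1])
print(node.respond([1, 0, 1]), node.respond([0, 0, 1]))  # 1 0
```

Because each parameter is a single table bit, the whole parameter space is
finite and discrete, which is what a superposition-based global search over
architectures and parameters exploits.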
