Quantum autoencoders via quantum adders with genetic algorithms
The quantum autoencoder is a recent paradigm in the field of quantum machine
learning, which may enable an enhanced use of resources in quantum
technologies. To this end, quantum neural networks with fewer nodes in the inner
than in the outer layers were considered. Here, we propose a useful connection
between approximate quantum adders and quantum autoencoders. Specifically, this
link allows us to employ optimized approximate quantum adders, obtained with
genetic algorithms, for the implementation of quantum autoencoders for a
variety of initial states. Furthermore, we can also directly optimize the
quantum autoencoders via genetic algorithms. Our approach opens a different
path for the design of quantum autoencoders in controllable quantum platforms.
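As a rough illustration of the genetic-algorithm optimization the abstract describes, the sketch below evolves a parameter vector with selection, crossover, and mutation. The fitness function is a toy stand-in (in the paper the objective is the fidelity of an approximate quantum adder); all names and values here are hypothetical.

```python
import random

random.seed(0)

# Toy stand-in for approximate-adder fidelity: maximize closeness of a
# parameter vector to a fixed target (1.0 = perfect match).
TARGET = [0.1, 0.5, 0.9, 0.3]

def fitness(genome):
    return 1.0 - sum((g - t) ** 2 for g, t in zip(genome, TARGET)) / len(genome)

def mutate(genome, rate=0.2, scale=0.1):
    # Perturb each gene with probability `rate` by Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=30, generations=100):
    pop = [[random.random() for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(round(fitness(best), 3))
```

The same loop applies whether the genome encodes gate angles of an approximate adder or the autoencoder circuit directly; only `fitness` changes.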
Continuous-variable quantum neural networks
We introduce a general method for building neural networks on quantum
computers. The quantum neural network is a variational quantum circuit built in
the continuous-variable (CV) architecture, which encodes quantum information in
continuous degrees of freedom such as the amplitudes of the electromagnetic
field. This circuit contains a layered structure of continuously parameterized
gates which is universal for CV quantum computation. Affine transformations and
nonlinear activation functions, two key elements in neural networks, are
enacted in the quantum network using Gaussian and non-Gaussian gates,
respectively. The non-Gaussian gates provide both the nonlinearity and the
universality of the model. Due to the structure of the CV model, the CV quantum
neural network can encode highly nonlinear transformations while remaining
completely unitary. We show how a classical network can be embedded into the
quantum formalism and propose quantum versions of various specialized models
such as convolutional, recurrent, and residual networks. Finally, we present
numerous modeling experiments built with the Strawberry Fields software
library. These experiments, including a classifier for fraud detection, a
network which generates Tetris images, and a hybrid classical-quantum
autoencoder, demonstrate the capability and adaptability of CV quantum neural
networks.
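The layer structure the abstract describes (Gaussian gates enacting an affine map, a non-Gaussian gate supplying the nonlinearity) has a direct classical analogue, sketched below in NumPy. This is only the classical counterpart, not the unitary CV circuit itself; `tanh` here is a hypothetical stand-in for the non-Gaussian gate.

```python
import numpy as np

rng = np.random.default_rng(42)

def cv_like_layer(x, W, b):
    # Affine map (the role of the Gaussian gates) followed by an
    # elementwise nonlinearity (the role of the non-Gaussian gate).
    return np.tanh(W @ x + b)

W = rng.standard_normal((3, 3)) * 0.5
b = rng.standard_normal(3) * 0.1
x = np.array([1.0, -0.5, 0.2])

y = cv_like_layer(x, W, b)
print(y.shape)
```

Stacking several such layers mirrors the layered gate structure of the CV network; in the quantum model the whole transformation additionally remains unitary on the state.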
Hybrid Collaborative Filtering with Autoencoders
Collaborative Filtering aims at exploiting the feedback of users to provide
personalised recommendations. Such algorithms look for latent variables in a
large sparse matrix of ratings. They can be enhanced by adding side information
to tackle the well-known cold start problem. While Neural Networks have
tremendous success in image and speech recognition, they have received less
attention in Collaborative Filtering. This is all the more surprising given that
Neural Networks are able to discover latent variables in large and
heterogeneous datasets. In this paper, we introduce a Collaborative Filtering
Neural Network architecture, dubbed CFN, which computes a non-linear Matrix
Factorization from sparse rating inputs and side information. We show
experimentally on the MovieLens and Douban datasets that CFN outperforms the
state of the art and benefits from side information. We provide an
implementation of the algorithm as a reusable plugin for Torch, a popular
Neural Network framework.
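The core idea of autoencoder-based collaborative filtering, reconstructing a user's sparse rating row and scoring the loss only on observed entries, can be sketched as follows. This is a minimal NumPy illustration with hypothetical toy data, not the CFN architecture or its Torch plugin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy rating matrix: rows are users, columns are items, 0 = unrated.
R = np.array([[5.0, 3.0, 0.0, 1.0],
              [4.0, 0.0, 0.0, 1.0],
              [0.0, 1.0, 5.0, 4.0]])
mask = R > 0  # which entries are observed

n_items, hidden = R.shape[1], 2
W_enc = rng.standard_normal((hidden, n_items)) * 0.1
W_dec = rng.standard_normal((n_items, hidden)) * 0.1

def reconstruct(r):
    h = np.tanh(W_enc @ r)  # encode a user's sparse rating row
    return W_dec @ h        # decode to predicted ratings for all items

def masked_mse(R, mask):
    # Loss over observed entries only, so missing ratings do not
    # penalize the reconstruction.
    R_hat = np.stack([reconstruct(r) for r in R])
    return np.mean((R_hat[mask] - R[mask]) ** 2)

print(round(masked_mse(R, mask), 3))
```

Training would minimize this masked loss by gradient descent; side information would enter as extra input features concatenated to each rating row.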
Graph-embedding Enhanced Attention Adversarial Autoencoder
When dealing with graph data in real problems, only part of the nodes in the graph are labeled and the rest are not. A core problem is how to use this information to extend the labeling so that every node is assigned a label (or labels). Intuitively, we can learn patterns (or extract representations) from the labeled nodes and then apply those patterns to determine the membership of the unknown nodes. A majority of previous studies focus on extracting local information representations and may suffer from a lack of additional constraints, which are necessary for improving the robustness of the representation. In this work, we present Graph-embedding enhanced attention Adversarial Autoencoder Networks (Great AAN), a new scalable generalized framework for graph-structured data representation learning and node classification. In our framework, we first introduce attention layers and provide insights on the multi-head self-attention mechanism. Moreover, the shortest path length between nodes is incorporated into the self-attention mechanism to enhance the embedding of each node's structural spatial information. A generative adversarial autoencoder is then proposed to encode both global and local information and to enhance the robustness of the embedded data distribution. Owing to its scalability, the approach supports a variety of applications, including node classification, recommendation, and graph link prediction. We applied Great AAN to multiple datasets (including PPI, Cora, Citeseer, Pubmed and Alipay) from social science and biomedical science. The experimental results demonstrate that our framework significantly outperforms several popular methods.
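One way to fold shortest-path structure into self-attention, as the abstract describes, is to add a distance-dependent bias to the attention scores before the softmax. The sketch below shows this for a single head over four nodes; the distance matrix, bias weight, and dimensions are all hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

d = 8
X = rng.standard_normal((4, d))               # node features, 4 nodes
Wq, Wk, Wv = (rng.standard_normal((d, d)) * 0.1 for _ in range(3))

# Pairwise shortest-path lengths on a toy path graph 0-1-2-3.
dist = np.array([[0, 1, 2, 3],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [3, 2, 1, 0]], dtype=float)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

Q, K, V = X @ Wq, X @ Wk, X @ Wv
# Subtracting a scaled distance makes far-apart nodes attend less.
scores = Q @ K.T / np.sqrt(d) - 0.5 * dist
A = softmax(scores)       # rows sum to 1
out = A @ V               # structure-aware node embeddings
print(out.shape)
```

A multi-head version would repeat this with separate projections per head and concatenate the outputs; the adversarial autoencoder would then regularize the resulting embedding distribution.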