Quantum Hopfield neural network
Quantum computing allows for the potential of significant advancements in
both the speed and the capacity of widely used machine learning techniques.
Here we employ quantum algorithms for the Hopfield network, which can be used
for pattern recognition, reconstruction, and optimization as a realization of a
content-addressable memory system. We show that an exponentially large network
can be stored in a polynomial number of quantum bits by encoding the network
into the amplitudes of quantum states. By introducing a classical technique for
operating the Hopfield network, we can leverage quantum algorithms to obtain a
quantum computational complexity that is logarithmic in the dimension of the
data. We also present an application of our method as a genetic sequence
recognizer.
Comment: 13 pages, 3 figures, final version
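The quantum encoding described in the abstract builds on the classical Hopfield model. As a point of reference, here is a minimal classical Hopfield network sketch (Hebbian training plus synchronous recall); it illustrates the content-addressable-memory behavior only and does not reproduce the paper's amplitude encoding or quantum algorithms:

```python
# Classical Hopfield sketch: patterns are lists of +/-1 values.
# Hebbian rule: W[i][j] = sum over patterns of p[i]*p[j] / n, zero diagonal.

def train_hopfield(patterns):
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, steps=10):
    # Synchronous sign updates; a lightly corrupted probe relaxes
    # back to the nearest stored pattern.
    s = list(state)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
             for i in range(len(s))]
    return s
```

For example, storing `[1, -1, 1, -1, 1, -1]` and probing with one flipped bit recovers the stored pattern, which is the pattern-reconstruction use case the abstract mentions.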
Neural Ranking Models with Weak Supervision
Despite the impressive improvements achieved by unsupervised deep neural
networks in computer vision and NLP tasks, such improvements have not yet been
observed in ranking for information retrieval. The reason may be the complexity
of the ranking problem, as it is not obvious how to learn from queries and
documents when no supervised signal is available. Hence, in this paper, we
propose to train a neural ranking model using weak supervision, where labels
are obtained automatically without human annotators or any external resources
(e.g., click data). To this aim, we use the output of an unsupervised ranking
model, such as BM25, as a weak supervision signal. We further train a set of
simple yet effective ranking models based on feed-forward neural networks. We
study their effectiveness under various learning scenarios (point-wise and
pair-wise models) and using different input representations (i.e., from
encoding query-document pairs into dense/sparse vectors to using word embedding
representation). We train our networks using tens of millions of training
instances and evaluate them on two standard collections: a homogeneous news
collection (Robust) and a heterogeneous large-scale web collection (ClueWeb).
Our experiments indicate that employing proper objective functions and letting
the networks learn the input representation from weakly supervised data
leads to impressive performance, with over 13% and 35% MAP improvements over
the BM25 model on the Robust and the ClueWeb collections. Our findings also
suggest that supervised neural ranking models can greatly benefit from
pre-training on large amounts of weakly labeled data that can be easily
obtained from unsupervised IR models.
Comment: In proceedings of The 40th International ACM SIGIR Conference on
Research and Development in Information Retrieval (SIGIR 2017)
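The core idea of the point-wise setting can be sketched with a toy model: treat the unsupervised ranker's score (e.g., BM25) as a regression target and fit a scorer to it. The feature vectors and scores below are hypothetical, and the paper uses feed-forward networks over dense/sparse query-document representations rather than this two-weight linear model:

```python
# Toy point-wise weak supervision: fit a linear scorer to weak
# (BM25-style) labels by squared-error stochastic gradient descent.

def train_pointwise(examples, lr=0.1, epochs=200):
    """examples: list of (feature_vector, weak_score) pairs."""
    w = [0.0] * len(examples[0][0])
    for _ in range(epochs):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y          # gradient of 0.5*(pred - y)^2
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w
```

No human labels appear anywhere in the loop: the supervision signal is entirely the output of the unsupervised ranking model, which is the paper's central point.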
Neural network image reconstruction for magnetic particle imaging
We investigate neural network image reconstruction for magnetic particle
imaging. The network performance depends strongly on the convolution effects of
the spectrum input data. A larger convolution effect, which appears at
relatively smaller nanoparticle sizes, obstructs network training. The
trained single-layer network reveals a weighting matrix consisting of basis
vectors in the form of Chebyshev polynomials of the second kind. The weighting
matrix corresponds to an inverse system matrix, where the incoherence of basis
vectors due to low convolution effects, as well as the nonlinear activation
function, plays a crucial role in retrieving the matrix elements. Test images
are well reconstructed through trained networks having an inverse kernel
matrix. We also confirm that a multi-layer network with one hidden layer
improves the performance. A neural-network architecture that overcomes the
low incoherence of the inverse kernel through its classification property
will become a better tool for image reconstruction.
Comment: 9 pages, 11 figures
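The basis vectors the abstract attributes to the learned weighting matrix, Chebyshev polynomials of the second kind, are defined by a simple three-term recurrence. The sketch below evaluates them with that standard recurrence; it illustrates the basis functions only, not the paper's trained network:

```python
# Chebyshev polynomials of the second kind via the recurrence
# U_0(x) = 1, U_1(x) = 2x, U_{n+1}(x) = 2x*U_n(x) - U_{n-1}(x).

def chebyshev_u(n, x):
    if n == 0:
        return 1.0
    prev, cur = 1.0, 2.0 * x
    for _ in range(n - 1):
        prev, cur = cur, 2.0 * x * cur - prev
    return cur
```

For instance, the recurrence gives U_2(x) = 4x^2 - 1 and U_3(x) = 8x^3 - 4x, so `chebyshev_u(3, 1.0)` evaluates to 4.0.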