MORE: Measurement and Correlation Based Variational Quantum Circuit for Multi-classification
Quantum computing has shown considerable promise for compute-intensive tasks
in recent years. For instance, classification tasks based on quantum neural
networks (QNN) have garnered significant interest from researchers and have
been evaluated in various scenarios. However, the majority of quantum
classifiers are currently limited to binary classification tasks due to either
constrained quantum computing resources or the need for intensive classical
post-processing. In this paper, we propose an efficient quantum
multi-classifier called MORE, which stands for measurement and correlation
based variational quantum multi-classifier. MORE adopts the same variational
ansatz as binary classifiers while performing multi-classification by fully
utilizing the quantum information of a single readout qubit. To extract the
complete information from the readout qubit, we select three observables that
form the basis of a two-dimensional Hilbert space. We then use the quantum
state tomography technique to reconstruct the readout state from the
measurement results. Afterward, we explore the correlation between classes to
determine the quantum labels for classes using the variational quantum
clustering approach. Next, quantum label-based supervised learning is performed
to identify the mapping between the input data and their corresponding quantum
labels. Finally, the predicted label is determined by its closest quantum label
when using the classifier. We implement this approach using the Qiskit Python
library and evaluate it through extensive experiments on both noise-free and
noisy quantum systems. Our evaluation results demonstrate that MORE, despite
using a simple ansatz and limited quantum resources, achieves advanced
performance.
Comment: IEEE International Conference on Quantum Computing and Engineering
(QCE23
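The tomography step described above, reconstructing the readout qubit's state from the measured expectation values of three observables, can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the choice of the Pauli observables X, Y, and Z as the measured basis is an assumption.

```python
import numpy as np

# Single-qubit Pauli matrices (assumed choice of observables)
I = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def reconstruct_readout_state(exp_x, exp_y, exp_z):
    """Reconstruct a single-qubit density matrix from the expectation
    values <X>, <Y>, <Z> via rho = (I + <X>X + <Y>Y + <Z>Z) / 2."""
    return 0.5 * (I + exp_x * X + exp_y * Y + exp_z * Z)

# Example: the expectation values <X>=1, <Y>=0, <Z>=0 correspond to |+>
rho = reconstruct_readout_state(1.0, 0.0, 0.0)
```

In practice the expectation values would come from repeated circuit measurements in the corresponding bases; here they are supplied directly.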
Learning and Interpreting Multi-Multi-Instance Learning Networks
We introduce an extension of the multi-instance learning problem where
examples are organized as nested bags of instances (e.g., a document could be
represented as a bag of sentences, which in turn are bags of words). This
framework can be useful in various scenarios, such as text and image
classification, but also supervised learning over graphs. As a further
advantage, multi-multi-instance learning enables a particular way of
interpreting predictions and the decision function. Our approach is based on a
special neural network layer, called bag-layer, whose units aggregate bags of
inputs of arbitrary size. We prove theoretically that the associated class of
functions contains all Boolean functions over sets of sets of instances and we
provide empirical evidence that functions of this kind can be actually learned
on semi-synthetic datasets. We finally present experiments on text
classification, citation graphs, and social graph data, which show that our
model obtains competitive results with respect to accuracy when compared to
other approaches such as convolutional networks on graphs, while at the same
time it supports a general approach to interpreting the learnt model, as well
as explaining individual predictions.
Comment: JML
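The bag-layer described above can be sketched as follows. This is an illustrative NumPy version under stated assumptions: the shared transformation, the ReLU nonlinearity, and max as the aggregation are hypothetical choices; the names `W`, `b`, and `agg` are not from the paper.

```python
import numpy as np

def bag_layer(bags, W, b, agg=np.max):
    """Sketch of a bag-layer: each bag is a variable-size set of feature
    vectors. Every element is passed through the same affine map + ReLU,
    then the bag is collapsed with an order-invariant aggregation."""
    outputs = []
    for bag in bags:                        # bag: (n_instances, in_dim)
        h = np.maximum(bag @ W + b, 0.0)    # shared transformation + ReLU
        outputs.append(agg(h, axis=0))      # aggregate over the bag
    return np.stack(outputs)                # one vector per bag

# Two bags of different sizes, mapped to fixed-size representations
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))
b = np.zeros(3)
bags = [rng.standard_normal((5, 4)), rng.standard_normal((2, 4))]
out = bag_layer(bags, W, b)                 # shape (2, 3)
```

For nested (multi-multi-instance) bags, such layers would simply be stacked: an inner bag-layer turns each inner bag into a vector, and an outer bag-layer aggregates those vectors. Because the aggregation is element-wise, the output is invariant to the ordering of instances within a bag.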
Multi-Label Learning with Label Enhancement
The task of multi-label learning is to predict a set of relevant labels for
the unseen instance. Traditional multi-label learning algorithms treat each
class label as a logical indicator of whether the corresponding label is
relevant or irrelevant to the instance, i.e., +1 represents relevant and -1
represents irrelevant. Such a label, represented by -1 or +1, is called a
logical label. Logical labels cannot reflect differences in label importance.
However, for real-world multi-label learning problems, the importance of each
possible label is generally different, and in real applications it is
difficult to obtain label importance information directly. Thus we need a
method to reconstruct the essential label importance from the logical
multi-label data. To solve this problem, we assume that each multi-label
instance is described by a vector of latent real-valued labels, which reflect
the importance of the corresponding labels. Such labels are called numerical
labels. The process of reconstructing the numerical labels from
the logical multi-label data via utilizing the logical label information and
the topological structure in the feature space is called Label Enhancement. In
this paper, we propose a novel multi-label learning framework called LEMLL,
i.e., Label Enhanced Multi-Label Learning, which incorporates regression of the
numerical labels and label enhancement into a unified framework. Extensive
comparative studies validate that the performance of multi-label learning can
be improved significantly with label enhancement and LEMLL can effectively
reconstruct latent label importance information from logical multi-label data.
Comment: ICDM 201
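The label enhancement idea, recovering graded numerical labels from logical +1/-1 labels by exploiting the topological structure of the feature space, can be illustrated with a simple k-nearest-neighbour smoothing. This is an illustrative stand-in, not the LEMLL algorithm; the function name and the choice of kNN averaging are assumptions.

```python
import numpy as np

def enhance_labels(X, Y_logical, k=3):
    """Illustrative label enhancement: replace each instance's logical
    labels (+1/-1) with the mean of its k nearest neighbours' logical
    labels, yielding real-valued 'numerical' labels in [-1, 1]."""
    n = X.shape[0]
    # Pairwise squared Euclidean distances in the feature space
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Y_numerical = np.empty_like(Y_logical, dtype=float)
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]   # skip the point itself
        Y_numerical[i] = Y_logical[nbrs].mean(axis=0)
    return Y_numerical

# Two well-separated clusters with opposite logical label vectors
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
Y = np.array([[1, -1], [1, -1], [1, -1],
              [-1, 1], [-1, 1], [-1, 1]], dtype=float)
Y_num = enhance_labels(X, Y, k=2)
```

Instances surrounded by like-labelled neighbours keep labels near +1/-1, while instances near a class boundary would receive intermediate values, which is the graded importance information that logical labels cannot express.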