On the Interplay of Subset Selection and Informed Graph Neural Networks
Machine learning techniques paired with the availability of massive datasets
dramatically enhance our ability to explore the chemical compound space by
providing fast and accurate predictions of molecular properties. However,
learning on large datasets is strongly limited by the availability of
computational resources and can be infeasible in some scenarios. Moreover, the
instances in the datasets may not yet be labelled and generating the labels can
be costly, as in the case of quantum chemistry computations. Thus, there is a
need to select small training subsets from large pools of unlabelled data
points and to develop reliable ML methods that can effectively learn from small
training sets. This work focuses on predicting molecular atomization energies
in the QM9 dataset. We investigate the advantages of employing domain
knowledge-based data sampling methods for an efficient training set selection
combined with informed ML techniques. In particular, we show how maximizing
molecular diversity in the training set selection process increases the
robustness of linear and nonlinear regression techniques such as kernel methods
and graph neural networks. We also check the reliability of the predictions
made by the graph neural network with a model-agnostic explainer based on the
rate distortion explanation framework.
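The abstract does not name a specific diversity-maximizing sampler, but a standard instance of the idea is farthest-point (max-min) sampling over a molecular descriptor space. The sketch below is a minimal, hypothetical illustration: `X` stands in for any fixed-length molecular representation, and the greedy max-min rule is one common way to maximize diversity in the selected training subset.

```python
import numpy as np

def farthest_point_sampling(X, k, seed=0):
    """Greedily select k diverse points from X (shape (n, d)).

    Starts from a random point, then repeatedly adds the point
    farthest from everything selected so far (max-min criterion).
    Returns the indices of the selected subset.
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = [int(rng.integers(n))]
    # Distance of every point to its nearest selected point.
    dists = np.linalg.norm(X - X[selected[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dists))
        selected.append(nxt)
        dists = np.minimum(dists, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(selected)
```

Each iteration costs O(nd), so selecting k points is O(nkd); the resulting subset tends to cover the descriptor space more evenly than uniform random sampling, which is the property the abstract associates with improved regression robustness.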
CodedPrivateML: A Fast and Privacy-Preserving Framework for Distributed Machine Learning
How to train a machine learning model while keeping the data private and secure? We present CodedPrivateML, a fast and scalable approach to this critical problem.
CodedPrivateML keeps both the data and the model information-theoretically private, while allowing efficient parallelization of training across distributed workers.
We characterize CodedPrivateML's privacy threshold and prove its convergence for logistic (and linear) regression. Furthermore, via experiments over Amazon EC2, we demonstrate that CodedPrivateML can provide an order of magnitude speedup over the state-of-the-art cryptographic approaches.
Quantum machine learning: a classical perspective
Recently, increased computational power and data availability, as well as
algorithmic advances, have led machine learning techniques to impressive
results in regression, classification, data-generation and reinforcement
learning tasks. Despite these successes, the proximity to the physical limits
of chip fabrication alongside the increasing size of datasets are motivating a
growing number of researchers to explore the possibility of harnessing the
power of quantum computation to speed-up classical machine learning algorithms.
Here we review the literature in quantum machine learning and discuss
perspectives for a mixed readership of classical machine learning and quantum
computation experts. Particular emphasis will be placed on clarifying the
limitations of quantum algorithms, how they compare with their best classical
counterparts and why quantum resources are expected to provide advantages for
learning problems. Learning in the presence of noise and certain
computationally hard problems in machine learning are identified as promising
directions for the field. Practical questions, like how to upload classical
data into quantum form, will also be addressed. Comment: v3 33 pages; typos corrected and references added
Linear Time Feature Selection for Regularized Least-Squares
We propose a novel algorithm for greedy forward feature selection for
regularized least-squares (RLS) regression and classification, also known as
the least-squares support vector machine or ridge regression. The algorithm,
which we call greedy RLS, starts from the empty feature set, and on each
iteration adds the feature whose addition provides the best leave-one-out
cross-validation performance. Our method is considerably faster than the
previously proposed ones, since its time complexity is linear in the number of
training examples, the number of features in the original data set, and the
desired size of the set of selected features. Therefore, as a side effect we
obtain a new training algorithm for learning sparse linear RLS predictors which
can be used for large scale learning. This speed is possible due to matrix
calculus based short-cuts for leave-one-out and feature addition. We
experimentally demonstrate the scalability of our algorithm and its ability to
find good quality feature sets. Comment: 17 pages, 15 figures
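The selection loop described above can be sketched directly. The snippet below is an illustrative, naive version: it re-fits each candidate feature set and scores it with the closed-form leave-one-out formula for ridge regression, e_i = (y_i - ŷ_i) / (1 - H_ii). It does not use the paper's matrix-calculus shortcuts, so it is far from linear time; it only shows the greedy-forward, LOO-scored structure of the method.

```python
import numpy as np

def loo_error(X, y, lam):
    """Mean squared leave-one-out error of ridge regression via the
    hat matrix: e_i = (y_i - yhat_i) / (1 - H_ii)."""
    n, d = X.shape
    G = X.T @ X + lam * np.eye(d)
    H = X @ np.linalg.solve(G, X.T)
    resid = y - H @ y
    loo = resid / (1.0 - np.diag(H))
    return float(np.mean(loo ** 2))

def greedy_rls(X, y, n_features, lam=1.0):
    """Greedy forward selection: on each round, add the feature whose
    inclusion yields the lowest leave-one-out error (naive re-fitting)."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_features):
        best_err, best_j = min(
            (loo_error(X[:, selected + [j]], y, lam), j) for j in remaining
        )
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```

The key contribution of the paper is replacing the inner re-fit with incremental leave-one-out and feature-addition updates, which brings the total cost down to linear in the number of examples, features, and selected features.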