Incremental Learning-to-Learn with Statistical Guarantees
In learning-to-learn the goal is to infer a learning algorithm that works
well on a class of tasks sampled from an unknown meta distribution. In contrast
to previous work on batch learning-to-learn, we consider a scenario where tasks
are presented sequentially and the algorithm needs to adapt incrementally to
improve its performance on future tasks. Key to this setting is for the
algorithm to rapidly incorporate new observations into the model as they
arrive, without keeping them in memory. We focus on the case where the
underlying algorithm is ridge regression parameterized by a positive
semidefinite matrix. We propose to learn this matrix by applying a stochastic
strategy to minimize the empirical error incurred by ridge regression on future
tasks sampled from the meta distribution. We study the statistical properties
of the proposed algorithm and prove non-asymptotic bounds on its excess
transfer risk, that is, the generalization performance on new tasks from the
same meta distribution. We compare our online learning-to-learn approach with a
state-of-the-art batch method, both theoretically and empirically.
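The abstract does not spell out the update rule, but a minimal sketch of a projected stochastic gradient scheme of this flavor might look as follows. The inner problem (ridge regression with regularizer λ wᵀM⁻¹w, solved in kernel form so a singular PSD matrix M is still valid), the per-task train/validation split, the 1/√t step size, and the synthetic task generator are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ridge_with_metric(X, y, M, lam):
    # Inner algorithm: w = argmin_w ||Xw - y||^2 / n + lam * w^T M^{-1} w,
    # written in kernel form so that a singular PSD matrix M is still valid.
    n = X.shape[0]
    K = X @ M @ X.T + lam * n * np.eye(n)
    alpha = np.linalg.solve(K, y)
    return M @ (X.T @ alpha), alpha, K

def meta_gradient(X, y, Xv, yv, M, lam):
    # Gradient w.r.t. M of the validation error of the inner algorithm,
    # obtained by differentiating through the linear solve above.
    w, alpha, K = ridge_with_metric(X, y, M, lam)
    r = (2.0 / len(yv)) * Xv.T @ (Xv @ w - yv)      # dL/dw
    g = X.T @ alpha
    u = X.T @ np.linalg.solve(K, X @ (M @ r))
    G = np.outer(r - u, g)
    return 0.5 * (G + G.T)                          # keep the iterate symmetric

def project_psd(M):
    # Projection onto the PSD cone: clip negative eigenvalues at zero.
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.clip(vals, 0.0, None)) @ vecs.T

# One projected stochastic step per incoming task; no task is kept in memory.
d, lam = 5, 0.1
M = np.eye(d)
rng = np.random.default_rng(0)
for t in range(1, 201):
    w_star = rng.normal(size=d)                     # stand-in task sampler
    X = rng.normal(size=(20, d)); y = X @ w_star + 0.1 * rng.normal(size=20)
    Xv = rng.normal(size=(20, d)); yv = Xv @ w_star + 0.1 * rng.normal(size=20)
    M = project_psd(M - meta_gradient(X, y, Xv, yv, M, lam) / np.sqrt(t))
```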
Learning an Approximate Model Predictive Controller with Guarantees
A supervised learning framework is proposed to approximate a model predictive
controller (MPC) with reduced computational complexity and guarantees on
stability and constraint satisfaction. The framework can be used for a wide
class of nonlinear systems. Any standard supervised learning technique (e.g.
neural networks) can be employed to approximate the MPC from samples. In order
to obtain closed-loop guarantees for the learned MPC, a robust MPC design is
combined with statistical learning bounds. The MPC design ensures robustness to
inaccurate inputs within given bounds, and Hoeffding's inequality is used to
validate that the learned MPC satisfies these bounds with high confidence. The
result is a closed-loop statistical guarantee on stability and constraint
satisfaction for the learned MPC. The proposed learning-based MPC framework is
illustrated on a nonlinear benchmark problem, for which we learn a neural
network controller with guarantees.
Comment: 6 pages, 3 figures, to appear in IEEE Control Systems Letters
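The validation step can be made concrete with a small sketch: sample i.i.d. states, count how often the learned controller leaves the input-error bound the robust MPC was designed for, and apply a one-sided Hoeffding bound. The function and controller names below are hypothetical, and the paper's exact statistical test may differ in its details.

```python
import numpy as np

def hoeffding_validation(learned_ctrl, mpc_ctrl, sample_state, eta, eps, delta):
    # Certify P(||learned - MPC|| > eta) <= mu_hat + eps with confidence
    # 1 - delta: by Hoeffding's inequality the bound fails with probability
    # at most exp(-2 n eps^2), so n = ln(1/delta) / (2 eps^2) samples suffice.
    n = int(np.ceil(np.log(1.0 / delta) / (2.0 * eps ** 2)))
    violations = sum(
        np.linalg.norm(learned_ctrl(x) - mpc_ctrl(x)) > eta
        for x in (sample_state() for _ in range(n))
    )
    return violations / n + eps, n

# Stand-in controllers (hypothetical; any state-feedback maps work here).
mpc = lambda x: -0.5 * x
net = lambda x: -0.5 * x + 0.01 * np.tanh(x)
bound, n = hoeffding_validation(
    net, mpc, lambda: np.random.uniform(-1.0, 1.0, size=2),
    eta=0.05, eps=0.01, delta=1e-3)
print(f"violation probability <= {bound:.3f} (confidence 0.999, {n} samples)")
```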
Crowd-ML: A Privacy-Preserving Learning Framework for a Crowd of Smart Devices
Smart devices with built-in sensors, computational capabilities, and network
connectivity have become increasingly pervasive. The crowds of smart devices
offer opportunities to collectively sense and perform computing tasks in an
unprecedented scale. This paper presents Crowd-ML, a privacy-preserving machine
learning framework for a crowd of smart devices, which can solve a wide range
of learning problems for crowdsensing data with differential privacy
guarantees. Crowd-ML endows a crowdsensing system with an ability to learn
classifiers or predictors online from crowdsensing data privately with minimal
computational overheads on devices and servers, suitable for a practical and
large-scale deployment of the framework. We analyze the performance and the
scalability of Crowd-ML, and implement the system with off-the-shelf
smartphones as a proof of concept. We demonstrate the advantages of Crowd-ML
with real and simulated experiments under various conditions.
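As a toy illustration of the general pattern the abstract describes (devices report noise-perturbed gradients, and the server averages them to update a shared model), consider the sketch below. The logistic-loss choice, the L1 clipping, the per-report Laplace noise, and all names and constants are assumptions for illustration; Crowd-ML's actual protocol and privacy accounting are more involved.

```python
import numpy as np

def device_report(theta, x, y, epsilon, clip=1.0):
    # Gradient of the logistic loss on this device's single local sample.
    grad = -y * x / (1.0 + np.exp(y * (x @ theta)))
    grad *= min(1.0, clip / (np.linalg.norm(grad, 1) + 1e-12))  # L1-clip
    # Laplace noise calibrated to the report's L1 sensitivity (2 * clip)
    # makes each individual report epsilon-differentially private.
    return grad + np.random.laplace(scale=2.0 * clip / epsilon, size=grad.shape)

# Server side: average the noisy reports from a round of devices and update
# the shared model; the injected noise averages out across the crowd.
d, lr = 10, 0.05
theta = np.zeros(d)
rng = np.random.default_rng(1)
w_true = rng.normal(size=d)
for _ in range(100):                                # 100 rounds
    reports = []
    for _ in range(32):                             # 32 devices per round
        x = rng.normal(size=d)
        y = np.sign(x @ w_true) or 1.0              # simulated local label
        reports.append(device_report(theta, x, y, epsilon=1.0))
    theta -= lr * np.mean(reports, axis=0)
```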
Private Incremental Regression
Data is continuously generated by modern data sources, and a recent challenge
in machine learning has been to develop techniques that perform well in an
incremental (streaming) setting. In this paper, we investigate the problem of
private machine learning where, as is common in practice, the data is not given
at once but rather arrives incrementally over time.
We introduce the problems of private incremental ERM and private incremental
regression where the general goal is to always maintain a good empirical risk
minimizer for the history observed under differential privacy. Our first
contribution is a generic transformation of private batch ERM mechanisms into
private incremental ERM mechanisms, based on a simple idea of invoking the
private batch ERM procedure at some regular time intervals. We take this
construction as a baseline for comparison. We then provide two mechanisms for
the private incremental regression problem. Our first mechanism is based on
privately constructing a noisy incremental gradient function, which is then
used in a modified projected gradient procedure at every timestep. This
mechanism has an excess empirical risk of $\approx\sqrt{d}$, where $d$ is the
dimensionality of the data. While from the results of [Bassily et al. 2014]
this bound is tight in the worst-case, we show that certain geometric
properties of the input and constraint set can be used to derive significantly
better results for certain interesting regression problems.
Comment: To appear in PODS 2017
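A stripped-down sketch of the first mechanism's shape, for squared loss: maintain noisy running sufficient statistics and take one projected gradient step as each point arrives. Drawing fresh Gaussian noise at every step, as done here, is a deliberate simplification; the paper's construction injects and calibrates noise far more carefully, so this conveys only the overall structure, not its privacy or utility guarantees.

```python
import numpy as np

def project_l2_ball(theta, radius=1.0):
    # Projection onto the constraint set (an L2 ball in this sketch).
    norm = np.linalg.norm(theta)
    return theta if norm <= radius else theta * (radius / norm)

# Noisy running sufficient statistics for squared loss (X^T X and X^T y),
# with a projected gradient step per arriving point.
d, sigma = 5, 0.5
rng = np.random.default_rng(2)
A, b = np.zeros((d, d)), np.zeros(d)
theta = np.zeros(d)
w_true = rng.normal(size=d)
for t in range(1, 1001):                            # points arrive one by one
    x = rng.normal(size=d)
    y = x @ w_true + 0.1 * rng.normal()
    A += np.outer(x, x); b += x * y
    grad = (2.0 / t) * ((A + rng.normal(scale=sigma, size=(d, d))) @ theta
                        - (b + rng.normal(scale=sigma, size=d)))
    theta = project_l2_ball(theta - grad / np.sqrt(t))
```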