Private Learning Implies Online Learning: An Efficient Reduction
We study the relationship between the notions of differentially private
learning and online learning in games. Several recent works have shown that
differentially private learning implies online learning, but an open problem of
Neel, Roth, and Wu \cite{NeelAaronRoth2018} asks whether this implication is
{\it efficient}. Specifically, does an efficient differentially private learner
imply an efficient online learner? In this paper we resolve this open question
in the context of pure differential privacy. We derive an efficient black-box
reduction from differentially private learning to online learning from expert
advice.
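The "learning from expert advice" setting that the reduction targets can be illustrated with the standard multiplicative-weights (Hedge) algorithm. This is a generic sketch of the online-experts framework, not the paper's reduction; the function name and interface are illustrative.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Multiplicative-weights (Hedge) prediction with expert advice.

    loss_matrix: (T, N) array, loss_matrix[t, j] in [0, 1] is expert j's
    loss at round t. Returns the algorithm's total expected loss and the
    total loss of the best fixed expert in hindsight.
    """
    T, N = loss_matrix.shape
    weights = np.ones(N)
    alg_loss = 0.0
    for t in range(T):
        probs = weights / weights.sum()            # play the normalized weights
        alg_loss += float(probs @ loss_matrix[t])  # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])   # exponential down-weighting
    return alg_loss, float(loss_matrix.sum(axis=0).min())
```

With the standard tuning eta = sqrt(8 ln N / T) and losses in [0, 1], the regret (algorithm's loss minus the best expert's loss) is at most sqrt(T ln N / 2).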
Effective Informal Learning: Considerations For The Workplace
This article consists of an academic librarian's suggestions for an individual wanting to be a successful informal learner in the workplace. Examples of modes of communication, scholarly activity, and education are explored, in addition to helpful mindsets and practical strategies for becoming an efficient and effective informal learner. Discussion is given concerning an individual's responsibilities and the environmental factors necessary for success in this type of learning. Prevailing climates and attitudes among administrators and employers are examined, in addition to how these factors might influence learning of this type.
Meta-learners for Estimating Heterogeneous Treatment Effects using Machine Learning
There is growing interest in estimating and analyzing heterogeneous treatment
effects in experimental and observational studies. We describe a number of
meta-algorithms that can take advantage of any supervised learning or
regression method in machine learning and statistics to estimate the
Conditional Average Treatment Effect (CATE) function. Meta-algorithms build on
base algorithms---such as Random Forests (RF), Bayesian Additive Regression
Trees (BART) or neural networks---to estimate the CATE, a function that the
base algorithms are not designed to estimate directly. We introduce a new
meta-algorithm, the X-learner, that is provably efficient when the number of
units in one treatment group is much larger than in the other, and can exploit
structural properties of the CATE function. For example, if the CATE function
is linear and the response functions in treatment and control are Lipschitz
continuous, the X-learner can still achieve the parametric rate under
regularity conditions. We then introduce versions of the X-learner that use RF
and BART as base learners. In extensive simulation studies, the X-learner
performs favorably, although none of the meta-learners is uniformly the best.
In two persuasion field experiments from political science, we demonstrate how
our new X-learner can be used to target treatment regimes and to shed light on
underlying mechanisms. A software package is provided that implements our
methods.
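The X-learner's three stages (per-arm outcome models, imputed treatment effects, propensity-weighted combination) can be sketched as follows. This is a minimal illustration using an ordinary-least-squares base learner; the paper's versions use RF and BART as base learners, and the constant weight `g` here stands in for a fitted propensity model.

```python
import numpy as np

def fit_linear(X, y):
    """OLS base learner with intercept; returns a predict function."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return lambda Xq: np.column_stack([np.ones(len(Xq)), Xq]) @ coef

def x_learner(X, y, w, g=0.5):
    """X-learner CATE estimator with a linear base learner.

    X: (n, d) covariates; y: outcomes; w: binary treatment indicator;
    g: weighting constant (a propensity model in general).
    """
    # Stage 1: outcome models fit separately on each arm
    mu0 = fit_linear(X[w == 0], y[w == 0])
    mu1 = fit_linear(X[w == 1], y[w == 1])
    # Stage 2: imputed individual treatment effects
    d1 = y[w == 1] - mu0(X[w == 1])   # treated: observed minus predicted control
    d0 = mu1(X[w == 0]) - y[w == 0]   # control: predicted treated minus observed
    tau1 = fit_linear(X[w == 1], d1)
    tau0 = fit_linear(X[w == 0], d0)
    # Stage 3: weighted combination of the two CATE estimates
    return lambda Xq: g * tau0(Xq) + (1 - g) * tau1(Xq)
```

When one arm is much smaller, the weighting lets the estimate lean on the CATE model fit from the larger arm's imputed effects, which is the regime where the X-learner's efficiency claim applies.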
Understanding the Role of Adaptivity in Machine Teaching: The Case of Version Space Learners
In real-world applications of education, an effective teacher adaptively
chooses the next example to teach based on the learner's current state.
However, most existing work in algorithmic machine teaching focuses on the
batch setting, where adaptivity plays no role. In this paper, we study the case
of teaching consistent, version space learners in an interactive setting. At
any time step, the teacher provides an example, the learner performs an update,
and the teacher observes the learner's new state. We highlight that adaptivity
does not speed up the teaching process when considering existing models of
version space learners, such as "worst-case" (the learner picks the next
hypothesis randomly from the version space) and "preference-based" (the learner
picks hypotheses according to some global preference). Inspired by human
teaching, we propose a new model where the learner picks hypotheses according
to some local preference defined by the current hypothesis. We show that our
model exhibits several desirable properties, e.g., adaptivity plays a key role,
and the learner's transitions over hypotheses are smooth/interpretable. We
develop efficient teaching algorithms and demonstrate our results via
simulation and user studies. (Comment: NeurIPS 2018, extended version.)
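The interactive protocol (teacher shows an example, learner updates, teacher observes the new state) and a local-preference learner can be sketched on a toy one-dimensional threshold hypothesis class. The hypothesis class and the tie-breaking rule below are illustrative assumptions, not the paper's construction.

```python
def consistent(h, example):
    """Threshold hypothesis h labels x positive iff x >= h."""
    x, label = example
    return (x >= h) == label

def teach(hypotheses, target, start, examples):
    """Interactive teaching of a version-space learner with local preference.

    After each labeled example the learner discards inconsistent hypotheses
    and moves to the consistent hypothesis nearest its current one (a local
    preference; ties broken by hypothesis value). The teacher can observe
    `current` after each step and choose the next example adaptively.
    """
    version_space = set(hypotheses)
    current = start
    for ex in examples:
        version_space = {h for h in version_space if consistent(h, ex)}
        current = min(version_space, key=lambda h: (abs(h - current), h))
        if current == target and len(version_space) == 1:
            break
    return current, version_space
```

In the batch setting the teacher would have to commit to `examples` up front; the adaptive teacher can instead pick each example after seeing `current`, which is exactly the gap the paper studies.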
Efficient Optimal Learning for Contextual Bandits
We address the problem of learning in an online setting where the learner
repeatedly observes features, selects among a set of actions, and receives
reward for the action taken. We provide the first efficient algorithm with an
optimal regret. Our algorithm uses a cost-sensitive classification learner as
an oracle and has a running time $\mathrm{polylog}(N)$, where $N$ is the number
of classification rules among which the oracle might choose. This is
exponentially faster than all previous algorithms that achieve optimal regret
in this setting. Our formulation also enables us to create an algorithm with
regret that is additive rather than multiplicative in feedback delay as in all
previous work.
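The oracle interface the abstract describes can be illustrated with a simple epsilon-greedy baseline that repeatedly refits a cost-sensitive oracle on logged data. This is not the paper's optimal-regret algorithm, only a sketch of how a bandit algorithm consumes such an oracle; `best_constant_oracle` is a toy stand-in for a real cost-sensitive learner.

```python
import numpy as np

def best_constant_oracle(X, costs):
    """Toy cost-sensitive oracle: return the single best action on the log."""
    a = int(np.argmin(costs.sum(axis=0)))
    return lambda x: a

def epsilon_greedy_bandit(contexts, reward_fn, n_actions, oracle_fit,
                          eps=0.1, seed=0):
    """Epsilon-greedy contextual bandit built around a cost-sensitive oracle.

    Each round: pick an action (warm start, uniform exploration, or the
    oracle's current policy), observe the reward for that action only, log
    it as a cost vector, and refit the oracle on the log.
    """
    rng = np.random.default_rng(seed)
    X_log, cost_log = [], []
    policy = None
    total = 0.0
    for t, x in enumerate(contexts):
        if t < n_actions:
            a = t                                # warm start: try each action once
        elif rng.random() < eps:
            a = int(rng.integers(n_actions))     # explore uniformly
        else:
            a = policy(x)                        # exploit the oracle's policy
        r = reward_fn(x, a)
        total += r
        cost = np.zeros(n_actions)
        cost[a] = -r                             # only the taken action's cost is seen
        X_log.append(x)
        cost_log.append(cost)
        policy = oracle_fit(np.array(X_log), np.array(cost_log))
    return total
```

A full treatment would importance-weight the logged costs by each action's selection probability to make the oracle's objective unbiased; that correction is omitted here for brevity.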
Efficient Learning with Partially Observed Attributes
We describe and analyze efficient algorithms for learning a linear predictor
from examples when the learner can only view a few attributes of each training
example. This is the case, for instance, in medical research, where each
patient participating in the experiment is only willing to go through a small
number of tests. Our analysis bounds the number of additional examples
sufficient to compensate for the lack of full information on each training
example. We demonstrate the efficiency of our algorithms by showing that when
running on digit recognition data, they obtain a high prediction accuracy even
when the learner gets to see only four pixels of each image. (Comment: this is the full version of the paper appearing in the 27th International Conference on Machine Learning, ICML 2010.)
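The attribute-budget setting can be sketched with SGD for linear regression where each example reveals only k of its d coordinates, rescaled by d/k. This is a simplified illustration of the sampling idea only; the plug-in gradient below is merely approximately unbiased (the product of two correlated estimates inflates the diagonal), whereas the paper's estimator corrects for this.

```python
import numpy as np

def partial_attribute_sgd(X, y, k, lr=0.01, epochs=20, seed=0):
    """SGD for squared loss viewing only k of d attributes per example.

    Each step samples k coordinates uniformly without replacement and
    rescales them by d/k, giving an unbiased estimate of the full
    attribute vector x_i.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for i in rng.permutation(n):
            idx = rng.choice(d, size=k, replace=False)  # the k attributes we view
            x_hat = np.zeros(d)
            x_hat[idx] = X[i, idx] * (d / k)            # unbiased estimate of x_i
            w -= lr * (x_hat @ w - y[i]) * x_hat        # squared-loss gradient step
    return w
```

The bound the abstract refers to says roughly this: the extra variance from seeing only k attributes can be paid for with additional examples, rather than lost accuracy.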