Selecting Near-Optimal Learners via Incremental Data Allocation
We study a novel machine learning (ML) problem setting of sequentially
allocating small subsets of training data amongst a large set of classifiers.
The goal is to select a classifier that will give near-optimal accuracy when
trained on all data, while also minimizing the cost of misallocated samples.
This is motivated by large modern datasets and ML toolkits with many
combinations of learning algorithms and hyper-parameters. Inspired by the
principle of "optimism under uncertainty," we propose an innovative strategy,
Data Allocation using Upper Bounds (DAUB), which robustly achieves these
objectives across a variety of real-world datasets.
We further develop substantial theoretical support for DAUB in an idealized
setting where the expected accuracy of a classifier trained on n samples can
be known exactly. Under these conditions we establish a rigorous sub-linear
bound on the regret of the approach (in terms of misallocated data), as well as
a rigorous bound on the suboptimality of the selected classifier. Our accuracy
estimates on real-world datasets entail only mild violations of the
theoretical scenario, suggesting that the practical behavior of DAUB is likely
to approach the idealized behavior.
Comment: AAAI-2016: The Thirtieth AAAI Conference on Artificial Intelligence
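The allocation loop described in the abstract can be sketched concretely. Below is a minimal sketch assuming a scikit-learn-style interface (learners with fit and score); the secant-line extrapolation used as the optimistic upper bound, the initial size n0, and the growth factor r are illustrative simplifications, not necessarily the paper's exact estimator, and a real implementation would score on a held-out validation split.

```python
import numpy as np

def daub_select(learners, X, y, n0=500, r=2.0):
    """DAUB-style incremental data allocation (simplified sketch).

    learners: objects with fit(X, y) and score(X, y) (scikit-learn style).
    Repeatedly trains the learner whose optimistic upper bound on
    full-data accuracy is highest, on geometrically growing subsets,
    until some learner has been trained on all N samples.
    """
    N = len(X)
    alloc = [min(int(n0), N)] * len(learners)  # current subset size per learner
    curve = [[] for _ in learners]             # learning curve: (n, acc) pairs

    def train_and_score(i, n):
        n = min(n, N)
        learners[i].fit(X[:n], y[:n])
        # For brevity this scores on the full set; a real implementation
        # would use a held-out validation split.
        curve[i].append((n, learners[i].score(X, y)))

    # Bootstrap two points per learner so a slope can be estimated.
    for i in range(len(learners)):
        train_and_score(i, alloc[i])
        alloc[i] = min(int(alloc[i] * r), N)
        train_and_score(i, alloc[i])

    while True:
        # "Optimism under uncertainty": extend each learning curve's last
        # secant line out to N samples, clipped at perfect accuracy.
        ubs = []
        for pts in curve:
            (n1, a1), (n2, a2) = pts[-2], pts[-1]
            slope = 0.0 if n2 == n1 else max(0.0, (a2 - a1) / (n2 - n1))
            ubs.append(min(1.0, a2 + slope * (N - n2)))
        i = int(np.argmax(ubs))
        if alloc[i] >= N:                      # winner trained on all data
            return learners[i], i
        alloc[i] = min(int(alloc[i] * r), N)   # grow its allocation
        train_and_score(i, alloc[i])
```

Because accuracy is bounded and learning curves flatten, the optimistic bounds of under-trained learners shrink as their allocations grow, so data is progressively concentrated on the most promising candidates.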
Hybrid Reinforcement Learning with Expert State Sequences
Existing imitation learning approaches often require that the complete
demonstration data, including sequences of actions and states, are available.
In this paper, we consider a more realistic and difficult scenario where a
reinforcement learning agent only has access to the state sequences of an
expert, while the expert actions are unobserved. We propose a novel
tensor-based model to infer the unobserved actions of the expert state
sequences. The policy of the agent is then optimized via a hybrid objective
combining reinforcement learning and imitation learning. We evaluated our
hybrid approach on an illustrative domain and Atari games. The empirical
results show that (1) the agents are able to leverage expert state sequences to
learn faster than pure reinforcement learning baselines, (2) our tensor-based
action inference model is advantageous compared to standard deep neural
networks in inferring expert actions, and (3) the hybrid policy optimization
objective is robust against noise in expert state sequences.
Comment: AAAI 2019; https://github.com/XiaoxiaoGuo/tensor4r
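As a rough illustration of the two pieces described above, here is a minimal PyTorch sketch: a low-rank bilinear inverse model standing in for the paper's tensor-based action-inference model, and a hybrid loss combining a generic policy-gradient term with a cross-entropy imitation term toward the inferred expert actions. The rank, the weight beta, and the REINFORCE-style RL term are illustrative assumptions rather than the paper's exact formulation; the authors' code is in the linked tensor4r repository.

```python
import torch
import torch.nn as nn

class TensorActionInference(nn.Module):
    """Low-rank bilinear inverse model (simplified stand-in for the
    paper's tensor-based model): predicts the expert's unobserved
    action from a (state, next_state) pair.
    """
    def __init__(self, state_dim, n_actions, rank=32):
        super().__init__()
        self.U = nn.Linear(state_dim, rank, bias=False)  # embeds s_t
        self.V = nn.Linear(state_dim, rank, bias=False)  # embeds s_{t+1}
        self.W = nn.Linear(rank, n_actions)              # rank -> action logits

    def forward(self, s, s_next):
        # The elementwise product of the two embeddings acts as the
        # contraction of a rank-constrained three-way tensor.
        return self.W(self.U(s) * self.V(s_next))

def hybrid_loss(policy_logits, actions, returns, inferred_logits, beta=0.1):
    """Hybrid objective: a REINFORCE-style policy-gradient term plus an
    imitation term pushing the policy toward the actions the inverse
    model infers from expert state sequences. `beta` is an illustrative
    trade-off weight, not a value from the paper.
    """
    logp = torch.log_softmax(policy_logits, dim=-1)
    rl = -(logp.gather(1, actions.unsqueeze(1)).squeeze(1) * returns).mean()
    imitation = nn.functional.cross_entropy(
        policy_logits, inferred_logits.argmax(dim=-1))
    return rl + beta * imitation
```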