Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity
In deep learning, models typically reuse the same parameters for all inputs.
Mixture of Experts (MoE) defies this and instead selects different parameters
for each incoming example. The result is a sparsely-activated model -- with
outrageous numbers of parameters -- but a constant computational cost. However,
despite several notable successes of MoE, widespread adoption has been hindered
by complexity, communication costs and training instability -- we address these
with the Switch Transformer. We simplify the MoE routing algorithm and design
intuitive improved models with reduced communication and computational costs.
Our proposed training techniques help wrangle the instabilities and we show
large sparse models may be trained, for the first time, with lower precision
(bfloat16) formats. We design models based on T5-Base and T5-Large to obtain
up to 7x increases in pre-training speed with the same computational resources.
These improvements extend into multilingual settings where we measure gains
over the mT5-Base version across all 101 languages. Finally, we advance the
current scale of language models by pre-training up to trillion parameter
models on the "Colossal Clean Crawled Corpus" and achieve a 4x speedup over the
T5-XXL model.
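The core simplification the abstract describes is routing each token to a single expert (top-1 routing) and scaling that expert's output by the router probability. A minimal sketch of this idea, not the paper's implementation: the parameter names (`W_router`, `W_experts`) and the use of plain linear layers as "experts" are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, n_experts, n_tokens = 8, 4, 5

# Hypothetical parameters for illustration only.
W_router = rng.normal(size=(d_model, n_experts))
# Each "expert" is reduced to a single linear map here;
# in practice it would be a feed-forward sub-network.
W_experts = rng.normal(size=(n_experts, d_model, d_model))

def switch_layer(x):
    """Send each token to its single highest-probability expert (top-1 routing)."""
    logits = x @ W_router                                   # (n_tokens, n_experts)
    probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)              # softmax over experts
    expert_idx = probs.argmax(axis=-1)                      # exactly one expert per token
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        e = expert_idx[t]
        # Scale the expert output by the router's gate probability,
        # which keeps the routing decision differentiable in a real model.
        out[t] = probs[t, e] * (x[t] @ W_experts[e])
    return out, expert_idx

tokens = rng.normal(size=(n_tokens, d_model))
y, routed = switch_layer(tokens)
```

Because each token activates only one expert, parameter count grows with `n_experts` while per-token compute stays roughly constant, which is the sparsity trade-off the abstract highlights.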
Algorithmic Improvements for Deep Reinforcement Learning applied to Interactive Fiction
Text-based games are a natural challenge domain for deep reinforcement
learning algorithms. Their state and action spaces are combinatorially large,
their reward function is sparse, and they are partially observable: the agent
is informed of the consequences of its actions through textual feedback. In
this paper we emphasize this latter point and consider the design of a deep
reinforcement learning agent that can play from feedback alone. Our design
recognizes and takes advantage of the structural characteristics of text-based
games. We first propose a contextualisation mechanism, based on accumulated
reward, which simplifies the learning problem and mitigates partial
observability. We then study different methods that rely on the notion that
most actions are ineffectual in any given situation, following Zahavy et al.'s
idea of an admissible action. We evaluate these techniques in a series of
text-based games of increasing difficulty based on the TextWorld framework, as
well as the iconic game Zork. Empirically, we find that these techniques
improve the performance of a baseline deep reinforcement learning agent applied
to text-based games.

Comment: To appear in Proceedings of the Thirty-Fourth AAAI Conference on
Artificial Intelligence (AAAI-20). Accepted for oral presentation.
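The admissible-action idea above can be sketched as a simple filter: mask out actions the agent predicts are ineffectual in the current state, then act greedily among the rest. This is a minimal illustration, not the paper's agent; the `admissibility` scores and the `threshold` parameter are assumed to come from some learned predictor.

```python
def select_action(q_values, admissibility, threshold=0.5):
    """Pick the greedy action among those predicted to be admissible.

    q_values:      estimated value of each action in the current state
    admissibility: predicted probability that each action changes the state
    """
    candidates = [a for a, p in enumerate(admissibility) if p >= threshold]
    if not candidates:
        # If nothing clears the threshold, fall back to the full action set
        # so the agent always has something to do.
        candidates = list(range(len(q_values)))
    return max(candidates, key=lambda a: q_values[a])
```

Pruning inadmissible actions shrinks the effective action space, which is the main leverage in combinatorially large text-game action sets.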