39 research outputs found

    Progressive Neural Networks

    Learning to solve complex sequences of tasks--while both leveraging transfer and avoiding catastrophic forgetting--remains a key obstacle to achieving human-level intelligence. The progressive networks approach represents a step forward in this direction: they are immune to forgetting and can leverage prior knowledge via lateral connections to previously learned features. We evaluate this architecture extensively on a wide variety of reinforcement learning tasks (Atari and 3D maze games), and show that it outperforms common baselines based on pretraining and finetuning. Using a novel sensitivity measure, we demonstrate that transfer occurs at both low-level sensory and high-level control layers of the learned policy.
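    The lateral-connection idea the abstract describes can be illustrated with a minimal sketch. This is a hypothetical two-layer column in PyTorch, not the paper's architecture: layer sizes, module names, and the single lateral adapter per frozen column are our assumptions.

```python
import torch
import torch.nn as nn

class ProgressiveColumn(nn.Module):
    """Minimal sketch of one column of a progressive network.

    Each new task gets a fresh column; previously trained columns are
    frozen, and their hidden activations feed into this column through
    lateral adapters, so prior knowledge transfers without being
    overwritten (no catastrophic forgetting).
    """

    def __init__(self, in_dim, hidden_dim, out_dim, n_prev_columns):
        super().__init__()
        self.layer1 = nn.Linear(in_dim, hidden_dim)
        self.layer2 = nn.Linear(hidden_dim, out_dim)
        # One lateral adapter per frozen column: maps that column's
        # first-layer activations into this column's output layer.
        self.laterals = nn.ModuleList(
            nn.Linear(hidden_dim, out_dim) for _ in range(n_prev_columns)
        )

    def forward(self, x, prev_h1):
        # prev_h1: list of first-layer activations from frozen columns,
        # computed on the same input x (gradients do not flow into them).
        h1 = torch.relu(self.layer1(x))
        out = self.layer2(h1)
        for lateral, h in zip(self.laterals, prev_h1):
            out = out + lateral(h.detach())  # lateral transfer, frozen source
        return h1, out
```

    Only the newest column's parameters are optimized for the current task; earlier columns stay fixed, which is what makes the approach immune to forgetting.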

    Tell me why! Explanations support learning relational and causal structure

    Inferring the abstract relational and causal structure of the world is a major challenge for reinforcement-learning (RL) agents. For humans, language--particularly in the form of explanations--plays a considerable role in overcoming this challenge. Here, we show that language can play a similar role for deep RL agents in complex environments. While agents typically struggle to acquire relational and causal knowledge, augmenting their experience by training them to predict language descriptions and explanations can overcome these limitations. We show that language can help agents learn challenging relational tasks, and examine which aspects of language contribute to its benefits. We then show that explanations can help agents to infer not only relational but also causal structure. Language can shape the way that agents generalize out-of-distribution from ambiguous, causally-confounded training, and explanations even allow agents to learn to perform experimental interventions to identify causal relationships. Our results suggest that language description and explanation may be powerful tools for improving agent learning and generalization.
    Comment: ICML 2022; 23 pages
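    The auxiliary-prediction setup the abstract outlines can be sketched roughly as follows. This is a hypothetical PyTorch module, not the paper's model: the shared trunk, the single-token simplification of the explanation target, and the aux_weight coefficient are all our assumptions.

```python
import torch
import torch.nn as nn

class ExplainingAgent(nn.Module):
    """Sketch of an RL agent with an auxiliary language-prediction head.

    The policy head is trained with the usual RL objective; a second
    head is trained to predict tokens of a description/explanation of
    the current state. Because both heads share the trunk, relational
    and causal structure named in the language must be encoded there.
    """

    def __init__(self, obs_dim, n_actions, vocab_size, hidden=256):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(obs_dim, hidden), nn.ReLU())
        self.policy_head = nn.Linear(hidden, n_actions)
        self.language_head = nn.Linear(hidden, vocab_size)

    def forward(self, obs):
        z = self.trunk(obs)
        return self.policy_head(z), self.language_head(z)

def combined_loss(agent, obs, rl_loss, target_token, aux_weight=0.1):
    # rl_loss: the agent's usual RL objective (e.g. policy-gradient loss).
    # target_token: index of the next explanation token (simplified to a
    # single token here; a real setup would decode a full sequence).
    _, token_logits = agent(obs)
    aux = nn.functional.cross_entropy(token_logits, target_token)
    return rl_loss + aux_weight * aux
```

    The auxiliary term acts purely as a representation-shaping signal; at test time the language head can be ignored and the policy used on its own.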

    Communications Biophysics

    Contains research objectives and reports on six research projects split into three sections.
    National Institutes of Health (Grant 5 P01 NS13126-07)
    National Institutes of Health (Training Grant 5 T32 NS07047-05)
    National Institutes of Health (Training Grant 2 T32 NS07047-06)
    National Science Foundation (Grant BNS 77-16861)
    National Institutes of Health (Grant 5 R01 NS12846-06)
    National Institutes of Health (Grant 5 T32 NS07099)
    National Science Foundation (Grant BNS77-21751)
    National Institutes of Health (Grant 5 R01 NS14092-04)
    Gallaudet College Subcontract
    Karmazin Foundation through the Council for the Arts at M.I.T.
    National Institutes of Health (Grant 1 R01 NS16917-01A1)
    National Institutes of Health (Grant 5 R01 NS11080-06)
    National Institutes of Health (Grant GM-21189)