5,300 research outputs found
Embodied imitation-enhanced reinforcement learning in multi-agent systems
Imitation is an example of social learning in which an individual observes and copies another's actions. This paper presents a new method for using imitation as a way of enhancing the learning speed of individual agents that employ a well-known reinforcement learning algorithm, namely Q-learning. Compared with other research that uses imitation with reinforcement learning, our method uses imitation of purely observed behaviours to enhance learning, with no internal state access or sharing of experiences between agents. The paper evaluates our imitation-enhanced reinforcement learning approach in both simulation and with real robots in continuous space. Both simulation and real robot experimental results show that the learning speed of the group is improved. © The Author(s) 2013
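The underlying update the abstract builds on is standard tabular Q-learning. Below is a minimal sketch; the corridor environment, hyperparameters, and tie-breaking rule are illustrative assumptions, not taken from the paper.

```python
import random

random.seed(0)

# Illustrative tabular Q-learning on a toy 1-D corridor; the environment,
# hyperparameters, and tie-breaking below are assumptions, not from the paper.
N_STATES = 5          # states 0..4; state 4 is the goal
ACTIONS = (-1, +1)    # step left / step right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move along the corridor; reward 1 only on reaching the goal state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

def greedy(state):
    """Greedy action with random tie-breaking among equal Q-values."""
    best = max(Q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if Q[(state, a)] == best])

for _ in range(200):
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < EPSILON else greedy(s)
        s2, r, done = step(s, a)
        # Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        target = r + GAMMA * max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (target - Q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)}
```

In the paper's setting, purely observed behaviours of other agents would supply additional state-action experience for this same update rule, with no access to the demonstrator's internal state and no sharing of experience tables.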
On the evolution of behaviors through embodied imitation
© 2015 Massachusetts Institute of Technology. This article describes research in which embodied imitation and behavioral adaptation are investigated in collective robotics. We model social learning in artificial agents with real robots. The robots are able to observe and learn each other's movement patterns using their on-board sensors only, so that imitation is embodied. We show that the variations that arise from embodiment allow certain behaviors that are better adapted to the process of imitation to emerge and evolve during multiple cycles of imitation. As these behaviors are more robust to uncertainties in the real robots' sensors and actuators, they can be learned by other members of the collective with higher fidelity. Three different types of learned-behavior memory have been experimentally tested to investigate the effect of memory capacity on the evolution of movement patterns, and results show that as the movement patterns evolve through multiple cycles of imitation, selection, and variation, the robots are able to, in a sense, agree on the structure of the behaviors that are imitated.
A Two-stage Fine-tuning Strategy for Generalizable Manipulation Skill of Embodied AI
The advent of ChatGPT has led to a surge of interest in Embodied AI.
However, many existing Embodied AI models heavily rely on massive interactions
with training environments, which may not be practical in real-world
situations. To this end, ManiSkill2 introduces a full-physics
simulation benchmark for manipulating various 3D objects. This benchmark
enables agents to be trained using diverse datasets of demonstrations and
evaluates their ability to generalize to unseen scenarios in testing
environments. In this paper, we propose a novel two-stage fine-tuning strategy
that aims to further enhance the generalization capability of our model based
on the ManiSkill2 benchmark. Through extensive experiments, we demonstrate the
effectiveness of our approach by achieving the 1st prize in all three tracks of
the ManiSkill2 Challenge. Our findings highlight the potential of our method to
improve the generalization abilities of Embodied AI models and pave the way for
their practical applications in real-world scenarios. All code and models of
our solution are available at https://github.com/xtli12/GXU-LIPE.git
Comment: 5 pages, 2 figures, 5 tables; accepted by Robotics: Science and Systems
2023 - Workshop on Interdisciplinary Exploration of Generalizable Manipulation
Policy Learning: Paradigms and Debate
A Survey on Transformers in Reinforcement Learning
The Transformer has become the dominant neural architecture in NLP and
CV, mostly under supervised settings. Recently, a similar surge of using
Transformers has appeared in the domain of reinforcement learning (RL), but it
is faced with unique design choices and challenges brought by the nature of RL.
However, the evolution of Transformers in RL has not yet been clearly mapped out.
In this paper, we seek to systematically review motivations and progress on
using Transformers in RL, provide a taxonomy on existing works, discuss each
sub-field, and summarize future prospects.
Large Language Models for Robotics: A Survey
The human ability to learn, generalize, and control complex manipulation
tasks through multi-modality feedback suggests a unique capability, which we
refer to as dexterity intelligence. Understanding and assessing this
intelligence is a complex task. Amidst the swift progress and extensive
proliferation of large language models (LLMs), their applications in the field
of robotics have garnered increasing attention. LLMs possess the ability to
process and generate natural language, facilitating efficient interaction and
collaboration with robots. Researchers and engineers in the field of robotics
have recognized the immense potential of LLMs in enhancing robot intelligence,
human-robot interaction, and autonomy. Therefore, this comprehensive review
aims to summarize the applications of LLMs in robotics, delving into their
impact and contributions to key areas such as robot control, perception,
decision-making, and path planning. We first provide an overview of the
background and development of LLMs for robotics, followed by a description of
the benefits of LLMs for robotics and recent advancements in robotics models
based on LLMs. We then delve into the various techniques used in these models,
including those employed in perception, decision-making, control, and
interaction. Finally, we explore the applications of LLMs in robotics and some
potential challenges they may face in the near future. Embodied intelligence is
the future of intelligent science, and LLMs-based robotics is one of the
promising but challenging paths to achieve this.
Comment: Preprint. 4 figures, 3 tables