Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is important to
obtain a computational understanding of how humans form a symbol system and
acquire semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions and developing a robot that can
communicate smoothly with human users over the long term both require an
understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics in SER, e.g., multimodal categorization, word discovery, and double
articulation analysis, which enable a robot to acquire words and their embodied
meanings from raw sensorimotor information, including visual, haptic, and
auditory information and acoustic speech signals, in a totally unsupervised
manner. Finally, we suggest future directions for research in SER.
Comment: submitted to Advanced Robotics
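The multimodal categorization step mentioned above can be sketched in miniature: cluster unlabeled objects by their fused visual, haptic, and auditory features. The sketch below uses plain k-means on synthetic data purely for illustration; the SER literature typically uses richer probabilistic models (e.g., multimodal LDA), and every name and value here is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multimodal observations: each object yields visual, haptic,
# and auditory feature vectors (dimensions and values are illustrative).
def observe(category):
    level = {0: 0.0, 1: 5.0}[category]
    visual = rng.normal(level, 1.0, size=4)
    haptic = rng.normal(level, 1.0, size=3)
    audio = rng.normal(level, 1.0, size=2)
    return np.concatenate([visual, haptic, audio])

# 60 unlabeled objects drawn alternately from two latent categories.
X = np.array([observe(i % 2) for i in range(60)])

# Unsupervised categorization: plain k-means over the fused features.
def kmeans(X, k, iters=50):
    centers = X[:k].copy()  # data alternates categories, so this init suffices here
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans(X, k=2)
```

In the full SER setting, word discovery and double articulation analysis would then associate discovered word-like units with such emergent categories, grounding words in embodied meaning.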
Goal Set Inverse Optimal Control and Iterative Re-planning for Predicting Human Reaching Motions in Shared Workspaces
To enable safe and efficient human-robot collaboration in shared workspaces,
it is important for the robot to predict how a human will move when performing
a task. While predicting human motion for tasks not known a priori is very
challenging, we argue that single-arm reaching motions for known tasks in
collaborative settings (which are especially relevant for manufacturing) are
indeed predictable. Two hypotheses underlie our approach for predicting such
motions: First, that the trajectory the human performs is optimal with respect
to an unknown cost function, and second, that human adaptation to their
partner's motion can be captured well through iterative re-planning with the
above cost function. The key to our approach is thus to learn a cost function
which "explains" the motion of the human. To do this, we gather example
trajectories from pairs of participants performing a collaborative assembly
task using motion capture. We then use Inverse Optimal Control to learn a cost
function from these trajectories. Finally, we predict reaching motions from the
human's current configuration to a task-space goal region by iteratively
re-planning a trajectory using the learned cost function. Our planning
algorithm is based on the trajectory optimizer STOMP; it plans for a 23-DoF
human kinematic model and accounts for the presence of a moving collaborator
and obstacles in the environment. Our results suggest that our method
outperforms baseline methods when predicting motions in most cases, both in
human-human settings and when a human and a robot share the workspace.
Comment: 12 pages. Accepted for publication in IEEE Transactions on Robotics, 201
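The two-stage pipeline (learn a cost function from demonstrations, then predict by re-planning under it) can be sketched on a toy 2-D reach. This is not the paper's method: STOMP and the 23-DoF human model are replaced by a one-parameter trajectory family, and the inverse optimal control step by a minimal regret search over hand-picked candidate weightings; all values are illustrative.

```python
import numpy as np

# Toy 2-D "reaching" setup: start, goal, and one obstacle between them.
start, goal = np.array([0.0, 0.0]), np.array([1.0, 0.0])
obstacle = np.array([0.5, 0.0])
T = 21
t = np.linspace(0.0, 1.0, T)[:, None]

def trajectory(height):
    """Family of candidate reaches: straight line plus a sinusoidal arc."""
    traj = start + t * (goal - start)
    traj[:, 1] += height * np.sin(np.pi * t[:, 0])
    return traj

# Two hand-crafted cost features: path roughness and obstacle proximity.
def features(traj):
    rough = np.sum(np.diff(traj, axis=0) ** 2)
    prox = np.sum(np.exp(-np.sum((traj - obstacle) ** 2, axis=1) / 0.01))
    return np.array([rough, prox])

def cost(traj, w):
    return float(w @ features(traj))

# Planning = minimizing cost over the one-parameter family (the paper
# uses STOMP over a full kinematic model; this is only a stand-in).
heights = np.linspace(0.0, 0.6, 61)
def plan(w):
    return min(heights, key=lambda h: cost(trajectory(h), w))

# A demonstration assumed optimal under the unknown true weights.
w_true = np.array([1.0, 1.0])
demo = trajectory(plan(w_true))

# Toy inverse optimal control: pick the candidate weighting under which
# the demonstration looks closest to optimal (smallest regret).
def ioc(demo):
    candidates = [np.array([1.0, b]) for b in (0.0, 0.3, 1.0, 3.0)]
    def regret(w):
        return cost(demo, w) - cost(trajectory(plan(w)), w)
    return min(candidates, key=regret)

w_learned = ioc(demo)
# Re-planning with the learned cost reproduces the demonstrated reach.
pred = trajectory(plan(w_learned))
```

In the paper's setting, re-planning is repeated as the partner moves, so the predicted motion adapts online; here a single plan suffices because the toy scene is static.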
Integration of Action and Language Knowledge: A Roadmap for Developmental Robotics
This position paper proposes that the study of embodied cognitive agents, such as humanoid robots, can advance our understanding of the cognitive development of complex sensorimotor, linguistic, and social learning skills. This in turn will benefit the design of cognitive robots capable of learning to handle and manipulate objects and tools autonomously, to cooperate and communicate with other robots and humans, and to adapt their abilities to changing internal, environmental, and social conditions. Four key areas of research challenges are discussed, specifically the issues related to the understanding of: 1) how agents learn and represent compositional actions; 2) how agents learn and represent compositional lexica; 3) the dynamics of social interaction and learning; and 4) how compositional action and language representations are integrated to bootstrap the cognitive system. The review of specific issues and progress in these areas is then translated into a practical roadmap based on a series of milestones. These milestones provide a possible set of cognitive robotics goals and test scenarios, thus acting as a research roadmap for future work on cognitive developmental robotics.
Environment-adaptive interaction primitives for human-robot motor skill learning
© 2016 IEEE. In complex environments where robots are expected to cooperate with human partners, it is vital for the robot to consider properties of the collaborative activity in addition to the behavior of its partner. In this paper, we propose to learn such complex interactive skills by observing the demonstrations of a human-robot team together with additional external attributes. We propose Environment-adaptive Interaction Primitives (EaIPs) as an extension of Interaction Primitives. In cooperation tasks between a human and a robot under different environmental conditions, EaIPs not only improve the robot's predicted motor skills from a brief observation of the human's motion, but also gain the ability to generalize to new environmental conditions by learning the relationship between each condition and the corresponding motor skills from training samples. Our method is validated on the collaborative task of covering objects with a plastic bag using a humanoid Baxter robot. To achieve the task successfully, the robot needs to coordinate with its partner while also considering information about the object to be covered.
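Interaction Primitives, which the above approach extends, reduce to Gaussian conditioning in a shared basis-function weight space: fit a joint distribution over human and robot trajectory weights from demonstrations, then condition on a brief observation of the human to predict the robot's motion. A minimal sketch with synthetic one-DoF data (all values illustrative; the environmental-condition input that the extension adds is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared radial-basis features over normalized time.
T, K = 50, 8
ts = np.linspace(0.0, 1.0, T)
centers = np.linspace(0.0, 1.0, K)
Phi = np.exp(-(ts[:, None] - centers[None, :]) ** 2 / 0.02)   # T x K

def fit_weights(y):
    """Least-squares basis weights for one observed trajectory."""
    return np.linalg.lstsq(Phi, y, rcond=None)[0]

# Synthetic joint demonstrations: the robot mirrors the human, whose
# movement amplitude varies from demo to demo.
def demo():
    a = rng.uniform(0.5, 1.5)
    human = a * np.sin(2 * np.pi * ts) + 0.05 * rng.normal(size=T)
    robot = -human + 0.5
    return human, robot

# Interaction Primitive: a joint Gaussian over stacked [human; robot] weights.
W = []
for _ in range(30):
    h, r = demo()
    W.append(np.concatenate([fit_weights(h), fit_weights(r)]))
W = np.array(W)
mu = W.mean(axis=0)
Sigma = np.cov(W.T) + 1e-6 * np.eye(2 * K)

def predict_robot(h_obs, obs_noise=1e-4):
    """Condition the joint Gaussian on early human samples; read out the robot."""
    n = len(h_obs)
    H = np.zeros((n, 2 * K))
    H[:, :K] = Phi[:n]                       # observations involve only human weights
    S = H @ Sigma @ H.T + obs_noise * np.eye(n)
    gain = Sigma @ H.T @ np.linalg.inv(S)
    mu_post = mu + gain @ (h_obs - H @ mu)
    return Phi @ mu_post[K:]                 # posterior-mean robot trajectory

# New interaction with an unusually large amplitude: observe the first
# 10 human samples and predict the partner-adapted robot motion.
h_new = 1.4 * np.sin(2 * np.pi * ts)
r_new = -h_new + 0.5
r_pred = predict_robot(h_new[:10])
```

The environment-adaptive extension would additionally feed external attributes of the task (e.g., properties of the object being covered) into the learned distribution so the conditioning generalizes across conditions.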
A dynamic neural field approach to natural and efficient human-robot collaboration
A major challenge in modern robotics is the design of autonomous robots
that are able to cooperate with people in their daily tasks in a human-like way. We
address the challenge of natural human-robot interactions by using the theoretical
framework of dynamic neural fields (DNFs) to develop processing architectures that
are based on neuro-cognitive mechanisms supporting human joint action. By explaining
the emergence of self-stabilized activity in neuronal populations, dynamic
field theory provides a systematic way to endow a robot with crucial cognitive functions
such as working memory, prediction, and decision making. The DNF architecture
for joint action is organized as a large-scale network of reciprocally connected
neuronal populations that encode in their firing patterns specific motor behaviors,
action goals, contextual cues and shared task knowledge. Ultimately, it implements
a context-dependent mapping from observed actions of the human onto adequate
complementary behaviors that takes into account the inferred goal of the co-actor.
We present results of flexible and fluent human-robot cooperation in a task in which
the team has to assemble a toy object from its components.
The present research was conducted in the context of the fp6-IST2 EU-IP
Project JAST (proj. nr. 003747) and partly financed by the FCT grants POCI/V.5/A0119/2005 and
CONC-REEQ/17/2001. We would like to thank Luis Louro, Emanuel Sousa, Flora Ferreira, Eliana
Costa e Silva, Rui Silva, and Toni Machado for their assistance during the robotic experiments.
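The self-stabilized neuronal population activity underlying the working-memory function described above can be reproduced with a one-dimensional Amari-type dynamic neural field: a transient localized input creates an activity bump that persists after the input is removed. Parameter values below are illustrative, not those of the JAST architecture.

```python
import numpy as np

# Field discretization and resting level.
N = 100
x = np.linspace(-10.0, 10.0, N)
dx = x[1] - x[0]
h = -2.0                              # resting level: field starts subthreshold
u = np.full(N, h)

# Lateral interaction: local excitation with surround inhibition.
W = (4.0 * np.exp(-(x[:, None] - x[None, :]) ** 2 / 2.0) - 1.0) * dx

def f(u):
    """Steep sigmoid firing-rate function."""
    return 1.0 / (1.0 + np.exp(-5.0 * u))

def step(u, stim, dt=0.05):
    """Forward-Euler step of du/dt = -u + h + W f(u) + stim."""
    return u + dt * (-u + h + W @ f(u) + stim)

# Transient localized input around x = 0.
stim = 6.0 * np.exp(-x ** 2 / 2.0)
for _ in range(200):                  # stimulation phase: a bump forms
    u = step(u, stim)
for _ in range(400):                  # input removed: the bump sustains itself
    u = step(u, 0.0)
```

The persisting bump is the field-theoretic substrate for working memory and decision making that the architecture builds on: recurrent excitation keeps the population active, while surround inhibition keeps the activity localized.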
Adaptive modular architectures for rich motor skills: technical report on the cognitive architecture