Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. Obtaining a
computational understanding of how humans form a symbol system and acquire
semiotic skills through autonomous mental development is therefore essential.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions, and developing a robot that can
communicate smoothly with human users over the long term, requires an
understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communication and physical
interaction among autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Comment: submitted to Advanced Robotics
Transferable Pedestrian Motion Prediction Models at Intersections
One desirable capability of autonomous cars is to accurately predict the
pedestrian motion near intersections for safe and efficient trajectory
planning. We are interested in developing transfer learning algorithms that can
be trained on the pedestrian trajectories collected at one intersection and yet
still provide accurate predictions of the trajectories at another, previously
unseen intersection. We first discussed feature selection for transferable
pedestrian motion models in general. Following this discussion, we developed
a transferable pedestrian motion prediction algorithm based on Inverse
Reinforcement Learning (IRL) that infers pedestrian intentions and predicts
future trajectories from observed trajectories. We evaluated our algorithm on
a dataset collected at two intersections, trained at one intersection and
tested at the other intersection. As a baseline, we used the accuracy of
augmented semi-nonnegative sparse coding (ASNSC), trained and tested at the
same intersection. The results show that the proposed algorithm improves the
baseline accuracy by 40% in the non-transfer task and by 16% in the transfer
task.
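The cross-intersection evaluation protocol described in this abstract can be sketched in a few lines. The snippet below is a hypothetical toy illustration, not the paper's method: it stands in a simple constant-velocity predictor for the IRL model (and does not implement ASNSC), and all function and variable names are assumptions. It only demonstrates the train-at-one-intersection, test-at-another setup and an average displacement error metric.

```python
# Hypothetical sketch of the cross-intersection transfer evaluation:
# predict at a "target" intersection using a model that carries no
# intersection-specific parameters. The constant-velocity predictor
# stands in for the paper's IRL model; it is purely illustrative.
import numpy as np

def constant_velocity_predict(observed, horizon):
    """Extrapolate the last observed velocity for `horizon` steps."""
    v = observed[-1] - observed[-2]           # last displacement (dx, dy)
    steps = np.arange(1, horizon + 1)[:, None]
    return observed[-1] + steps * v           # (horizon, 2) future positions

def average_displacement_error(pred, truth):
    """Mean Euclidean distance between predicted and true positions."""
    return float(np.mean(np.linalg.norm(pred - truth, axis=-1)))

# Toy "pedestrian trajectories": straight walks with small noise,
# standing in for data collected at two different intersections.
rng = np.random.default_rng(0)
def make_trajectory(start, velocity, length=20):
    t = np.arange(length)[:, None]
    return start + t * velocity + rng.normal(0.0, 0.01, (length, 2))

source_traj = make_trajectory(np.array([0.0, 0.0]), np.array([0.3, 0.1]))
target_traj = make_trajectory(np.array([5.0, 2.0]), np.array([0.2, 0.2]))

# Transfer task: observe the first half of a target-intersection
# trajectory, predict the second half, and score the prediction.
observed, truth = target_traj[:10], target_traj[10:]
pred = constant_velocity_predict(observed, horizon=truth.shape[0])
ade = average_displacement_error(pred, truth)
print(f"transfer-task ADE: {ade:.3f} m")
```

A real implementation would replace the predictor with a model trained on `source_traj` data (e.g. an IRL reward over intersection-independent features) and compare its ADE against a baseline trained directly on target-intersection data.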
Human Motion Trajectory Prediction: A Survey
With growing numbers of intelligent autonomous systems in human environments,
the ability of such systems to perceive, understand and anticipate human
behavior becomes increasingly important. Specifically, predicting future
positions of dynamic agents and planning considering such predictions are key
tasks for self-driving vehicles, service robots and advanced surveillance
systems. This paper provides a survey of human motion trajectory prediction. We
review, analyze and structure a large selection of work from different
communities and propose a taxonomy that categorizes existing methods based on
the motion modeling approach and level of contextual information used. We
provide an overview of the existing datasets and performance metrics. We
discuss limitations of the state of the art and outline directions for further
research.
Comment: Submitted to the International Journal of Robotics Research (IJRR),
37 pages