Robot Learning of Object Manipulation Task Actions from Human Demonstrations
Robot learning from demonstration is a method that enables robots to learn in a manner similar to humans. In this paper, a framework that enables robots to learn from multiple human demonstrations via kinesthetic teaching is presented. The subject of learning is the high-level sequence of actions, as well as the low-level trajectories the robot must follow to perform the object manipulation task. Multiple human demonstrations are recorded, and only the most similar demonstrations are selected for robot learning. The high-level learning module identifies the sequence of actions of the demonstrated task. Using Dynamic Time Warping (DTW) and a Gaussian Mixture Model (GMM), a model of the demonstrated trajectories is learned. The learned trajectory is then generated by Gaussian Mixture Regression (GMR) from the learned Gaussian mixture model. In the online working phase, the sequence of actions is identified, and experimental results show that the robot performs the learned task successfully.
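The trajectory-learning step described in the abstract (fit a GMM over time-indexed demonstration data, then recover a single trajectory point by GMR) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the 1-D trajectory variable, the use of scikit-learn's `GaussianMixture`, and the function names `fit_gmm`/`gmr_predict` are all assumptions.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_gmm(demos, n_components=5, seed=0):
    """Stack (time, position) samples from all demonstrations and fit a GMM.

    Each demo is a (T, 2) array of [t, x] pairs; in the paper the
    demonstrations would first be aligned with DTW (omitted here).
    """
    data = np.vstack(demos)
    gmm = GaussianMixture(n_components=n_components, random_state=seed)
    gmm.fit(data)
    return gmm

def gmr_predict(gmm, t):
    """Gaussian mixture regression: E[x | t] from a GMM over [t, x]."""
    means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
    var_t = covs[:, 0, 0]
    # Responsibility of each component for the query time t
    lik = weights * np.exp(-0.5 * (t - means[:, 0]) ** 2 / var_t) \
          / np.sqrt(2.0 * np.pi * var_t)
    h = lik / lik.sum()
    # Per-component conditional mean of x given t, then blend
    cond = means[:, 1] + covs[:, 1, 0] / var_t * (t - means[:, 0])
    return float(h @ cond)
```

For example, fitting this to several noisy demonstrations of a sine-shaped motion and querying `gmr_predict` at intermediate times yields a smooth averaged trajectory through the demonstrations.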
Scenario-Transferable Semantic Graph Reasoning for Interaction-Aware Probabilistic Prediction
Accurately predicting the possible behaviors of traffic participants is an
essential capability for autonomous vehicles. Since autonomous vehicles need to
navigate in dynamically changing environments, they are expected to make
accurate predictions regardless of where they are and what driving
circumstances they encounter. A number of methodologies have been proposed to
solve prediction problems under different traffic situations. However, these
works either focus on one particular driving scenario (e.g. highway,
intersection, or roundabout) or do not take sufficient environment information
(e.g. road topology, traffic rules, and surrounding agents) into account. In
fact, the limitation to certain scenarios is mainly due to the lack of
generic representations of the environment. Insufficient environment
information further limits the flexibility and transferability of the
predictor. In this paper, we propose a scenario-transferable and
interaction-aware probabilistic prediction algorithm based on semantic graph
reasoning. We first introduce generic representations for both static and
dynamic elements in driving environments. Then these representations are
utilized to describe semantic goals for selected agents and incorporate them
into spatial-temporal structures. Finally, we reason about internal relations among
these structured semantic representations using a learning-based method and
obtain prediction results. The proposed algorithm is thoroughly examined under
several complicated real-world driving scenarios to demonstrate its flexibility
and transferability, where the predictor can be directly used under unforeseen
driving circumstances with different static and dynamic information.

Comment: 17 pages, 11 figures
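As an illustration of what a structured semantic representation over static and dynamic environment elements might look like, here is a minimal, hypothetical sketch. The class names, node kinds, and relation labels are assumptions for illustration only and are not taken from the paper, which uses a learned graph-reasoning model rather than hand-written queries.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    node_id: str
    kind: str                 # e.g. "vehicle" (dynamic) or "lane", "crosswalk" (static)
    features: dict = field(default_factory=dict)

@dataclass
class SemanticGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)   # (src, dst, relation) triples

    def add_node(self, node):
        self.nodes[node.node_id] = node

    def add_edge(self, src, dst, relation):
        self.edges.append((src, dst, relation))

    def neighbors(self, node_id, relation=None):
        """Node ids reachable from node_id, optionally filtered by relation."""
        return [dst for src, dst, rel in self.edges
                if src == node_id and (relation is None or rel == relation)]
```

A prediction model would consume such a graph, e.g. linking a vehicle node to the lane it occupies and to its candidate semantic goals, so that the same representation transfers across highways, intersections, and roundabouts.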