End-to-end Learning of Driving Models from Large-scale Video Datasets
Robust perception-action models should be learned from training data with
diverse visual appearances and realistic behaviors, yet current approaches to
deep visuomotor policy learning have been generally limited to in-situ models
learned from a single vehicle or a simulation environment. We advocate learning
a generic vehicle motion model from large scale crowd-sourced video data, and
develop an end-to-end trainable architecture for learning to predict a
distribution over future vehicle egomotion from instantaneous monocular camera
observations and previous vehicle state. Our model incorporates a novel
FCN-LSTM architecture, which can be learned from large-scale crowd-sourced
vehicle action data, and leverages available scene segmentation side tasks to
improve performance under a privileged learning paradigm.
Comment: camera ready for CVPR201
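The described pipeline (convolutional features from a monocular frame, fused with the previous vehicle state, fed through an LSTM, and projected to a distribution over discrete future egomotions) can be sketched minimally in NumPy. All sizes, the pooling "encoder", and the four-way action space are illustrative assumptions, not details from the paper; real weights would be learned end-to-end from the crowd-sourced video data.

```python
import numpy as np

rng = np.random.default_rng(0)

HID, FEAT, STATE, ACTIONS = 32, 64, 2, 4  # assumed sizes, not from the paper

def conv_features(frame):
    """Stand-in for the fully convolutional (FCN) encoder: a real model
    would apply learned convolutions; here we pool channel means into a
    fixed-length vector so the pipeline stays runnable."""
    pooled = frame.mean(axis=(0, 1))                  # (3,) channel means
    return np.tile(pooled, FEAT // 3 + 1)[:FEAT]

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gate pre-activations packed as [i, f, g, o]."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Random parameters stand in for weights that would be trained on
# large-scale crowd-sourced vehicle action data.
W = rng.normal(0, 0.1, (4 * HID, FEAT + STATE))
U = rng.normal(0, 0.1, (4 * HID, HID))
b = np.zeros(4 * HID)
W_out = rng.normal(0, 0.1, (ACTIONS, HID))

h, c = np.zeros(HID), np.zeros(HID)
for t in range(5):                                    # five monocular frames
    frame = rng.random((36, 64, 3))                   # toy camera observation
    prev_state = np.array([0.5, 0.0])                 # e.g. speed, yaw rate
    x = np.concatenate([conv_features(frame), prev_state])
    h, c = lstm_step(x, h, c, W, U, b)

p = softmax(W_out @ h)  # predicted distribution over discrete egomotions
print(p.round(3))
```

The segmentation side task mentioned in the abstract would add a second, per-pixel head on the FCN features during training; it is omitted here to keep the sketch small.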
A Wearable Robotic Hand for Hand-over-Hand Imitation Learning
Dexterous manipulation through imitation learning has gained significant
attention in robotics research. The collection of high-quality expert data
holds paramount importance when using imitation learning. The existing
approaches for acquiring expert data commonly involve utilizing a data glove to
capture hand motion information. However, this method suffers from limitations
as the collected information cannot be directly mapped to the robotic hand due
to discrepancies in their degrees of freedom or structures. Furthermore, it
fails to accurately capture force feedback information between the hand and
objects during the demonstration process. To overcome these challenges, this
paper presents a novel solution in the form of a wearable dexterous hand,
namely Hand-over-hand Imitation learning wearable RObotic Hand (HIRO
Hand), which integrates expert data collection and enables the implementation of
dexterous operations. This HIRO Hand empowers the operator to utilize their own
tactile feedback to determine appropriate force, position, and actions,
resulting in more accurate imitation of the expert's actions. We develop both
non-learning and visual behavior cloning based controllers, enabling HIRO Hand
to successfully achieve grasping and in-hand manipulation.
Comment: 7 page