End-to-end Learning of Driving Models from Large-scale Video Datasets
Robust perception-action models should be learned from training data with
diverse visual appearances and realistic behaviors, yet current approaches to
deep visuomotor policy learning have been generally limited to in-situ models
learned from a single vehicle or a simulation environment. We advocate learning
a generic vehicle motion model from large-scale crowd-sourced video data, and
develop an end-to-end trainable architecture for learning to predict a
distribution over future vehicle egomotion from instantaneous monocular camera
observations and previous vehicle state. Our model incorporates a novel
FCN-LSTM architecture, which can be learned from large-scale crowd-sourced
vehicle action data, and leverages available scene segmentation side tasks to
improve performance under a privileged learning paradigm.
Comment: camera ready for CVPR201
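The FCN-LSTM idea, per-frame convolutional features fed through a recurrent state to produce a distribution over discretized future egomotion, can be sketched in miniature. All dimensions, weights, and the discretization into five motion classes below are hypothetical stand-ins, with random vectors in place of real FCN features:

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; gates packed as [i, f, o, g] along axis 0."""
    z = W @ x + U @ h + b
    i, f, o, g = np.split(z, 4)
    sigmoid = lambda v: 1.0 / (1.0 + np.exp(-v))
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
    g = np.tanh(g)
    c = f * c + i * g          # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical sizes: 64-d visual feature (stand-in for the FCN output),
# 32-d hidden state, 5 discretized egomotion classes.
feat_dim, hid, n_actions = 64, 32, 5
W = rng.normal(0, 0.1, (4 * hid, feat_dim))
U = rng.normal(0, 0.1, (4 * hid, hid))
b = np.zeros(4 * hid)
W_out = rng.normal(0, 0.1, (n_actions, hid))

h, c = np.zeros(hid), np.zeros(hid)
for t in range(3):                            # three consecutive frames
    visual_feat = rng.normal(size=feat_dim)   # stand-in for FCN features
    h, c = lstm_step(visual_feat, h, c, W, U, b)

p = softmax(W_out @ h)   # distribution over future egomotion classes
print(p.shape, p.sum())
```

The recurrent state is what lets the model condition on previous vehicle motion rather than a single instantaneous frame.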
Learning with Latent Language
The named concepts and compositional operators present in natural language
provide a rich source of information about the kinds of abstractions humans use
to navigate the world. Can this linguistic background knowledge improve the
generality and efficiency of learned classifiers and control policies? This
paper aims to show that using the space of natural language strings as a
parameter space is an effective way to capture natural task structure. In a
pretraining phase, we learn a language interpretation model that transforms
inputs (e.g. images) into outputs (e.g. labels) given natural language
descriptions. To learn a new concept (e.g. a classifier), we search directly in
the space of descriptions to minimize the interpreter's loss on training
examples. Crucially, our models do not require language data to learn these
concepts: language is used only in pretraining to impose structure on
subsequent learning. Results on image classification, text editing, and
reinforcement learning show that, in all settings, models with a linguistic
parameterization outperform those without.
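The core loop, treating natural language descriptions as the parameter space and searching them to minimize the interpreter's loss on training examples, can be illustrated with a toy, hand-coded interpreter. The descriptions and rule table below are invented for illustration; in the paper the interpreter is a learned neural model:

```python
# Toy interpreter: each natural-language description maps to a classifier
# over integers (hypothetical rules, standing in for a learned interpreter).
def interpret(description):
    rules = {
        "positive numbers":  lambda x: x > 0,
        "numbers above two": lambda x: x > 2,
        "even numbers":      lambda x: x % 2 == 0,
    }
    return rules[description]

# Training examples for a new concept: (input, label) pairs.
train = [(1, False), (3, True), (4, True), (-1, False), (2, False)]

def loss(description):
    """Number of training examples the interpreted classifier gets wrong."""
    clf = interpret(description)
    return sum(clf(x) != y for x, y in train)

# Search directly in the space of descriptions for the lowest-loss one.
candidates = ["positive numbers", "numbers above two", "even numbers"]
best = min(candidates, key=loss)
print(best)   # "numbers above two" fits all five training pairs
```

Note that no language data is attached to the new concept itself; language only parameterizes the hypothesis space being searched.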
MaestROB: A Robotics Framework for Integrated Orchestration of Low-Level Control and High-Level Reasoning
This paper describes a framework called MaestROB. It is designed to enable
robots to perform complex tasks with high precision from simple high-level
instructions given in natural language or by demonstration. To realize this, it
handles a hierarchical structure, using knowledge stored in the form of an
ontology and rules to bridge between different levels of instruction.
Accordingly, the framework has multiple layers of processing components:
perception and actuation control at the low level, a symbolic planner and
Watson APIs for cognitive capabilities and semantic understanding, and, at its
core, orchestration of these components by a new open-source robot middleware
called Project Intu. We show how this framework can be used in a complex
scenario where
multiple actors (human, a communication robot, and an industrial robot)
collaborate to perform a common industrial task. A human teaches an assembly task
to Pepper (a humanoid robot from SoftBank Robotics) using natural language
conversation and demonstration. Our framework helps Pepper perceive the human
demonstration and generate a sequence of actions for a UR5 (a collaborative
robot arm from Universal Robots), which ultimately performs the assembly (e.g.
insertion) task.
Comment: IEEE International Conference on Robotics and Automation (ICRA) 2018.
Video: https://www.youtube.com/watch?v=19JsdZi0TW
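The bridging idea, matching a high-level instruction against stored rule-like knowledge and expanding it into low-level actuation primitives, can be caricatured in a few lines. The task names and primitives here are hypothetical; MaestROB's actual ontology, symbolic planner, and Project Intu middleware are far richer:

```python
# Toy two-level pipeline: rule-like knowledge expands a high-level
# instruction into low-level primitives (all entries are hypothetical).
ONTOLOGY = {
    "insert peg": ["locate(peg)", "grasp(peg)", "move_to(hole)",
                   "insert(peg, hole)"],
    "pick bolt":  ["locate(bolt)", "grasp(bolt)", "lift(bolt)"],
}

def plan(instruction):
    """Match an instruction against known tasks and return its primitives."""
    for task, primitives in ONTOLOGY.items():
        if task in instruction.lower():
            return primitives
    raise ValueError(f"no rule matches: {instruction}")

print(plan("Please insert peg into the fixture"))
```

The point of the hierarchy is that the same low-level primitives can be retargeted: here the plan could be dispatched to either the Pepper or the UR5 actuation layer.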
Learning Robot Activities from First-Person Human Videos Using Convolutional Future Regression
We design a new approach that enables a robot to learn new activities from
unlabeled human example videos. Given videos of humans executing the same
activity from a human's viewpoint (i.e., first-person videos), our objective is
to make the robot learn the temporal structure of the activity as its future
regression network, and to transfer this model to its own motor
execution. We present a new deep learning model: we extend the state-of-the-art
convolutional object detection network for the representation/estimation of
human hands in training videos, and introduce the concept of using a
fully convolutional network to regress (i.e., predict) the intermediate scene
representation corresponding to the future frame (e.g., 1-2 seconds later).
Combining these allows direct prediction of future locations of human hands and
objects, which enables the robot to infer the motor control plan using our
manipulation network. We experimentally confirm that our approach makes
learning of robot activities from unlabeled human interaction videos possible,
and demonstrate that our robot is able to execute the learned collaborative
activities in real time, directly from its camera input.
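Future regression in its simplest form, fitting a model that maps the current scene representation to the representation expected about a second later, can be sketched with a linear stand-in for the fully convolutional regressor. The 4-D hand-state encoding (position plus velocity) and the synthetic constant-velocity data are assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data: current hand state (x, y, vx, vy) and the hand
# position ~1 second later under constant velocity, plus small noise.
dt = 1.0
X = rng.normal(size=(200, 4))
Y = X[:, :2] + dt * X[:, 2:] + 0.01 * rng.normal(size=(200, 2))

# Linear future-regression model fit by least squares (a stand-in for the
# fully convolutional future-regression network).
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

x_now = np.array([0.5, 0.5, 1.0, -1.0])  # hand at (0.5, 0.5), moving right/down
x_future = x_now @ W                      # predicted future hand position
print(x_future)  # close to [1.5, -0.5] under the constant-velocity data
```

The predicted future locations are what the downstream manipulation network would consume to infer a motor control plan.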