Learning the Semantics of Manipulation Action
In this paper we present a formal computational framework for modeling
manipulation actions. The introduced formalism leads to a semantics of
manipulation action and has applications both to observing and
understanding human manipulation actions and to executing them with a
robotic mechanism (e.g., a humanoid robot). It is based on a Combinatory
Categorial Grammar. The goal of the introduced framework is to: (1)
represent manipulation actions with both syntactic and semantic parts,
where the semantic part employs λ-calculus; (2) enable a probabilistic
semantic parsing schema to learn the λ-calculus representation of
manipulation actions from an annotated action corpus of videos; (3) use
(1) and (2) to develop a system that visually observes manipulation
actions and understands their meaning, while also reasoning beyond its
observations using propositional logic and axiom schemata. Experiments
conducted on a publicly available large manipulation action dataset
validate the theoretical framework and our implementation.
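
As a concrete illustration of point (1), the sketch below pairs a CCG
syntactic category with a λ-calculus-style semantics and combines entries
by forward application. It is a toy example under our own assumptions:
the lexicon ("cut", "hand", "cucumber") and the predicate Cut are
hypothetical, not taken from the paper's grammar or corpus.

```python
# Toy CCG lexical entries: each pairs a syntactic category with a
# lambda-term semantics, modeled here as nested Python lambdas that
# produce logical-form strings.
lexicon = {
    # transitive action verb (S\NP)/NP with semantics: λy.λx. Cut(x, y)
    "cut": ("(S\\NP)/NP", lambda y: lambda x: f"Cut({x},{y})"),
    # noun phrases reduce to constants
    "hand": ("NP", "hand"),
    "cucumber": ("NP", "cucumber"),
}

def apply_forward(fn, arg):
    """Forward application (>): combine category X/Y with Y to yield X."""
    return fn(arg)

# Derive the logical form of "hand cut cucumber":
# the verb first consumes its object, then its subject.
_, cut_sem = lexicon["cut"]
_, obj = lexicon["cucumber"]
_, subj = lexicon["hand"]

partial = apply_forward(cut_sem, obj)        # λx. Cut(x, cucumber)
logical_form = apply_forward(partial, subj)  # Cut(hand, cucumber)
print(logical_form)                          # -> Cut(hand,cucumber)
```

Point (2) in the abstract then amounts to learning such lexical entries,
together with a probability distribution over competing parses, from an
annotated video corpus.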
Translating Videos to Commands for Robotic Manipulation with Deep Recurrent Neural Networks
We present a new method to translate videos to commands for robotic
manipulation using Deep Recurrent Neural Networks (RNNs). Our framework
first extracts deep features from the input video frames with a deep
Convolutional Neural Network (CNN). Two RNN layers with an
encoder-decoder architecture are then used to encode the visual features
and sequentially generate the output words of the command. We
demonstrate that the translation accuracy can be improved by allowing a
smooth transition between the two RNN layers and by using a
state-of-the-art feature extractor. Experimental results on our new,
challenging dataset show that our approach outperforms recent methods by
a fair margin. Furthermore, we combine the proposed translation module
with the vision and planning system to let a robot perform various
manipulation tasks. Finally, we demonstrate the effectiveness of our
framework on the full-size humanoid robot WALK-MAN.
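
The following is a minimal sketch, not the authors' implementation, of
the pipeline the abstract describes: per-frame CNN features feed an LSTM
encoder whose final state initializes an LSTM decoder that emits command
words. The feature and hidden dimensions, vocabulary size, and
teacher-forcing setup are assumptions for illustration.

```python
import torch
import torch.nn as nn

class Video2Command(nn.Module):
    def __init__(self, feat_dim=2048, hidden=512, vocab_size=1000):
        super().__init__()
        # Encoder LSTM consumes one CNN feature vector per video frame.
        self.encoder = nn.LSTM(feat_dim, hidden, batch_first=True)
        # Decoder LSTM generates the command one word embedding at a time.
        self.embed = nn.Embedding(vocab_size, hidden)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, frame_feats, word_ids):
        # frame_feats: (batch, n_frames, feat_dim) CNN features
        # word_ids:    (batch, n_words) ground-truth command (teacher forcing)
        _, state = self.encoder(frame_feats)      # final (h, c) summarizes clip
        dec_in = self.embed(word_ids)
        dec_out, _ = self.decoder(dec_in, state)  # decoder starts from the
                                                  # encoder's final state
        return self.out(dec_out)                  # (batch, n_words, vocab) logits

# Toy usage with random data:
model = Video2Command()
feats = torch.randn(2, 30, 2048)                  # 2 clips, 30 frames each
words = torch.randint(0, 1000, (2, 8))            # 8-word commands
print(model(feats, words).shape)                  # torch.Size([2, 8, 1000])
```

Handing the encoder's final state directly to the decoder is one
plausible reading of the "smooth transition" between the two RNN layers;
the paper's exact mechanism may differ.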
Proceedings of QG2010: The Third Workshop on Question Generation
These are the peer-reviewed proceedings of "QG2010, The Third Workshop on Question Generation". The workshop included a special track for "QGSTEC2010: The First Question Generation Shared Task and Evaluation Challenge".
QG2010 was held as part of The Tenth International Conference on Intelligent Tutoring Systems (ITS2010).
Multi-Sentence Description of Complex Manipulation Action Videos
Automatic video description requires the generation of natural language
statements about the actions, events, and objects in a video. An
important human trait when describing a video is that we can do so at
variable levels of detail. In contrast, existing approaches for
automatic video description mostly focus on single-sentence generation
at a fixed level of detail. Here, instead, we address the description of
manipulation action videos, where different levels of detail are
required to convey information about the hierarchical structure of these
actions, which is also relevant for modern approaches to robot learning.
We propose one hybrid statistical framework and one end-to-end framework
to address this problem. The hybrid method needs much less training
data, because it statistically models uncertainties within the video
clips, whereas the more data-heavy end-to-end method directly connects
the visual encoder to the language decoder without any intermediate
(statistical) processing step. Both frameworks use LSTM stacks to allow
for different levels of description granularity, so that videos can be
described by simple single sentences or by complex multi-sentence
descriptions. In addition, quantitative results demonstrate that these
methods produce more realistic descriptions than other competing
approaches.
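
To make the granularity idea concrete, here is a hedged sketch of one
way a stacked-LSTM decoder could be conditioned on a desired level of
detail, so the same model can emit a single coarse sentence or a fine,
multi-sentence description. The granularity embedding, layer count, and
sizes are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GranularityDecoder(nn.Module):
    def __init__(self, hidden=512, vocab_size=1000, levels=3):
        super().__init__()
        self.level_embed = nn.Embedding(levels, hidden)  # coarse ... fine
        self.embed = nn.Embedding(vocab_size, hidden)
        # An LSTM stack, echoing the "LSTM stacks" mentioned above.
        self.stack = nn.LSTM(hidden, hidden, num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, word_ids, level):
        # Prepend a granularity token so the decoder knows how detailed
        # the description should be (single sentence vs. multi-sentence).
        lvl = self.level_embed(level).unsqueeze(1)       # (batch, 1, hidden)
        tokens = torch.cat([lvl, self.embed(word_ids)], dim=1)
        h, _ = self.stack(tokens)
        return self.out(h)                               # word logits per step

decoder = GranularityDecoder()
words = torch.randint(0, 1000, (1, 12))
coarse = decoder(words, torch.tensor([0]))  # level 0: one terse sentence
fine = decoder(words, torch.tensor([2]))    # level 2: detailed description
print(coarse.shape, fine.shape)
```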