27,826 research outputs found
Pose Induction for Novel Object Categories
We address the task of predicting pose for objects of unannotated object
categories from a small seed set of annotated object classes. We present a
generalized classifier that can reliably induce pose given a single instance of
a novel category. When a large collection of novel instances is available, our approach jointly reasons over all of them to improve the initial estimates. We empirically validate the various components of our
algorithm and quantitatively show that our method produces reliable pose
estimates. We also show qualitative results on a diverse set of classes and
further demonstrate the applicability of our system for learning shape models
of novel object classes.
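The joint-reasoning step above can be illustrated with a minimal sketch (all names and the scoring scheme are illustrative, not the paper's implementation): a seed-trained classifier scores discretized pose bins per instance, and a category-level consensus over many instances of the novel class re-weights each instance's scores.

```python
# Hedged sketch: per-instance pose induction followed by joint refinement
# across all instances of one novel category. Pose space is discretized
# into bins; scores are assumed to come from a classifier trained on the
# annotated seed classes.

def induce_poses(instance_scores):
    """instance_scores: list of per-instance score lists over pose bins."""
    n_bins = len(instance_scores[0])
    # Initial estimate: independent per-instance argmax over pose bins.
    initial = [scores.index(max(scores)) for scores in instance_scores]
    # Category-level prior: mean score per bin across all instances.
    prior = [sum(s[b] for s in instance_scores) / len(instance_scores)
             for b in range(n_bins)]
    # Joint refinement: combine each instance's scores with the prior,
    # pulling ambiguous instances toward the category consensus.
    refined = [max(range(n_bins), key=lambda b: scores[b] * prior[b])
               for scores in instance_scores]
    return initial, refined
```

For example, an instance whose scores are nearly tied between two bins is resolved toward the bin favored by the rest of the collection.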
Hybrid Bayesian Eigenobjects: Combining Linear Subspace and Deep Network Methods for 3D Robot Vision
We introduce Hybrid Bayesian Eigenobjects (HBEOs), a novel representation for
3D objects designed to allow a robot to jointly estimate the pose, class, and
full 3D geometry of a novel object observed from a single viewpoint in a single
practical framework. By combining both linear subspace methods and deep
convolutional prediction, HBEOs efficiently learn nonlinear object
representations without directly regressing into high-dimensional space. HBEOs
also remove the onerous and generally impractical necessity of input data
voxelization prior to inference. We experimentally evaluate the suitability of
HBEOs to the challenging task of joint pose, class, and shape inference on
novel objects and show that, compared to preceding work, HBEOs offer
dramatically improved performance in all three tasks along with several orders
of magnitude faster runtime performance.
Comment: To appear in the International Conference on Intelligent Robots and Systems (IROS) - Madrid, 201
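The core idea of avoiding direct regression into high-dimensional space can be sketched as follows (a minimal illustration, not the authors' API): a learned predictor outputs a few coefficients in a linear subspace fit to training shapes, and the full 3D geometry is reconstructed from the subspace basis.

```python
# Hedged sketch of the linear-subspace decode step: the full shape is
# recovered as mean + sum_k coeffs[k] * basis[k], so the network only has
# to regress the low-dimensional coefficient vector. The basis would be
# fit to training shapes (e.g. by PCA); here it is passed in directly.

def reconstruct(mean_shape, basis, coeffs):
    """Decode low-dim coefficients into a full shape vector."""
    shape = list(mean_shape)
    for k, c in enumerate(coeffs):
        for i, b in enumerate(basis[k]):
            shape[i] += c * b
    return shape
```

In the full system a convolutional network would map the single-viewpoint observation to the coefficients (plus pose and class), but any function producing low-dimensional coefficients fits this decode step.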
Neural Task Programming: Learning to Generalize Across Hierarchical Tasks
In this work, we propose a novel robot learning framework called Neural Task
Programming (NTP), which bridges the idea of few-shot learning from
demonstration and neural program induction. NTP takes as input a task
specification (e.g., video demonstration of a task) and recursively decomposes
it into finer sub-task specifications. These specifications are fed to a
hierarchical neural program, where bottom-level programs are callable
subroutines that interact with the environment. We validate our method in three
robot manipulation tasks. NTP achieves strong generalization across sequential
tasks that exhibit hierarchical and compositional structures. The experimental results show that NTP generalizes well towards unseen tasks with increasing lengths, variable topologies, and changing objectives.
Comment: ICRA 201
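The recursive decomposition described above can be sketched in a few lines (structure only; the spec format and primitive names are illustrative assumptions, and in NTP the decomposition itself is learned rather than hand-written):

```python
# Hedged sketch: a task specification is recursively decomposed into
# finer sub-specifications until a bottom-level program -- a callable
# subroutine that acts on the environment -- can execute directly.

PRIMITIVES = {"pick", "place"}  # assumed bottom-level programs

def decompose(spec):
    """Toy decomposer: return the sub-task specs of a composite spec."""
    return spec["subtasks"]

def run(spec, trace=None):
    trace = [] if trace is None else trace
    if spec["name"] in PRIMITIVES:
        trace.append(spec["name"])    # bottom level: act on the environment
    else:
        for sub in decompose(spec):   # recurse on finer sub-specifications
            run(sub, trace)
    return trace
```

Running a composite spec yields the flat sequence of primitive actions, which is what lets the same hierarchy generalize to longer tasks and different topologies.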