PoseCNN: A Convolutional Neural Network for 6D Object Pose Estimation in Cluttered Scenes
Estimating the 6D pose of known objects is important for robots to interact
with the real world. The problem is challenging due to the variety of objects
as well as the complexity of a scene caused by clutter and occlusions between
objects. In this work, we introduce PoseCNN, a new Convolutional Neural Network
for 6D object pose estimation. PoseCNN estimates the 3D translation of an
object by localizing its center in the image and predicting its distance from
the camera. The 3D rotation of the object is estimated by regressing to a
quaternion representation. We also introduce a novel loss function that enables
PoseCNN to handle symmetric objects. In addition, we contribute a large-scale
video dataset for 6D object pose estimation named the YCB-Video dataset. Our
dataset provides accurate 6D poses of 21 objects from the YCB dataset observed
in 92 videos with 133,827 frames. We conduct extensive experiments on our
YCB-Video dataset and the OccludedLINEMOD dataset to show that PoseCNN is
highly robust to occlusions, can handle symmetric objects, and provides accurate
pose estimation using only color images as input. When using depth data to
further refine the poses, our approach achieves state-of-the-art results on the
challenging OccludedLINEMOD dataset. Our code and dataset are available at
https://rse-lab.cs.washington.edu/projects/posecnn/.
Comment: Accepted to RSS 2018
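As a concrete illustration of the two ingredients the abstract highlights: the 3D translation follows from the localized object center and the predicted distance via the standard pinhole camera model, and a symmetry-tolerant loss can be built by matching model points under the predicted rotation to their nearest neighbors under the ground-truth rotation. The sketch below is illustrative only; the function names are hypothetical and the paper's exact loss formulation should be taken from the paper itself.

```python
import numpy as np

def backproject_center(cx, cy, tz, fx, fy, px, py):
    """Recover the translation (Tx, Ty, Tz) from the object's projected
    center (cx, cy) and its predicted distance tz, given camera intrinsics
    (focal lengths fx, fy and principal point px, py)."""
    tx = (cx - px) * tz / fx
    ty = (cy - py) * tz / fy
    return np.array([tx, ty, tz])

def symmetry_tolerant_loss(R_pred, R_gt, model_points):
    """Average each predicted-pose model point's squared distance to its
    nearest neighbor under the ground-truth pose; rotations that map a
    symmetric object onto itself therefore incur no penalty."""
    pred = model_points @ R_pred.T   # (N, 3) points under predicted rotation
    gt = model_points @ R_gt.T       # (N, 3) points under ground-truth rotation
    d2 = ((pred[:, None, :] - gt[None, :, :]) ** 2).sum(-1)  # (N, N) pairwise sq. dists
    return d2.min(axis=1).mean()
```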
Feature Mapping for Learning Fast and Accurate 3D Pose Inference from Synthetic Images
We propose a simple and efficient method for exploiting synthetic images when
training a Deep Network to predict a 3D pose from an image. The ability to
use synthetic images for training a Deep Network is extremely valuable, as it
is easy to create a virtually infinite training set made of such images, while
capturing and annotating real images can be very cumbersome. However, synthetic
images do not resemble real images exactly, and using them for training can
result in suboptimal performance. It was recently shown that for exemplar-based
approaches, it is possible to learn a mapping from the exemplar representations
of real images to the exemplar representations of synthetic images. In this
paper, we show that this approach is more general, and that a network can also
be applied after the mapping to infer a 3D pose: At run time, given a real
image of the target object, we first compute the features for the image, map
them to the feature space of synthetic images, and finally use the resulting
features as input to another network which predicts the 3D pose. Since this
network can be trained very effectively by using synthetic images, it performs
very well in practice, and inference is faster and more accurate than with an
exemplar-based approach. We demonstrate our approach on the LINEMOD dataset for
3D object pose estimation from color images, and the NYU dataset for 3D hand
pose estimation from depth maps. We show that it allows us to outperform the
state of the art on both datasets.
Comment: CVPR 2018
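The run-time pipeline described above — compute features for the real image, map them into the feature space of synthetic images, then regress the 3D pose from the mapped features — can be summarized in a minimal PyTorch-style sketch. Module names here are hypothetical, and the three networks are assumed to have already been trained as the abstract describes:

```python
import torch
import torch.nn as nn

class MappedPoseEstimator(nn.Module):
    """Sketch of the inference pipeline: real-image features are first
    translated into the synthetic feature space, where the pose regressor
    (trained purely on synthetic images) operates reliably."""

    def __init__(self, feature_extractor: nn.Module,
                 feature_mapper: nn.Module,
                 pose_regressor: nn.Module):
        super().__init__()
        self.feature_extractor = feature_extractor  # shared CNN backbone
        self.feature_mapper = feature_mapper        # real -> synthetic features
        self.pose_regressor = pose_regressor        # trained on synthetic data

    def forward(self, real_image: torch.Tensor) -> torch.Tensor:
        f_real = self.feature_extractor(real_image)
        f_synth = self.feature_mapper(f_real)   # bridge the domain gap
        return self.pose_regressor(f_synth)     # predicted 3D pose parameters
```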
Event-based Vision: A Survey
Event cameras are bio-inspired sensors that differ from conventional frame
cameras: Instead of capturing images at a fixed rate, they asynchronously
measure per-pixel brightness changes, and output a stream of events that encode
the time, location and sign of the brightness changes. Event cameras offer
attractive properties compared to traditional cameras: high temporal resolution
(on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low
power consumption, and high pixel bandwidth (on the order of kHz), resulting in
reduced motion blur. Hence, event cameras have a large potential for robotics
and computer vision in scenarios that are challenging for traditional cameras,
such as those requiring low latency, high speed, and high dynamic range.
However, novel methods are
required to process the unconventional output of these sensors in order to
unlock their potential. This paper provides a comprehensive overview of the
emerging field of event-based vision, with a focus on the applications and the
algorithms developed to unlock the outstanding properties of event cameras. We
present event cameras from their working principle, the actual sensors that are
available, and the tasks they have been used for, from low-level vision
(feature detection and tracking, optic flow, etc.) to high-level vision
(reconstruction, segmentation, recognition). We also discuss the techniques
developed to process events, including learning-based techniques, as well as
specialized processors for these novel sensors, such as spiking neural
networks. Additionally, we highlight the challenges that remain to be tackled
and the opportunities that lie ahead in the search for a more efficient,
bio-inspired way for machines to perceive and interact with the world.
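To make the sensor output concrete: each event can be modeled as a tuple (t, x, y, p), with timestamp t, pixel coordinates (x, y), and polarity p ∈ {+1, −1} for a brightness increase or decrease. The following is a minimal sketch of this generic representation, not tied to any particular camera or driver API; it also shows the simplest way to hand events to a frame-based algorithm, by accumulating them into a 2D histogram:

```python
import numpy as np
from typing import NamedTuple

class Event(NamedTuple):
    t: float   # timestamp in seconds (microsecond resolution in practice)
    x: int     # pixel column
    y: int     # pixel row
    p: int     # polarity: +1 brightness increase, -1 decrease

def events_to_frame(events, height, width):
    """Accumulate signed event polarities into a single 2D frame.
    Collapsing the time axis discards the sensor's key advantage
    (microsecond timing), but yields an image that conventional
    frame-based vision algorithms can consume."""
    frame = np.zeros((height, width), dtype=np.float32)
    for ev in events:
        frame[ev.y, ev.x] += ev.p
    return frame
```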