2,739 research outputs found
RotationNet: Joint Object Categorization and Pose Estimation Using Multiviews from Unsupervised Viewpoints
We propose a Convolutional Neural Network (CNN)-based model "RotationNet,"
which takes multi-view images of an object as input and jointly estimates its
pose and object category. Unlike previous approaches that use known viewpoint
labels for training, our method treats the viewpoint labels as latent
variables, which are learned in an unsupervised manner during training
using an unaligned object dataset. RotationNet is designed to use only a
partial set of multi-view images for inference, and this property makes it
useful in practical scenarios where only partial views are available. Moreover,
our pose alignment strategy enables one to obtain view-specific feature
representations shared across classes, which is important to maintain high
accuracy in both object categorization and pose estimation. The effectiveness of
RotationNet is demonstrated by its superior performance over state-of-the-art
methods for 3D object classification on the 10- and 40-class ModelNet datasets. We
also show that RotationNet, even trained without known poses, achieves the
state-of-the-art performance on an object pose estimation dataset. The code is
available at https://github.com/kanezaki/rotationnet
Comment: 24 pages, 23 figures. Accepted to CVPR 2018
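To make the latent-viewpoint inference above concrete, the following NumPy sketch shows one way a multi-view classifier could select a category by searching over candidate viewpoint alignments. It assumes the observed views come from consecutive positions on a circular camera rig with M candidate viewpoints and that per-view class log-probabilities are already available; the names and shapes are illustrative, not the authors' implementation.

import numpy as np

def multiview_inference(view_logprobs):
    """Pick an object class by searching over latent viewpoint alignments.

    view_logprobs: array of shape (V, M, C) holding log P(class | view image v,
    candidate viewpoint m) for V observed views, M viewpoints, C categories.
    Assumes consecutive views on a circular rig, so an alignment is a cyclic shift.
    """
    V, M, C = view_logprobs.shape
    best = (-np.inf, None, None)                     # (score, class, shift)
    for shift in range(M):                           # candidate latent alignment
        vp = (np.arange(V) + shift) % M              # viewpoint assigned to each view
        class_scores = view_logprobs[np.arange(V), vp, :].sum(axis=0)
        c = int(class_scores.argmax())
        if class_scores[c] > best[0]:
            best = (class_scores[c], c, shift)
    return best[1], best[2]                          # category and estimated pose offset

Choosing the alignment that maximizes the summed class score is what couples categorization and pose estimation in a single search.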
3D Object Class Detection in the Wild
Object class detection has been a synonym for 2D bounding box localization
for the longest time, fueled by the success of powerful statistical learning
techniques, combined with robust image representations. Only recently, there
has been a growing interest in revisiting the promise of computer vision from
the early days: to precisely delineate the contents of a visual scene, object
by object, in 3D. In this paper, we draw from recent advances in object
detection and 2D-3D object lifting in order to design an object class detector
that is particularly tailored towards 3D object class detection. Our 3D object
class detection method consists of several stages gradually enriching the
object detection output with object viewpoint, keypoints and 3D shape
estimates. Following careful design, each stage consistently improves
performance, and the method achieves state-of-the-art results in simultaneous 2D
bounding box and viewpoint estimation on the challenging Pascal3D+ dataset.
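As a rough illustration of the staged design described above, the Python sketch below threads each 2D detection through successive enrichment stages; the stage callables are hypothetical placeholders passed in as arguments, not the paper's actual components.

def detect_3d_object_classes(image, detect_2d, estimate_viewpoint,
                             localize_keypoints, lift_to_3d):
    """Cascade that gradually enriches each 2D detection with 3D information."""
    detections = detect_2d(image)                                      # stage 1: 2D boxes
    for det in detections:
        det["viewpoint"] = estimate_viewpoint(image, det["bbox"])      # stage 2: viewpoint
        det["keypoints"] = localize_keypoints(image, det["bbox"],
                                              det["viewpoint"])        # stage 3: keypoints
        det["shape"] = lift_to_3d(det["keypoints"], det["viewpoint"])  # stage 4: 3D shape
    return detections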
Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call “transformational abstraction”. Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to “nuisance variation” in input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that rather than simply implementing 80s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing ripe with philosophical and psychological significance, because it is significantly better suited to depict the generic mechanism responsible for this important kind of psychological processing in the brain.
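For readers who want to see the linear/non-linear alternation in code, the toy PyTorch block below stacks a convolution (linear), a ReLU (non-linear), and max pooling; repeating such blocks is what yields representations that grow more tolerant to nuisance variation. It is a generic illustration, not a model taken from the paper.

import torch
import torch.nn as nn

# One "transformational abstraction" step: linear filtering, a non-linearity,
# and spatial pooling. Deeper stacks trade spatial detail for feature richness.
block = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # linear processing
    nn.ReLU(),                                    # non-linear processing
    nn.MaxPool2d(2),                              # pooling: tolerance to small shifts
)

x = torch.randn(1, 3, 32, 32)   # toy sensory "exemplar"
h = block(x)                    # shape (1, 16, 16, 16): coarser in space, richer in features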
Data-Driven Grasp Synthesis - A Survey
We review the work on data-driven grasp synthesis and the methodologies for
sampling and ranking candidate grasps. We divide the approaches into three
groups based on whether they synthesize grasps for known, familiar or unknown
objects. This structure allows us to identify common object representations and
perceptual processes that facilitate the employed data-driven grasp synthesis
technique. In the case of known objects, we concentrate on the approaches that
are based on object recognition and pose estimation. In the case of familiar
objects, the techniques use some form of similarity matching to a set of
previously encountered objects. Finally, for the approaches dealing with unknown
objects, the core part is the extraction of specific features that are
indicative of good grasps. Our survey provides an overview of the different
methodologies and discusses open problems in the area of robot grasping. We
also draw a parallel to the classical approaches that rely on analytic
formulations.
Comment: 20 pages, 30 figures. Submitted to IEEE Transactions on Robotics
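The three-way taxonomy above can be pictured as a simple dispatch, sketched below in Python. The recognizer, similarity search, grasp database, and grasp ranker are hypothetical placeholders passed in as callables, since the survey covers many concrete instantiations of each.

def synthesize_grasps(observation, recognize, find_similar, grasp_db,
                      transform_grasps, adapt_grasps, detect_and_rank):
    """Dispatch a data-driven grasp planner by how much is known about the object."""
    obj_id, pose = recognize(observation)
    if obj_id in grasp_db:                                     # known object
        return transform_grasps(grasp_db[obj_id], pose)        # re-pose stored grasps
    neighbor, similarity = find_similar(observation)
    if similarity > 0.8:                                       # familiar object (illustrative threshold)
        return adapt_grasps(grasp_db[neighbor], observation)   # transfer neighbor's grasps
    return detect_and_rank(observation)                        # unknown object: local shape cues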
Hashmod: A Hashing Method for Scalable 3D Object Detection
We present a scalable method for detecting objects and estimating their 3D
poses in RGB-D data. To this end, we rely on an efficient representation of
object views and employ hashing techniques to match these views against the
input frame in a scalable way. While a similar approach already exists for 2D
detection, we show how to extend it to estimate the 3D pose of the detected
objects. In particular, we explore different hashing strategies and identify
the one that is most suitable for our problem. We show empirically that the
complexity of our method is sublinear in the number of objects, enabling
detection and pose estimation of many 3D objects with high accuracy while
outperforming the state of the art in terms of runtime.
Comment: BMVC 2015
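To illustrate how hashing can make view matching sublinear in the number of objects, the NumPy sketch below binarizes view descriptors with random hyperplanes (an LSH-style code chosen purely for illustration; the paper's actual descriptors and hashing strategy are not reproduced) and stores them in a bucket table, so a query costs one lookup rather than a scan over all object views.

import numpy as np

rng = np.random.default_rng(0)
planes = rng.standard_normal((16, 128))           # 16-bit code over 128-D descriptors
table = {}                                        # code -> list of (object_id, pose_id)

def code(descriptor):
    bits = (planes @ descriptor > 0).astype(np.uint8)
    return bits.tobytes()                         # hashable bucket key

def index_view(descriptor, object_id, pose_id):
    table.setdefault(code(descriptor), []).append((object_id, pose_id))

def lookup(query_descriptor):
    return table.get(code(query_descriptor), [])  # candidates for later verification

# Toy usage: index one templated view, then retrieve it with a single lookup.
d = rng.standard_normal(128)
index_view(d, object_id=3, pose_id=42)
print(lookup(d))                                  # -> [(3, 42)]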