Object Recognition in 3D Scenes with Occlusions and Clutter by Hough Voting
In this work we propose a novel Hough voting approach for the detection of free-form shapes in 3D space, intended for object recognition in 3D scenes with a significant degree of occlusion and clutter. The proposed method matches 3D features and accumulates evidence of the presence of the sought objects in a 3D Hough space. We validate our proposal through a quantitative experimental comparison with state-of-the-art methods, and by showing how our method enables 3D object recognition from real-time stereo data. Keywords: Hough voting; 3D object recognition; surface matching.
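To make the voting scheme concrete, here is a minimal Python sketch of the accumulation step: each feature correspondence casts a vote for the object centroid in a quantized 3D Hough space, and the densest cell yields the detection hypothesis. The input format and names are hypothetical, and the rotation of model-centroid offsets into the scene frame (the role the paper's local reference frames play) is assumed to have been done upstream.

```python
import numpy as np

def hough_vote_3d(correspondences, bin_size=0.01):
    """Accumulate object-centroid votes in a quantized 3D Hough space.

    correspondences: iterable of (scene_point, centroid_offset) pairs, where
    centroid_offset is the model-centroid vector already rotated into the
    scene frame of the matched feature (hypothetical input format).
    """
    accumulator = {}
    for scene_point, offset in correspondences:
        vote = scene_point + offset                      # hypothesized centroid
        cell = tuple(np.floor(vote / bin_size).astype(int))
        accumulator[cell] = accumulator.get(cell, 0) + 1
    # The cell with the most votes is the strongest object hypothesis.
    best_cell, votes = max(accumulator.items(), key=lambda kv: kv[1])
    return (np.array(best_cell) + 0.5) * bin_size, votes
```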
From on-line sketching to 2D and 3D geometry: A fuzzy knowledge based system
The paper describes the development of a fuzzy knowledge-based prototype system for conceptual design. This real-time system is designed to infer the user's sketching intentions, segment the sketched input, and generate the corresponding geometric primitives: straight lines, circles, arcs, ellipses, elliptical arcs, and B-spline curves. Topology information (connectivity, unitary constraints and pairwise constraints) is derived dynamically from the 2D sketched input and primitives. From this 2D topology information, a more accurate 2D geometry can be built by applying a 2D geometric constraint solver. Subsequently, 3D geometry is constructed incrementally, feature by feature. Each feature is recognised by inference knowledge, by matching its configuration of 2D primitives and their connection relationships. The system accepts not only sketched input, working as an automatic design tool, but also the user's interactive input of both 2D primitives and specially positioned 3D primitives, which makes it easy and friendly to use. The system has been tested on a number of sketched inputs of 2D and 3D geometry.
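As a toy illustration of the segment-and-fit step (a crude geometric stand-in, not the authors' fuzzy inference), one can label a stroke segment by comparing the least-squares fit residuals of candidate primitives; all names below are hypothetical.

```python
import numpy as np

def line_residual(pts):
    """RMS orthogonal residual of a total-least-squares line fit."""
    c = pts.mean(axis=0)
    d = np.linalg.svd(pts - c)[2][0]          # dominant direction
    resid = (pts - c) - np.outer((pts - c) @ d, d)
    return np.sqrt((resid ** 2).sum(axis=1).mean())

def circle_residual(pts):
    """RMS radial residual of an algebraic (Kasa) circle fit."""
    A = np.column_stack([2 * pts, np.ones(len(pts))])
    b = (pts ** 2).sum(axis=1)
    (cx, cy, c0), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c0 + cx ** 2 + cy ** 2)
    return np.sqrt(((np.linalg.norm(pts - [cx, cy], axis=1) - r) ** 2).mean())

def classify_segment(pts):
    """Label a 2D stroke segment with the better-fitting primitive (toy rule)."""
    return "line" if line_residual(pts) <= circle_residual(pts) else "circle"
```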
Point Pair Feature based Object Detection for Random Bin Picking
Point pair features are a popular representation for free-form 3D object detection and pose estimation. In this paper, their performance in an industrial random bin picking context is investigated. A new method to generate representative synthetic datasets is proposed, which makes it possible to investigate the influence of a high degree of clutter and of self-similar features, both typical of our application. We provide an overview of solutions proposed in the literature and discuss their strengths and weaknesses. A simple heuristic that drastically reduces the computational complexity is introduced, resulting in improved robustness, speed and accuracy compared to the naive approach.
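For reference, the point pair feature itself (as defined by Drost et al., CVPR 2010, on which this line of work builds) pairs the distance between two oriented points with three angles. A minimal sketch, assuming unit normals; the quantization step sizes are illustrative, not taken from the paper:

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """F(m1, m2) = (||d||, angle(n1, d), angle(n2, d), angle(n1, n2)),
    the classic four-dimensional point pair feature. Normals are unit length."""
    d = p2 - p1
    dist = np.linalg.norm(d)
    d_hat = d / dist
    angle = lambda a, b: np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    return dist, angle(n1, d_hat), angle(n2, d_hat), angle(n1, n2)

def quantize(f, d_step=0.005, a_step=np.radians(12)):
    """Discretize a feature so similar pairs hash to the same model bucket."""
    d, a1, a2, a3 = f
    return (int(d / d_step), int(a1 / a_step), int(a2 / a_step), int(a3 / a_step))
```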
Teaching Compositionality to CNNs
Convolutional neural networks (CNNs) have shown great success in computer vision, approaching human-level performance when trained for specific tasks via application-specific loss functions. In this paper, we propose a method for augmenting and training CNNs so that their learned features are compositional. It encourages networks to form representations that disentangle objects from their surroundings and from each other, thereby promoting better generalization. Our method is agnostic to the specific details of the underlying CNN to which it is applied and can in principle be used with any CNN. As we show in our experiments, the learned representations lead to feature activations that are more localized and improve performance over non-compositional baselines in object recognition tasks.
Comment: Preprint appearing in CVPR 2017
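One plausible form of such a compositionality objective (a hedged sketch, not necessarily the paper's exact loss) penalizes the gap between the features of a full scene and the sum of the features of its objects shown in isolation; feat_extractor, scene and object_masks are hypothetical names:

```python
import torch

def compositional_penalty(feat_extractor, scene, object_masks):
    """Push scene features toward the sum of per-object features.

    scene: (B, C, H, W) image batch; object_masks: list of (B, 1, H, W)
    binary masks, one per object (hypothetical shapes).
    """
    f_scene = feat_extractor(scene)
    f_parts = sum(feat_extractor(scene * m) for m in object_masks)
    return ((f_scene - f_parts) ** 2).mean()
```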
DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image
3D reconstruction from a single image is a key problem in multiple applications ranging from robotic manipulation to augmented reality. Prior methods have tackled this problem with generative models that predict 3D reconstructions as voxels or point clouds. However, these methods can be computationally expensive and miss fine details. We introduce a new differentiable layer for 3D data deformation and use it in DeformNet to learn a model for 3D reconstruction-through-deformation. DeformNet takes an image as input, retrieves the nearest shape template from a database, and deforms the template to match the query image. We evaluate our approach on the ShapeNet dataset and show that (a) the Free-Form Deformation layer is a powerful new building block for deep learning models that manipulate 3D data, (b) DeformNet uses this FFD layer combined with shape retrieval for smooth, detail-preserving 3D reconstruction of qualitatively plausible point clouds from a single query image, and (c) compared to other state-of-the-art 3D reconstruction methods, DeformNet matches or outperforms their benchmarks by significant margins. For more information, visit: https://deformnet-site.github.io/DeformNet-website/
Comment: 11 pages, 9 figures, NIPS
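The FFD operation the abstract builds on is classical (Sederberg and Parry): points given in normalized lattice coordinates are blended from a control-point grid with Bernstein weights, so deformation reduces to moving control points. A minimal NumPy sketch of the forward pass follows; the paper wraps this operation in a differentiable layer, and the shapes and names here are assumptions:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n}(t)."""
    return comb(n, i) * (t ** i) * ((1 - t) ** (n - i))

def ffd(points, control):
    """Deform points given in normalized lattice coordinates [0,1]^3.

    control: (l+1, m+1, n+1, 3) grid of control point positions.
    """
    l, m, n = (s - 1 for s in control.shape[:3])
    out = np.zeros_like(points)
    for p, (s, t, u) in enumerate(points):
        acc = np.zeros(3)
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                    acc += w * control[i, j, k]
        out[p] = acc
    return out
```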
Compact Model Representation for 3D Reconstruction
3D reconstruction from 2D images is a central problem in computer vision. Recent works have focused on reconstruction directly from a single image. It is well known, however, that a single image cannot provide enough information for such a reconstruction. One source of prior knowledge that has been entertained is 3D CAD models, owing to their online ubiquity. A fundamental question is how to compactly represent millions of CAD models while allowing generalization to new, unseen objects with fine-scaled geometry. We introduce an approach to compactly represent a 3D mesh. Our method first selects a 3D model from a graph structure by using a novel free-form deformation (FFD) 3D-2D registration, and the selected 3D model is then refined to best fit the image silhouette. We perform a comprehensive quantitative and qualitative analysis demonstrating impressive dense and realistic 3D reconstruction from single images.
Comment: 9 pages, 6 figures
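The silhouette-refinement step can be pictured as minimizing a 2D Chamfer distance between the projected model contour and the image silhouette. A brute-force sketch of that measure under this reading; the projection and the optimizer over FFD control points are omitted, and all names are hypothetical:

```python
import numpy as np

def chamfer_2d(proj_contour, silhouette):
    """Symmetric Chamfer distance between two 2D point sets, each (N, 2).
    A refinement loop would move FFD control points to drive this down."""
    d = np.linalg.norm(proj_contour[:, None, :] - silhouette[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```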