    CAD-based 3-D object recognition

    We propose an approach to 3-D object recognition using CAD-based geometry models for free-form surfaces. Geometry is modeled with rational B-splines by defining surface patches and then combining these into a volumetric model of the object. Characteristic features are then extracted from this model and subjected to a battery of tests to select an "optimal" subset of surface features which are robust with respect to the sensor being used (e.g. laser range finder versus passive stereo) and permit recognition of the object from any viewing position. These features are then organized into a "strategy tree" which defines the order in which the features are sought, and any corroboration required to justify issuing a hypothesis. We propose the use of geometric sensor data integration techniques as a means for formally selecting surface features on free-form objects in order to build recognition strategies. Previous work has dealt with polyhedra and generalized cylinders, whereas here we propose to apply the method to more general surfaces.
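
    The "strategy tree" this abstract refers to can be pictured as a tree of feature tests with corroboration thresholds. The following Python sketch only illustrates that idea under assumed names (FeatureNode, seek, detect); it is not the authors' implementation.

    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class FeatureNode:
        feature: str                        # surface feature sought at this node
        corroborate: int = 0                # child features that must also be found
        children: List["FeatureNode"] = field(default_factory=list)

    def seek(node: FeatureNode, detect: Callable[[str], bool]):
        """Depth-first traversal: `detect(feature)` stands in for matching a model
        feature against sensor data; a hypothesis (here, the list of supporting
        features) is issued only when enough children corroborate the node."""
        if not detect(node.feature):
            return None
        confirmed = [c.feature for c in node.children if seek(c, detect)]
        if len(confirmed) >= node.corroborate:
            return [node.feature] + confirmed
        return None

    # Toy example: a primary patch that needs two of its three sub-features confirmed.
    tree = FeatureNode("concave_patch_A", corroborate=2, children=[
        FeatureNode("edge_arc_1"), FeatureNode("planar_patch_2"), FeatureNode("ridge_3")])
    print(seek(tree, detect=lambda f: f != "ridge_3"))
    # -> ['concave_patch_A', 'edge_arc_1', 'planar_patch_2']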

    Model-Based Three-Dimensional Object Recognition and Localization Using Properties of Surface Curvatures.

    The ability to recognize three-dimensional (3-D) objects accurately from range images is a fundamental goal of vision in robotics. This capability is important in automated manufacturing environments in industry. In contrast to the extensive work done in computer-aided design and manufacturing (CAD/CAM), the robotic process is primitive and ad hoc. This thesis defines and investigates a fundamental problem in robot vision systems: recognizing and localizing multiple free-form 3-D objects in range images. An effective and efficient approach is developed and implemented as a system, Free-form Object Recognition and Localization (FORL). The technique used for surface characterization is surface curvature derived from geometric models of objects; in conjunction with a knowledge representation scheme, it uniquely defines surface shapes and is used in the search for corresponding surfaces of an object. Model representation has a significant effect on model-based recognition; without using surface properties, many important industrial vision tasks would remain beyond the competence of machine vision. Knowledge about model surface shapes is automatically abstracted from CAD models, and the CAD models are also used directly in the vision process. The knowledge representation scheme eases acquisition, retrieval, modification and reasoning, so that the recognition and localization process is effective and efficient. Our approach is to recognize objects by hypothesizing and then locating them: knowledge about object surface shapes is used to infer the hypotheses, and the CAD models are used to locate the objects. Localization therefore becomes a by-product of the recognition process, which is significant since localization of an object is necessary in robotic applications. One of the most important problems in 3-D machine vision is the recognition of objects from partial views due to occlusion. Our approach is surface-based and thus sensitive to neither noise nor occlusion; for the same reason, it also makes multiple-object recognition easier. Our approach uses appropriate strategies for recognition and localization of 3-D solids based on information from the CAD database, which makes the integration of robot vision systems with CAD/CAM systems a promising future direction.
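
    The surface-curvature characterization described above can be illustrated with a minimal sketch: fit a local quadratic patch z = ax^2 + bxy + cy^2 + dx + ey + f around each range-image pixel and evaluate the Monge-patch curvature formulas. This is a generic illustration under assumed choices (NumPy least squares, a fixed window size), not the FORL system itself.

    import numpy as np

    def local_curvatures(z, win=5):
        """Estimate Gaussian (K) and mean (H) curvature maps from a range image z
        by least-squares fitting a quadratic patch in each (win x win) window."""
        r = win // 2
        rows, cols = z.shape
        K = np.full(z.shape, np.nan)
        H = np.full(z.shape, np.nan)
        ys, xs = np.mgrid[-r:r + 1, -r:r + 1]            # window coordinates, reused at every pixel
        A = np.column_stack([xs.ravel()**2, (xs * ys).ravel(), ys.ravel()**2,
                             xs.ravel(), ys.ravel(), np.ones(win * win)])
        for i in range(r, rows - r):
            for j in range(r, cols - r):
                w = z[i - r:i + r + 1, j - r:j + r + 1].ravel()
                a, b, c, d, e, _ = np.linalg.lstsq(A, w, rcond=None)[0]
                fx, fy = d, e                            # first derivatives at the window center
                fxx, fxy, fyy = 2 * a, b, 2 * c          # second derivatives at the window center
                g = 1 + fx**2 + fy**2
                K[i, j] = (fxx * fyy - fxy**2) / g**2
                H[i, j] = ((1 + fx**2) * fyy - 2 * fx * fy * fxy + (1 + fy**2) * fxx) / (2 * g**1.5)
        return K, H

    The signs of K and H classify each point into one of the standard local surface types (peak, pit, ridge, valley, saddle, flat), which is one common way of turning curvature into the kind of surface-shape knowledge the abstract describes.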

    Deep Exemplar 2D-3D Detection by Adapting from Real to Rendered Views

    This paper presents an end-to-end convolutional neural network (CNN) for 2D-3D exemplar detection. We demonstrate that the ability to adapt the features of natural images to better align with those of CAD-rendered views is critical to the success of our technique. We show that the adaptation can be learned by compositing rendered views of textured object models onto natural images. Our approach can be naturally incorporated into a CNN detection pipeline and extends the accuracy and speed benefits from recent advances in deep learning to 2D-3D exemplar detection. We applied our method to two tasks: instance detection, where we evaluated on the IKEA dataset, and object category detection, where we outperform Aubry et al. for "chair" detection on a subset of the Pascal VOC dataset.
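
    The compositing step that the abstract says drives the adaptation can be sketched as follows. This is a generic stand-in, not the authors' pipeline; the alpha-masked rendering, image size and random placement policy are assumptions for illustration.

    import random
    from PIL import Image

    def composite(render_path, background_path, out_size=(224, 224)):
        """Paste an RGBA CAD rendering onto a natural-image background so the
        synthetic positive example inherits natural-image statistics."""
        render = Image.open(render_path).convert("RGBA")           # rendered view with alpha mask
        bg = Image.open(background_path).convert("RGB").resize(out_size)
        scale = random.uniform(0.5, 0.9)                           # crude stand-in for a sampling policy
        w, h = int(out_size[0] * scale), int(out_size[1] * scale)
        render = render.resize((w, h))
        x = random.randint(0, out_size[0] - w)
        y = random.randint(0, out_size[1] - h)
        bg.paste(render, (x, y), mask=render.split()[-1])          # alpha channel as paste mask
        return bg                                                  # positive training example for the detector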

    Automated retrieval of 3D CAD model objects in construction range images

    3D ShapeNets: A Deep Representation for Volumetric Shapes

    3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvement over the state of the art in a variety of tasks.
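
    The binary-voxel representation the abstract builds on can be sketched independently of the Convolutional Deep Belief Network itself: sample points from a CAD model's surface and mark the occupied cells of a fixed grid. Grid size, padding and the point-sampling input are illustrative choices here, not the paper's exact settings.

    import numpy as np

    def voxelize(points, grid=30, pad=1):
        """Map surface-sampled points of a CAD model to a binary occupancy grid."""
        pts = np.asarray(points, dtype=float)
        lo, hi = pts.min(axis=0), pts.max(axis=0)
        scale = (grid - 2 * pad - 1) / (hi - lo).max()             # fit the longest side inside the grid
        idx = np.floor((pts - lo) * scale).astype(int) + pad       # point -> voxel index
        vox = np.zeros((grid, grid, grid), dtype=np.uint8)
        vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1                   # mark occupied cells
        return vox

    # Example: 10,000 surface samples of a unit sphere become a 30x30x30 binary grid.
    p = np.random.randn(10000, 3)
    p /= np.linalg.norm(p, axis=1, keepdims=True)
    print(voxelize(p).shape)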