
    A 3D CAD assembly benchmark

    Evaluating the effectiveness of systems for the retrieval of 3D assembly models is not trivial. CAD assembly models can be considered similar according to different criteria and at different levels (i.e. globally or partially). Indeed, besides the shape criterion, CAD assembly models have further characteristic elements, such as the mutual position of parts or the type of connecting joint. Thus, when retrieving 3D models, these characteristics can match in the entire model (globally) or just in local subparts (partially). The available 3D model repositories do not include complex CAD assembly models and are generally suitable for evaluating only one characteristic at a time, neglecting properties that are important in the evaluation of assembly similarity. In this paper, we present a benchmark for the evaluation of content-based retrieval systems for 3D assembly models. A crucial feature of this benchmark is its ability to consider the various aspects characterizing the models of mechanical assemblies.
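
    To make the evaluation setting concrete, the sketch below shows one way retrieval runs could be scored against per-criterion relevance judgments. It is a minimal illustration only: the benchmark's actual file format, criterion names, and official metrics are not given in the abstract, so the query/relevance structure here is hypothetical.

    from typing import Dict, List, Set

    def precision_at_k(ranked: List[str], relevant: Set[str], k: int) -> float:
        # Fraction of the top-k retrieved models that are relevant.
        return sum(1 for m in ranked[:k] if m in relevant) / k

    def evaluate(run: Dict[str, List[str]],
                 ground_truth: Dict[str, Dict[str, Set[str]]],
                 k: int = 10) -> Dict[str, float]:
        # Average precision@k per similarity criterion (hypothetical names,
        # e.g. 'shape', 'joint_type', 'part_position'), each of which could
        # be judged at the global or the partial level.
        scores: Dict[str, List[float]] = {}
        for query, ranked in run.items():
            for criterion, relevant in ground_truth[query].items():
                scores.setdefault(criterion, []).append(
                    precision_at_k(ranked, relevant, k))
        return {c: sum(v) / len(v) for c, v in scores.items()}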

    T-LESS: An RGB-D Dataset for 6D Pose Estimation of Texture-less Objects

    We introduce T-LESS, a new public dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes of varying complexity, increasing from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene and are annotated with accurate ground-truth 6D poses of all modeled objects. Initial evaluation results indicate that the state of the art in 6D object pose estimation has ample room for improvement, especially in difficult cases with significant occlusion. The T-LESS dataset is available online at cmp.felk.cvut.cz/t-less. Comment: WACV 2017
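
    As a quick illustration of what the ground-truth 6D annotations support, the sketch below computes a generic translation error and the geodesic rotation error between an estimated and a ground-truth pose. This is not the dataset's official evaluation protocol; in particular, symmetric objects such as those in T-LESS are usually scored with symmetry-aware or ambiguity-invariant metrics rather than raw pose differences.

    import numpy as np

    def pose_error(R_est: np.ndarray, t_est: np.ndarray,
                   R_gt: np.ndarray, t_gt: np.ndarray):
        # Translation error in the units of t (e.g. millimeters).
        t_err = float(np.linalg.norm(t_est - t_gt))
        # Geodesic distance between the two rotation matrices:
        # angle = arccos((trace(R_est^T R_gt) - 1) / 2), clipped for stability.
        cos_angle = (np.trace(R_est.T @ R_gt) - 1.0) / 2.0
        r_err_deg = float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))
        return t_err, r_err_deg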

    3D Printed Soft Robotic Hand

    Soft robotics is an emerging industry, largely dominated by companies that hand-mold their actuators. Our team set out to design an entirely 3D printed soft robotic hand, powered by a pneumatic control system, to demonstrate the capabilities of both soft robots and 3D printing. Through research, computer-aided design, finite element analysis, and experimental testing, a functioning actuator was created, capable of a deflection of 2.17” at a maximum pressure input of 15 psi. The single actuator was expanded into a four-finger gripper, and the design was printed and assembled. The resulting prototype was ultimately able to lift both a 100-gram apple and a 4-gram pill, proving its functionality in two prominent industries: pharmaceutical and food packing.

    3D ShapeNets: A Deep Representation for Volumetric Shapes

    3D shape is a crucial but heavily underutilized cue in today's computer vision systems, mostly due to the lack of a good generic shape representation. With the recent availability of inexpensive 2.5D depth sensors (e.g. Microsoft Kinect), it is becoming increasingly important to have a powerful 3D shape representation in the loop. Apart from category recognition, recovering full 3D shapes from view-based 2.5D depth maps is also a critical part of visual understanding. To this end, we propose to represent a geometric 3D shape as a probability distribution of binary variables on a 3D voxel grid, using a Convolutional Deep Belief Network. Our model, 3D ShapeNets, learns the distribution of complex 3D shapes across different object categories and arbitrary poses from raw CAD data, and discovers hierarchical compositional part representations automatically. It naturally supports joint object recognition and shape completion from 2.5D depth maps, and it enables active object recognition through view planning. To train our 3D deep learning model, we construct ModelNet -- a large-scale 3D CAD model dataset. Extensive experiments show that our 3D deep representation enables significant performance improvements over the state of the art in a variety of tasks. Comment: to appear in CVPR 2015
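
    The binary-voxel input that 3D ShapeNets operates on can be illustrated with a minimal occupancy-grid sketch: points sampled from a CAD mesh surface are quantized onto a binary grid (a 30x30x30 resolution is commonly associated with ModelNet experiments). The normalization and sampling scheme below are simplifications for illustration, not the authors' actual preprocessing pipeline.

    import numpy as np

    def voxelize(points: np.ndarray, resolution: int = 30) -> np.ndarray:
        # Normalize the (N, 3) point cloud into the unit cube, preserving
        # the aspect ratio by scaling all axes with the largest extent.
        mins = points.min(axis=0)
        extent = float((points.max(axis=0) - mins).max()) + 1e-9
        normalized = (points - mins) / extent
        # Quantize to voxel indices and mark occupied cells in a binary grid.
        idx = np.clip((normalized * resolution).astype(int), 0, resolution - 1)
        grid = np.zeros((resolution, resolution, resolution), dtype=np.uint8)
        grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1
        return grid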