17 research outputs found

    PVDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D

    We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data autoencoders. To demonstrate its efficiency, we learn to synthesize high-resolution point clouds of 10k points that densely describe the underlying geometry of Computer-Aided Design (CAD) models. Scanning artifacts, such as protrusions, missing parts, smoothed edges and holes, inevitably appear in real 3D scans of fabricated CAD objects. Learning the original CAD model construction from a 3D scan requires a ground truth to be available together with the corresponding 3D scan of an object. To bridge this gap, we introduce a new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their corresponding 3D meshes. This dataset is used to learn a convolutional autoencoder for point clouds sampled from the 3D scan-CAD model pairs. The challenges of this new dataset are demonstrated in comparison with other generative point cloud sampling models trained on ShapeNet. The CC3D autoencoder is efficient with respect to memory consumption and training time compared to state-of-the-art models for 3D data generation. Comment: 2020 IEEE International Conference on Image Processing (ICIP)
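    As a rough illustration of the point-voxel idea, the sketch below combines a coarse voxel branch (a 3D transposed convolution over a feature grid scattered from the points) with a fine per-point branch (a shared MLP), fusing them by sampling the upsampled grid back at the point locations. This is a minimal PyTorch sketch in the spirit of point-voxel architectures; the layer sizes, grid resolution and additive fusion are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PointVoxelDeconv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, grid: int = 16):
        super().__init__()
        self.grid = grid
        # Voxel branch: upsample a coarse feature grid (16^3 -> 32^3 here).
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(in_ch, out_ch, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Point branch: per-point shared MLP (1x1 conv over the point axis).
        self.mlp = nn.Sequential(
            nn.Conv1d(in_ch, out_ch, kernel_size=1),
            nn.BatchNorm1d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, feats: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, N) per-point features; coords: (B, N, 3) in [-1, 1].
        B, C, N = feats.shape
        g = self.grid
        # Scatter point features onto a coarse voxel grid (mean pooling).
        idx = ((coords + 1) * 0.5 * (g - 1)).round().long().clamp(0, g - 1)
        flat = idx[..., 0] * g * g + idx[..., 1] * g + idx[..., 2]   # (B, N)
        vox = torch.zeros(B, C, g ** 3, device=feats.device)
        cnt = torch.zeros(B, 1, g ** 3, device=feats.device)
        vox.scatter_add_(2, flat.unsqueeze(1).expand(-1, C, -1), feats)
        cnt.scatter_add_(2, flat.unsqueeze(1), torch.ones_like(feats[:, :1]))
        vox = (vox / cnt.clamp(min=1)).view(B, C, g, g, g)
        # Upsample with the deconvolution, then sample back at point locations.
        up = self.deconv(vox)
        # grid_sample expects (x, y, z) ordering, hence the flip.
        grid_pts = coords.flip(-1).view(B, 1, 1, N, 3)
        voxel_feats = F.grid_sample(up, grid_pts, align_corners=True).view(B, -1, N)
        # Fuse coarse voxel context with fine per-point features.
        return voxel_feats + self.mlp(feats)

block = PointVoxelDeconv(in_ch=64, out_ch=128)
out = block(torch.randn(2, 64, 1024), torch.rand(2, 1024, 3) * 2 - 1)  # (2, 128, 1024)
```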

    SepicNet: Sharp Edges Recovery by Parametric Inference of Curves in 3D Shapes

    3D scanning, as a technique to digitize real-world objects and create their 3D models, is used in many fields and areas. Though the quality of a 3D scan depends on the technical characteristics of the 3D scanner, a common drawback is the smoothing of fine details and of the edges of an object. We introduce SepicNet, a novel deep network for the detection and parametrization of sharp edges in 3D shapes as primitive curves. To make the network end-to-end trainable, we formulate the curve fitting in a differentiable manner. We develop an adaptive point cloud sampling technique that captures sharp features better than uniform sampling. The experiments were conducted on a newly introduced large-scale dataset of 50k 3D scans, where the sharp edge annotations were extracted from their parametric CAD models, and demonstrate significant improvement over state-of-the-art methods.
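    To give a concrete sense of what differentiable curve fitting means here, the sketch below fits a line segment to a set of predicted edge points with a closed-form SVD solve; because torch.linalg.svd is differentiable, the fitting residual can back-propagate into whatever network produced the points. The paper's parametrization also covers other primitive curves (e.g. circles and splines); this line-only version is an illustrative assumption.

```python
import torch

def fit_line_segment(points: torch.Tensor):
    """points: (N, 3) 3D points predicted to lie on one sharp edge."""
    centroid = points.mean(dim=0)
    centered = points - centroid
    # Principal direction via SVD; differentiable w.r.t. the input points.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    direction = vh[0]                      # unit vector of the best-fit line
    t = centered @ direction               # 1D parameter of each projection
    endpoints = centroid + torch.stack([t.min(), t.max()])[:, None] * direction
    residual = (centered - t[:, None] * direction).norm(dim=1).mean()
    return endpoints, residual

points = torch.randn(64, 3, requires_grad=True)
endpoints, residual = fit_line_segment(points)
residual.backward()                        # gradients flow back to the points
```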

    Towards Automatic Human Body Model Fitting to a 3D Scan

    This paper presents a method to automatically recover a realistic and accurate body shape of a person wearing clothing from a 3D scan. Indeed, in many practical situations, people are scanned wearing clothing. The underlying body shape is thus partially or completely occluded. Yet, it is very desirable to recover the shape of a covered body as it provides a non-invasive means of measuring and analysing it. This is particularly convenient for patients in medical applications, customers in a retail shop, as well as in security applications where suspicious objects under clothing are to be detected. To recover the body shape from the 3D scan of a person in any pose, a human body model is usually fitted to the scan. Current methods rely on the manual placement of markers on the body to identify anatomical locations and guide the pose fitting. The markers are either physically placed on the body before scanning or placed in software as a post-processing step. Some other methods detect key points on the scan using 3D feature descriptors to automate the placement of markers; they usually require a large database of 3D scans. We propose to automatically estimate the body pose of a person from a 3D mesh acquired by standard 3D body scanners, with or without texture. To fit a human model to the scan, we use joint locations as anchors. These are detected from multiple 2D views using a conventional body joint detector working on images. In contrast to existing approaches, the proposed method is fully automatic and takes advantage of the robustness of state-of-the-art 2D joint detectors. The proposed approach is validated on scans of people in different poses wearing garments of various thicknesses, and on scans of one person wearing close-fitting clothing in multiple poses with known ground truth.
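    The core geometric step, lifting 2D joint detections from several views to a single 3D anchor, can be illustrated with standard linear triangulation (DLT). The sketch below assumes known 3x4 projection matrices for the views; it is a generic reconstruction routine, not the paper's exact pipeline.

```python
import numpy as np

def triangulate(P_list, uv_list):
    """P_list: list of 3x4 projection matrices; uv_list: matching 2D detections."""
    rows = []
    for P, (u, v) in zip(P_list, uv_list):
        # Each view contributes two linear constraints on the homogeneous X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector of the smallest singular value.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]   # back to Euclidean coordinates
```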

    BODYFITR: Robust Automatic 3D Human Body Fitting

    This paper proposes BODYFITR, a fully automatic method to fit a human body model to static 3D scans with complex poses. Automatic and reliable 3D human body fitting is necessary for many applications related to healthcare, digital ergonomics, avatar creation and security, especially in industrial contexts for large-scale product design. Existing works either make prior assumptions on the pose, require manual annotation of the data or have difficulty handling complex poses. This work addresses these limitations by providing a novel automatic fitting pipeline with carefully integrated building blocks designed for a systematic and robust approach. It is validated on the 3DBodyTex dataset, with hundreds of high-quality 3D body scans, and shown to outperform prior works in static body pose and shape estimation, both qualitatively and quantitatively. The method is also applied to the creation of realistic 3D avatars from the high-quality texture scans of 3DBodyTex, further demonstrating its capabilities.

    CADOps-Net: Jointly Learning CAD Operation Types and Steps from Boundary-Representations

    3D reverse engineering is a long sought-after, yet not completely achieved, goal in the Computer-Aided Design (CAD) industry. The objective is to recover the construction history of a CAD model. Starting from a Boundary Representation (B-Rep) of a CAD model, this paper proposes a new deep neural network, CADOps-Net, that jointly learns the CAD operation types and the decomposition into different CAD operation steps. This joint learning makes it possible to divide a B-Rep into parts that were created by various types of CAD operations at the same construction step, thereby providing relevant information for further recovery of the design history. Furthermore, we propose the novel CC3D-Ops dataset that includes over 37k CAD models annotated with CAD operation type labels and step labels. Compared to existing datasets, the complexity and variety of CC3D-Ops models are closer to those used for industrial purposes. Our experiments, conducted on the proposed CC3D-Ops and the publicly available Fusion360 datasets, demonstrate the competitive performance of CADOps-Net with respect to the state of the art, and confirm the importance of the joint learning of CAD operation types and steps.
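    The joint-prediction structure can be pictured as one shared encoder over per-face B-Rep features with two classification heads: one for the operation type of each face, one for its construction step. The PyTorch sketch below is a deliberately simplified stand-in; the feature dimension, head sizes and fixed maximum step count are illustrative assumptions, not the CADOps-Net architecture.

```python
import torch
import torch.nn as nn

class JointOpHead(nn.Module):
    def __init__(self, feat_dim=128, num_op_types=8, max_steps=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(feat_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        self.op_head = nn.Linear(256, num_op_types)   # per-face operation type
        self.step_head = nn.Linear(256, max_steps)    # per-face step assignment

    def forward(self, face_feats):
        h = self.encoder(face_feats)                  # (num_faces, 256)
        return self.op_head(h), self.step_head(h)

# Training would combine two cross-entropy losses, one per head.
model = JointOpHead()
op_logits, step_logits = model(torch.randn(100, 128))
```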

    3DBodyTex: Textured 3D Body Dataset

    In this paper, a dataset, named 3DBodyTex, of static 3D body scans with high-quality texture information is presented, along with a fully automatic method for fitting a body model to a 3D scan. 3D shape modelling is a fundamental area of computer vision with a wide range of applications in industry. It is becoming even more important as 3D sensing technologies enter consumer devices such as smartphones. As the main output of these sensors is the 3D shape, many methods rely on this information alone. The 3D shape information is, however, very high-dimensional and leads to models that must handle many degrees of freedom from limited information. Coupling texture and 3D shape alleviates this burden, as the texture of 3D objects is complementary to their shape. Unfortunately, high-quality texture content is lacking from commonly available datasets, and in particular from datasets of 3D body scans. The proposed 3DBodyTex dataset aims to fill this gap with hundreds of high-quality 3D body scans with high-resolution texture. Moreover, a novel fully automatic pipeline to fit a body model to a 3D scan is proposed. It includes a robust 3D landmark estimator that takes advantage of the high-resolution texture of 3DBodyTex. The pipeline is applied to the scans, and the results are reported and discussed, showcasing the diversity of the features in the dataset.
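    One simple way to exploit texture for 3D landmarking, sketched below, is to run a 2D detector on a textured rendering of the scan and look the detection up in a per-pixel 3D position map produced by the same renderer. This texture-to-surface lifting pattern is an assumption for illustration, not necessarily the paper's estimator.

```python
import numpy as np

def lift_landmark(uv, position_map):
    """uv: (u, v) pixel of a detected 2D landmark; position_map: (H, W, 3) XYZ image."""
    u, v = int(round(uv[0])), int(round(uv[1]))
    xyz = position_map[v, u]
    # NaN marks background pixels where no surface was rendered.
    return None if np.any(np.isnan(xyz)) else xyz
```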

    SHARP Challenge 2023: Solving CAD History and pArameters Recovery from Point clouds and 3D scans. Overview, Datasets, Metrics, and Baselines.

    Recent breakthroughs in geometric Deep Learning (DL) and the availability of large Computer-Aided Design (CAD) datasets have advanced the research on learning CAD modeling processes and relating them to real objects. In this context, 3D reverse engineering of CAD models from 3D scans is considered to be one of the most sought-after goals for the CAD industry. However, recent efforts assume multiple simplifications limiting the applications in real-world settings. The SHARP Challenge 2023 aims at pushing the research a step closer to the real-world scenario of CAD reverse engineering from 3D scans through dedicated datasets and tracks. In this paper, we define the proposed SHARP 2023 tracks, describe the provided datasets, and propose a set of baseline methods along with suitable evaluation metrics to assess the performance of the track solutions. All proposed datasets, along with useful routines and the evaluation metrics, are publicly available.
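    The official metrics are defined in the paper; as a generic illustration of how such track solutions are typically scored against reference geometry, the sketch below computes a symmetric Chamfer distance between two sampled point sets. Treat it as a stand-in, not the challenge's exact metric.

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """a, b: (N, 3) and (M, 3) point sets sampled from the two surfaces."""
    d_ab, _ = cKDTree(b).query(a)   # nearest-neighbour distances a -> b
    d_ba, _ = cKDTree(a).query(b)   # nearest-neighbour distances b -> a
    return d_ab.mean() + d_ba.mean()
```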

    Disentangled Face Identity Representations for joint 3D Face Recognition and Expression Neutralisation

    In this paper, we propose a new deep learning-based approach for disentangling face identity representations from expressive 3D faces. Given a 3D face, our approach not only extracts a disentangled identity representation but also generates a realistic 3D face with a neutral expression while predicting its identity. The proposed network consists of three components: (1) a Graph Convolutional Autoencoder (GCA) to encode the 3D faces into latent representations, (2) a Generative Adversarial Network (GAN) that translates the latent representations of expressive faces into those of neutral faces, and (3) an identity recognition sub-network taking advantage of the neutralised latent representations for 3D face recognition. The whole network is trained in an end-to-end manner. Experiments are conducted on three publicly available datasets, showing the effectiveness of the proposed approach.
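    The composition of the three components can be sketched as: encode an expressive face to a latent code, translate the code toward its neutral counterpart, then decode neutral geometry and classify identity from the neutralised code. The sketch below uses plain MLPs as stand-ins for the graph-convolutional encoder/decoder and omits the adversarial training; all sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

latent_dim, num_ids, num_verts = 128, 100, 5023   # illustrative sizes

encoder = nn.Sequential(nn.Linear(num_verts * 3, 512), nn.ReLU(),
                        nn.Linear(512, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                        nn.Linear(512, num_verts * 3))
translator = nn.Sequential(nn.Linear(latent_dim, latent_dim), nn.ReLU(),
                           nn.Linear(latent_dim, latent_dim))  # expressive -> neutral
id_head = nn.Linear(latent_dim, num_ids)                       # identity logits

face = torch.randn(1, num_verts * 3)        # flattened expressive 3D face
z_neutral = translator(encoder(face))       # neutralised latent code
neutral_face = decoder(z_neutral)           # reconstructed neutral geometry
identity_logits = id_head(z_neutral)        # recognition from the neutral code
```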