PvDeConv: Point-Voxel Deconvolution for Autoencoding CAD Construction in 3D
We propose a Point-Voxel DeConvolution (PVDeConv) module for 3D data
autoencoders. To demonstrate its efficiency, we learn to synthesize
high-resolution point clouds of 10k points that densely describe the underlying
geometry of Computer Aided Design (CAD) models. Scanning artifacts, such as
protrusions, missing parts, smoothed edges and holes, inevitably appear in real
3D scans of fabricated CAD objects. Learning the original CAD model
construction from a 3D scan requires a ground truth to be available together
with the corresponding 3D scan of an object. To bridge this gap, we introduce a
new dedicated dataset, CC3D, containing 50k+ pairs of CAD models and their
corresponding 3D meshes. This dataset is used to learn a convolutional
autoencoder for point clouds sampled from the pairs of 3D scans and CAD models.
The challenges of this new dataset are demonstrated in comparison with other
generative point cloud sampling models trained on ShapeNet. The CC3D
autoencoder is efficient with respect to memory consumption and training time
as compared to state-of-the-art models for 3D data generation.
2020 IEEE International Conference on Image Processing (ICIP).
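For intuition, here is a minimal sketch of the point-voxel idea, assuming PyTorch; the class name, feature sizes, and tensor layout are hypothetical, and this is not the paper's actual PVDeConv module. A coarse voxel branch is upsampled with a 3D deconvolution, sampled back at the point locations, and fused with per-point features by a shared MLP.

import torch
import torch.nn as nn

class PointVoxelDeconvBlock(nn.Module):
    # Hypothetical block: a voxel branch for coarse structure and a
    # point branch for fine detail. Not the authors' implementation.
    def __init__(self, feat_dim=64):
        super().__init__()
        # Coarse branch: upsample the feature volume (G -> 2G per axis).
        self.deconv = nn.ConvTranspose3d(feat_dim, feat_dim,
                                         kernel_size=2, stride=2)
        # Fine branch: shared per-point MLP on the fused features.
        self.mlp = nn.Sequential(
            nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, points, point_feats, voxel_feats):
        # points: (B, N, 3) in [-1, 1]; point_feats: (B, N, C)
        # voxel_feats: (B, C, G, G, G) feature volume
        up = self.deconv(voxel_feats)                    # (B, C, 2G, 2G, 2G)
        # Trilinearly sample the upsampled volume at the point positions.
        grid = points.view(points.size(0), -1, 1, 1, 3)
        sampled = nn.functional.grid_sample(up, grid, align_corners=True)
        sampled = sampled.squeeze(-1).squeeze(-1).transpose(1, 2)  # (B, N, C)
        return self.mlp(torch.cat([point_feats, sampled], dim=-1))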
SepicNet: Sharp Edges Recovery by Parametric Inference of Curves in 3D Shapes
3D scanning, as a technique to digitize real-world objects and create their 3D
models, is used in many fields. Although the quality of a 3D scan depends on the
technical characteristics of the 3D scanner, a common drawback is the smoothing
of fine details, such as the edges of an object. We introduce
SepicNet, a novel deep network for the detection and parametrization of sharp
edges in 3D shapes as primitive curves. To make the network end-to-end
trainable, we formulate the curve fitting in a differentiable manner. We
develop an adaptive point cloud sampling technique that captures the sharp
features better than uniform sampling. The experiments were conducted on a
newly introduced large-scale dataset of 50k 3D scans, where the sharp edge
annotations were extracted from their parametric CAD models, and demonstrate
significant improvement over state-of-the-art methods.
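As a generic illustration of differentiable curve fitting (not SepicNet's exact formulation), the sketch below fits a 3D line to a group of points in closed form via SVD, so that the fitting residual can be used directly as a training loss; it assumes PyTorch.

import torch

def fit_line_differentiable(pts):
    # pts: (N, 3) points assigned to one detected edge.
    center = pts.mean(dim=0)
    centered = pts - center
    # Principal direction via SVD; torch.linalg.svd is differentiable.
    _, _, vh = torch.linalg.svd(centered, full_matrices=False)
    direction = vh[0]                        # unit direction of the line
    # Residual: mean distance of each point to the fitted line.
    proj = (centered @ direction).unsqueeze(1) * direction
    residual = (centered - proj).norm(dim=1).mean()
    return center, direction, residual

pts = torch.randn(128, 3, requires_grad=True)
_, _, loss = fit_line_differentiable(pts)
loss.backward()    # gradients flow back through the closed-form fit

In principle, the same pattern extends to circles and splines, which is what makes such a curve parametrization end-to-end trainable.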
CAD-SIGNet: CAD Language Inference from Point Clouds using Layer-wise Sketch Instance Guided Attention
Reverse engineering in the realm of Computer-Aided Design (CAD) has been a
longstanding aspiration, though not yet entirely realized. Its primary aim is
to uncover the CAD process behind a physical object given its 3D scan. We
propose CAD-SIGNet, an end-to-end trainable and auto-regressive architecture to
recover the design history of a CAD model represented as a sequence of
sketch-and-extrusion from an input point cloud. Our model learns
visual-language representations by layer-wise cross-attention between point
cloud and CAD language embeddings. In particular, a new Sketch instance Guided
Attention (SGA) module is proposed in order to reconstruct the fine-grained
details of the sketches. Thanks to its auto-regressive nature, CAD-SIGNet not
only reconstructs a unique full design history of the corresponding CAD model
given an input point cloud but also provides multiple plausible design choices.
This allows for an interactive reverse engineering scenario by providing
designers with multiple next-step choices throughout the design process.
Extensive experiments on publicly available CAD datasets showcase the
effectiveness of our approach against existing baseline models in two settings,
namely, full design history recovery and conditional auto-completion from point
clouds.
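The layer-wise interaction can be pictured as a decoder layer in which CAD tokens attend to point features; the sketch below assumes PyTorch, and the dimensions, masking, and layer structure are assumptions (the actual CAD-SIGNet layers, including the SGA module, are richer).

import torch
import torch.nn as nn

class CrossAttnDecoderLayer(nn.Module):
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))
        self.n1, self.n2, self.n3 = (nn.LayerNorm(d_model) for _ in range(3))

    def forward(self, cad_tokens, point_feats):
        # cad_tokens: (B, T, D) auto-regressive CAD-language embeddings
        # point_feats: (B, N, D) per-point features from a point encoder
        x = cad_tokens
        T = x.size(1)
        causal = torch.triu(torch.ones(T, T, dtype=torch.bool,
                                       device=x.device), diagonal=1)
        # Masked self-attention keeps the decoding auto-regressive.
        x = self.n1(x + self.self_attn(x, x, x, attn_mask=causal)[0])
        # Cross-attention: CAD tokens query the point cloud.
        x = self.n2(x + self.cross_attn(x, point_feats, point_feats)[0])
        return self.n3(x + self.ff(x))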
TransCAD: A Hierarchical Transformer for CAD Sequence Inference from Point Clouds
3D reverse engineering, in which a CAD model is inferred given a 3D scan of a
physical object, is a research direction that offers many promising practical
applications. This paper proposes TransCAD, an end-to-end transformer-based
architecture that predicts the CAD sequence from a point cloud. TransCAD
leverages the structure of CAD sequences by using a hierarchical learning
strategy. A loop refiner is also introduced to regress sketch primitive
parameters. Rigorous experimentation on the DeepCAD and Fusion360 datasets shows
that TransCAD achieves state-of-the-art results. The result analysis is
supported with a proposed metric for CAD sequences, the mean Average Precision
of CAD Sequence, which addresses the limitations of existing metrics.
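To make the idea of such a metric concrete, here is a hypothetical sketch of an average-precision-style score for CAD sequences: a predicted primitive counts as correct when its type matches and its parameter error falls under a threshold, and the score is averaged over several thresholds. The paper's exact definition of the mean Average Precision of CAD Sequence may differ; the sketch assumes NumPy.

import numpy as np

def sequence_ap(pred, gt, thresholds=(0.01, 0.02, 0.05)):
    # pred, gt: aligned lists of (primitive_type, parameter_array) pairs.
    scores = []
    for tau in thresholds:
        hits = 0
        for (p_type, p_par), (g_type, g_par) in zip(pred, gt):
            if p_type == g_type and p_par.shape == g_par.shape \
               and np.abs(p_par - g_par).max() <= tau:
                hits += 1
        scores.append(hits / max(len(gt), 1))
    return float(np.mean(scores))

gt   = [("line", np.array([0.0, 0.0, 1.0, 1.0])),
        ("arc",  np.array([0.5, 0.5, 0.2]))]
pred = [("line", np.array([0.0, 0.01, 1.0, 1.0])),
        ("arc",  np.array([0.5, 0.52, 0.2]))]
print(sequence_ap(pred, gt))   # 0.83: the arc misses the tightest threshold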
Towards Automatic Human Body Model Fitting to a 3D Scan
This paper presents a method to automatically recover a realistic and accurate body shape of a
person wearing clothing from a 3D scan. Indeed, in many practical situations, people are scanned
wearing clothing. The underlying body shape is thus partially or completely occluded. Yet, it is very
desirable to recover the shape of a covered body as it provides non-invasive means of measuring and
analysing it. This is particularly convenient for patients in medical applications, customers in a retail
shop, as well as in security applications where suspicious objects under clothing are to be detected.
To recover the body shape from the 3D scan of a person in any pose, a human body model is usually
fitted to the scan. Current methods rely on the manual placement of markers on the body to identify
anatomical locations and guide the pose fitting. The markers are either physically placed on the body
before scanning or placed in software as a postprocessing step. Some other methods detect key
points on the scan using 3D feature descriptors to automate the placement of markers. They usually
require a large database of 3D scans. We propose to automatically estimate the body pose of a
person from a 3D mesh acquired by standard 3D body scanners, with or without texture. To fit a
human model to the scan, we use joint locations as anchors. These are detected from multiple 2D
views using a conventional body joint detector working on images. In contrast to existing approaches,
the proposed method is fully automatic and takes advantage of the robustness of
state-of-the-art 2D joint detectors. The proposed approach is validated on scans
of people in different poses wearing garments of various thicknesses, and on
scans of one person in multiple poses, wearing close-fitting clothing, with
known ground truth.
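The multi-view lifting step can be illustrated with linear triangulation (DLT), assuming the projection matrices of the rendered views are known; this is a minimal NumPy sketch, not the paper's pipeline, and the 2D joint detector itself is not reproduced.

import numpy as np

def triangulate_joint(proj_mats, uv):
    # proj_mats: list of 3x4 camera matrices, one per 2D view.
    # uv: list of (u, v) joint detections in those views.
    rows = []
    for P, (u, v) in zip(proj_mats, uv):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    # Homogeneous least squares: the 3D joint is the null vector of A.
    _, _, vh = np.linalg.svd(np.stack(rows))
    X = vh[-1]
    return X[:3] / X[3]

# Toy example: two views of the joint (0.1, 0.2, 2.0).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X = np.array([0.1, 0.2, 2.0, 1.0])
uv1 = (P1 @ X)[:2] / (P1 @ X)[2]
uv2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate_joint([P1, P2], [uv1, uv2]))   # ~ [0.1, 0.2, 2.0]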
BODYFITR: Robust Automatic 3D Human Body Fitting
This paper proposes BODYFITR, a fully automatic method to fit a human body model to static 3D scans with complex poses. Automatic and reliable 3D human body fitting is necessary for many applications related to healthcare, digital ergonomics, avatar creation and security, especially in industrial contexts for large-scale product design. Existing works either make prior assumptions on the pose, require manual annotation of the data or have difficulty handling complex poses. This work addresses these limitations by providing a novel automatic fitting pipeline with carefully integrated building blocks designed for a systematic and robust approach. It is validated on the 3DBodyTex dataset, with hundreds of high-quality 3D body scans, and shown to outperform prior works in static body pose and shape estimation, qualitatively and quantitatively. The method is also applied to the creation of realistic 3D avatars from the high-quality texture scans of 3DBodyTex, further demonstrating its capabilities.
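At its core, such a fitting pipeline optimizes the pose and shape parameters of a parametric body model against the scan; the sketch below shows this generic step, assuming PyTorch, with a toy stand-in for the body model (the parameter dimensions and the model function are placeholders, not BODYFITR's components).

import torch

def fit_body(scan_pts, body_model, n_iters=200):
    # scan_pts: (N, 3) points sampled from the 3D scan.
    pose = torch.zeros(72, requires_grad=True)    # assumed pose size
    shape = torch.zeros(10, requires_grad=True)   # assumed shape size
    opt = torch.optim.Adam([pose, shape], lr=0.01)
    for _ in range(n_iters):
        verts = body_model(pose, shape)           # (V, 3) model vertices
        # One-sided Chamfer: each scan point to its nearest vertex.
        loss = torch.cdist(scan_pts, verts).min(dim=1).values.mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return pose.detach(), shape.detach()

# Toy stand-in for a real parametric body model (e.g. SMPL-like):
base = torch.randn(500, 3)
toy_model = lambda pose, shape: base + 0.1 * pose[:3] + 0.1 * shape[:3]
pose, shape = fit_body(torch.randn(200, 3), toy_model)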
CADOps-Net: Jointly Learning CAD Operation Types and Steps from Boundary-Representations
3D reverse engineering is a long sought-after, yet not completely achieved, goal in the Computer-Aided Design (CAD) industry. The objective is to recover the construction history of a CAD model. Starting from a Boundary Representation (B-Rep) of a CAD model, this paper proposes a new deep neural network, CADOps-Net, that jointly learns the CAD operation types and the decomposition into different CAD operation steps. This joint learning makes it possible to divide a B-Rep into parts that were created by various types of CAD operations at the same construction step, therefore providing relevant information for further recovery of the design history. Furthermore, we propose the novel CC3D-Ops dataset that includes over 37k CAD models annotated with CAD operation type labels and step labels. Compared to existing datasets, the complexity and variety of CC3D-Ops models are closer to those used for industrial purposes. Our experiments, conducted on the proposed CC3D-Ops and the publicly available Fusion360 datasets, demonstrate the competitive performance of CADOps-Net with respect to the state-of-the-art, and confirm the importance of the joint learning of CAD operation types and steps.
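A speculative sketch of the joint prediction, assuming PyTorch: a shared encoder over per-face B-Rep features feeds two heads, one classifying the CAD operation type of each face and one assigning faces to construction steps. Feature extraction from the B-Rep and the paper's actual head design are not reproduced; the dimensions and the fixed number of step slots are assumptions.

import torch
import torch.nn as nn

class JointOpsHeads(nn.Module):
    def __init__(self, in_dim=128, n_op_types=8, n_steps=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, 256), nn.ReLU())
        self.type_head = nn.Linear(256, n_op_types)  # per-face operation type
        self.step_head = nn.Linear(256, n_steps)     # per-face step slot

    def forward(self, face_feats):
        # face_feats: (B, F, in_dim) features for the F faces of a B-Rep.
        h = self.encoder(face_feats)
        return self.type_head(h), self.step_head(h)

Training would presumably sum a classification loss over both heads, which is what couples the two tasks in the joint learning.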