Compact Model Representation for 3D Reconstruction
3D reconstruction from 2D images is a central problem in computer vision.
Recent works focus on reconstruction directly from a single image. It is well
known, however, that a single image alone cannot provide enough information
for such a reconstruction. One form of prior knowledge that has been exploited
is 3D CAD models, owing to their online ubiquity. A fundamental question is how to
compactly represent millions of CAD models while allowing generalization to new
unseen objects with fine-scaled geometry. We introduce an approach to compactly
represent a 3D mesh. Our method first selects a 3D model from a graph structure
by using a novel free-form deformation (FFD) 3D-2D registration, and then the
selected 3D model is refined to best fit the image silhouette. We perform a
comprehensive quantitative and qualitative analysis that demonstrates
impressive dense and realistic 3D reconstruction from single images.
Comment: 9 pages, 6 figures
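The free-form deformation at the heart of the registration step above can be illustrated as a trivariate Bernstein warp: each point is re-expressed as a blend of control-lattice positions, so moving the lattice deforms the embedded mesh. This is a minimal NumPy sketch under our own naming, not the paper's implementation:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    # Bernstein basis polynomial B_{i,n}(t)
    return comb(n, i) * (t ** i) * ((1.0 - t) ** (n - i))

def ffd_deform(points, lattice):
    """Warp points in [0,1]^3 through a Bezier control lattice.

    points : (N, 3) local lattice coordinates (s, t, u)
    lattice: (l+1, m+1, n+1, 3) control-point positions
    """
    P = np.asarray(points, float)
    l, m, n = (d - 1 for d in lattice.shape[:3])
    out = np.zeros_like(P)
    for r, (s, t, u) in enumerate(P):
        p = np.zeros(3)
        for i in range(l + 1):
            for j in range(m + 1):
                for k in range(n + 1):
                    w = bernstein(l, i, s) * bernstein(m, j, t) * bernstein(n, k, u)
                    p += w * lattice[i, j, k]
        out[r] = p
    return out
```

With the lattice placed at its undeformed grid positions, the warp is the identity; displacing a few control points then produces a smooth, low-dimensional deformation of the selected 3D model.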
DeformNet: Free-Form Deformation Network for 3D Shape Reconstruction from a Single Image
3D reconstruction from a single image is a key problem in multiple
applications ranging from robotic manipulation to augmented reality. Prior
methods have tackled this problem through generative models which predict 3D
reconstructions as voxels or point clouds. However, these methods can be
computationally expensive and miss fine details. We introduce a new
differentiable layer for 3D data deformation and use it in DeformNet to learn a
model for 3D reconstruction-through-deformation. DeformNet takes an image
input, searches the nearest shape template from a database, and deforms the
template to match the query image. We evaluate our approach on the ShapeNet
dataset and show that: (a) the Free-Form Deformation layer is a powerful new
building block for deep learning models that manipulate 3D data; (b) DeformNet
uses this FFD layer combined with shape retrieval for smooth and
detail-preserving 3D reconstruction of qualitatively plausible point clouds
with respect to a single query image; and (c) compared to other state-of-the-art
3D reconstruction methods, DeformNet quantitatively matches or outperforms their
benchmarks by significant margins. For more information, visit:
https://deformnet-site.github.io/DeformNet-website/
Comment: 11 pages, 9 figures, NIP
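Why an FFD layer is differentiable can be seen from the fact that, for fixed query coordinates, the deformed positions are linear in the control lattice: deformed = B @ lattice, so gradients flow through B transposed. A sketch of that weight matrix, with names and the cubic lattice degree chosen by us for illustration:

```python
import numpy as np
from math import comb

def ffd_basis(points, degree=2):
    """Bernstein tensor-product weight matrix for an FFD layer.

    Rows index the N query points, columns the (degree+1)**3 lattice
    control points, so that  deformed = B @ lattice.reshape(-1, 3).
    The map is linear in the lattice, hence trivially differentiable:
    the gradient w.r.t. the control points is just B.T.
    """
    n = degree
    def bern(i, t):
        return comb(n, i) * t ** i * (1.0 - t) ** (n - i)
    P = np.asarray(points, float)          # (N, 3) coords in [0,1]^3
    B = np.zeros((len(P), (n + 1) ** 3))
    for r, (s, t, u) in enumerate(P):
        c = 0
        for i in range(n + 1):
            for j in range(n + 1):
                for k in range(n + 1):
                    B[r, c] = bern(i, s) * bern(j, t) * bern(k, u)
                    c += 1
    return B
```

Each row of B sums to one (the Bernstein basis is a partition of unity), so the layer reproduces the retrieved template exactly when the lattice is undeformed.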
Repairing triangle meshes built from scanned point cloud
The Reverse Engineering process consists of a succession of operations that aim at creating a digital representation of a physical model. The reconstructed geometric model is often a triangle mesh built from a point cloud acquired with a scanner. Depending on both the object complexity and the scanning process, some areas of the object outer surface may never be accessible, thus inducing some deficiencies in the point cloud and, as a consequence, some holes in the resulting mesh. This is simply not acceptable in an integrated design process where the geometric models are often shared between the various applications (e.g., design, simulation, manufacturing). In this paper, we propose a complete toolbox to fill in these undesirable holes. The hole contour is first cleaned to remove badly-shaped triangles that are due to the scanner noise. A topological grid is then inserted and deformed to satisfy blending conditions with the surrounding mesh. In our approach, the shape of the inserted mesh results from the minimization of a quadratic function based on a linear mechanical model that is used to approximate the curvature variation between the inner and surrounding meshes. Additional geometric constraints can also be specified to further shape the inserted mesh. The proposed approach is illustrated with some examples coming from our prototype software.
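The quadratic-minimization step can be illustrated with a simpler stand-in: positions of the inserted vertices solve a linear system with the surrounding mesh held fixed. The paper minimizes a curvature-variation energy from a linear mechanical model; the sketch below uses a plain uniform-Laplacian energy instead, and all names are ours:

```python
import numpy as np

def fill_hole(positions, edges, free):
    """Place 'free' (inserted) vertices by minimizing a uniform
    Laplacian energy, keeping all other vertices fixed.

    positions: list of (x, y, z) for every vertex
    edges    : list of (a, b) vertex-index pairs
    free     : indices of the inserted vertices to solve for
    """
    pos = np.array(positions, float)
    adj = [[] for _ in positions]
    for a, b in edges:
        adj[a].append(b)
        adj[b].append(a)
    idx = {v: i for i, v in enumerate(free)}
    A = np.zeros((len(free), len(free)))
    rhs = np.zeros((len(free), 3))
    for v in free:
        i = idx[v]
        A[i, i] = len(adj[v])          # each free vertex at the mean...
        for u in adj[v]:
            if u in idx:
                A[i, idx[u]] -= 1.0    # ...of its free neighbours and
            else:
                rhs[i] += pos[u]       # ...its fixed boundary neighbours
    pos[free] = np.linalg.solve(A, rhs)
    return pos
```

The boundary (hole-contour) vertices enter only through the right-hand side, which is exactly the "blending condition with the surrounding mesh" role they play in the paper's richer formulation.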
Parameterization adaption for 3D shape optimization in aerodynamics
When solving a PDE problem numerically, a mesh-refinement process is always
implicit, and, classically, mesh adaptivity is an effective means to
accelerate grid convergence. Similarly, when optimizing a shape by means of an
explicit geometrical representation, it is natural to seek an analogous
concept of parameterization adaptivity. We propose here an adaptive
parameterization for three-dimensional optimum design in aerodynamics by using
the so-called "Free-Form Deformation" approach based on 3D tensorial B\'ezier
parameterization. The proposed procedure leads to efficient numerical
simulations with highly reduced computational costs.
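One classical way to adapt a Bézier parameterization is degree elevation: it enriches the design space with extra control points while leaving the current shape unchanged. A one-dimensional sketch (the paper works with 3D tensorial Bézier volumes; function names are our own):

```python
import numpy as np

def de_casteljau(ctrl, t):
    # Evaluate a Bezier curve at parameter t by repeated interpolation.
    pts = [np.asarray(p, float) for p in ctrl]
    while len(pts) > 1:
        pts = [(1.0 - t) * pts[i] + t * pts[i + 1] for i in range(len(pts) - 1)]
    return pts[0]

def degree_elevate(ctrl):
    # Raise the Bezier degree by one without changing the curve:
    #   Q_i = (i/(n+1)) P_{i-1} + (1 - i/(n+1)) P_i
    P = [np.asarray(p, float) for p in ctrl]
    n = len(P) - 1
    out = [P[0]]
    for i in range(1, n + 1):
        a = i / (n + 1)
        out.append(a * P[i - 1] + (1.0 - a) * P[i])
    out.append(P[-1])
    return out
```

Because the elevated curve is geometrically identical, the optimizer can restart from the same aerodynamic shape with a finer parameterization, which is the analogue of mesh refinement for an explicit geometric representation.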