Ghost on the Shell: An Expressive Representation of General 3D Shapes
The creation of photorealistic virtual worlds requires the accurate modeling
of 3D surface geometry for a wide range of objects. For this, meshes are
appealing since they 1) enable fast physics-based rendering with realistic
material and lighting, 2) support physical simulation, and 3) are
memory-efficient for modern graphics pipelines. Recent work on reconstructing
and statistically modeling 3D shape, however, has critiqued meshes as being
topologically inflexible. To capture a wide range of object shapes, any 3D
representation must be able to model solid, watertight shapes as well as thin,
open surfaces. Recent work has focused on the former, and methods for
reconstructing open surfaces do not support fast reconstruction with material
and lighting or unconditional generative modelling. Inspired by the observation
that open surfaces can be seen as islands floating on watertight surfaces, we
parameterize open surfaces by defining a manifold signed distance field on
watertight templates. With this parameterization, we further develop a
grid-based and differentiable representation that parameterizes both watertight
and non-watertight meshes of arbitrary topology. Our new representation, called
Ghost-on-the-Shell (G-Shell), enables two important applications:
differentiable rasterization-based reconstruction from multiview images and
generative modelling of non-watertight meshes. We empirically demonstrate that
G-Shell achieves state-of-the-art performance on non-watertight mesh
reconstruction and generation tasks, while also performing effectively for
watertight meshes.
Comment: Technical Report (26 pages, 16 figures; project page:
https://gshell3d.github.io/)
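As a concrete illustration of the parameterization this abstract describes, here is a minimal NumPy sketch (not the authors' implementation; extract_open_surface and msdf are illustrative names): an open surface is carved out of a watertight template by keeping only the faces where a per-vertex manifold signed distance field is non-negative.

```python
import numpy as np

def extract_open_surface(vertices, faces, msdf):
    """Illustrative take on the G-Shell idea: treat an open surface as
    the "island" of a watertight template where a manifold signed
    distance field (mSDF) is non-negative, and discard the rest.

    vertices : (V, 3) float array, template mesh vertices
    faces    : (F, 3) int array, triangle indices into `vertices`
    msdf     : (V,) float array, mSDF value at each vertex
    """
    # Keep a triangle only if all three vertices lie on the island
    # (mSDF >= 0); boundary triangles are simply dropped here, whereas
    # the paper extracts the exact zero level set of the field.
    keep = np.all(msdf[faces] >= 0.0, axis=1)
    kept_faces = faces[keep]

    # Re-index vertices so the open mesh is self-contained.
    used = np.unique(kept_faces)
    remap = -np.ones(len(vertices), dtype=int)
    remap[used] = np.arange(len(used))
    return vertices[used], remap[kept_faces]

# Toy usage: a unit square split into two triangles; masking vertex 1
# removes the triangle touching it, leaving an open, smaller surface.
v = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], float)
f = np.array([[0, 1, 2], [0, 2, 3]])
open_v, open_f = extract_open_surface(v, f, np.array([1.0, -0.5, 1.0, 1.0]))
```

Because the field lives on the template surface and can be stored on a grid, the same machinery lends itself to the differentiable reconstruction and generative modelling applications described above.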
Dynamics and Topological Aspects of a Reconstructed Two-Dimensional Foam Time Series Using Potts Model on a Pinned Lattice
We discuss a method to reconstruct an approximate two-dimensional foam
structure from an incomplete image using the extended Potts model with a pinned
lattice we introduced in a previous paper. The initial information consists of
the positions of the vertices only. We locate the centers of the bubbles using
the Euclidean distance-map construction and assign at each vertex position a
continuous pinning field with a potential that falls off with distance. We nucleate a
bubble at each center using the extended Potts model and let the structure
evolve under the constraint of scaled target areas until the bubbles contact
each other. The target area constraint and pinning centers prevent further
coarsening. We then turn the area constraint off and let the edges relax to a
minimum energy configuration. The result is a reconstructed structure very
close to the simulation. We repeated this procedure for various stages of the
coarsening of the same simulated foam and investigated the simulation and
reconstruction dynamics, topology and area distribution, finding that they
agree to good accuracy.
Comment: 31 pages, 20 PostScript figures; accepted in the Journal of
Computational Physics
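A minimal NumPy sketch may help fix ideas; it is a generic cellular-Potts-style energy, not the paper's exact extended model with pinning, and potts_energy is an illustrative name. The lam term plays the role of the target-area constraint the abstract refers to; setting it to zero corresponds to the final relaxation stage in which the edges settle into a minimum-energy configuration.

```python
import numpy as np

def potts_energy(labels, target_areas, lam=1.0):
    """Generic cellular-Potts-style energy (illustrative, not the
    paper's exact extended model): boundary length between unlike
    bubbles plus a quadratic penalty on deviations from each bubble's
    target area.

    labels       : (H, W) int array, bubble id at each lattice site
    target_areas : dict mapping bubble id -> target area (in sites)
    lam          : weight of the area constraint; lam = 0 "turns the
                   area constraint off" for the relaxation stage
    """
    # Interface energy: count adjacent site pairs with unlike labels.
    boundary = np.sum(labels[:, 1:] != labels[:, :-1]) \
             + np.sum(labels[1:, :] != labels[:-1, :])

    # Area constraint: hold each bubble near its (scaled) target area.
    area_term = sum((np.sum(labels == b) - a) ** 2
                    for b, a in target_areas.items())
    return boundary + lam * area_term

# Toy usage: two bubbles on a 4x4 lattice, each at its target area of 8,
# so only the interface between them contributes (energy = 4).
grid = np.ones((4, 4), dtype=int)
grid[2:] = 2
print(potts_energy(grid, {1: 8, 2: 8}))
```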
Towards Persistence-Based Reconstruction in Euclidean Spaces
Manifold reconstruction has been extensively studied for the last decade or
so, especially in two and three dimensions. Recently, significant improvements
were made in higher dimensions, leading to new methods to reconstruct large
classes of compact subsets of Euclidean space $\mathbb{R}^d$. However, the complexities
of these methods scale up exponentially with $d$, which makes them impractical in
medium or high dimensions, even for handling low-dimensional submanifolds. In
this paper, we introduce a novel approach that stands in-between classical
reconstruction and topological estimation, and whose complexity scales up with
the intrinsic dimension of the data. Specifically, when the data points are
sufficiently densely sampled from a smooth $m$-submanifold of $\mathbb{R}^d$, our
method retrieves the homology of the submanifold in time at most $c(m)\,n^5$,
where $n$ is the size of the input and $c(m)$ is a constant depending solely on
$m$. It can also provably well handle a wide range of compact subsets of
$\mathbb{R}^d$, though with worse complexities. Along the way to proving the
correctness of our algorithm, we obtain new results on \v{C}ech, Rips, and
witness complex filtrations in Euclidean spaces.
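For readers who want to experiment with the kind of pipeline this abstract describes, the sketch below samples points from a circle, builds a Rips filtration, and reads off persistent homology. It uses the general-purpose GUDHI library rather than the paper's own algorithm (which also covers witness complexes), so treat it as an assumed stand-in.

```python
import numpy as np
import gudhi  # general-purpose TDA library, not the paper's code

# Noisy sample from a circle, a smooth 1-submanifold of the plane.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 60, endpoint=False)
points = np.column_stack([np.cos(theta), np.sin(theta)])
points += 0.02 * rng.standard_normal(points.shape)

# Build the Rips filtration and compute its persistence diagram.
rips = gudhi.RipsComplex(points=points, max_edge_length=2.0)
simplex_tree = rips.create_simplex_tree(max_dimension=2)

# Each entry is (dimension, (birth, death)). Long-lived intervals
# recover the homology of the underlying circle: one bar in H0
# (connectedness) and one in H1 (the loop); short bars are noise.
for dim, (birth, death) in simplex_tree.persistence():
    if death - birth > 0.5:
        print(f"H{dim}: [{birth:.2f}, {death:.2f})")
```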
Learning single-image 3D reconstruction by generative modelling of shape, pose and shading
We present a unified framework tackling two problems: class-specific 3D
reconstruction from a single image, and generation of new 3D shape samples.
These tasks have received considerable attention recently; however, most
existing approaches rely on 3D supervision, annotation of 2D images with
keypoints or poses, and/or training with multiple views of each object
instance. Our framework is very general: it can be trained in similar settings
to existing approaches, while also supporting weaker supervision. Importantly,
it can be trained purely from 2D images, without pose annotations, and with
only a single view per instance. We employ meshes as an output representation,
instead of voxels used in most prior work. This allows us to reason over
lighting parameters and exploit shading information during training, which
previous 2D-supervised methods cannot. Thus, our method can learn to generate
and reconstruct concave object classes. We evaluate our approach in various
settings, showing that: (i) it learns to disentangle shape from pose and
lighting; (ii) using shading in the loss improves performance compared to just
silhouettes; (iii) when using a standard single white light, our model
outperforms state-of-the-art 2D-supervised methods, both with and without pose
supervision, thanks to exploiting shading cues; (iv) performance improves
further when using multiple coloured lights, even approaching that of
state-of-the-art 3D-supervised methods; (v) shapes produced by our model
capture smooth surfaces and fine details better than voxel-based approaches;
and (vi) our approach supports concave classes such as bathtubs and sofas,
which methods based on silhouettes cannot learn.
Comment: Extension of arXiv:1807.09259, accepted to IJCV. Differentiable
renderer available at https://github.com/pmh47/dir
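The reason shading supervision can recover concavities where silhouettes cannot is visible in a simple Lambertian image-formation model. The sketch below is illustrative only (the paper uses a full differentiable mesh renderer; the names here are made up): per-pixel normals rendered from the predicted mesh are shaded and compared photometrically with the observed image.

```python
import numpy as np

def shading_loss(normals, albedo, light_dir, target_image, mask):
    """Illustrative Lambertian photometric loss, not the paper's exact
    renderer or objective.

    normals      : (H, W, 3) unit surface normals rendered from the mesh
    albedo       : (H, W) per-pixel reflectance
    light_dir    : (3,) unit vector pointing toward the light
    target_image : (H, W) observed grayscale intensities
    mask         : (H, W) bool, pixels inside the object silhouette
    """
    # Lambert's law: intensity ~ max(0, n . l). Unlike a silhouette,
    # this varies across the interior of the object, so concave regions
    # (the inside of a bathtub, say) constrain the predicted shape.
    shading = np.clip(normals @ light_dir, 0.0, None)
    rendered = albedo * shading
    return np.mean((rendered[mask] - target_image[mask]) ** 2)
```

With several coloured lights, one such term per light adds further constraints, consistent with the gains reported in (iv) above.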
Measurement and Evaluation of Deep Learning Based 3D Reconstruction
Performance of Deep Learning (DL) based methods for 3D reconstruction is becoming on par with, or better than, that of classical computer vision techniques. Learning requires data with proper annotations. While images have a standardized representation, there is currently no widely accepted format for efficiently representing 3D output shapes. The challenge lies in finding a format that can handle the high-resolution geometry of any shape while also being memory- and computationally efficient. As a result, most advanced learning-based 3D reconstruction methods are restricted to a certain domain. In this work, we compare the performance of different output representations for 3D reconstruction in different contexts, including objects, natural scenes, and full human bodies as well as human body parts. Despite substantial progress in the semantic understanding of the visual world, few methods can reconstruct a large set of objects from a single view. Our objective is to investigate methods that reconstruct a wider variety of object categories in 3D, aiming for accurate reconstruction at both the object and scene levels. We compare output representations that yield implicit, smooth representations of complex 3D geometry from RGB images, DICOM (Digital Imaging and Communications in Medicine) formatted MRI breast images, and images from a wild environment, using Deep Learning methods and available 3D processing applications (MeshLab, 3D Slicer, and Mayavi).
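The abstract does not name a specific metric, but a representation-agnostic way to score such reconstructions, and a common one in this literature, is the Chamfer distance between point clouds sampled from the predicted and ground-truth shapes. A minimal NumPy sketch (illustrative, not tied to this paper's protocol):

```python
import numpy as np

def chamfer_distance(pred_points, gt_points):
    """Symmetric Chamfer distance between two point clouds. Meshes,
    voxel grids, and implicit surfaces can all be sampled into points
    first, which makes the metric representation-agnostic.

    pred_points : (N, 3) points sampled from the reconstruction
    gt_points   : (M, 3) points sampled from the ground-truth shape
    """
    # Pairwise squared distances, shape (N, M); fine for modest N, M.
    d2 = np.sum((pred_points[:, None, :] - gt_points[None, :, :]) ** 2,
                axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

# Toy usage: identical clouds score 0; a shifted copy scores > 0.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((100, 3))
print(chamfer_distance(cloud, cloud))        # 0.0
print(chamfer_distance(cloud, cloud + 0.1))  # small positive value
```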
Mesh-based Autoencoders for Localized Deformation Component Analysis
Spatially localized deformation components are very useful for shape analysis
and synthesis in 3D geometry processing. Several methods have recently been
developed, with an aim to extract intuitive and interpretable deformation
components. However, these techniques suffer from fundamental limitations
especially for meshes with noise or large-scale deformations, and may not
always be able to identify important deformation components. In this paper we
propose a novel mesh-based autoencoder architecture that is able to cope with
meshes with irregular topology. We introduce sparse regularization in this
framework, which along with convolutional operations, helps localize
deformations. Our framework is capable of extracting localized deformation
components from mesh data sets with large-scale deformations and is robust to
noise. It also provides a nonlinear approach to reconstruction of meshes using
the extracted basis, which is more effective than the current linear
combination approach. Extensive experiments show that our method outperforms
state-of-the-art methods in both qualitative and quantitative evaluations.
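The core of the objective this abstract describes can be written compactly: reconstruction error plus a sparsity penalty on the latent deformation components, which is what pushes each component toward a spatially localized deformation. The sketch below is a schematic NumPy version with linear stand-ins for the paper's convolutional mesh autoencoder; all names are illustrative.

```python
import numpy as np

def localized_deformation_loss(x, encode, decode, lam=0.1):
    """Schematic objective: autoencoder reconstruction error plus an L1
    penalty on latent deformation components. The L1 term zeroes out
    most components for any given mesh, encouraging localization
    (illustrative, not the paper's exact architecture or regularizer).

    x      : (V, 3) per-vertex deformations of one mesh
    encode : function mapping x -> (K,) component activations
    decode : function mapping activations -> (V, 3) reconstruction
    """
    z = encode(x)
    return np.sum((x - decode(z)) ** 2) + lam * np.sum(np.abs(z))

# Toy usage with linear encode/decode over a K=10 component basis,
# standing in for the learned nonlinear encoder and decoder.
rng = np.random.default_rng(0)
basis = rng.standard_normal((10, 30))  # 10 components, 10 vertices x 3
encode = lambda x: x.ravel() @ basis.T
decode = lambda z: (z @ basis).reshape(10, 3)
print(localized_deformation_loss(rng.standard_normal((10, 3)),
                                 encode, decode))
```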