Shape Animation with Combined Captured and Simulated Dynamics
We present a novel volumetric animation framework that creates new types of
animations from raw 3D surface or point-cloud sequences of captured real
performances. The framework takes as input time-incoherent 3D observations of
a moving shape, and is thus particularly suitable for the output of
performance-capture platforms. Our system builds, from the real captures, a
virtual representation of the actor that allows seamless combination and
simulation with virtual external forces and objects, so that the original
captured actor can be reshaped, disassembled, or reassembled under
user-specified virtual physics. Instead of the dominant surface-based
geometric representation of the capture, which is less suitable for volumetric
effects, our pipeline exploits Centroidal Voronoi tessellation decompositions
as a unified volumetric representation of the captured actor, which we show
can serve seamlessly as a building block for all processing stages, from
capture and tracking to virtual physics simulation. The representation makes
no human-specific assumptions and can be used to capture and re-simulate the
actor together with props or other moving scenery elements. We demonstrate the
potential of this pipeline for virtual reanimation of a real captured event
with various unprecedented volumetric visual effects, such as volumetric
distortion, erosion, morphing, gravity pull, or collisions.
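The Centroidal Voronoi tessellation the abstract relies on can be illustrated with Lloyd's algorithm, the standard fixed-point iteration for computing a CVT. The sketch below is a minimal 2D version on a point sample (standing in for a captured volume); the function name and parameters are illustrative, not from the paper.

```python
import numpy as np

def lloyd_cvt(points, n_sites=8, n_iters=50, seed=0):
    """Approximate a Centroidal Voronoi tessellation of a point set:
    each site is repeatedly moved to the centroid of the points that
    are nearest to it (its discrete Voronoi cell)."""
    rng = np.random.default_rng(seed)
    sites = points[rng.choice(len(points), n_sites, replace=False)].astype(float)
    for _ in range(n_iters):
        # assign every point to its nearest site (the Voronoi cell)
        d2 = ((points[:, None, :] - sites[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        # move each site to the centroid of its cell
        for k in range(n_sites):
            cell = points[labels == k]
            if len(cell):
                sites[k] = cell.mean(axis=0)
    return sites, labels

# dense samples of the unit square stand in for a captured volume
pts = np.random.default_rng(1).random((2000, 2))
sites, labels = lloyd_cvt(pts)
```

At convergence, each site sits at the centroid of its own cell, which is what makes the cells a well-shaped volumetric decomposition suitable for physics simulation.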
SurfelMeshing: Online Surfel-Based Mesh Reconstruction
We address the problem of mesh reconstruction from live RGB-D video, assuming
a calibrated camera and poses provided externally (e.g., by a SLAM system). In
contrast to most existing approaches, we do not fuse depth measurements in a
volume but in a dense surfel cloud. We asynchronously (re)triangulate the
smoothed surfels to reconstruct a surface mesh. This novel approach makes it
possible to maintain a dense surface representation of the scene during SLAM
that can quickly adapt to loop closures. This is achieved by deforming the surfel cloud
and asynchronously remeshing the surface where necessary. The surfel-based
representation also naturally supports strongly varying scan resolution. In
particular, it reconstructs colors at the input camera's resolution. Moreover,
in contrast to many volumetric approaches, ours can reconstruct thin objects
since objects do not need to enclose a volume. We demonstrate our approach in a
number of experiments, showing that it produces reconstructions that are
competitive with the state-of-the-art, and we discuss its advantages and
limitations. The algorithm (excluding the loop closure functionality) is
available as open source at https://github.com/puzzlepaint/surfelmeshing .
Comment: Version accepted to IEEE Transactions on Pattern Analysis and Machine
Intelligence
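Fusing depth measurements into a surfel cloud, as opposed to a volume, typically means folding each new observation into a per-surfel weighted running average. The class below is a minimal sketch of that idea, not the paper's actual data structure; the field names, the weight cap, and the fusion rule are assumptions.

```python
import numpy as np

class Surfel:
    """Minimal surfel: position, normal, color, and a confidence weight."""
    def __init__(self, pos, normal, color):
        self.pos = np.asarray(pos, float)
        self.normal = np.asarray(normal, float)
        self.color = np.asarray(color, float)
        self.weight = 1.0

    def fuse(self, pos, normal, color, w=1.0):
        """Fold a new measurement into the surfel as a weighted running
        average, the common scheme in surfel-based fusion systems."""
        total = self.weight + w
        self.pos = (self.weight * self.pos + w * np.asarray(pos, float)) / total
        self.normal = self.weight * self.normal + w * np.asarray(normal, float)
        self.normal /= np.linalg.norm(self.normal)  # keep the normal unit length
        self.color = (self.weight * self.color + w * np.asarray(color, float)) / total
        self.weight = min(total, 100.0)  # cap so old surfels stay adaptive

# two noisy depth observations of the same surface point
s = Surfel([0, 0, 1.0], [0, 0, 1.0], [255, 0, 0])
s.fuse([0, 0, 1.02], [0, 0, 1.0], [250, 5, 0])
print(s.pos[2])  # 1.01, the weighted mean of the two depths
```

Because surfels carry their own color and weight, the cloud can be deformed after a loop closure and remeshed only where geometry actually changed.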
Content-based Propagation of User Markings for Interactive Segmentation of Patterned Images
Efficient and easy segmentation of images and volumes is of great practical
importance. Segmentation problems that motivate our approach originate from
microscopy imaging commonly used in materials science, medicine, and biology.
We formulate image segmentation as a probabilistic pixel classification
problem, and we apply segmentation as a step towards characterising image
content. Our method allows the user to define structures of interest by
interactively marking a subset of pixels. Thanks to the real-time feedback, the
user can place new markings strategically, depending on the current outcome.
The final pixel classification may be obtained from a very modest user input.
An important ingredient of our method is a graph that encodes image content.
This graph is built in an unsupervised manner during initialisation and is
based on clustering of image features. Since we combine a limited amount of
user-labelled data with the clustering information obtained from the unlabelled
parts of the image, our method fits in the general framework of semi-supervised
learning. We demonstrate how this can be a very efficient approach to
segmentation through pixel classification.
Comment: 9 pages, 7 figures, PDFLaTeX
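Spreading a few user markings through a graph of image content is a classic semi-supervised label-propagation setup. The sketch below is a generic version of that idea on a k-nearest-neighbour graph in feature space, not the paper's clustering-based graph; the function name and all parameters are illustrative.

```python
import numpy as np

def propagate_labels(features, labels, n_iters=20, k=5, alpha=0.9):
    """Semi-supervised label propagation: a few user-marked samples
    (labels >= 0) spread their labels to unlabelled neighbours
    (labels == -1) over a k-nearest-neighbour graph in feature space."""
    n = len(features)
    d2 = ((features[:, None] - features[None, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :k]        # k nearest neighbours
    n_cls = labels.max() + 1
    f = np.zeros((n, n_cls))
    seed = labels >= 0
    f[seed, labels[seed]] = 1.0                 # clamp the user markings
    for _ in range(n_iters):
        # diffuse label mass from neighbours, then re-clamp the seeds
        f = alpha * f[nbrs].mean(axis=1) + (1 - alpha) * f
        f[seed] = 0.0
        f[seed, labels[seed]] = 1.0
    return f.argmax(axis=1)

# two well-separated 1-D feature clusters, one user marking in each
feats = np.r_[np.linspace(0, 0.2, 20), np.linspace(0.8, 1.0, 20)][:, None]
labs = -np.ones(40, dtype=int)
labs[0], labs[20] = 0, 1
out = propagate_labels(feats, labs)
```

Because only the diffusion step repeats, each new user marking can be incorporated cheaply, which is what makes the real-time feedback loop described above feasible.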
Volumetric Untrimming: Precise decomposition of trimmed trivariates into tensor products
3D objects, modeled using Computer Aided Geometric Design tools, are
traditionally represented using a boundary representation (B-rep), and
typically use spline functions to parameterize these boundary surfaces.
However, recent developments in physical analysis, in isogeometric analysis
(IGA) in particular, necessitate a volumetric parametrization of the interior
of the object. IGA is performed directly by integrating over the spline spaces of
the volumetric spline representation of the object. Typically, tensor-product
B-spline trivariates are used to parameterize the volumetric domain. A general
3D object, that can be modeled in contemporary B-rep CAD tools, is typically
represented using trimmed B-spline surfaces. In order to capture the generality
of the contemporary B-rep modeling space, while supporting IGA needs, Massarwi
and Elber (2016) proposed the use of trimmed trivariates volumetric elements.
However, the use of trimmed geometry makes the integration process more
difficult since integration over trimmed B-spline basis functions is a highly
challenging task. In this work, we propose an algorithm that precisely
decomposes a trimmed B-spline trivariate into a set of (singular only on the
boundary) tensor-product B-spline trivariates, that can be utilized to simplify
the integration process in IGA. The trimmed B-spline trivariate is first
subdivided into a set of trimmed Bézier trivariates, at all its internal
knots. Then, each trimmed Bézier trivariate is decomposed into a set of
mutually exclusive tensor-product B-spline trivariates, that precisely cover
the entire trimmed domain. This process, denoted untrimming, can be performed
in either the Euclidean space or the parametric space of the trivariate. We
present examples of geometry based on complex trimmed trivariates, and we
demonstrate the effectiveness of the method by applying IGA over the
(untrimmed) results.
Comment: 18 pages, 32 figures. Contribution accepted in International
Conference on Geometric Modeling and Processing (GMP 2019)
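The subdivision step the abstract describes, splitting a Bézier piece at a parameter value, is classically done with de Casteljau's algorithm. The sketch below shows it for a curve; the same scheme, applied once per parametric direction, subdivides tensor-product trivariates at their internal knots. Names and the quadratic example are illustrative.

```python
import numpy as np

def de_casteljau_split(ctrl, t=0.5):
    """Split a Bezier curve at parameter t into two Bezier curves via
    de Casteljau's algorithm. The left/right control points are read
    off the two outer edges of the de Casteljau triangle."""
    ctrl = np.asarray(ctrl, float)
    left, right = [ctrl[0]], [ctrl[-1]]
    pts = ctrl
    while len(pts) > 1:
        # one round of linear interpolation between consecutive points
        pts = (1 - t) * pts[:-1] + t * pts[1:]
        left.append(pts[0])
        right.append(pts[-1])
    return np.array(left), np.array(right)[::-1]

# quadratic Bezier curve split at its midpoint
left, right = de_casteljau_split([[0, 0], [1, 2], [2, 0]], 0.5)
print(left[-1])  # [1. 1.], the curve point at t = 0.5, shared by both halves
```

Both halves interpolate the original curve exactly, which is why repeated subdivision at knots loses no geometric precision, a prerequisite for the *precise* decomposition claimed above.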
Hierarchical Surface Prediction for 3D Object Reconstruction
Recently, Convolutional Neural Networks have shown promising results for 3D
geometry prediction. They can make predictions from very little input data such
as a single color image. A major limitation of such approaches is that they
only predict a coarse resolution voxel grid, which does not capture the surface
of the objects well. We propose a general framework, called hierarchical
surface prediction (HSP), which facilitates prediction of high resolution voxel
grids. The main insight is that it is sufficient to predict high resolution
voxels around the predicted surfaces. The exterior and interior of the objects
can be represented with coarse resolution voxels. Our approach is not dependent
on a specific input type. We show results for geometry prediction from color
images, depth images and shape completion from partial voxel grids. Our
analysis shows that our high resolution predictions are more accurate than low
resolution predictions.
Comment: 3DV 2017
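The key insight, refining only the cells near the predicted surface while keeping interior and exterior coarse, can be sketched without any learning machinery. Below, the CNN decoder is replaced by a hypothetical `predict_fine` callback; the function, its boundary test, and the 2x refinement factor are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def refine_boundary(coarse, predict_fine):
    """Coarse-to-fine refinement in the spirit of hierarchical surface
    prediction: only coarse cells whose neighbourhood mixes occupied and
    free space (i.e. cells near the surface) are re-predicted at 2x
    resolution; all other cells are simply upsampled."""
    n = coarse.shape[0]
    # nearest-neighbour upsample of the coarse occupancy grid
    fine = np.repeat(np.repeat(np.repeat(coarse, 2, 0), 2, 1), 2, 2).astype(float)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                nb = coarse[max(i - 1, 0):i + 2,
                            max(j - 1, 0):j + 2,
                            max(k - 1, 0):k + 2]
                if nb.min() != nb.max():  # mixed occupancy: a boundary cell
                    # hypothetical fine predictor (a CNN decoder in HSP)
                    fine[2*i:2*i+2, 2*j:2*j+2, 2*k:2*k+2] = predict_fine(i, j, k)
    return fine

# a half-occupied 4^3 grid; the fine predictor just returns "uncertain"
coarse = np.zeros((4, 4, 4))
coarse[:2] = 1
fine = refine_boundary(coarse, lambda i, j, k: 0.5)
```

Since only the thin shell of boundary cells is refined at each level, memory and compute grow roughly with the surface area rather than the volume, which is what makes high-resolution grids tractable.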
3D Geometric Analysis of Tubular Objects based on Surface Normal Accumulation
This paper proposes a simple and efficient method for the reconstruction and
extraction of geometric parameters from 3D tubular objects. Our method
constructs an image that accumulates surface normal information; peaks within
this image are then located by tracking. Finally, the positions of these peaks
are optimized to lie precisely on the tubular shape's centerline. This method is very
versatile, and is able to process various input data types like full or partial
mesh acquired from 3D laser scans, 3D height map or discrete volumetric images.
The proposed algorithm is simple to implement, contains few parameters and can
be computed in linear time with respect to the number of surface faces. Since
the extracted tube centerline is accurate, we are able to decompose the tube
into rectilinear parts and torus-like parts. This is done with a new
linear-time 3D torus detection algorithm, which follows the same principle as
a previous work on 2D circular arc recognition. Detailed experiments show the
versatility, accuracy and robustness of our new method.
Comment: in 18th International Conference on Image Analysis and Processing,
Sep 2015, Genova, Italy. 2015
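The accumulation image at the heart of the method can be sketched as a voting grid: each surface point casts a short ray along its (inward) normal, and for a tube those rays converge on the centerline. The function below is a simplified illustration; the grid resolution, ray length, and step count are assumed parameters, and inward-pointing normals are assumed as input.

```python
import numpy as np

def accumulate_normals(points, normals, grid_res=32, ray_len=1.0, n_steps=20):
    """Cast a short ray from every surface point along its inward normal
    and accumulate votes in a 3D grid; for a tubular shape the votes
    peak near the centerline, the axis every surface normal points at."""
    acc = np.zeros((grid_res,) * 3)
    lo = points.min(0) - ray_len
    span = (points.max(0) + ray_len) - lo
    for p, nrm in zip(points, normals):
        for t in np.linspace(0, ray_len, n_steps):
            # map the sample along the ray into a grid cell and vote
            cell = ((p + t * nrm - lo) / span * (grid_res - 1)).astype(int)
            acc[tuple(cell)] += 1
    return acc, lo, span

# synthetic cylinder of radius 0.5 around the z-axis, inward normals
theta = np.linspace(0, 2 * np.pi, 36, endpoint=False)
points = np.array([[0.5 * np.cos(t), 0.5 * np.sin(t), z]
                   for z in np.linspace(0, 1, 10) for t in theta])
normals = np.array([[-np.cos(t), -np.sin(t), 0.0]
                    for _ in range(10) for t in theta])
acc, lo, span = accumulate_normals(points, normals, ray_len=0.5)
```

The brightest cells of `acc` trace the cylinder's axis; in the paper those peaks are then tracked and refined into an accurate centerline before the torus/line decomposition.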