Mesh-based Autoencoders for Localized Deformation Component Analysis
Spatially localized deformation components are very useful for shape analysis
and synthesis in 3D geometry processing. Several methods have recently been
developed, with an aim to extract intuitive and interpretable deformation
components. However, these techniques suffer from fundamental limitations
especially for meshes with noise or large-scale deformations, and may not
always be able to identify important deformation components. In this paper we
propose a novel mesh-based autoencoder architecture that is able to cope with
meshes with irregular topology. We introduce sparse regularization in this
framework, which along with convolutional operations, helps localize
deformations. Our framework is capable of extracting localized deformation
components from mesh data sets with large-scale deformations and is robust to
noise. It also provides a nonlinear approach to reconstruction of meshes using
the extracted basis, which is more effective than the current linear
combination approach. Extensive experiments show that our method outperforms
state-of-the-art methods in both qualitative and quantitative evaluations.
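The localization idea above can be illustrated as a loss function: reconstruct training meshes from a linear combination of learned deformation components, with an L1 penalty that drives most per-vertex entries of each component to zero so each component affects only a small region. This is a minimal NumPy sketch; the names, shapes, and the weight `lam` are illustrative assumptions, not the paper's actual convolutional autoencoder:

```python
import numpy as np

# Hypothetical sketch of sparsity-regularized deformation components.
# Shapes: X  (S, V, 3)  -- S training meshes with V vertices each
#         C  (K, V, 3)  -- K learned deformation components (the basis)
#         W  (S, K)     -- per-shape component weights

def sparse_recon_loss(X, C, W, lam=0.1):
    """Reconstruction error plus an L1 penalty on the components.

    The L1 term pushes per-vertex entries of each component toward
    zero, so every component only deforms a small mesh region.
    """
    recon = np.einsum("sk,kvd->svd", W, C)   # linear combination
    data_term = np.mean((X - recon) ** 2)    # fit the training meshes
    sparsity = lam * np.mean(np.abs(C))      # localize the components
    return data_term + sparsity

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 10, 3))
C = rng.normal(size=(2, 10, 3))
W = rng.normal(size=(4, 2))
loss = sparse_recon_loss(X, C, W)
```

Setting `lam=0` recovers a plain least-squares fit; increasing it trades reconstruction accuracy for spatial localization.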
Multiscale Mesh Deformation Component Analysis with Attention-based Autoencoders
Deformation component analysis is a fundamental problem in geometry
processing and shape understanding. Existing approaches mainly extract
deformation components in local regions at a similar scale while deformations
of real-world objects are usually distributed in a multi-scale manner. In this
paper, we propose a novel method to extract multiscale deformation components
automatically with a stacked attention-based autoencoder. The attention
mechanism is designed to learn to softly weight multi-scale deformation
components in active deformation regions, and the stacked attention-based
autoencoder is learned to represent the deformation components at different
scales. Quantitative and qualitative evaluations show that our method
outperforms state-of-the-art methods. Furthermore, with the multiscale
deformation components extracted by our method, the user can edit shapes in a
coarse-to-fine fashion, which facilitates effective modeling of new shapes.
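The "soft weighting" of multi-scale deformation components described above can be illustrated with a per-vertex softmax attention: each vertex gets a score per scale, and the scores decide how much each scale's component contributes at that vertex. The function names and tensor shapes here are illustrative assumptions, not the paper's exact stacked architecture:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attend_multiscale(components, scores):
    """Blend deformations from several scales, per vertex.

    components: (S, V, 3) deformation at S scales for V vertices
    scores:     (V, S)    learned per-vertex attention logits
    returns a (V, 3) blended deformation.
    """
    w = softmax(scores, axis=-1)                   # (V, S), rows sum to 1
    return np.einsum("vs,svd->vd", w, components)  # per-vertex blend

comp = np.random.default_rng(1).normal(size=(3, 5, 3))
scores = np.zeros((5, 3))                 # zero logits = uniform attention
blended = attend_multiscale(comp, scores)
```

With uniform logits every scale contributes equally; in training, the attention would learn to emphasize coarse components in globally deforming regions and fine components in locally deforming ones.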
Mesh-based variational autoencoders for localized deformation component analysis
Spatially localized deformation components are very useful for shape analysis and synthesis in 3D geometry processing. Several methods have recently been developed, with an aim to extract intuitive and interpretable deformation components. However, these techniques suffer from fundamental limitations especially for meshes with noise or large-scale nonlinear deformations, and may not always be able to identify important deformation components. In this paper we propose a novel mesh-based variational autoencoder architecture that is able to cope with meshes with irregular connectivity and nonlinear deformations. To help localize deformations, we introduce sparse regularization along with spectral graph convolutional operations. By modifying the regularization formulation and allowing dynamic change of sparsity ranges, we improve the visual quality and reconstruction ability. Our system also provides a nonlinear approach to reconstruction of meshes using the extracted basis, which is more effective than the current linear combination approach. We further develop a neural shape editing method, achieving shape editing and deformation component extraction in a unified framework and ensuring plausibility of the edited shapes. Extensive experiments show that our method outperforms state-of-the-art methods in both qualitative and quantitative evaluations. We also demonstrate the effectiveness of our method for neural shape editing.
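The spectral graph convolution mentioned above can be sketched, under assumptions, as an order-1 Chebyshev-style filter built from the mesh's normalized graph Laplacian. The adjacency matrix, weight matrices, and filter order are illustrative, not the paper's exact operator:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized Laplacian L = I - D^(-1/2) A D^(-1/2)."""
    d = A.sum(axis=1)
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(d, 1e-12))
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def graph_conv(X, A, W0, W1):
    """Order-1 Chebyshev-style filter on vertex features: Y = X W0 + L X W1.

    X: (V, F) per-vertex features; A: (V, V) mesh vertex adjacency.
    The L X term mixes each vertex with its one-ring neighbors.
    """
    L = normalized_laplacian(A)
    return X @ W0 + (L @ X) @ W1

A = np.ones((3, 3)) - np.eye(3)                   # a single triangle face
X = np.random.default_rng(2).normal(size=(3, 3))  # per-vertex positions
W0, W1 = np.eye(3), 0.5 * np.eye(3)
Y = graph_conv(X, A, W0, W1)
```

Because the filter is expressed through the Laplacian rather than a fixed vertex grid, the same weights apply to meshes with irregular connectivity, which is what makes this family of operators attractive here.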
Sparse data driven mesh deformation
Example-based mesh deformation methods are powerful tools for realistic shape editing. However, existing techniques typically combine all the example deformation modes, which can lead to overfitting, i.e., using an overly complicated model to explain the user-specified deformation. This leads to implausible or unstable deformation results, including unexpected global changes outside the region of interest. To address this fundamental limitation, we propose a sparse blending method that automatically selects a smaller number of deformation modes to compactly describe the desired deformation. This, along with a suitably chosen deformation basis including spatially localized deformation modes, leads to significant advantages, including more meaningful, reliable, and efficient deformations, because fewer and localized deformation modes are applied. To cope with large rotations, we develop a simple but effective representation based on polar decomposition of deformation gradients, which resolves the ambiguity of large global rotations using an as-consistent-as-possible global optimization. This simple representation has a closed form solution for derivatives, making it efficient for our sparse localized representation and thus ensuring interactive performance. Experimental results show that our method outperforms state-of-the-art data-driven mesh deformation methods, in both quality of results and efficiency.
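The polar decomposition of a deformation gradient, which the abstract builds its rotation representation on, factors F = R S into the closest rotation R and a symmetric stretch S. A minimal NumPy sketch via the SVD, with the standard sign fix so R is a proper rotation (det = +1); this shows the decomposition itself, not the paper's as-consistent-as-possible global optimization:

```python
import numpy as np

def polar_decompose(F):
    """Polar decomposition F = R S of a 3x3 deformation gradient.

    R is the rotation closest to F; S = R^T F is the symmetric
    stretch factor. The diagonal sign matrix D handles the case
    where the naive U V^T would be a reflection.
    """
    U, _, Vt = np.linalg.svd(F)
    d = np.sign(np.linalg.det(U @ Vt))   # +1 rotation, -1 reflection
    D = np.diag([1.0, 1.0, d])
    R = U @ D @ Vt                       # proper rotation, det(R) = +1
    S = R.T @ F                          # symmetric stretch, F = R S
    return R, S

# Example: a known rotation about z composed with an axis-aligned stretch.
theta = 0.3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
F = Rz @ np.diag([2.0, 1.0, 0.5])
R, S = polar_decompose(F)
```

Separating rotation from stretch this way is what makes large rotations representable without the artifacts of blending raw deformation gradients linearly.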
A Revisit of Shape Editing Techniques: from the Geometric to the Neural Viewpoint
3D shape editing is widely used in a range of applications such as movie
production, computer games and computer aided design. It is also a popular
research topic in computer graphics and computer vision. In past decades,
researchers have developed a series of editing methods to make the editing
process faster, more robust, and more reliable. Traditionally, the deformed
shape is determined by the optimal transformation and weights for an energy
term. With increasing availability of 3D shapes on the Internet, data-driven
methods were proposed to improve the editing results. More recently, as deep
neural networks became popular, many deep learning based editing methods have
been developed in this field, which are naturally data-driven. We survey
recent research from the geometric viewpoint to the emerging neural
deformation techniques, and categorize the methods into organic shape editing
methods and man-made model editing methods. Both traditional methods and
recent neural network based methods are reviewed.