Robust Non-Rigid Registration with Reweighted Position and Transformation Sparsity
Non-rigid registration is challenging because it is ill-posed with high
degrees of freedom and is thus sensitive to noise and outliers. We propose a
robust non-rigid registration method using reweighted sparsities on position
and transformation to estimate the deformations between 3-D shapes. We
formulate the energy function with position and transformation sparsity on both
the data term and the smoothness term, and define the smoothness constraint
using local rigidity. The double sparsity based non-rigid registration model is
enhanced with a reweighting scheme, and solved by decomposing the model into
four alternately optimized subproblems, each with an exact solution and
guaranteed convergence. Experimental results on both public datasets and real
scanned datasets show that our method outperforms the state-of-the-art methods
and is more robust to noise and outliers than conventional non-rigid
registration methods.
Comment: IEEE Transactions on Visualization and Computer Graphics
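The reweighting idea follows a classical pattern: an L1-style sparse penalty can be minimized by alternating a weighted least-squares solve with a weight update w_i = 1 / (|r_i| + eps). The toy robust line fit below illustrates only that pattern; it is not the paper's double-sparsity registration solver, and all names and parameters are illustrative.

```python
import numpy as np

def irls_sparse_fit(A, b, n_iters=50, eps=1e-6):
    """Minimize an L1-like sparse residual penalty sum_i |(A x - b)_i| by
    iteratively reweighted least squares: the weights w_i = 1 / (|r_i| + eps)
    turn the L1 objective into a sequence of weighted L2 solves."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]   # plain least-squares init
    for _ in range(n_iters):
        r = A @ x - b
        w = 1.0 / (np.abs(r) + eps)            # reweighting step
        AW = A.T * w                           # A^T W without forming diag(W)
        x = np.linalg.solve(AW @ A, AW @ b)    # weighted least-squares step
    return x

# Line fit y = 2 t + 1 with 10% gross outliers.
t = np.linspace(0.0, 1.0, 100)
A = np.stack([t, np.ones_like(t)], axis=1)
b = 2.0 * t + 1.0
b[::10] += 5.0                                 # gross outliers
x = irls_sparse_fit(A, b)
```

Because the reweighting downweights large residuals, the outliers barely pull the fit, whereas an ordinary least-squares fit would be biased upward by them.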
Supersymmetry and Goldstino-like Mode in Bose-Fermi Mixtures
Supersymmetry is assumed to be a basic symmetry of the world in many high
energy theories, but none of the super partners of any known elementary
particle has been observed yet. We argue that supersymmetry can also be
realized and studied in ultracold atomic systems with a mixture of bosons and
fermions, with properly tuned interactions and single particle dispersion. We
further show that in such non-relativistic systems supersymmetry is either
spontaneously broken, or explicitly broken by a chemical potential difference
between the bosons and fermions. In both cases the system supports a sharp
fermionic collective mode or the so-called Goldstino, due to supersymmetry. We
also discuss possible ways to detect the Goldstino mode experimentally.
Comment: 4 pages. V4: published version
Mesh-based Autoencoders for Localized Deformation Component Analysis
Spatially localized deformation components are very useful for shape analysis
and synthesis in 3D geometry processing. Several methods have recently been
developed with the aim of extracting intuitive and interpretable deformation
components. However, these techniques suffer from fundamental limitations
especially for meshes with noise or large-scale deformations, and may not
always be able to identify important deformation components. In this paper we
propose a novel mesh-based autoencoder architecture that is able to cope with
meshes with irregular topology. We introduce sparse regularization in this
framework, which along with convolutional operations, helps localize
deformations. Our framework is capable of extracting localized deformation
components from mesh data sets with large-scale deformations and is robust to
noise. It also provides a nonlinear approach to reconstruction of meshes using
the extracted basis, which is more effective than the current linear
combination approach. Extensive experiments show that our method outperforms
state-of-the-art methods in both qualitative and quantitative evaluations.
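The effect of sparse regularization on component localization can be illustrated with classical sparse dictionary learning: factor the data matrix into codes and components while soft-thresholding the components so each keeps a small support. This is a hedged stand-in for the paper's convolutional mesh autoencoder, not its actual network; all hyperparameters are illustrative.

```python
import numpy as np

def sparse_components(X, k=4, lam=0.1, lr=0.02, n_iters=500, seed=0):
    """Toy sparse component extraction: factor X ~= Z @ C, applying an L1
    soft-threshold (proximal step) to C so each component stays sparse.
    Classical proximal gradient descent, illustrative only."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Z = 0.1 * rng.normal(size=(n, k))          # per-shape codes
    C = 0.1 * rng.normal(size=(k, d))          # deformation components
    for _ in range(n_iters):
        R = Z @ C - X                          # reconstruction residual
        Z = Z - lr * (R @ C.T)                 # gradient step on codes
        C = C - lr * (Z.T @ R)                 # gradient step on components
        C = np.sign(C) * np.maximum(np.abs(C) - lr * lam, 0.0)  # L1 prox
    return Z, C
```

On low-rank data the sparse factor recovers most of the signal energy while the L1 prox zeroes out small entries, which is the mechanism that drives localization in the full method.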
High-Quality Animatable Dynamic Garment Reconstruction from Monocular Videos
Much progress has been made in reconstructing garments from an image or a
video. However, none of the existing works meets the expectation of digitizing
high-quality animatable dynamic garments that can be adjusted to various unseen
poses. In this paper, we propose the first method to recover high-quality
animatable dynamic garments from monocular videos without depending on scanned
data. To generate reasonable deformations for various unseen poses, we propose
a learnable garment deformation network that formulates the garment
reconstruction task as a pose-driven deformation problem. To alleviate the
ambiguity of estimating 3D garments from monocular videos, we design a
multi-hypothesis deformation module that learns spatial representations of
multiple plausible deformations. Experimental results on several public
datasets demonstrate that our method can reconstruct high-quality dynamic
garments with coherent surface details, which can be easily animated under
unseen poses. The code will be provided for research purposes.
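One plausible reading of a multi-hypothesis deformation module (a hedged sketch, not the paper's architecture) is a per-vertex softmax blend of several candidate deformations; the shapes and the blending rule below are illustrative assumptions.

```python
import numpy as np

def blend_hypotheses(offsets, logits):
    """Blend K candidate per-vertex deformations with per-vertex softmax
    weights. offsets: (K, V, 3) candidate vertex displacements;
    logits: (K, V) per-vertex scores for each hypothesis."""
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w = w / w.sum(axis=0, keepdims=True)         # softmax over the K hypotheses
    return (w[..., None] * offsets).sum(axis=0)  # (V, 3) blended deformation
```

With equal logits this reduces to a plain average of the hypotheses; learned logits would let the module favor different deformations in different garment regions.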
FOF: Learning Fourier Occupancy Field for Monocular Real-time Human Reconstruction
The advent of deep learning has led to significant progress in monocular
human reconstruction. However, existing representations, such as parametric
models, voxel grids, meshes and implicit neural representations, have
difficulties achieving high-quality results and real-time speed at the same
time. In this paper, we propose the Fourier Occupancy Field (FOF), a novel,
powerful, efficient, and flexible 3D representation for monocular, real-time,
and accurate human reconstruction. The FOF represents a 3D object as a 2D field
orthogonal to the view direction: at each 2D position, the occupancy of the
object along the view direction is compactly represented by the first few terms
of its Fourier series, which retains the topology and neighborhood
relation in the 2D domain. A FOF can be stored as a multi-channel image, which
is compatible with 2D convolutional neural networks and can bridge the gap
between 3D geometries and 2D images. The FOF is very flexible and extensible,
e.g., parametric models can be easily integrated into a FOF as a prior to
generate more robust results. Based on the FOF, we design the first 30+ FPS
high-fidelity real-time monocular human reconstruction framework. We
demonstrate the potential of FOF on both public datasets and real captured data.
The code will be released for research purposes.
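The per-ray encoding described above can be sketched in a few lines: each pixel's occupancy along the depth axis is projected onto a truncated Fourier basis, yielding a multi-channel coefficient image, and is decoded by evaluating the series and thresholding. This is a minimal sketch assuming a depth axis normalized to [0, 1); the paper's exact normalization may differ.

```python
import numpy as np

def fof_encode(occ, n_terms=8):
    """Encode per-pixel occupancy along the view axis with the first few
    Fourier-series terms. occ: (H, W, D) binary occupancy sampled along a
    depth axis normalized to [0, 1); returns (H, W, 2*n_terms+1)
    coefficients plus the basis used."""
    D = occ.shape[-1]
    z = (np.arange(D) + 0.5) / D                 # depth sample positions
    basis = [np.ones(D)]
    for k in range(1, n_terms + 1):
        basis.append(np.cos(2.0 * np.pi * k * z))
        basis.append(np.sin(2.0 * np.pi * k * z))
    B = np.stack(basis, axis=0)                  # (2*n_terms+1, D)
    coeff = occ @ B.T * (2.0 / D)                # Fourier coefficients a_k, b_k
    coeff[..., 0] *= 0.5                         # DC term: a_0 / 2
    return coeff, B

def fof_decode(coeff, B):
    """Evaluate the truncated series and threshold to recover occupancy."""
    return (coeff @ B) > 0.5
```

The coefficient tensor is exactly the "multi-channel image" the abstract mentions (here 17 channels for 8 harmonics), so it can be predicted directly by a 2D convolutional network; only the few samples near occupancy boundaries are lost to truncation ripple.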