OperatorNet: Recovering 3D Shapes From Difference Operators
This paper proposes a learning-based framework for reconstructing 3D shapes
from functional operators, compactly encoded as small-sized matrices. To this
end we introduce a novel neural architecture, called OperatorNet, which takes
as input a set of linear operators representing a shape and produces its 3D
embedding. We demonstrate that this approach significantly outperforms previous
purely geometric methods for the same problem. Furthermore, we introduce a
novel functional operator, which encodes the extrinsic or pose-dependent shape
information, and thus complements purely intrinsic pose-oblivious operators,
such as the classical Laplacian. Coupled with this novel operator, our
reconstruction network achieves very high reconstruction accuracy, even in the
presence of incomplete information about a shape, given a soft or functional
map expressed in a reduced basis. Finally, we demonstrate that the
multiplicative functional algebra enjoyed by these operators can be used to
synthesize entirely new unseen shapes, in the context of shape interpolation
and shape analogy applications.
Comment: Accepted to ICCV 201
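The functional algebra the abstract refers to can be illustrated without any learned network. Below is a minimal numpy sketch (not OperatorNet itself, which decodes operators with a neural architecture): two toy "shapes" are encoded as small graph Laplacians, and interpolation and analogy are carried out directly in operator space. The `path_laplacian` helper and the edge weights are illustrative assumptions.

```python
import numpy as np

def path_laplacian(n, weight=1.0):
    # Graph Laplacian of a path with n nodes and uniform edge weight --
    # a stand-in for a shape's intrinsic Laplace operator, reduced to a
    # small matrix as in the operator encoding described above.
    A = np.zeros((n, n))
    for i in range(n - 1):
        A[i, i + 1] = A[i + 1, i] = weight
    return np.diag(A.sum(axis=1)) - A

# Two toy "shapes", each compactly encoded as a small operator.
L_a = path_laplacian(5, weight=1.0)
L_b = path_laplacian(5, weight=2.0)

# Shape interpolation in operator space: a convex combination of operators.
t = 0.5
L_mid = (1 - t) * L_a + t * L_b

# Shape analogy A : B :: C : ? -- transfer the operator difference to C.
L_c = path_laplacian(5, weight=3.0)
L_d = L_c + (L_b - L_a)
```

Because the Laplacian here is linear in the edge weights, `L_mid` is exactly the operator of the "halfway" shape; in the paper, a decoder network would map such synthesized operators back to 3D embeddings.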
Mesh-based Autoencoders for Localized Deformation Component Analysis
Spatially localized deformation components are very useful for shape analysis
and synthesis in 3D geometry processing. Several methods have recently been
developed, with an aim to extract intuitive and interpretable deformation
components. However, these techniques suffer from fundamental limitations,
especially for meshes with noise or large-scale deformations, and may not
always identify the important deformation components. In this paper we
propose a novel mesh-based autoencoder architecture that is able to cope with
meshes with irregular topology. We introduce sparse regularization in this
framework, which along with convolutional operations, helps localize
deformations. Our framework is capable of extracting localized deformation
components from mesh data sets with large-scale deformations and is robust to
noise. It also provides a nonlinear approach to reconstruction of meshes using
the extracted basis, which is more effective than the current linear
combination approach. Extensive experiments show that our method outperforms
state-of-the-art methods in both qualitative and quantitative evaluations.
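The sparse-regularization idea above can be sketched concretely. The snippet below is a hedged illustration, not the authors' code: a group-sparsity (L2,1-style) penalty over the per-vertex displacements of each deformation component, which makes zeroing out entire vertices cheap and thereby encourages spatially localized components. The array shapes and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def group_sparsity_penalty(components, lam=0.1):
    # components: (n_components, n_vertices, 3) array of per-vertex
    # displacements for each deformation component.
    # L2 norm over each vertex's 3D displacement, summed over vertices
    # and components: an L2,1 penalty that prefers components touching
    # few vertices (localized) over components spread everywhere.
    per_vertex = np.linalg.norm(components, axis=2)
    return lam * per_vertex.sum()

# A localized component: only one vertex moves, by (0, 3, 4).
comp_local = np.zeros((1, 10, 3))
comp_local[0, 2] = [0.0, 3.0, 4.0]

# A spread-out component with the same total squared energy (25),
# distributed over all 10 vertices.
comp_spread = np.zeros((1, 10, 3))
comp_spread[0, :, 0] = np.sqrt(2.5)
```

Under this penalty the spread-out component costs more than the localized one despite having identical energy, which is the mechanism that drives localization during training.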
Lifting from the Deep: Convolutional 3D Pose Estimation from a Single Image
We propose a unified formulation for the problem of 3D human pose estimation
from a single raw RGB image that reasons jointly about 2D joint estimation and
3D pose reconstruction to improve both tasks. We take an integrated approach
that fuses probabilistic knowledge of 3D human pose with a multi-stage CNN
architecture and uses the knowledge of plausible 3D landmark locations to
refine the search for better 2D locations. The entire process is trained
end-to-end, is extremely efficient, and obtains state-of-the-art results on
Human3.6M, outperforming previous approaches on both 2D and 3D errors.
Comment: Paper presented at CVPR 1
Data based identification and prediction of nonlinear and complex dynamical systems
We thank Dr. R. Yang (formerly at ASU), Dr. R.-Q. Su (formerly at ASU), and Mr. Zhesi Shen for their contributions to a number of original papers on which this Review is partly based. This work was supported by ARO under Grant No. W911NF-14-1-0504. W.-X. Wang was also supported by NSFC under Grants No. 61573064 and No. 61074116, as well as by the Fundamental Research Funds for the Central Universities, Beijing Nova Programme.
Peer reviewed Postprint
Deformable Shape Completion with Graph Convolutional Autoencoders
The availability of affordable and portable depth sensors has made scanning
objects and people simpler than ever. However, dealing with occlusions and
missing parts is still a significant challenge. The problem of reconstructing a
(possibly non-rigidly moving) 3D object from a single or multiple partial scans
has received increasing attention in recent years. In this work, we propose a
novel learning-based method for the completion of partial shapes. Unlike the
majority of existing approaches, our method focuses on objects that can undergo
non-rigid deformations. The core of our method is a variational autoencoder
with graph convolutional operations that learns a latent space for complete
realistic shapes. At inference, we optimize to find the representation in this
latent space that best fits the generated shape to the known partial input. The
completed shape exhibits a realistic appearance on the unknown part. We show
promising results towards the completion of synthetic and real scans of human
body and face meshes exhibiting different styles of articulation and
partiality.
Comment: CVPR 201
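The inference-time step described above, optimizing a latent code so the decoded shape fits the known partial input, can be sketched with a toy linear decoder standing in for the trained graph-convolutional VAE. Everything in this sketch (the decoder matrix `D`, the latent size, the observation mask) is an illustrative assumption, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained decoder: a fixed linear map from a 3-d
# latent code to 12 "vertex" coordinates.
D = rng.normal(size=(12, 3))

z_true = np.array([1.0, -2.0, 0.5])
full_shape = D @ z_true               # the complete ground-truth shape
observed = np.zeros(12, dtype=bool)
observed[:8] = True                   # a partial scan: only 8 coords known
partial = full_shape[observed]

# Inference-time completion: gradient descent on the latent code z so the
# decoded shape matches the observed entries only.
Dm = D[observed]
step = 1.0 / np.linalg.norm(Dm, ord=2) ** 2   # safe step for this quadratic
z = np.zeros(3)
for _ in range(5000):
    grad = Dm.T @ (Dm @ z - partial)  # gradient of 0.5*||Dm z - partial||^2
    z -= step * grad

# Decoding the fitted latent code fills in the unobserved coordinates.
completed = D @ z
```

The key point the sketch preserves is that only the observed entries enter the loss, while the decoder's learned (here: fixed) structure determines the unobserved part.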
Reconstructing dynamical networks via feature ranking
Empirical data on real complex systems are becoming increasingly available.
Parallel to this is the need for new methods of reconstructing (inferring) the
topology of networks from time-resolved observations of their node-dynamics.
The methods based on physical insights often rely on strong assumptions about
the properties and dynamics of the scrutinized network. Here, we use the
insights from machine learning to design a new method of network reconstruction
that essentially makes no such assumptions. Specifically, we interpret the
available trajectories (data) as features, and use two independent feature
ranking approaches -- Random forest and RReliefF -- to rank the importance of
each node for predicting the value of each other node, which yields the
reconstructed adjacency matrix. We show that our method is fairly robust to
coupling strength, system size, trajectory length and noise. We also find that
the reconstruction quality strongly depends on the dynamical regime
- …
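The pipeline described above (treat each node's trajectory as features for predicting every other node, rank feature importance, and threshold the scores into an adjacency matrix) can be sketched end-to-end. The paper ranks features with Random forest and RReliefF; the stand-in below uses the magnitude of least-squares regression coefficients as the importance score, and the noisy linear diffusive dynamics are an illustrative assumption, not the paper's test systems.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth network to be recovered: a 5-node ring.
n = 5
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

# Illustrative node dynamics: noisy linear diffusion
# x(t+1) = W x(t) + noise, where W couples each node to its neighbours.
W = 0.7 * np.eye(n) + 0.1 * A
T = 10000
X = np.zeros((T, n))
X[0] = rng.normal(size=n)
for t in range(T - 1):
    X[t + 1] = W @ X[t] + 0.1 * rng.normal(size=n)

# Feature ranking (stand-in for Random forest / RReliefF): regress each
# node's next value on all nodes' current values; the magnitude of the
# off-diagonal coefficients scores how important node j is for node i.
coef, *_ = np.linalg.lstsq(X[:-1], X[1:], rcond=None)
scores = np.abs(coef.T)          # scores[i, j]: importance of j for i
np.fill_diagonal(scores, 0.0)

# Threshold the importance scores to obtain the reconstructed adjacency.
A_hat = (scores > 0.5 * scores.max()).astype(float)
```

With enough samples the edge scores separate cleanly from the non-edge scores, so a simple relative threshold recovers the ring; in harder dynamical regimes (as the abstract notes) this separation degrades.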