Learning quadrangulated patches for 3D shape parameterization and completion
We propose a novel 3D shape parameterization by surface patches, that are
oriented by 3D mesh quadrangulation of the shape. By encoding 3D surface detail
on local patches, we learn a patch dictionary that identifies principal surface
features of the shape. Unlike previous methods, we are able to encode surface
patches of variable size as determined by the user. We propose novel methods
for dictionary learning and patch reconstruction based on the query of a noisy
input patch with holes. We evaluate the patch dictionary towards various
applications in 3D shape inpainting, denoising and compression. Our method is
able to predict missing vertices and inpaint moderately sized holes. We
demonstrate a complete pipeline for reconstructing the 3D mesh from the patch
encoding. We validate our shape parameterization and reconstruction methods on
both synthetic shapes and real world scans. We show that our patch dictionary
performs successful shape completion of complicated surface textures.

Comment: To be presented at International Conference on 3D Vision 2017, 201
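The reconstruction step described above — querying the learned dictionary with a noisy input patch that has holes — can be viewed as a masked sparse-coding problem. The sketch below is a minimal illustration using a generic OMP-style greedy solver, not the paper's actual method; the dictionary layout, sparsity level, and function name are all assumptions.

```python
import numpy as np

def reconstruct_patch(D, patch, mask, k=5):
    """Fill holes in a noisy patch using a learned dictionary.

    D     : (d, n_atoms) patch dictionary (hypothetical layout).
    patch : (d,) observed patch values; entries where mask is False are holes.
    mask  : (d,) boolean array, True where the patch is observed.
    k     : sparsity level (number of dictionary atoms used).
    """
    Dm = D[mask]            # restrict atoms to the observed entries
    y = patch[mask]
    residual = y.copy()
    support = []
    for _ in range(k):      # greedy atom selection (OMP-style)
        corr = np.abs(Dm.T @ residual)
        corr[support] = -np.inf          # do not reselect atoms
        support.append(int(np.argmax(corr)))
        coeffs, *_ = np.linalg.lstsq(Dm[:, support], y, rcond=None)
        residual = y - Dm[:, support] @ coeffs
    # evaluate the sparse code on the FULL atoms, which fills the holes
    return D[:, support] @ coeffs
```

The key point of the sketch is that the sparse code is estimated only from the observed vertices, but applied to the complete atoms, so the missing entries are predicted from the dictionary's surface priors.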
An Adaptive Dictionary Learning Approach for Modeling Dynamical Textures
Video representation is an important and challenging task in the computer
vision community. In this paper, we assume that image frames of a moving scene
can be modeled as a Linear Dynamical System. We propose a sparse coding
framework, named adaptive video dictionary learning (AVDL), to model a video
adaptively. The developed framework is able to capture the dynamics of a moving
scene by exploring both sparse properties and the temporal correlations of
consecutive video frames. The proposed method is compared with state-of-the-art
video processing methods on several benchmark data sequences, which exhibit
appearance changes and heavy occlusions.
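The Linear Dynamical System assumption above — frames y_t generated as y_t = C x_t with latent dynamics x_{t+1} = A x_t — can be fit by a classic SVD-based procedure, as in standard dynamic-texture modeling. The sketch below shows that generic baseline, not the AVDL algorithm itself; the state dimension and variable names are assumptions.

```python
import numpy as np

def fit_lds(Y, n_states=5):
    """Fit a simple LDS  y_t = C x_t,  x_{t+1} = A x_t  to a matrix Y
    of vectorized frames (pixels x time) via truncated SVD."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]                        # observation matrix
    X = np.diag(s[:n_states]) @ Vt[:n_states]  # latent states over time
    # least-squares fit of the transition: X[:, 1:] ~= A @ X[:, :-1]
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])
    return A, C, X
```

A sparse-coding variant such as the paper's AVDL would replace the fixed SVD basis with an adaptively learned dictionary, but the temporal-correlation structure being exploited is the same transition fit shown here.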
Geometry-Aware Network for Non-Rigid Shape Prediction from a Single View
We propose a method for predicting the 3D shape of a deformable surface from
a single view. By contrast with previous approaches, we do not need a
pre-registered template of the surface, and our method is robust to the lack of
texture and partial occlusions. At the core of our approach is a
geometry-aware deep architecture that tackles the problem as usually done in
analytic solutions: first perform 2D detection of the mesh and then estimate a
3D shape that is geometrically consistent with the image. We train this
architecture in an end-to-end manner using a large dataset of synthetic
renderings of shapes under different levels of deformation, material
properties, textures and lighting conditions. We evaluate our approach on a
test split of this dataset and available real benchmarks, consistently
improving state-of-the-art solutions with a significantly lower computational
time.

Comment: Accepted at CVPR 201
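The two-stage pipeline described above — first detect the mesh in 2D, then estimate a 3D shape geometrically consistent with the image — rests on the pinhole relation between image points and 3D points. The snippet below illustrates only that generic back-projection step; the intrinsics K and per-vertex depths are assumed inputs for illustration, not part of the paper's architecture.

```python
import numpy as np

def backproject(uv, depth, K):
    """Lift detected 2D mesh vertices to 3D points consistent with the image.

    uv    : (n, 2) pixel coordinates of detected mesh vertices.
    depth : (n,) per-vertex depths (e.g. produced by a regressor).
    K     : (3, 3) camera intrinsics.
    """
    ones = np.ones((uv.shape[0], 1))
    # normalized viewing rays through each detected vertex
    rays = (np.linalg.inv(K) @ np.hstack([uv, ones]).T).T
    return rays * depth[:, None]   # scale each ray by its depth
```

Any 3D shape produced this way reprojects exactly onto the detected 2D vertices, which is one concrete sense in which a predicted shape can be "geometrically consistent with the image".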