DA Wand: Distortion-Aware Selection using Neural Mesh Parameterization
We present a neural technique for learning to select a local sub-region
around a point which can be used for mesh parameterization. Our framework is
motivated by interactive workflows for decaling, texturing, or painting on
surfaces. Our key idea is to incorporate segmentation
probabilities as weights of a classical parameterization method, implemented as
a novel differentiable parameterization layer within a neural network
framework. We train a segmentation network to select 3D regions that are
parameterized into 2D and penalized by the resulting distortion, giving rise to
segmentations which are distortion-aware. Following training, a user can use
our system to interactively select a point on the mesh and obtain a large,
meaningful region around the selection which induces a low-distortion
parameterization. Project page: https://threedle.github.io/DA-Wand/; code:
https://github.com/threedle/DA-Wand
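A minimal sketch of the core idea follows: soft segmentation probabilities weight a per-face parameterization distortion energy, so the selection loss becomes distortion-aware. The symmetric Dirichlet energy and all names here are illustrative assumptions, not the paper's differentiable parameterization layer.

```python
# Sketch (PyTorch): weight per-face distortion of a UV parameterization
# by soft segmentation probabilities. Energy choice and names are
# illustrative assumptions, not the paper's implementation.
import torch

def face_jacobians(v3d, uv, faces):
    # 2x2 Jacobian per triangle, from a local 2D frame on the surface to UV.
    e1 = v3d[faces[:, 1]] - v3d[faces[:, 0]]
    e2 = v3d[faces[:, 2]] - v3d[faces[:, 0]]
    x = e1 / e1.norm(dim=1, keepdim=True)          # local frame axis 1
    n = torch.cross(e1, e2, dim=1)
    y = torch.cross(n / n.norm(dim=1, keepdim=True), x, dim=1)
    # Edge matrix P in the local frame (columns = edges), so J = Q @ P^{-1}.
    P = torch.stack([torch.stack([(e1 * x).sum(1), (e2 * x).sum(1)], dim=1),
                     torch.stack([(e1 * y).sum(1), (e2 * y).sum(1)], dim=1)],
                    dim=1)
    q1 = uv[faces[:, 1]] - uv[faces[:, 0]]
    q2 = uv[faces[:, 2]] - uv[faces[:, 0]]
    Q = torch.stack([q1, q2], dim=2)
    return Q @ torch.linalg.inv(P)

def weighted_distortion(v3d, uv, faces, probs, areas):
    # probs: per-face selection probabilities from the segmentation network.
    J = face_jacobians(v3d, uv, faces)
    sym_dirichlet = (J ** 2).sum((1, 2)) + (torch.linalg.inv(J) ** 2).sum((1, 2))
    # Selected faces are the ones penalized by distortion (min 4 at isometry).
    return (probs * areas * (sym_dirichlet - 4.0)).sum() / (probs * areas).sum()
```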
Neural Surface Maps
Maps are arguably one of the most fundamental concepts used to define and operate on manifold surfaces in differential geometry. Accordingly, in geometry processing, maps are ubiquitous and are used in many core applications, such as parameterization, shape analysis, remeshing, and deformation. Unfortunately, most computational representations of surface maps do not lend themselves to manipulation and optimization, usually entailing hard, discrete problems. While algorithms exist to solve these problems, they are problem-specific, and a general framework for surface maps is still lacking. In this paper, we advocate considering neural networks as encodings of surface maps. Since neural networks can be composed with one another and are differentiable, we show it is easy to use them to define surfaces via atlases, compose them for surface-to-surface mappings, and optimize differentiable objectives relating to them, such as any notion of distortion, in a trivial manner. In our experiments, we represent surfaces by generating a neural map that approximates a UV parameterization of a 3D model. Then, we compose this map with other neural maps which we optimize with respect to distortion measures. We show that our formulation enables trivial optimization of rather elusive mapping tasks, such as maps between a collection of surfaces.
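A minimal sketch of encoding a surface map as a neural network: an MLP f: [0,1]^2 -> R^3 is overfit to samples of a UV parameterization, after which distortion objectives can be applied through autograd. Architecture, sampling, and loss are illustrative assumptions.

```python
# Sketch (PyTorch): a neural map f: [0,1]^2 -> R^3 overfit to samples of
# a UV parameterization. Because f is differentiable, distortion energies
# (and compositions of such maps) can be optimized directly by autograd.
import torch
import torch.nn as nn

class NeuralMap(nn.Module):
    def __init__(self, width=256, depth=6):
        super().__init__()
        layers, d = [], 2
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.Softplus()]
            d = width
        layers += [nn.Linear(d, 3)]
        self.net = nn.Sequential(*layers)

    def forward(self, uv):
        return self.net(uv)

def fit(f, uv_samples, xyz_samples, steps=2000):
    # Overfit the network to (uv, xyz) samples of a given parameterization.
    opt = torch.optim.Adam(f.parameters(), lr=1e-4)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((f(uv_samples) - xyz_samples) ** 2).mean()
        loss.backward()
        opt.step()
    return f

def dirichlet_energy(f, uv):
    # Differentiable distortion proxy: squared Frobenius norm of df/duv.
    # The sum trick yields per-sample Jacobians of shape (3, N, 2).
    J = torch.autograd.functional.jacobian(lambda u: f(u).sum(0), uv)
    return (J ** 2).sum() / uv.shape[0]
```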
TextDeformer: Geometry Manipulation using Text Guidance
We present a technique for automatically producing a deformation of an input
triangle mesh, guided solely by a text prompt. Our framework is capable of
deformations that produce both large, low-frequency shape changes, and small
high-frequency details. Our framework relies on differentiable rendering to
connect geometry to powerful pre-trained image encoders, such as CLIP and DINO.
Notably, updating mesh geometry by taking gradient steps through differentiable
rendering is notoriously challenging, commonly resulting in deformed meshes
with significant artifacts. These difficulties are amplified by noisy and
inconsistent gradients from CLIP. To overcome this limitation, we opt to
represent our mesh deformation through Jacobians, which update the deformation
in a global, smooth manner (rather than through locally sub-optimal steps). Our key
observation is that Jacobians are a representation that favors smoother, large
deformations, leading to a global relation between vertices and pixels, and
avoiding localized noisy gradients. Additionally, to ensure the resulting shape
is coherent from all 3D viewpoints, we encourage the deep features computed on
the 2D encoding of the rendering to be consistent for a given vertex from all
viewpoints. We demonstrate that our method can smoothly deform a wide variety
of source meshes according to target text prompts, achieving both large
modifications (e.g., to the body proportions of animals) and fine semantic
details, such as shoe laces on an army boot and the fine details of a face.
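A minimal sketch of the Jacobian representation: given per-face target Jacobians, vertex positions are recovered by a Poisson-style least-squares solve over mesh edges. TextDeformer optimizes the Jacobians through a differentiable solver; this classical, unweighted version is only an illustrative assumption.

```python
# Sketch (NumPy/SciPy): recover vertex positions from prescribed per-face
# Jacobians via a least-squares (Poisson-style) solve. A faithful solve
# would weight edges by face areas; that is omitted for brevity.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import lsqr

def solve_from_jacobians(verts, faces, J):
    # J[f] is 3x3; ask each deformed edge to match J[f] @ (rest edge).
    nv, nf = len(verts), len(faces)
    A = lil_matrix((3 * nf + 1, nv))
    b = np.zeros((3 * nf + 1, 3))
    for f, (i, j, k) in enumerate(faces):
        for r, (a0, a1) in enumerate([(i, j), (j, k), (k, i)]):
            A[3 * f + r, a0], A[3 * f + r, a1] = -1.0, 1.0
            b[3 * f + r] = J[f] @ (verts[a1] - verts[a0])
    A[3 * nf, 0] = 1.0                     # pin one vertex (fix translation)
    b[3 * nf] = verts[0]
    A = A.tocsr()
    # Solve the three coordinate systems independently.
    return np.stack([lsqr(A, b[:, c])[0] for c in range(3)], axis=1)
```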
Neural Semantic Surface Maps
We present an automated technique for computing a map between two genus-zero
shapes, which matches semantically corresponding regions to one another. Lack
of annotated data prohibits direct inference of 3D semantic priors; instead,
current state-of-the-art methods predominantly optimize geometric properties or
require varying amounts of manual annotation. To overcome the lack of annotated
training data, we distill semantic matches from pre-trained vision models: our
method renders the pair of 3D shapes from multiple viewpoints; the resulting
renders are then fed into an off-the-shelf image-matching method which
leverages a pretrained visual model to produce feature points. This yields
semantic correspondences, which can be projected back to the 3D shapes,
producing a raw matching that is inaccurate and inconsistent between different
viewpoints. These correspondences are refined and distilled into an
inter-surface map by a dedicated optimization scheme, which promotes
bijectivity and continuity of the output map. We illustrate that our approach
can generate semantic surface-to-surface maps, eliminating manual annotations
or any 3D training data requirement. Furthermore, it proves effective in
scenarios with high semantic complexity, where objects are non-isometrically
related, as well as in situations where they are nearly isometric.
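A minimal sketch of the back-projection step: pixel-space matches from an image matcher are lifted to vertex-vertex correspondences by snapping each matched pixel to the nearest projected vertex. The pinhole camera and the match list stand in for the renderer and the off-the-shelf matcher; occlusion handling is omitted.

```python
# Sketch (NumPy): lift 2D feature matches back onto the 3D shapes by
# snapping matched pixels to the nearest projected vertex. Camera model
# and matcher output format are illustrative assumptions.
import numpy as np

def project(verts, K, R, t):
    cam = verts @ R.T + t                  # world -> camera coordinates
    pix = cam @ K.T
    return pix[:, :2] / pix[:, 2:3]        # perspective divide

def lift_matches(matches_ab, verts_a, verts_b, cam_a, cam_b):
    # matches_ab: (M, 4) array of matched pixel pairs (ua, va, ub, vb);
    # cam_a, cam_b: (K, R, t) tuples for the two rendered views.
    pa, pb = project(verts_a, *cam_a), project(verts_b, *cam_b)
    corr = []
    for ua, va, ub, vb in matches_ab:
        ia = np.argmin(((pa - (ua, va)) ** 2).sum(1))
        ib = np.argmin(((pb - (ub, vb)) ** 2).sum(1))
        corr.append((ia, ib))              # raw vertex-vertex match
    return np.array(corr)                  # noisy; refined downstream
```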
Learning Delaunay Surface Elements for Mesh Reconstruction
We present a method for reconstructing triangle meshes from point clouds.
Existing learning-based methods for mesh reconstruction mostly generate
triangles individually, making it hard to create manifold meshes. We leverage
the properties of 2D Delaunay triangulations to construct a mesh from manifold
surface elements. Our method first estimates local geodesic neighborhoods
around each point. We then perform a 2D projection of these neighborhoods using
a learned logarithmic map. A Delaunay triangulation in this 2D domain is
guaranteed to produce a manifold patch, which we call a Delaunay surface
element. We synchronize the local 2D projections of neighboring elements to
maximize the manifoldness of the reconstructed mesh. Our results show that our
reconstructed meshes achieve better overall manifoldness than those produced by
current methods for reconstructing meshes with arbitrary topology.
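A minimal sketch of one such surface element: a k-nearest-neighbor patch (standing in for the learned geodesic neighborhood) is flattened by PCA (standing in for the learned logarithmic map) and triangulated in 2D, which is the step that guarantees a manifold patch.

```python
# Sketch (NumPy/SciPy): one "Delaunay surface element" around a point.
# PCA flattening and k-NN neighborhoods are stand-ins for the paper's
# learned log map and geodesic neighborhoods.
import numpy as np
from scipy.spatial import Delaunay, cKDTree

def delaunay_surface_element(points, center_idx, k=16):
    tree = cKDTree(points)
    _, nbrs = tree.query(points[center_idx], k=k)
    patch = points[nbrs] - points[nbrs].mean(0)
    # PCA: project onto the two dominant directions of the neighborhood.
    _, _, Vt = np.linalg.svd(patch, full_matrices=False)
    flat = patch @ Vt[:2].T                # 2D projection of the patch
    tris = Delaunay(flat).simplices        # 2D Delaunay: manifold patch
    return nbrs[tris]                      # triangles as global indices
```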
Differentiable Surface Triangulation
Triangle meshes remain the most popular data representation for surface geometry. This ubiquitous representation is essentially a hybrid one that decouples continuous vertex locations from the discrete topological triangulation. Unfortunately, the combinatorial nature of the triangulation prevents taking derivatives over the space of possible meshings of any given surface. As a result, to date, mesh processing and optimization techniques have been unable to truly take advantage of modular gradient descent components of modern optimization frameworks. In this work, we present a differentiable surface triangulation that enables optimization for any per-vertex or per-face differentiable objective function over the space of underlying surface triangulations. Our method builds on the result that any 2D triangulation can be achieved by a suitably perturbed weighted Delaunay triangulation. We translate this result into a computational algorithm by proposing a soft relaxation of the classical weighted Delaunay triangulation and optimizing over vertex weights and vertex locations. We extend the algorithm to 3D by decomposing shapes into developable sets and differentiably meshing each set with suitable boundary constraints. We demonstrate the efficacy of our method on various planar and surface meshes on a range of difficult-to-optimize objective functions. Our code can be found online: https://github.com/mrakotosaon/diff-surface-triangulation
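For context, a classical way to compute the *hard* weighted Delaunay triangulation the method builds on is the lifting construction: lift each point p with weight w to height |p|^2 - w and keep the lower convex hull. The sketch below shows this combinatorial step, which the paper replaces with a soft, differentiable relaxation.

```python
# Sketch (NumPy/SciPy): hard weighted Delaunay triangulation via the
# lifting construction. The paper's contribution is a soft relaxation of
# this step, optimized over vertex weights and locations.
import numpy as np
from scipy.spatial import ConvexHull

def weighted_delaunay(points2d, weights):
    lifted = np.c_[points2d, (points2d ** 2).sum(1) - weights]
    hull = ConvexHull(lifted)
    # Keep hull facets whose outward normal points downward (lower hull).
    lower = hull.equations[:, 2] < 0
    return hull.simplices[lower]           # triangles of the weighted DT
```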
Neural Cages for Detail-Preserving 3D Deformations
We propose a novel learnable representation for detail-preserving shape
deformation. The goal of our method is to warp a source shape to match the
general structure of a target shape, while preserving the surface details of
the source. Our method extends a traditional cage-based deformation technique,
where the source shape is enclosed by a coarse control mesh termed a "cage",
and translations prescribed on the cage vertices are interpolated to any point
on the source mesh via special weight functions. The use of this sparse cage
scaffolding enables preserving surface details regardless of the shape's
intricacy and topology. Our key contribution is a novel neural network
architecture for predicting deformations by controlling the cage. We
incorporate a differentiable cage-based deformation module in our architecture,
and train our network end-to-end. Our method can be trained with common
collections of 3D models in an unsupervised fashion, without any cage-specific
annotations. We demonstrate the utility of our method for synthesizing shape
variations and deformation transfer. Accepted for oral presentation at CVPR
2020; code available at https://github.com/yifita/deep_cage
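A minimal sketch of the underlying cage deformation: offsets prescribed on cage vertices are interpolated to the enclosed surface via generalized barycentric weights. Normalized inverse-distance weights below are a simple stand-in for the mean value coordinates typically used in cage-based deformation.

```python
# Sketch (NumPy): cage-based deformation. Inverse-distance weights are an
# illustrative stand-in for mean value coordinates; the paper's network
# predicts the cage and its deformation, not shown here.
import numpy as np

def cage_deform(points, cage_rest, cage_deformed, eps=1e-8):
    # weights[i, j]: influence of cage vertex j on source point i.
    d = np.linalg.norm(points[:, None] - cage_rest[None], axis=2)
    w = 1.0 / (d + eps)
    w /= w.sum(axis=1, keepdims=True)
    # Interpolate cage vertex translations onto the enclosed surface.
    return points + w @ (cage_deformed - cage_rest)
```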
Neural Convolutional Surfaces
This work is concerned with a representation of shapes that disentangles
fine, local and possibly repeating geometry, from global, coarse structures.
Achieving such disentanglement leads to two unrelated advantages: i) a
significant compression in the number of parameters required to represent a
given geometry; ii) the ability to manipulate either global geometry, or local
details, without harming the other. At the core of our approach lies a novel
pipeline and neural architecture, which are optimized to represent one specific
atlas, representing one 3D surface. Our pipeline and architecture are designed
so that disentanglement of global geometry from local details is accomplished
through optimization, in a completely unsupervised manner. We show that this
approach achieves better neural shape compression than the state of the art, as
well as enabling manipulation and transfer of shape details. Project page at
http://geometry.cs.ucl.ac.uk/projects/2022/cnnmaps/
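A minimal sketch of the coarse/fine split: two networks are fit to one atlas, a coarse map plus a small residual detail map, with a penalty keeping the residual small so the global shape lands in the coarse network. Capacities and the penalty weight are illustrative assumptions (the paper uses a CNN-based detail model).

```python
# Sketch (PyTorch): disentangle coarse structure from fine detail by
# fitting a coarse map plus a low-capacity residual to one atlas.
import torch
import torch.nn as nn

def mlp(d_in, d_out, width, depth):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, width), nn.ReLU()]
        d = width
    return nn.Sequential(*layers, nn.Linear(d, d_out))

coarse = mlp(2, 3, width=64, depth=4)     # global, low-frequency shape
detail = mlp(2, 3, width=32, depth=2)     # local residual displacements

def fit(uv, xyz, steps=2000, lam=0.1):
    opt = torch.optim.Adam([*coarse.parameters(), *detail.parameters()], 1e-3)
    for _ in range(steps):
        opt.zero_grad()
        res = detail(uv)
        # Residual penalty pushes global geometry into the coarse network.
        loss = ((coarse(uv) + res - xyz) ** 2).mean() + lam * (res ** 2).mean()
        loss.backward()
        opt.step()
```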