3,959 research outputs found
Neural ShDF: Reviving an Efficient and Consistent Mesh Segmentation Method
Partitioning a polygonal mesh into meaningful parts can be challenging. Many
applications require decomposing such structures for further processing in
computer graphics. In the last decade, several methods were proposed to tackle
this problem, at the cost of intensive computation times. Recently, machine
learning has proven to be effective for the segmentation task on 3D structures.
Nevertheless, these state-of-the-art methods are often hardly generalizable and
require dividing the learned model into several specific classes of objects to
avoid overfitting. We present a data-driven approach leveraging deep learning
to encode a mapping function prior to mesh segmentation for multiple
applications. Our network reproduces a neighborhood map informed by the
\textsl{Shape Diameter Function} (SDF) method, exploiting similarities among
vertex neighborhoods. Our approach is resolution-agnostic as we downsample the
input meshes and query the full-resolution structure solely for neighborhood
contributions. Using our predicted SDF values, we can inject the resulting
structure into a graph-cut algorithm to generate an efficient and robust mesh
segmentation while considerably reducing the required computation times.
Comment: 9 pages, 13 figures, and 3 tables. Short paper and poster published and presented at SIGGRAPH 202
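As a rough illustration of the pipeline the abstract refers to, the sketch below turns per-face SDF values into a two-part segmentation by soft-clustering them with a Gaussian mixture and smoothing the labels with a binary graph cut. This is the classical SDF-plus-graph-cut recipe, not the paper's neural network; the inputs `sdf` and `face_adjacency` are assumed to be available, and PyMaxflow and scikit-learn stand in for the authors' solver.

```python
# Minimal sketch: per-face SDF values -> two-part segmentation via soft
# clustering + a binary graph cut (classic SDF pipeline, not the paper's
# neural network). Assumed inputs: `sdf` (one value per face) and
# `face_adjacency` (pairs of indices of faces sharing an edge).
import numpy as np
import maxflow                              # pip install PyMaxflow
from sklearn.mixture import GaussianMixture

def segment_two_parts(sdf, face_adjacency, smoothness=2.0):
    sdf = np.asarray(sdf, dtype=float).reshape(-1, 1)
    n_faces = len(sdf)

    # Unary terms: negative log-posterior of a 2-component GMM over SDF values.
    gmm = GaussianMixture(n_components=2, random_state=0).fit(sdf)
    post = np.clip(gmm.predict_proba(sdf), 1e-6, 1.0)
    cost = -np.log(post)                    # cost[i, k] = cost of giving face i label k

    g = maxflow.Graph[float]()
    nodes = g.add_nodes(n_faces)
    for i in range(n_faces):
        # Assigning label 0 (source side) cuts the sink edge and pays cost[i, 0];
        # label 1 cuts the source edge and pays cost[i, 1].
        g.add_tedge(nodes[i], cost[i, 1], cost[i, 0])
    for i, j in face_adjacency:
        # Pairwise smoothness: penalize label changes across adjacent faces.
        g.add_edge(nodes[i], nodes[j], smoothness, smoothness)

    g.maxflow()
    return np.array([g.get_segment(nodes[i]) for i in range(n_faces)])

# Toy usage with random stand-in data:
rng = np.random.default_rng(0)
sdf = np.concatenate([rng.normal(0.2, 0.02, 50), rng.normal(0.8, 0.02, 50)])
adjacency = [(i, i + 1) for i in range(99)]
labels = segment_two_parts(sdf, adjacency)
```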
Learning Controllable 3D Diffusion Models from Single-view Images
Diffusion models have recently become the de-facto approach for generative
modeling in the 2D domain. However, extending diffusion models to 3D is
challenging due to the difficulties in acquiring 3D ground truth data for
training. On the other hand, 3D GANs that integrate implicit 3D representations
into GANs have shown remarkable 3D-aware generation when trained only on
single-view image datasets. However, 3D GANs do not provide straightforward
ways to precisely control image synthesis. To address these challenges, we
present Control3Diff, a 3D diffusion model that combines the strengths of
diffusion models and 3D GANs for versatile, controllable 3D-aware image
synthesis for single-view datasets. Control3Diff explicitly models the
underlying latent distribution (optionally conditioned on external inputs),
thus enabling direct control during the diffusion process. Moreover, our
approach is general and applicable to any type of controlling input, allowing
us to train it with the same diffusion objective without any auxiliary
supervision. We validate the efficacy of Control3Diff on standard image
generation benchmarks, including FFHQ, AFHQ, and ShapeNet, using various
conditioning inputs such as images, sketches, and text prompts. Please see the
project website (\url{https://jiataogu.me/control3diff}) for video comparisons.
Comment: work in progress
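For intuition, here is a hedged sketch of the general idea the abstract describes: fit a conditional DDPM to the latent codes of a pretrained 3D-aware GAN so that sampling a latent (optionally given a condition) and decoding it with the frozen GAN yields controllable images. The MLP denoiser, the dimensions, and the stand-in `z0`/`cond` tensors are illustrative assumptions, not the released Control3Diff architecture.

```python
# Generic sketch: a conditional epsilon-prediction diffusion model trained on
# the latent codes of a (frozen, not shown) 3D-aware GAN. All module names and
# sizes here are illustrative stand-ins.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas_bar = torch.cumprod(1.0 - betas, dim=0)

class LatentDenoiser(nn.Module):
    """Predicts the noise added to a GAN latent, given timestep and condition."""
    def __init__(self, latent_dim=512, cond_dim=64, hidden=1024):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + cond_dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, latent_dim),
        )

    def forward(self, z_t, t, cond):
        t_emb = (t.float() / T).unsqueeze(-1)          # crude timestep embedding
        return self.net(torch.cat([z_t, t_emb, cond], dim=-1))

def ddpm_loss(model, z0, cond):
    """Standard epsilon-prediction objective on clean latents z0."""
    b = z0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(z0)
    a_bar = alphas_bar[t].unsqueeze(-1)
    z_t = a_bar.sqrt() * z0 + (1 - a_bar).sqrt() * noise
    return ((model(z_t, t, cond) - noise) ** 2).mean()

# Toy training step on stand-in latents/conditions:
model = LatentDenoiser()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
z0 = torch.randn(16, 512)     # latents sampled/inverted from the frozen 3D GAN
cond = torch.randn(16, 64)    # e.g. an image, sketch, or text embedding
loss = ddpm_loss(model, z0, cond)
opt.zero_grad(); loss.backward(); opt.step()
```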
Adversarial Curriculum Graph Contrastive Learning with Pair-wise Augmentation
Graph contrastive learning (GCL) has emerged as a pivotal technique in the
domain of graph representation learning. A crucial aspect of effective GCL is
the caliber of generated positive and negative samples, which is intrinsically
dictated by their resemblance to the original data. Nevertheless, precise
control over similarity during sample generation presents a formidable
challenge, often impeding the effective discovery of representative graph
patterns. To address this challenge, we propose an innovative framework:
Adversarial Curriculum Graph Contrastive Learning (ACGCL), which capitalizes on
the merits of pair-wise augmentation to engender graph-level positive and
negative samples with controllable similarity, alongside subgraph contrastive
learning to discern effective graph patterns therein. Within the ACGCL
framework, we have devised a novel adversarial curriculum training methodology
that facilitates progressive learning by sequentially increasing the difficulty
of distinguishing the generated samples. Notably, this approach transcends the
prevalent sparsity issue inherent in conventional curriculum learning
strategies by adaptively concentrating on more challenging training data.
Finally, a comprehensive assessment of ACGCL is conducted through extensive
experiments on six well-known benchmark datasets, wherein ACGCL conspicuously
surpasses a set of state-of-the-art baselines.
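The sketch below illustrates the basic ingredients in a generic form: two augmented graph views, an InfoNCE contrastive loss, and a curriculum that raises the augmentation strength over epochs so the generated samples become harder to distinguish. It is not the paper's pair-wise augmentation or adversarial scheduler; the edge-dropping augmentation and the mean-aggregation "encoder" are placeholders.

```python
# Generic sketch: contrastive learning over two augmented graph views with a
# linear curriculum over augmentation strength. Placeholder encoder and data.
import torch
import torch.nn.functional as F

def drop_edges(edge_index, drop_prob):
    """Randomly remove a fraction of edges (edge_index: 2 x E tensor)."""
    keep = torch.rand(edge_index.shape[1]) > drop_prob
    return edge_index[:, keep]

def nt_xent(z1, z2, temperature=0.5):
    """InfoNCE loss between two views; matching rows are positives."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.shape[0])
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def curriculum_drop_prob(epoch, total_epochs, easy=0.1, hard=0.5):
    """Linear curriculum: start with mild augmentation, end with strong."""
    return easy + (hard - easy) * epoch / max(total_epochs - 1, 1)

def encode(x, edge_index):
    # Placeholder for a real GNN encoder (e.g. GCN/GIN): one hop of mean aggregation.
    agg = torch.zeros_like(x).index_add_(0, edge_index[1], x[edge_index[0]])
    deg = torch.zeros(x.shape[0]).index_add_(0, edge_index[1],
                                             torch.ones(edge_index.shape[1]))
    return x + agg / deg.clamp(min=1).unsqueeze(-1)

x = torch.randn(32, 16)                               # node features
edge_index = torch.randint(0, 32, (2, 200))           # random edges
for epoch in range(10):
    p = curriculum_drop_prob(epoch, 10)
    z1 = encode(x, drop_edges(edge_index, p))
    z2 = encode(x, drop_edges(edge_index, p))
    loss = nt_xent(z1, z2)
    # With a trainable encoder, loss.backward() and an optimizer step go here.
```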
Drawing Clustered Graphs as Topographic Maps
The visualization of clustered graphs is an essential tool for the analysis of networks, in particular, social networks, in which clustering techniques like community detection can reveal various structural properties. In this paper, we show how clustered graphs can be drawn as topographic maps, a type of map easily understandable by users not familiar with information visualization. Elevation levels of connected entities correspond to the nested structure of the cluster hierarchy. We present methods for initial node placement and describe a tree-mapping-based algorithm that produces an area-efficient layout. Given this layout, a triangular irregular mesh is generated that is used to extract the elevation data for rendering the map. In addition, the mesh enables the routing of edges based on the topographic features of the map.
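A minimal sketch of the rendering idea, under simplifying assumptions: each node's elevation is its depth in the cluster hierarchy, node positions are taken as given (the paper's tree-map placement and edge routing are not reproduced), and matplotlib's Delaunay-based triangulation stands in for the triangular irregular mesh used to draw filled contour levels.

```python
# Sketch: elevation from cluster-hierarchy depth, a triangular mesh over node
# positions, and filled contour levels rendered as a "topographic map".
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.tri as mtri

# Stand-in data: 2D node positions and each node's depth in the cluster hierarchy.
rng = np.random.default_rng(1)
pos = rng.uniform(0, 10, size=(60, 2))
depth = rng.integers(1, 4, size=60)          # deeper nesting -> higher elevation

# Triangular irregular mesh over the node positions (Delaunay triangulation).
tri = mtri.Triangulation(pos[:, 0], pos[:, 1])

# Render elevation as discrete topographic levels, one per hierarchy depth.
fig, ax = plt.subplots()
levels = np.arange(0, depth.max() + 2) - 0.5
ax.tricontourf(tri, depth.astype(float), levels=levels, cmap="terrain")
ax.tricontour(tri, depth.astype(float), levels=levels, colors="k", linewidths=0.5)
ax.plot(pos[:, 0], pos[:, 1], "k.", markersize=3)     # the graph's nodes
ax.set_aspect("equal")
plt.savefig("topographic_map.png", dpi=150)
```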
Late-Constraint Diffusion Guidance for Controllable Image Synthesis
Diffusion models, either with or without text condition, have demonstrated
impressive capability in synthesizing photorealistic images given a few or even
no words. These models may not fully satisfy user needs, as normal users or
artists intend to control the synthesized images with specific guidance, like
overall layout, color, structure, object shape, and so on. To adapt diffusion
models for controllable image synthesis, several methods have been proposed to
incorporate the required conditions as regularization upon the intermediate
features of the diffusion denoising network. These methods, known as
early-constraint ones in this paper, have difficulty handling multiple
conditions with a single solution. They typically train separate models for
each specific condition, which incurs a high training cost and results in
non-generalizable solutions. To address these difficulties, we propose a new
approach, namely late-constraint: we leave the diffusion network unchanged, but
constrain its output to be aligned with the required conditions. Specifically,
we train a lightweight condition adapter to establish the correlation between
external conditions and internal representations of diffusion models. During
the iterative denoising process, the conditional guidance is sent into the
corresponding condition adapter to manipulate the sampling process with the
established correlation. We further equip the introduced late-constraint
strategy with a timestep resampling method and an early stopping technique,
which boost the quality of the synthesized images while complying with the
guidance. Our method outperforms the existing early-constraint methods and
generalizes better to unseen conditions.
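As a rough illustration of the late-constraint idea, the sketch below keeps a (hypothetical) frozen epsilon-prediction denoiser untouched and steers DDPM sampling with a small condition adapter: at each step, the gradient of the adapter's mismatch with the target condition nudges the sample, in the spirit of classifier guidance. The adapter architecture, the `eps_model` placeholder, and the guidance scale are assumptions; the paper's timestep resampling and early stopping are not reproduced.

```python
# Generic sketch of late-constraint guidance: the diffusion network is frozen,
# and a small adapter maps the current clean-sample estimate to the condition
# space; the gradient of the mismatch w.r.t. x_t steers each denoising step.
import torch
import torch.nn as nn

T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = torch.cumprod(alphas, dim=0)

class ConditionAdapter(nn.Module):
    """Tiny network mapping a flattened sample estimate to the condition space."""
    def __init__(self, x_dim, cond_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 256), nn.SiLU(),
                                 nn.Linear(256, cond_dim))
    def forward(self, x):
        return self.net(x.flatten(1))

@torch.no_grad()
def guided_sampling(eps_model, adapter, cond_target, shape, guidance_scale=1.0):
    x = torch.randn(shape)
    for t in reversed(range(T)):
        tt = torch.full((shape[0],), t, dtype=torch.long)
        eps = eps_model(x, tt)                       # frozen denoiser, unchanged
        a_bar = alphas_bar[t]

        # Late constraint: gradient of the condition mismatch w.r.t. x_t,
        # evaluated on the predicted clean sample x0.
        with torch.enable_grad():
            x_in = x.detach().requires_grad_(True)
            x0_hat = (x_in - (1 - a_bar).sqrt() * eps) / a_bar.sqrt()
            loss = ((adapter(x0_hat) - cond_target) ** 2).mean()
            grad = torch.autograd.grad(loss, x_in)[0]

        # Standard DDPM posterior mean, shifted against the mismatch gradient.
        mean = (x - betas[t] / (1 - a_bar).sqrt() * eps) / alphas[t].sqrt()
        mean = mean - guidance_scale * grad
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + betas[t].sqrt() * noise
    return x

# Toy usage with stand-in modules (a real setup would load a pretrained denoiser).
eps_model = lambda x, t: torch.zeros_like(x)         # placeholder frozen denoiser
adapter = ConditionAdapter(x_dim=3 * 8 * 8, cond_dim=16)
target = torch.randn(4, 16)
samples = guided_sampling(eps_model, adapter, target, shape=(4, 3, 8, 8))
```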