Shape Generation using Spatially Partitioned Point Clouds
We propose a method to generate 3D shapes using point clouds. Given a
point-cloud representation of a 3D shape, our method builds a kd-tree to
spatially partition the points. This orders the points consistently across shapes, yielding reasonably good correspondences between them. We then use principal component analysis (PCA) to derive a linear shape basis across the spatially
partitioned points, and optimize the point ordering by iteratively minimizing
the PCA reconstruction error. Even with the spatial sorting, the point clouds
are inherently noisy and the resulting distribution over the shape coefficients
can be highly multi-modal. We propose to use the expressive power of neural
networks to learn a distribution over the shape coefficients in a
generative-adversarial framework. Compared to 3D shape generative models
trained on voxel representations, our point-based method is considerably more lightweight and scalable, with little loss of quality. It also outperforms
simpler linear factor models such as Probabilistic PCA, both qualitatively and
quantitatively, on a number of categories from the ShapeNet dataset.
Furthermore, our method can easily incorporate other point attributes such as
normal and color information, an additional advantage over voxel-based
representations.
Comment: To appear at BMVC 201
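To make the spatial sorting and linear basis concrete, the following is a minimal NumPy sketch of two ingredients the abstract describes: a kd-tree-style recursive median split that orders each point cloud consistently, and a PCA fit over the sorted, flattened clouds whose coefficients a generative-adversarial model would then learn. The function names (kd_sort, pca_basis), the alternating-axis median split, and the basis size are illustrative assumptions, not the authors' implementation; the iterative reordering step and the GAN itself are omitted.

import numpy as np

def kd_sort(points, depth=0):
    # Hypothetical sketch of the spatial sorting idea: recursively split the
    # points at the median along alternating axes and concatenate the halves,
    # which yields a consistent ordering across shapes.
    if len(points) <= 1:
        return points
    axis = depth % points.shape[1]
    points = points[np.argsort(points[:, axis])]
    mid = len(points) // 2
    left = kd_sort(points[:mid], depth + 1)
    right = kd_sort(points[mid:], depth + 1)
    return np.concatenate([left, right], axis=0)

def pca_basis(shapes, k=64):
    # Flatten each kd-sorted cloud into a vector and fit a k-dimensional
    # linear (PCA) shape basis; the coefficients of this basis are what a
    # generative-adversarial model would later learn to sample.
    X = np.stack([kd_sort(s).reshape(-1) for s in shapes])   # (num_shapes, 3N)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    basis = Vt[:k]                        # (k, 3N) principal directions
    coeffs = Xc @ basis.T                 # (num_shapes, k) shape coefficients
    recon_err = np.mean((Xc - coeffs @ basis) ** 2)
    return mean, basis, coeffs, recon_err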
On Sampling from the Gibbs Distribution with Random Maximum A-Posteriori Perturbations
In this paper we describe how MAP inference can be used to sample efficiently
from Gibbs distributions. Specifically, we provide means for drawing either
approximate or unbiased samples from Gibbs distributions by introducing low
dimensional perturbations and solving the corresponding MAP assignments. Our
approach also leads to new ways to derive lower bounds on partition functions.
We demonstrate empirically that our method excels in the typical "high signal, high coupling" regime. This setting produces ragged energy landscapes that are challenging for alternative approaches, both for sampling and for deriving lower bounds.
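As a concrete illustration of the perturb-and-MAP idea, the sketch below contrasts the full Gumbel-max construction, which perturbs every joint configuration and yields exact Gibbs samples at exponential cost, with low-dimensional perturbations of only the unary potentials followed by a MAP call, which is the approximate regime discussed above. The toy pairwise model, the brute-force MAP, and all names are assumptions for illustration only.

import numpy as np
from itertools import product

rng = np.random.default_rng(0)

# Toy pairwise model over n binary variables:
# theta(x) = sum_i u[i, x_i] + w * sum_{i<j} 1[x_i == x_j]
n, w = 3, 1.5
u = rng.normal(size=(n, 2))

def theta(x):
    unary = sum(u[i, xi] for i, xi in enumerate(x))
    pair = w * sum(int(x[i] == x[j]) for i in range(n) for j in range(i + 1, n))
    return unary + pair

states = list(product([0, 1], repeat=n))

def gumbel(size):
    # Standard Gumbel noise via the inverse CDF.
    return -np.log(-np.log(rng.uniform(size=size)))

def sample_full():
    # Exact Gibbs sample: perturb every joint configuration with i.i.d.
    # Gumbel noise and return the MAP of the perturbed model
    # (the exponential-size Gumbel-max trick).
    g = gumbel(len(states))
    scores = [theta(x) + g[k] for k, x in enumerate(states)]
    return states[int(np.argmax(scores))]

def sample_unary():
    # Approximate sample: perturb only the unary potentials (low-dimensional
    # noise) and solve the perturbed MAP problem; brute force here, a
    # combinatorial MAP solver in practice.
    g = gumbel((n, 2))
    return max(states, key=lambda x: theta(x) + sum(g[i, xi] for i, xi in enumerate(x)))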
PARTICLE: Part Discovery and Contrastive Learning for Fine-grained Recognition
We develop techniques for refining representations for fine-grained
classification and segmentation tasks in a self-supervised manner. We find that
fine-tuning methods based on instance-discriminative contrastive learning are not as effective for these tasks, and posit that recognizing part-specific variations is
crucial for fine-grained categorization. We present an iterative learning
approach that incorporates part-centric equivariance and invariance objectives.
First, pixel representations are clustered to discover parts. We analyze the
representations from convolutional and vision transformer networks that are
best suited for this task. Then, a part-centric learning step aggregates and
contrasts representations of parts within an image. We show that this improves
the performance on image classification and part segmentation tasks across
datasets. For example, under a linear-evaluation scheme, the classification
accuracy of a ResNet50 trained on ImageNet using DetCon, a self-supervised
learning approach, improves from 35.4% to 42.0% on the Caltech-UCSD Birds, from
35.5% to 44.1% on the FGVC Aircraft, and from 29.7% to 37.4% on the Stanford
Cars. We also observe significant gains in few-shot part segmentation tasks
using the proposed technique, whereas instance-discriminative learning is not as effective. Smaller, yet consistent, improvements are also observed for stronger networks based on transformers.
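A minimal PyTorch sketch of the two steps outlined above: clustering pixel embeddings with a few k-means iterations to discover parts, then pooling features within each part and contrasting matching parts across two augmented views with an InfoNCE-style loss. The function names, the shared part assignment across views (i.e., spatially aligned augmentations), and the simplified single-image setting are assumptions for illustration and do not reproduce the full iterative pipeline.

import torch
import torch.nn.functional as F

def discover_parts(pixel_feats, k=8, iters=10):
    # Cluster pixel embeddings (H*W, D) with a few k-means steps to obtain a
    # part assignment per pixel.
    centers = pixel_feats[torch.randperm(len(pixel_feats))[:k]]
    for _ in range(iters):
        assign = torch.cdist(pixel_feats, centers).argmin(dim=1)   # (H*W,)
        for c in range(k):
            mask = assign == c
            if mask.any():
                centers[c] = pixel_feats[mask].mean(dim=0)
    return assign

def part_contrastive_loss(feats_v1, feats_v2, assign, tau=0.1):
    # Average-pool features within each discovered part in two spatially
    # aligned augmented views and contrast matching parts against
    # non-matching ones with an InfoNCE-style cross-entropy.
    parts = assign.unique()
    def pool(feats):
        return torch.stack([feats[assign == c].mean(dim=0) for c in parts])
    z1 = F.normalize(pool(feats_v1), dim=1)   # (P, D)
    z2 = F.normalize(pool(feats_v2), dim=1)
    logits = z1 @ z2.t() / tau                # part-to-part similarities
    target = torch.arange(len(parts))         # matching parts on the diagonal
    return F.cross_entropy(logits, target)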