3,629 research outputs found
A critical analysis of self-supervision, or what we can learn from a single image
We look critically at popular self-supervision techniques for learning deep
convolutional neural networks without manual labels. We show that three
different and representative methods, BiGAN, RotNet and DeepCluster, can learn
the first few layers of a convolutional network from a single image as well as
using millions of images and manual labels, provided that strong data
augmentation is used. However, for deeper layers the gap with manual
supervision cannot be closed even if millions of unlabelled images are used for
training. We conclude that: (1) the weights of the early layers of deep
networks contain limited information about the statistics of natural images,
that (2) such low-level statistics can be learned through self-supervision just
as well as through strong supervision, and that (3) the low-level statistics
can be captured via synthetic transformations instead of using a large image
dataset. Comment: Accepted paper at the International Conference on Learning
Representations (ICLR) 2020
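To make the single-image setup concrete, here is a minimal sketch of RotNet-style self-supervision on patches drawn from one image: strong augmentation turns a single photo into an unbounded stream of patches, and the pretext task is to predict which of four rotations was applied. The network, augmentation parameters, and file name are illustrative assumptions, not the paper's exact configuration (which also covers BiGAN and DeepCluster).

    import torch
    import torch.nn as nn
    import torchvision.transforms as T
    from PIL import Image

    # Strong augmentation: crops, flips, and color jitter turn one image
    # into an effectively unbounded stream of training patches.
    augment = T.Compose([
        T.RandomResizedCrop(64, scale=(0.08, 1.0)),
        T.RandomHorizontalFlip(),
        T.ColorJitter(0.4, 0.4, 0.4, 0.1),
        T.ToTensor(),
    ])

    # Small convnet; the interest is in what its early layers learn.
    net = nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 4),  # classify the 4 rotations: 0/90/180/270 degrees
    )

    opt = torch.optim.Adam(net.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    img = Image.open("single_image.jpg").convert("RGB")  # the lone image

    for step in range(1000):
        patches = torch.stack([augment(img) for _ in range(16)])
        rot = torch.randint(0, 4, (16,))
        # Rotate each patch by rot*90 degrees; the pretext task is to
        # predict which rotation was applied.
        x = torch.stack([torch.rot90(p, int(k), dims=(1, 2))
                         for p, k in zip(patches, rot)])
        loss = loss_fn(net(x), rot)
        opt.zero_grad(); loss.backward(); opt.step()

After training, the rotation head would be discarded and the early convolutional layers evaluated on their own, which is how the paper measures what such layers have actually learned.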
Evaluating Digital Libraries: A Longitudinal and Multifaceted View
published or submitted for publication
Implicit 3D Orientation Learning for 6D Object Detection from RGB Images
We propose a real-time RGB-based pipeline for object detection and 6D pose
estimation. Our novel 3D orientation estimation is based on a variant of the
Denoising Autoencoder that is trained on simulated views of a 3D model using
Domain Randomization. This so-called Augmented Autoencoder has several
advantages over existing methods: It does not require real, pose-annotated
training data, generalizes to various test sensors and inherently handles
object and view symmetries. Instead of learning an explicit mapping from input
images to object poses, it provides an implicit representation of object
orientations defined by samples in a latent space. Our pipeline achieves
state-of-the-art performance on the T-LESS dataset both in the RGB and RGB-D
domain. We also evaluate on the LineMOD dataset where we can compete with other
synthetically trained approaches. We further increase performance by correcting
3D orientation estimates to account for perspective errors when the object
deviates from the image center and show extended results. Comment: Code available at: https://github.com/DLR-RM/AugmentedAutoencoder
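A rough sketch of the implicit-orientation idea: encode domain-randomized renderings at known rotations into a latent codebook, then estimate a test crop's orientation by nearest-neighbor lookup under cosine similarity. The toy encoder, shapes, and helper names below are assumptions for illustration, not the released implementation.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    # Stand-in for the encoder of a denoising autoencoder trained on
    # domain-randomized renderings of the 3D model.
    encoder = nn.Sequential(
        nn.Conv2d(3, 32, 5, stride=2, padding=2), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, 128),
    )

    def build_codebook(views, rotations):
        # views: (N, 3, H, W) renderings; rotations: (N, 3, 3) known matrices
        with torch.no_grad():
            z = F.normalize(encoder(views), dim=1)
        return z, rotations

    def estimate_orientation(crop, codebook_z, codebook_R):
        # Return the rotation of the most similar codebook view; the
        # orientation is never regressed explicitly, only looked up.
        with torch.no_grad():
            q = F.normalize(encoder(crop.unsqueeze(0)), dim=1)
        sims = codebook_z @ q.squeeze(0)  # cosine similarities
        return codebook_R[int(torch.argmax(sims))]

    # Usage with random stand-in data:
    z, R = build_codebook(torch.randn(500, 3, 64, 64),
                          torch.randn(500, 3, 3))
    R_hat = estimate_orientation(torch.randn(3, 64, 64), z, R)

Because the codebook is built purely from samples, symmetric objects simply map to several equally similar entries, which is one way to read the paper's claim that symmetries are handled inherently.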
Diversify Your Vision Datasets with Automatic Diffusion-Based Augmentation
Many fine-grained classification tasks, like rare animal identification, have
limited training data and consequently classifiers trained on these datasets
often fail to generalize to variations in the domain like changes in weather or
location. As such, we explore how natural language descriptions of the domains
seen in training data can be used with large vision models trained on diverse
pretraining datasets to generate useful variations of the training data. We
introduce ALIA (Automated Language-guided Image Augmentation), a method which
utilizes large vision and language models to automatically generate natural
language descriptions of a dataset's domains and augment the training data via
language-guided image editing. To maintain data integrity, a model trained on
the original dataset filters out minimal image edits and those which corrupt
class-relevant information. The resulting dataset is visually consistent with
the original training data and offers significantly enhanced diversity. We show
that ALIA surpasses traditional data augmentation and text-to-image
generated data on fine-grained classification tasks, including cases of domain
generalization and contextual bias. Code is available at
https://github.com/lisadunlap/ALIA. Comment: Update: replaced Planes dataset with Waterbirds & updated results
after bug fix
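One plausible reading of the filtering step, as a sketch: a classifier trained on the original data scores each edited image, and an edit is kept only if the original label is still predicted (class information intact) but not with near-certainty (the edit is non-trivial). The thresholds, helper name, and exact criteria here are assumptions, not the paper's.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def filter_edits(classifier, edits, labels, low=0.5, high=0.95):
        # edits: (N, 3, H, W) edited images; labels: (N,) original labels.
        # Keep indices whose edit preserves the class but is not a
        # near-identical copy of the original (illustrative thresholds).
        with torch.no_grad():
            probs = F.softmax(classifier(edits), dim=1)
            conf, pred = probs.max(dim=1)
        return [i for i in range(len(edits))
                if int(pred[i]) == int(labels[i])
                and low < float(conf[i]) < high]

    # Usage with a toy classifier and random stand-in data:
    classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    kept = filter_edits(classifier,
                        torch.randn(8, 3, 32, 32),
                        torch.randint(0, 10, (8,)))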