Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D Image Representations
We present Neural Feature Fusion Fields (N3F), a method that improves dense
2D image feature extractors when they are applied to the analysis of
multiple images reconstructible as a 3D scene. Given an image feature
extractor, for example pre-trained using self-supervision, N3F uses it as a
teacher to learn a student network defined in 3D space. The 3D student network
is similar to a neural radiance field that distills said features and can be
trained with the usual differentiable rendering machinery. As a consequence,
N3F is readily applicable to most neural rendering formulations, including
vanilla NeRF and its extensions to complex dynamic scenes. We show that our
method not only enables semantic understanding in the context of scene-specific
neural fields without the use of manual labels, but also consistently improves
over the self-supervised 2D baselines. This is demonstrated by considering
various tasks, such as 2D object retrieval, 3D segmentation, and scene editing,
in diverse sequences, including long egocentric videos in the EPIC-KITCHENS
benchmark.
Comment: 3DV 2022, Oral. Project page: https://www.robots.ox.ac.uk/~vadim/n3f
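As a rough illustration of the distillation idea described above, the following minimal PyTorch sketch regresses volume-rendered student features onto the per-pixel features of a 2D teacher. The names (student_field, the precomputed rendering weights) are hypothetical and this is not the authors' code, only a sketch of the general technique.

import torch


def render_features(student_field, ray_points, weights):
    """Composite per-point student features into per-ray (per-pixel) features.

    ray_points: (R, S, 3) 3D sample locations along R rays, S samples each
    weights:    (R, S) volume-rendering weights from the radiance field's
                density (assumed to be precomputed elsewhere)
    """
    feats = student_field(ray_points)              # (R, S, feat_dim)
    return (weights.unsqueeze(-1) * feats).sum(1)  # (R, feat_dim)


def distillation_loss(student_field, teacher_feats, ray_points, weights):
    """L2 distillation: rendered student features should match the 2D teacher."""
    rendered = render_features(student_field, ray_points, weights)
    return torch.mean((rendered - teacher_feats) ** 2)
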
Self-supervised Hypergraphs for Learning Multiple World Interpretations
We present a method for learning multiple scene representations given a small
labeled set, by exploiting the relationships between such representations in
the form of a multi-task hypergraph. We also show how we can use the hypergraph
to improve a powerful pretrained VisTransformer model without any additional
labeled data. In our hypergraph, each node is an interpretation layer (e.g.,
depth or segmentation) of the scene. Within each hyperedge, one or several
input nodes predict the layer at the output node. Thus, each node could be an
input node in some hyperedges and an output node in others. In this way,
multiple paths can reach the same node, to form ensembles from which we obtain
robust pseudolabels, which allow self-supervised learning in the hypergraph. We
test different ensemble models and different types of hyperedges and show
superior performance to other multi-task graph models in the field. We also
introduce Dronescapes, a large video dataset captured with UAVs in different
complex real-world scenes, with multiple representations, suitable for
multi-task learning.
Comment: Accepted at the ICCV 2023 Workshops
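The following minimal PyTorch sketch illustrates how predictions from several hyperedges ending at the same node could be ensembled into a pseudolabel for self-supervised training. The structure (hyperedges as callables over a dictionary of node layers) is a hypothetical reading of the abstract, not the authors' code.

import torch


def node_pseudolabel(hyperedges, node_outputs):
    """Ensemble all hyperedge predictions that arrive at a given output node.

    hyperedges:   list of callables, each mapping a dict of input layers to a
                  prediction tensor for the target node
    node_outputs: dict name -> tensor with current predictions for every node
    """
    preds = torch.stack([edge(node_outputs) for edge in hyperedges], dim=0)
    return preds.mean(dim=0)  # a more robust variant could use the median


def self_supervised_loss(student_edge, hyperedges, node_outputs):
    """Train one hyperedge against the ensemble pseudolabel of its output node."""
    pseudolabel = node_pseudolabel(hyperedges, node_outputs).detach()
    return torch.mean((student_edge(node_outputs) - pseudolabel) ** 2)
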
CLOP: Video-and-Language Pre-Training with Knowledge Regularizations
Video-and-language pre-training has shown promising results for learning
generalizable representations. Most existing approaches usually model video and
text in an implicit manner, without considering explicit structural
representations of the multi-modal content. We refer to this form of
representation as structural knowledge, which expresses rich semantics at
multiple granularities. Related works have proposed object-aware approaches
that inject similar knowledge as inputs. However, existing methods
usually fail to effectively utilize such knowledge as regularizations to shape
a superior cross-modal representation space. To this end, we propose a
Cross-modaL knOwledge-enhanced Pre-training (CLOP) method with Knowledge
Regularizations. Our method has two key designs: 1) a simple yet effective
Structural Knowledge Prediction (SKP) task to pull together the latent
representations of similar videos; and 2) a novel Knowledge-guided sampling
approach for Contrastive Learning (KCL) to push apart cross-modal hard negative
samples. We evaluate our method on four text-video retrieval tasks and one
multi-choice QA task. The experiments show clear improvements, outperforming
prior works by a substantial margin. Besides, we provide ablations and insights
of how our methods affect the latent representation space, demonstrating the
value of incorporating knowledge regularizations into video-and-language
pre-training.
Comment: ACM Multimedia 2022 (MM'22)
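As a rough sketch of the knowledge-guided contrastive idea, hard negatives could be selected by structural-knowledge similarity and fed to an InfoNCE-style objective. All names and the exact selection rule are assumptions for illustration; this is not the authors' KCL implementation.

import torch
import torch.nn.functional as F


def kcl_loss(video_emb, text_emb, knowledge_sim, num_hard=4, tau=0.07):
    """InfoNCE over the positive pair plus knowledge-selected hard negatives.

    video_emb:     (B, D) video embeddings
    text_emb:      (B, D) paired text embeddings (row i is positive for video i)
    knowledge_sim: (B, B) structural-knowledge similarity between samples
    Assumes the batch size B is larger than num_hard.
    """
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / tau                                    # (B, B)

    # Exclude the positive, then keep the negatives with the highest
    # structural-knowledge overlap (hard negatives from other clips).
    B = logits.size(0)
    eye = torch.eye(B, dtype=torch.bool, device=logits.device)
    neg_rank = knowledge_sim.masked_fill(eye, float("-inf"))
    hard_idx = neg_rank.topk(num_hard, dim=1).indices         # (B, num_hard)

    pos = logits.diagonal().unsqueeze(1)                      # (B, 1)
    hard = logits.gather(1, hard_idx)                         # (B, num_hard)
    target = torch.zeros(B, dtype=torch.long, device=logits.device)
    return F.cross_entropy(torch.cat([pos, hard], dim=1), target)
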
Distilled Feature Fields Enable Few-Shot Language-Guided Manipulation
Self-supervised and language-supervised image models contain rich knowledge
of the world that is important for generalization. Many robotic tasks, however,
require a detailed understanding of 3D geometry, which is often lacking in 2D
image features. This work bridges this 2D-to-3D gap for robotic manipulation by
leveraging distilled feature fields to combine accurate 3D geometry with rich
semantics from 2D foundation models. We present a few-shot learning method for
6-DOF grasping and placing that harnesses these strong spatial and semantic
priors to achieve in-the-wild generalization to unseen objects. Using features
distilled from a vision-language model, CLIP, we present a way to designate
novel objects for manipulation via free-text natural language, and demonstrate
its ability to generalize to unseen expressions and novel categories of
objects.
Comment: Project website at https://f3rm.csail.mit.edu
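A minimal sketch of the language-based designation step, assuming a distilled CLIP-aligned feature field and a CLIP text encoder. The interfaces (feature_field, clip_model.encode_text, tokenizer) are hypothetical placeholders, not the released F3RM code.

import torch
import torch.nn.functional as F


def query_points_by_text(feature_field, clip_model, tokenizer, points, text):
    """Score 3D points by similarity to a natural-language query.

    feature_field: callable mapping (N, 3) points to (N, D) CLIP-aligned features
    clip_model:    a CLIP model exposing encode_text(tokens) -> (1, D)
    tokenizer:     callable mapping a string to token ids for clip_model
    """
    with torch.no_grad():
        text_feat = clip_model.encode_text(tokenizer(text))   # (1, D)
        point_feats = feature_field(points)                   # (N, D)
    sims = F.cosine_similarity(point_feats, text_feat, dim=-1)
    return sims  # high-similarity points indicate the queried object
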
PanopticNeRF-360: Panoramic 3D-to-2D Label Transfer in Urban Scenes
Training perception systems for self-driving cars requires substantial
annotations. However, manual labeling in 2D images is highly labor-intensive.
While existing datasets provide rich annotations for pre-recorded sequences,
they fall short in labeling rarely encountered viewpoints, potentially
hampering the generalization ability of perception models. In this paper, we
present PanopticNeRF-360, a novel approach that combines coarse 3D annotations
with noisy 2D semantic cues to generate consistent panoptic labels and
high-quality images from any viewpoint. Our key insight lies in exploiting the
complementarity of 3D and 2D priors to mutually enhance geometry and semantics.
Specifically, we propose to leverage noisy semantic and instance labels in both
3D and 2D spaces to guide geometry optimization. Simultaneously, the improved
geometry assists in filtering noise present in the 3D and 2D annotations by
merging them in 3D space via a learned semantic field. To further enhance
appearance, we combine MLP and hash grids to yield hybrid scene features,
striking a balance between high-frequency appearance and predominantly
contiguous semantics. Our experiments demonstrate PanopticNeRF-360's
state-of-the-art performance over existing label transfer methods on the
challenging urban scenes of the KITTI-360 dataset. Moreover, PanopticNeRF-360
enables omnidirectional rendering of high-fidelity, multi-view and
spatiotemporally consistent appearance, semantic and instance labels. We make
our code and data available at https://github.com/fuxiao0719/PanopticNeRF
Comment: Project page: http://fuxiao0719.github.io/projects/panopticnerf360/. arXiv admin note: text overlap with arXiv:2203.1522
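The following minimal PyTorch sketch illustrates how coarse 3D labels and noisy 2D labels could jointly supervise a semantic field through the rendering weights, so that geometry and semantics regularize each other. Loss weights, shapes, and names are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn.functional as F


def label_transfer_losses(semantic_field, ray_points, weights,
                          labels_3d, labels_2d, w3d=1.0, w2d=1.0):
    """Combine coarse 3D supervision and noisy 2D supervision of a semantic field.

    ray_points: (R, S, 3) samples along R rays
    weights:    (R, S) volume-rendering weights
    labels_3d:  (R, S) coarse per-point class ids (e.g. from 3D primitives)
    labels_2d:  (R,)   noisy per-pixel class ids from a 2D segmentation model
    """
    logits = semantic_field(ray_points)                        # (R, S, C)
    loss_3d = F.cross_entropy(logits.flatten(0, 1), labels_3d.flatten())
    pixel_logits = (weights.unsqueeze(-1) * logits).sum(1)     # (R, C)
    loss_2d = F.cross_entropy(pixel_logits, labels_2d)
    return w3d * loss_3d + w2d * loss_2d
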
Collaborative Score Distillation for Consistent Visual Synthesis
Generative priors of large-scale text-to-image diffusion models enable a wide
range of new generation and editing applications on diverse visual modalities.
However, when adapting these priors to complex visual modalities, often
represented as multiple images (e.g., video), achieving consistency across a
set of images is challenging. In this paper, we address this challenge with a
novel method, Collaborative Score Distillation (CSD). CSD is based on the Stein
Variational Gradient Descent (SVGD). Specifically, we propose to consider
multiple samples as "particles" in the SVGD update and combine their score
functions to distill generative priors over a set of images synchronously.
Thus, CSD facilitates seamless integration of information across 2D images,
leading to a consistent visual synthesis across multiple samples. We show the
effectiveness of CSD in a variety of tasks, encompassing the visual editing of
panorama images, videos, and 3D scenes. Our results underline the competency of
CSD as a versatile method for enhancing inter-sample consistency, thereby
broadening the applicability of text-to-image diffusion models.
Comment: Project page with visuals: https://subin-kim-cv.github.io/CSD
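A minimal sketch of the SVGD-style update underlying this idea, assuming the per-particle scores come from an external score function (e.g. a score-distillation gradient from a text-to-image diffusion model). Names, the RBF kernel choice, and the step size are hypothetical; this is not the authors' code.

import torch


def svgd_direction(particles, scores, bandwidth=1.0):
    """SVGD direction for a set of particles.

    phi_i = (1/N) * sum_j [ k(x_j, x_i) * score_j + grad_{x_j} k(x_j, x_i) ]

    particles: (N, D) flattened images
    scores:    (N, D) per-particle scores from the generative prior
    """
    n = particles.size(0)
    diff = particles.unsqueeze(1) - particles.unsqueeze(0)    # diff[i, j] = x_i - x_j
    sq_dist = (diff ** 2).sum(-1)                             # (N, N)
    k = torch.exp(-sq_dist / (2.0 * bandwidth ** 2))          # symmetric RBF kernel
    drive = k @ scores                                        # sum_j k(x_j, x_i) * score_j
    repulse = (k.unsqueeze(-1) * diff).sum(1) / bandwidth ** 2  # sum_j grad_{x_j} k(x_j, x_i)
    return (drive + repulse) / n


# One synchronous update of all particles (step size is an illustrative choice):
# particles = particles + 0.1 * svgd_direction(particles, score_fn(particles))
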