CHORUS: Learning Canonicalized 3D Human-Object Spatial Relations from Unbounded Synthesized Images
We present a method for teaching machines to understand and model the
underlying spatial common sense of diverse human-object interactions in 3D in a
self-supervised way. This is a challenging task, as there exist specific
manifolds of the interactions that can be considered human-like and natural,
but the human pose and the geometry of objects can vary even for similar
interactions. Such diversity makes annotating 3D interactions difficult and
hard to scale, which limits the potential to reason about them in a supervised
way. One way of learning the 3D spatial relationship between humans and objects
during interaction is to observe multiple 2D images, captured from different
viewpoints, of humans interacting with the same type of object.
The core idea of our method is to leverage a generative model that produces
high-quality 2D images from an arbitrary text prompt as an "unbounded" data
generator with effective controllability and view diversity. Although the
synthesized images fall short of real images in quality, we demonstrate that
they are sufficient for learning 3D human-object spatial relations. We present
multiple strategies to leverage the synthesized images,
including (1) the first method to leverage a generative image model for 3D
human-object spatial relation learning; (2) a framework to reason about the 3D
spatial relations from inconsistent 2D cues in a self-supervised manner via 3D
occupancy reasoning with pose canonicalization; (3) semantic clustering to
disambiguate different types of interactions with the same object types; and
(4) a novel metric to assess the quality of 3D spatial learning of interaction.
Comment: Accepted to ICCV 2023 (Oral Presentation). Project Page:
https://jellyheadandrew.github.io/projects/choru
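To make the occupancy-reasoning idea concrete, here is a minimal Python sketch of aggregating 2D object-mask cues from many synthesized views into a canonicalized 3D occupancy grid. It is a simple visual-hull-style vote under assumed camera conventions, not the paper's implementation; the function name, grid resolution, and mask/camera inputs are illustrative assumptions.

```python
import numpy as np

def aggregate_occupancy(masks, intrinsics, extrinsics, grid_res=32, extent=1.5):
    """Vote per-voxel object occupancy from 2D object masks observed in many
    synthesized views, all expressed in a human-canonical coordinate frame.

    masks:      list of (H, W) boolean object masks, one per synthesized view
    intrinsics: list of (3, 3) camera intrinsic matrices
    extrinsics: list of (3, 4) world-to-camera [R|t] matrices (canonical frame)
    Returns a (grid_res, grid_res, grid_res) array of occupancy probabilities.
    """
    # Voxel centers of a cube of side 2*extent centered on the canonical body.
    axis = np.linspace(-extent, extent, grid_res)
    xs, ys, zs = np.meshgrid(axis, axis, axis, indexing="ij")
    pts = np.stack([xs, ys, zs], axis=-1).reshape(-1, 3)           # (V, 3)
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)  # (V, 4)

    votes = np.zeros(len(pts))
    for mask, K, Rt in zip(masks, intrinsics, extrinsics):
        cam = Rt @ pts_h.T                         # (3, V) points in camera frame
        in_front = cam[2] > 1e-6
        uv = K @ cam                               # project voxel centers to pixels
        uv = uv[:2] / np.maximum(uv[2], 1e-6)
        u, v = np.round(uv).astype(int)
        h, w = mask.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        hit = np.zeros(len(pts), dtype=bool)
        hit[valid] = mask[v[valid], u[valid]]      # does this voxel land on the object?
        votes += hit.astype(float)

    return (votes / max(len(masks), 1)).reshape(grid_res, grid_res, grid_res)
```

In this toy version, a voxel's occupancy is simply the fraction of views in which its projection falls inside the 2D object mask; the paper's framework additionally handles inconsistent cues and interaction-type clustering.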
Developmental Stages of Perception and Language Acquisition in a Perceptually Grounded Robot
The objective of this research is to develop a system for language learning based on a minimum of pre-wired language-specific functionality that is compatible with observations of perceptual and language capabilities in the human developmental trajectory. In the proposed system, meaning (in terms of descriptions of events and spatial relations) is extracted from video images based on detection of position, motion, physical contact and their parameters. Mapping of sentence form to meaning is performed by learning grammatical constructions that are retrieved from a construction inventory based on the constellation of closed-class items uniquely identifying the target sentence structure. The resulting system displays robust acquisition behavior that reproduces certain observations from developmental studies, with very modest “innate” language specificity.
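The construction-inventory lookup can be illustrated with a minimal Python sketch: the constellation of closed-class items (word plus position) indexes a stored construction, which then assigns the remaining open-class words to semantic roles. The closed-class word list, the two example constructions, and the role names are illustrative assumptions, not the paper's actual inventory.

```python
# Closed-class (function) words used to index constructions.
CLOSED_CLASS = {"the", "to", "was", "by", "on", "is", "a"}

# A construction is keyed by the constellation of closed-class items
# (word, position) and maps the open-class words, in order, to roles.
CONSTRUCTION_INVENTORY = {
    # "the X pushed the Y"        -> agent = X, action = pushed, patient = Y
    (("the", 0), ("the", 3)): ("agent", "action", "patient"),
    # "the X was pushed by the Y" -> patient = X, action = pushed, agent = Y
    (("the", 0), ("was", 2), ("by", 4), ("the", 5)): ("patient", "action", "agent"),
}

def parse(sentence):
    words = sentence.lower().split()
    key = tuple((w, i) for i, w in enumerate(words) if w in CLOSED_CLASS)
    roles = CONSTRUCTION_INVENTORY.get(key)
    if roles is None:
        return None  # no known construction for this constellation
    open_class = [w for w in words if w not in CLOSED_CLASS]
    return dict(zip(roles, open_class))

print(parse("the block pushed the ball"))
# {'agent': 'block', 'action': 'pushed', 'patient': 'ball'}
print(parse("the ball was pushed by the block"))
# {'patient': 'ball', 'action': 'pushed', 'agent': 'block'}
```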
Unsupervised Learning of Long-Term Motion Dynamics for Videos
We present an unsupervised representation learning approach that compactly
encodes the motion dependencies in videos. Given a pair of images from a video
clip, our framework learns to predict the long-term 3D motions. To reduce the
complexity of the learning framework, we propose to describe the motion as a
sequence of atomic 3D flows computed with RGB-D modality. We use a Recurrent
Neural Network based Encoder-Decoder framework to predict these sequences of
flows. We argue that in order for the decoder to reconstruct these sequences,
the encoder must learn a robust video representation that captures long-term
motion dependencies and spatial-temporal relations. We demonstrate the
effectiveness of our learned temporal representations on activity
classification across multiple modalities and datasets such as NTU RGB+D and
MSR Daily Activity 3D. Our framework is generic to any input modality, i.e.,
RGB, Depth, and RGB-D videos.
Comment: CVPR 201
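A minimal PyTorch sketch of the encoder-decoder idea described above: encode a pair of frames, then unroll a recurrent decoder that predicts a sequence of coarse "atomic flow" maps. The layer sizes, the flow discretization, and the 8-channel stacked RGB-D input are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MotionEncoderDecoder(nn.Module):
    def __init__(self, in_channels=8, hidden=256, flow_dim=3 * 16 * 16, steps=8):
        super().__init__()
        self.steps = steps
        # Encoder: a small CNN over the stacked frame pair (e.g. two RGB-D frames).
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, hidden),
        )
        # Decoder: an LSTM that emits one atomic 3D flow map per future step.
        self.decoder = nn.LSTM(input_size=hidden, hidden_size=hidden, batch_first=True)
        self.flow_head = nn.Linear(hidden, flow_dim)

    def forward(self, frame_pair):
        z = self.encoder(frame_pair)                       # (B, hidden)
        z_seq = z.unsqueeze(1).repeat(1, self.steps, 1)    # feed the code at every step
        h, _ = self.decoder(z_seq)                         # (B, steps, hidden)
        return self.flow_head(h)                           # (B, steps, flow_dim)

model = MotionEncoderDecoder()
pair = torch.randn(2, 8, 64, 64)   # two RGB-D frames stacked channel-wise
flows = model(pair)                # predicted sequence of atomic flows
print(flows.shape)                 # torch.Size([2, 8, 768])
```

The intuition matches the abstract: because the decoder must reconstruct the full flow sequence from the single encoder code, that code is pushed to capture long-term motion dependencies.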
Weakly-supervised learning of visual relations
This paper introduces a novel approach for modeling visual relations between
pairs of objects. We call a relation a triplet of the form (subject, predicate,
object), where the predicate is typically a preposition (e.g. 'under', 'in front
of') or a verb (e.g. 'hold', 'ride') that links a pair of objects (subject, object).
Learning such relations is challenging as the objects have different spatial
configurations and appearances depending on the relation in which they occur.
Another major challenge comes from the difficulty to get annotations,
especially at box-level, for all possible triplets, which makes both learning
and evaluation difficult. The contributions of this paper are threefold. First,
we design strong yet flexible visual features that encode the appearance and
spatial configuration for pairs of objects. Second, we propose a
weakly-supervised discriminative clustering model to learn relations from
image-level labels only. Third, we introduce a new challenging dataset of
unusual relations (UnRel) together with an exhaustive annotation, that enables
accurate evaluation of visual relation retrieval. We show experimentally that
our model achieves state-of-the-art results on the visual relationship dataset,
significantly improving performance on previously unseen relations (zero-shot
learning), and confirm this observation on our newly introduced UnRel dataset.
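As a concrete illustration of a spatial-configuration feature for an object pair, here is a minimal Python sketch in the spirit of the features described above. The exact encoding used by the paper differs; the normalization choices and box format are assumptions.

```python
import numpy as np

def spatial_feature(subj_box, obj_box):
    """Encode the relative layout of two boxes given as (x, y, w, h)."""
    xs, ys, ws, hs = subj_box
    xo, yo, wo, ho = obj_box
    return np.array([
        (xo - xs) / ws,         # horizontal offset, scaled by subject size
        (yo - ys) / hs,         # vertical offset, scaled by subject size
        np.log(wo / ws),        # relative width
        np.log(ho / hs),        # relative height
        (wo * ho) / (ws * hs),  # relative area
    ])

# Example: a person box and a horse box for the relation (person, ride, horse).
person = (100, 50, 60, 120)
horse = (80, 120, 140, 110)
print(spatial_feature(person, horse))
```

Such a descriptor, concatenated with appearance features for each box, is the kind of pair representation that a discriminative clustering objective can then group by relation using only image-level labels.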
Learning Social Relation Traits from Face Images
A social relation defines the association, e.g., warmth, friendliness, and
dominance, between two or more people. Motivated by psychological studies, we
investigate if such fine-grained and high-level relation traits can be
characterised and quantified from face images in the wild. To address this
challenging problem, we propose a deep model that learns a rich face
representation to capture gender, expression, head pose, and age-related
attributes, and then performs pairwise-face reasoning for relation prediction.
To learn from heterogeneous attribute sources, we formulate a new network
architecture with a bridging layer to leverage the inherent correspondences
among these datasets. It can also cope with missing target attribute labels.
Extensive experiments show that our approach is effective for fine-grained
social relation learning in images and videos.
Comment: To appear in International Conference on Computer Vision (ICCV) 201
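A minimal PyTorch sketch of the pairwise-face reasoning described above: a shared face encoder embeds both faces, and a relation head predicts trait scores from the concatenated pair representation. The backbone, feature sizes, and the eight-trait output are illustrative assumptions; the paper's bridging layer for heterogeneous attribute sources is not modeled here.

```python
import torch
import torch.nn as nn

class PairwiseRelationNet(nn.Module):
    def __init__(self, feat_dim=256, num_traits=8):
        super().__init__()
        # Shared face encoder applied to both faces (weight sharing).
        self.face_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Pairwise reasoning over the concatenated face representations.
        self.relation_head = nn.Sequential(
            nn.Linear(2 * feat_dim, 128), nn.ReLU(),
            nn.Linear(128, num_traits),
        )

    def forward(self, face_a, face_b):
        fa = self.face_encoder(face_a)
        fb = self.face_encoder(face_b)
        return self.relation_head(torch.cat([fa, fb], dim=1))

model = PairwiseRelationNet()
a, b = torch.randn(4, 3, 96, 96), torch.randn(4, 3, 96, 96)
print(model(a, b).shape)   # torch.Size([4, 8]) trait logits per face pair
```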