Fine-Grained is Too Coarse: A Novel Data-Centric Approach for Efficient Scene Graph Generation
Learning to compose visual relationships from raw images in the form of scene
graphs is a highly challenging task due to contextual dependencies, but it is
essential in computer vision applications that depend on scene understanding.
However, current approaches in Scene Graph Generation (SGG) do not aim to
provide graphs that are useful for downstream tasks; the main focus has
instead been on unbiasing the data distribution to predict more fine-grained
relations. Yet not all fine-grained relations are equally relevant, and some
are of no use for real-world applications. In this
work, we introduce the task of Efficient SGG that prioritizes the generation of
relevant relations, facilitating the use of Scene Graphs in downstream tasks
such as Image Generation. To support further approaches in this task, we
present a new dataset, VG150-curated, based on the annotations of the popular
Visual Genome dataset. Through a set of experiments, we show that this dataset
contains higher-quality and more diverse annotations than the one usually
adopted by SGG approaches. Finally, we demonstrate the effectiveness of this
dataset on the task of Image Generation from Scene Graphs. Our approach can
easily be replicated to improve the quality of other Scene Graph Generation
datasets.
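The core idea of prioritizing relevant relations over generic ones can be sketched as a simple filter on relationship triplets. The predicate list below is purely illustrative (it is not taken from the paper); the point is only that a scene graph can be pruned to the triplets that carry scene-specific information before being passed to a downstream task:

```python
# Illustrative set of low-information predicates; the actual notion of
# "relevance" in the paper is dataset-driven, not a fixed list.
GENERIC_PREDICATES = {"on", "of", "near", "has"}

def filter_relevant(relations):
    """Keep (subject, predicate, object) triplets whose predicate is
    informative, dropping generic spatial/possessive relations."""
    return [(s, p, o) for (s, p, o) in relations
            if p not in GENERIC_PREDICATES]
```

For example, `filter_relevant([("man", "riding", "horse"), ("hat", "on", "head")])` keeps only the `riding` triplet, since `on` adds little for tasks such as image generation.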
Scene Graph Generation via Conditional Random Fields
Despite the great success object detection and segmentation models have
achieved in recognizing individual objects in images, performance on cognitive
tasks such as image captioning, semantic image retrieval, and visual question
answering is far
from satisfactory. To achieve better performance on these cognitive tasks,
merely recognizing individual object instances is insufficient. Instead, the
interactions between object instances need to be captured in order to
facilitate reasoning and understanding of the visual scenes in an image. Scene
graph, a graph representation of images that captures object instances and
their relationships, offers a comprehensive understanding of an image. However,
existing techniques for scene graph generation fail to distinguish subjects and
objects in the visual scenes of images and thus do not perform well on
real-world datasets that contain ambiguous object instances. In this work, we
propose a novel scene graph generation model for predicting object instances
and their corresponding relationships in an image. Our model, SG-CRF, learns the
sequential order of subject and object in a relationship triplet, and the
semantic compatibility of object instance nodes and relationship nodes in a
scene graph efficiently. Experiments show that SG-CRF outperforms
state-of-the-art methods on three different datasets, i.e., CLEVR, VRD,
and Visual Genome, raising Recall@100 from 24.99% to 49.95%, from 41.92% to
50.47%, and from 54.69% to 54.77%, respectively.
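The Recall@K numbers above follow the standard SGG evaluation protocol: rank all predicted relationship triplets by confidence, keep the top K, and measure what fraction of ground-truth triplets appear among them. A minimal sketch of that metric (the function name and data layout are my own, not from the paper):

```python
def recall_at_k(gt_triplets, pred_triplets_scored, k=100):
    """Fraction of ground-truth (subject, predicate, object) triplets
    recovered in the top-k scored predictions.

    gt_triplets: set of ground-truth triplets
    pred_triplets_scored: list of (triplet, score) pairs
    """
    # Rank predictions by confidence, descending, and keep the top k.
    top_k = {t for t, _ in
             sorted(pred_triplets_scored, key=lambda x: -x[1])[:k]}
    hits = sum(1 for t in gt_triplets if t in top_k)
    return hits / len(gt_triplets)
```

With K = 100, a model is rewarded for placing correct triplets anywhere in its 100 most confident predictions, which is why the metric is forgiving on datasets with many plausible relations per image.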