A Diagram Is Worth A Dozen Images
Diagrams are common tools for representing complex concepts, relationships
and events, often when it would be difficult to portray the same information
with natural images. Understanding natural images has been extensively studied
in computer vision, while diagram understanding has received little attention.
In this paper, we study the problem of diagram interpretation and reasoning,
the challenging task of identifying the structure of a diagram and the
semantics of its constituents and their relationships. We introduce Diagram
Parse Graphs (DPG) as our representation to model the structure of diagrams. We
define syntactic parsing of diagrams as learning to infer DPGs for diagrams and
study semantic interpretation and reasoning of diagrams in the context of
diagram question answering. We devise an LSTM-based method for syntactic
parsing of diagrams and introduce a DPG-based attention model for diagram
question answering. We compile a new dataset of diagrams with exhaustive
annotations of constituents and relationships for over 5,000 diagrams and
15,000 questions and answers. Our results demonstrate the effectiveness of our
models for syntactic parsing and question answering on diagrams using DPGs.
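The abstract does not spell out the DPG-based attention model, but the idea of attending over parsed diagram constituents when answering a question can be sketched as follows. This is a minimal illustration under assumed shapes: `node_feats` holds one feature vector per DPG node, `q_vec` is a question encoding (e.g., from an LSTM), and `answer_feats` encodes the candidate answers; all names and the dot-product scoring are assumptions, not the paper's actual architecture.

```python
import numpy as np

def dpg_attention_answer(node_feats, q_vec, answer_feats):
    # Score each DPG node against the question encoding, softmax the
    # scores into attention weights, pool the node features into a
    # question-conditioned context vector, then rank candidate answers
    # by similarity to that context. (Hypothetical scoring scheme.)
    scores = node_feats @ q_vec
    e = np.exp(scores - scores.max())          # stable softmax
    attn = e / e.sum()
    context = attn @ node_feats                # attention-weighted pooling
    ans_scores = answer_feats @ context
    return int(np.argmax(ans_scores)), attn
```

The attention weights sum to one over the DPG nodes, so the context vector stays in the same feature space as the node features regardless of how many constituents the parse produced.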
Deep Hierarchical Parsing for Semantic Segmentation
This paper proposes a learning-based approach to scene parsing inspired by
the deep Recursive Context Propagation Network (RCPN). RCPN is a deep
feed-forward neural network that utilizes the contextual information from the
entire image, through bottom-up followed by top-down context propagation via
random binary parse trees. This improves the feature representation of every
super-pixel in the image for better classification into semantic categories. We
analyze RCPN and propose two novel contributions to further improve the model.
We first analyze the learning of RCPN parameters and discover the presence of
bypass error paths in the computation graph of RCPN that can hinder contextual
propagation. We propose to tackle this problem by including the classification
loss of the internal nodes of the random parse trees in the original RCPN loss
function. Secondly, we use an MRF on the parse tree nodes to model the
hierarchical dependency present in the output. Both modifications provide
performance boosts over the original RCPN and the new system achieves
state-of-the-art performance on Stanford Background, SIFT-Flow and Daimler
urban datasets. Comment: IEEE CVPR 201
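The first contribution, adding a classification loss at the internal nodes of the random binary parse trees, can be sketched like this. Everything below is a simplified assumption for illustration: a single-layer `tanh` combiner for the bottom-up pass, a shared linear classifier at every node, and internal-node labels inherited from a child (the paper's actual label assignment and network are not given in the abstract).

```python
import numpy as np

def combine(left, right, W):
    # Bottom-up pass: merge two child features into a parent feature.
    return np.tanh(W @ np.concatenate([left, right]))

def classify(feat, Wc):
    # Shared linear classifier followed by a stable softmax.
    logits = Wc @ feat
    e = np.exp(logits - logits.max())
    return e / e.sum()

def cross_entropy(probs, label):
    return -np.log(probs[label] + 1e-12)

def tree_loss(features, labels, W, Wc):
    # Build one random binary tree over the super-pixel features and
    # supervise the internal nodes as well as the leaves, so gradients
    # must flow through the combiner rather than bypassing it.
    nodes = list(zip(features, labels))
    loss = 0.0
    while len(nodes) > 1:
        (f1, y1), (f2, y2) = nodes.pop(), nodes.pop()
        parent = combine(f1, f2, W)
        y = y1  # simplifying assumption: parent inherits a child's label
        loss += cross_entropy(classify(parent, Wc), y)
        nodes.append((parent, y))
    # Leaf classification loss (the original RCPN-style objective).
    for f, y in zip(features, labels):
        loss += cross_entropy(classify(f, Wc), y)
    return loss
```

The point of the extra terms is that the combiner weights `W` now receive direct supervision at every merge, closing the bypass error paths the paper identifies.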
Weakly-supervised Visual Grounding of Phrases with Linguistic Structures
We propose a weakly-supervised approach that takes image-sentence pairs as
input and learns to visually ground (i.e., localize) arbitrary linguistic
phrases, in the form of spatial attention masks. Specifically, the model is
trained with images and their associated image-level captions, without any
explicit region-to-phrase correspondence annotations. To this end, we introduce
an end-to-end model which learns visual groundings of phrases with two types of
carefully designed loss functions. In addition to the standard discriminative
loss, which enforces that attended image regions and phrases are consistently
encoded, we propose a novel structural loss which makes use of the parse tree
structures induced by the sentences. In particular, we ensure complementarity
among the attention masks that correspond to sibling noun phrases, and
compositionality of attention masks among the children and parent phrases, as
defined by the sentence parse tree. We validate the effectiveness of our
approach on the Microsoft COCO and Visual Genome datasets. Comment: CVPR 201
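The two parse-tree constraints on attention masks, complementarity among siblings and compositionality between parent and children, admit a compact sketch. The concrete penalties below (pairwise overlap for complementarity, squared distance to the elementwise-max union for compositionality) are assumed stand-ins; the abstract does not specify the paper's exact loss functions.

```python
import numpy as np

def complementarity_loss(sibling_masks):
    # Penalize spatial overlap between attention masks of sibling noun
    # phrases: each region should be claimed by at most one sibling.
    overlap = 0.0
    n = len(sibling_masks)
    for i in range(n):
        for j in range(i + 1, n):
            overlap += np.sum(sibling_masks[i] * sibling_masks[j])
    return overlap

def compositionality_loss(parent_mask, child_masks):
    # Encourage the parent phrase's mask to match the union of its
    # children's masks (union taken as an elementwise max here).
    union = np.maximum.reduce(child_masks)
    return float(np.mean((parent_mask - union) ** 2))
```

Both penalties are zero exactly when siblings attend to disjoint regions and the parent's mask covers their union, which is the structure the sentence parse tree prescribes.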