Bidirectional Captioning for Clinically Accurate and Interpretable Models
Vision-language pretraining has been shown to produce high-quality visual
encoders which transfer efficiently to downstream computer vision tasks. While
generative language models have gained widespread attention, image captioning
has thus far been mostly overlooked as a form of cross-modal pretraining in
favor of contrastive learning, especially in medical image analysis. In this
paper, we experiment with bidirectional captioning of radiology reports as a
form of pretraining and compare the quality and utility of learned embeddings
with those from contrastive pretraining methods. We optimize a CNN encoder,
transformer decoder architecture named RadTex for the radiology domain. Results
show that not only does captioning pretraining yield visual encoders that are
competitive with contrastive pretraining (CheXpert competition multi-label AUC
of 89.4%), but also that our transformer decoder is capable of generating
clinically relevant reports (captioning macro-F1 score of 0.349 using CheXpert
labeler) and responding to prompts with targeted, interactive outputs. Comment: 12 pages, 7 figures. Code release to follow.
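To make the pretraining setup concrete, here is a minimal sketch of captioning-based pretraining with a generic CNN encoder and transformer decoder, written in PyTorch. The class name, layer sizes, and tokenization details are assumptions for illustration, not the RadTex implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

# Hypothetical sketch of captioning-based pretraining (not the actual RadTex
# implementation): a CNN encoder yields spatial features, and a transformer
# decoder learns to generate the paired report with teacher forcing.
class CaptioningPretrainer(nn.Module):
    def __init__(self, vocab_size, d_model=512, num_layers=3, max_len=256):
        super().__init__()
        cnn = resnet50(weights=None)
        self.encoder = nn.Sequential(*list(cnn.children())[:-2])  # keep spatial grid
        self.proj = nn.Linear(2048, d_model)
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=num_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, images, report_tokens):
        # images: (B, 3, H, W); report_tokens: (B, T) integer token ids
        feats = self.encoder(images)                            # (B, 2048, h, w)
        memory = self.proj(feats.flatten(2).transpose(1, 2))    # (B, h*w, d_model)
        positions = torch.arange(report_tokens.size(1), device=report_tokens.device)
        tgt = self.token_emb(report_tokens) + self.pos_emb(positions)
        T = report_tokens.size(1)
        causal_mask = torch.triu(
            torch.full((T, T), float("-inf"), device=images.device), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal_mask)
        return self.lm_head(out)                                # (B, T, vocab_size)

# Standard next-token cross-entropy over the report; the encoder weights
# learned this way are what get transferred to downstream tasks.
def captioning_loss(model, images, report_tokens, pad_id=0):
    logits = model(images, report_tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        report_tokens[:, 1:].reshape(-1),
        ignore_index=pad_id)
```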
Excitation Backprop for RNNs
Deep models are state-of-the-art for many vision tasks including video action
recognition and video captioning. Models are trained to caption or classify
activity in videos, but little is known about the evidence used to make such
decisions. Grounding decisions made by deep networks has been studied in
spatial visual content, giving more insight into model predictions for images.
However, such studies are relatively lacking for models of spatiotemporal
visual content - videos. In this work, we devise a formulation that
simultaneously grounds evidence in space and time, in a single pass, using
top-down saliency. We visualize the spatiotemporal cues that contribute to a
deep model's classification/captioning output using the model's internal
representation. Based on these spatiotemporal cues, we are able to localize
segments within a video that correspond with a specific action, or phrase from
a caption, without explicitly optimizing/training for these tasks. Comment: CVPR 2018 Camera Ready Version.
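As a rough illustration of tracing a video model's output back to spatiotemporal evidence, the sketch below uses plain input gradients rather than the paper's top-down excitation backprop formulation; the model interface and tensor shapes are assumptions.

```python
import torch

# Simplified, hypothetical stand-in for spatiotemporal grounding: use input
# gradients to see which frames/regions most influence a chosen class score.
# `model` is any callable mapping (B, T, 3, H, W) -> (B, num_classes).
def spatiotemporal_saliency(model, video, class_idx):
    model.eval()
    video = video.clone().requires_grad_(True)   # (1, T, 3, H, W)
    score = model(video)[0, class_idx]
    score.backward()
    # Collapse channels; keep a (T, H, W) map of absolute gradient magnitude.
    saliency = video.grad.abs().sum(dim=2)[0]
    # Per-frame importance allows localizing the relevant temporal segment.
    frame_importance = saliency.flatten(1).mean(dim=1)   # (T,)
    return saliency, frame_importance
```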
Reasoning About Pragmatics with Neural Listeners and Speakers
We present a model for pragmatically describing scenes, in which contrastive
behavior results from a combination of inference-driven pragmatics and learned
semantics. Like previous learned approaches to language generation, our model
uses a simple feature-driven architecture (here a pair of neural "listener" and
"speaker" models) to ground language in the world. Like inference-driven
approaches to pragmatics, our model actively reasons about listener behavior
when selecting utterances. For training, our approach requires only ordinary
captions, annotated _without_ demonstration of the pragmatic behavior the model
ultimately exhibits. In human evaluations on a referring expression game, our
approach succeeds 81% of the time, compared to a 69% success rate using
existing techniques.
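A minimal sketch of the inference-driven selection step is shown below, assuming a base speaker that proposes candidate descriptions and a scoring function over caption-scene pairs; the function names are placeholders rather than the paper's learned neural listener and speaker models.

```python
import math

# Minimal sketch of inference-driven pragmatic selection. speaker_score is a
# hypothetical callable scoring how well a caption describes a scene; the
# "literal listener" turns those scores into a distribution over scenes.
def pragmatic_select(candidates, scenes, target_idx, speaker_score):
    def listener_prob_of_target(caption):
        scores = [speaker_score(caption, scene) for scene in scenes]
        m = max(scores)                               # for numerical stability
        exp_scores = [math.exp(s - m) for s in scores]
        return exp_scores[target_idx] / sum(exp_scores)

    # The pragmatic speaker picks the candidate caption that the literal
    # listener would most likely resolve to the intended target scene.
    return max(candidates, key=listener_prob_of_target)
```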
Distinctive-attribute Extraction for Image Captioning
Image captioning, an open research issue, has evolved with the progress
of deep neural networks. Convolutional neural networks (CNNs) and recurrent
neural networks (RNNs) are employed to compute image features and generate
natural language descriptions in this line of research. In previous works, a caption
involving semantic description can be generated by supplying additional
information to the RNNs. In this paper, we propose distinctive-attribute
extraction (DaE), which explicitly encourages significant semantics to be captured
so that an accurate caption describing the overall meaning of the image and its
unique situation can be generated. Specifically, the captions of training images are analyzed by
term frequency-inverse document frequency (TF-IDF), and the analyzed semantic
information is used to train the extraction of distinctive attributes for inferring
captions. The proposed scheme is evaluated on challenge data, and it improves
objective performance while describing images in more detail. Comment: 14 main pages, 4 supplementary pages.
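As a rough sketch of the TF-IDF analysis step, the snippet below ranks caption terms by TF-IDF weight using scikit-learn's TfidfVectorizer and treats the top-weighted terms as distinctive attributes; the toy captions, vocabulary size, and top-k cutoff are assumptions, and the paper's full pipeline additionally trains a model to predict such attributes from images.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

# Treat each image's training caption(s) as one document and rank terms by
# TF-IDF weight; the top-weighted terms serve as that image's distinctive
# attributes. The example captions below are placeholders.
captions_per_image = [
    "a brown dog catches a frisbee in mid air",
    "a group of people ride surfboards on a large wave",
    "a plate of pasta with tomato sauce and basil",
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
tfidf = vectorizer.fit_transform(captions_per_image)   # (num_images, vocab)
terms = np.array(vectorizer.get_feature_names_out())

def distinctive_attributes(image_idx, top_k=5):
    row = tfidf[image_idx].toarray().ravel()
    top = row.argsort()[::-1][:top_k]
    return [(terms[i], float(row[i])) for i in top if row[i] > 0]

print(distinctive_attributes(0))
```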
Transform, Contrast and Tell: Coherent Entity-Aware Multi-Image Captioning
Coherent entity-aware multi-image captioning aims to generate coherent
captions for neighboring images in a news document. There are coherence
relationships among neighboring images because they often describe the same
entities or events. These relationships are important for entity-aware
multi-image captioning, but are neglected in entity-aware single-image
captioning. Most existing work focuses on single-image captioning, while
multi-image captioning has not been explored before. Hence, this paper proposes
a coherent entity-aware multi-image captioning model by making use of coherence
relationships. The model consists of a Transformer-based caption generation
model and two types of contrastive learning-based coherence mechanisms. The
generation model generates the caption by paying attention to the image and the
accompanying text. The caption-caption coherence mechanism aims to ensure that
entities in the caption of an image also appear in the captions of neighboring images.
The caption-image-text coherence mechanism aims to ensure that entities in the
caption of an image also appear in the accompanying text. To evaluate coherence
between captions, two coherence evaluation metrics are proposed. A new
dataset, DM800K, is constructed; it has more images per document than the two
existing datasets GoodNews and NYT800K and is more suitable for multi-image
captioning. Experiments on three datasets show that the proposed captioning model
outperforms 7 baselines according to BLEU, ROUGE, METEOR, and entity precision
and recall scores. Experiments also show that the generated captions are more
coherent than those of the baselines according to caption entity scores, caption
ROUGE scores, the two proposed coherence evaluation metrics, and human
evaluations. Comment: 32 pages, 11 tables, 3 figures.
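One way to picture the caption-caption coherence mechanism is as an InfoNCE-style contrastive objective that pulls together embeddings of captions from neighboring images in the same document and treats other in-batch captions as negatives; the sketch below assumes such caption embeddings already exist and is not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

# Hypothetical InfoNCE-style coherence loss: each anchor caption's positive
# is the caption of a neighboring image from the same document; captions of
# other documents in the batch serve as negatives.
def caption_coherence_loss(anchor_emb, neighbor_emb, temperature=0.07):
    # anchor_emb, neighbor_emb: (B, D) caption embeddings, row-aligned so that
    # neighbor_emb[i] comes from an image adjacent to anchor_emb[i]'s image.
    a = F.normalize(anchor_emb, dim=-1)
    n = F.normalize(neighbor_emb, dim=-1)
    logits = a @ n.t() / temperature                # (B, B) similarity matrix
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)
```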