PMC-VQA: Visual Instruction Tuning for Medical Visual Question Answering
In this paper, we focus on the problem of Medical Visual Question Answering
(MedVQA), which is crucial in efficiently interpreting medical images with
vital, clinically relevant information. Firstly, we reframe the problem of
MedVQA as a generation task that naturally follows human-machine interaction,
and propose a generative model for medical visual understanding by aligning
visual information from a pre-trained vision encoder with a large language
model. Secondly, we establish a scalable pipeline to construct a large-scale
medical visual question-answering dataset, named PMC-VQA, which contains 227k
VQA pairs over 149k images covering various modalities and diseases. Thirdly, we
pre-train our proposed model on PMC-VQA and then fine-tune it on multiple
public benchmarks, e.g., VQA-RAD and SLAKE, outperforming existing work by a
large margin. Additionally, we propose a manually verified test set that is
significantly more challenging and that even the best models struggle to solve.
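The alignment this abstract describes (connecting a pre-trained vision encoder to a large language model) can be sketched minimally as a learned projection from visual feature space into the LLM's token-embedding space. The dimensions, the single linear projection, and all variable names below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not PMC-VQA's actual sizes):
# a vision encoder emitting 256-d patch features, an LLM with 512-d embeddings.
VIS_DIM, LLM_DIM, N_PATCHES, N_TOKENS = 256, 512, 49, 8

# A learned linear projection maps visual features into the LLM embedding space.
W_proj = rng.standard_normal((VIS_DIM, LLM_DIM)) * 0.02

def align_and_prepend(patch_feats, text_embeds):
    """Project vision-encoder features into the LLM embedding space and prepend
    them to the question's token embeddings, forming the sequence the language
    model would decode an answer from."""
    visual_tokens = patch_feats @ W_proj              # (N_PATCHES, LLM_DIM)
    return np.concatenate([visual_tokens, text_embeds], axis=0)

patch_feats = rng.standard_normal((N_PATCHES, VIS_DIM))  # stand-in image features
text_embeds = rng.standard_normal((N_TOKENS, LLM_DIM))   # stand-in question embeddings
seq = align_and_prepend(patch_feats, text_embeds)
print(seq.shape)  # (57, 512): visual tokens followed by question tokens
```

In practice the projection (and often the LLM) is trained on the VQA pairs while the vision encoder stays frozen, but the interface — visual tokens concatenated with text tokens — is the part sketched here.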
Learning the meanings of function words from grounded language using a visual question answering model
Interpreting a seemingly-simple function word like "or", "behind", or "more"
can require logical, numerical, and relational reasoning. How are such words
learned by children? Prior acquisition theories have often relied on positing a
foundation of innate knowledge. Yet recent neural-network-based visual question
answering models can apparently learn to use function words as part of
answering questions about complex visual scenes. In this paper, we study what
these models learn about function words, in the hope of better understanding
how the meanings of these words can be learnt by both models and children. We
show that recurrent models trained on visually grounded language learn gradient
semantics for function words requiring spatial and numerical reasoning.
Furthermore, we find that these models can learn the meanings of the logical
connectives "and" and "or" without any prior knowledge of logical reasoning,
and we find early evidence that they can develop the ability to reason about
alternative expressions when interpreting language. Finally, we show that
word-learning difficulty depends on a word's frequency in the models' input.
Our findings offer evidence that the meanings of function words can be learned
in a visually grounded context by non-symbolic, general statistical learning
algorithms, without any prior knowledge of linguistic meaning.
Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents
This paper introduces a deep learning model tailored for document information
analysis, emphasizing document classification, entity relation extraction, and
document visual question answering. The proposed model leverages
transformer-based models to encode all the information present in a document
image, including textual, visual, and layout information. The model is
pre-trained and subsequently fine-tuned for various document image analysis
tasks. The proposed model incorporates three additional tasks during the
pre-training phase: identifying the reading order of the different layout
segments in a document image, categorizing layout segments according to
PubLayNet, and generating the text sequence within a given layout segment
(text block).
The model also uses a collective pre-training scheme in which the losses of
all the tasks under consideration, across both pre-training and fine-tuning
and all datasets, are combined. Additional encoder and decoder blocks are
added to the RoBERTa network to generate results for all tasks. The proposed
model achieved impressive results across all tasks, with an accuracy of 95.87%
on the RVL-CDIP dataset for document classification, F1 scores of 0.9306,
0.9804, 0.9794, and 0.8742 on the FUNSD, CORD, SROIE, and Kleister-NDA datasets
respectively for entity relation extraction, and an ANLS score of 0.8468 on the
DocVQA dataset for visual question answering. The results highlight the
effectiveness of the proposed model in understanding and interpreting complex
document layouts and content, making it a promising tool for document analysis
tasks.
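The collective scheme the abstract describes — combining the losses of all tasks into one objective — reduces to a weighted sum per training step. A minimal sketch, assuming equal default weights; the task names and loss values below are illustrative, not the paper's actual configuration.

```python
# Minimal sketch of a collective multi-task objective: per-task losses are
# combined into one scalar that a single optimization step would minimize.
# Task names, weights, and values are illustrative stand-ins.

def collective_loss(task_losses, weights=None):
    """Weighted sum of all task losses under consideration (default weight 1.0)."""
    weights = weights or {}
    return sum(weights.get(name, 1.0) * loss for name, loss in task_losses.items())

losses = {
    "doc_classification": 0.42,   # e.g. RVL-CDIP cross-entropy
    "entity_relations": 0.31,     # e.g. FUNSD/CORD objective
    "reading_order": 0.18,        # auxiliary pre-training task
    "layout_categories": 0.25,    # PubLayNet-style segment labels
    "text_generation": 0.55,      # text within a layout segment
}
total = collective_loss(losses)
print(round(total, 2))  # 1.71
```

The single scalar means gradients from every task flow through the shared encoder at once, which is what lets the auxiliary pre-training tasks shape the representation used by the fine-tuning tasks.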
Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments
A robot that can carry out a natural-language instruction has been a dream
since before the Jetsons cartoon series imagined a life of leisure mediated by
a fleet of attentive robot helpers. It is a dream that remains stubbornly
distant. However, recent advances in vision and language methods have made
incredible progress in closely related areas. This is significant because a
robot interpreting a natural-language navigation instruction on the basis of
what it sees is carrying out a vision and language process that is similar to
Visual Question Answering. Both tasks can be interpreted as visually grounded
sequence-to-sequence translation problems, and many of the same methods are
applicable. To enable and encourage the application of vision and language
methods to the problem of interpreting visually-grounded navigation
instructions, we present the Matterport3D Simulator -- a large-scale
reinforcement learning environment based on real imagery. Using this simulator,
which can in future support a range of embodied vision and language tasks, we
provide the first benchmark dataset for visually-grounded natural language
navigation in real buildings -- the Room-to-Room (R2R) dataset.
Comment: CVPR 2018 Spotlight presentation
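The abstract's claim that VQA and instruction-following navigation share a visually grounded sequence-to-sequence shape can be made concrete with a common interface: both consume visual context plus an input token sequence and emit an output token sequence. Everything below is a schematic stand-in with toy decoders, not the Matterport3D Simulator API or any real model.

```python
# Schematic illustration: one seq2seq interface covers both tasks; only the
# decoder (answer tokens vs. action tokens) differs. All names are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class GroundedSeq2SeqTask:
    name: str
    visual_context: List[str]   # stand-in for image/panorama features
    input_tokens: List[str]     # question or navigation instruction
    decode: Callable[[List[str], List[str]], List[str]]

def toy_vqa_decoder(ctx, tokens):
    # Answer tokens: a trivial stand-in for a learned answer decoder.
    return ["yes"] if "door" in ctx else ["no"]

def toy_nav_decoder(ctx, tokens):
    # Action tokens: a trivial stand-in for a learned action decoder.
    return ["forward", "left", "stop"]

vqa = GroundedSeq2SeqTask("VQA", ["door", "hallway"],
                          ["is", "there", "a", "door"], toy_vqa_decoder)
nav = GroundedSeq2SeqTask("R2R-style navigation", ["hallway", "stairs"],
                          ["walk", "to", "the", "stairs"], toy_nav_decoder)

for task in (vqa, nav):
    print(task.name, "->", task.decode(task.visual_context, task.input_tokens))
```

The point of the shared shape is the abstract's point too: encoder-decoder methods built for VQA transfer to navigation with the output vocabulary swapped from answers to actions.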
Semantic bottleneck for computer vision tasks
This paper introduces a novel method for the representation of images that is
semantic by nature, addressing the question of computation intelligibility in
computer vision tasks. More specifically, our proposition is to introduce what
we call a semantic bottleneck in the processing pipeline, which is a crossing
point in which the representation of the image is entirely expressed with
natural language, while retaining the efficiency of numerical representations.
We show that our approach is able to generate semantic representations that
give state-of-the-art results on semantic content-based image retrieval and
also perform very well on image classification tasks. Intelligibility is
evaluated through user-centered experiments for failure detection.
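The semantic bottleneck the abstract describes — a crossing point where the image representation is entirely natural language — can be sketched as a two-stage pipeline whose second stage only ever sees text, making the intermediate representation human-readable by construction. Both stage functions below are hypothetical stand-ins; in the paper they are learned models.

```python
# Two-stage pipeline with a natural-language bottleneck. The second stage
# receives nothing but text, so the bottleneck is inspectable by construction.
# Both stages are toy stand-ins for learned models.

def describe(image_id):
    """Stage 1: map an image to a natural-language description (stand-in)."""
    toy_captions = {
        "img_001": "a brown dog running on a sandy beach",
        "img_002": "a red bicycle leaning against a brick wall",
    }
    return toy_captions[image_id]

def classify_from_text(description):
    """Stage 2: predict a label from the textual bottleneck alone (stand-in)."""
    keywords = {"dog": "animal", "bicycle": "vehicle"}
    for word, label in keywords.items():
        if word in description:
            return label
    return "unknown"

bottleneck = describe("img_001")        # inspectable: plain English
label = classify_from_text(bottleneck)
print(bottleneck, "->", label)  # a brown dog running on a sandy beach -> animal
```

Because the bottleneck is plain text, a user can read it directly, which is what makes the user-centered failure-detection evaluation the abstract mentions possible: a wrong prediction can be traced to a wrong or incomplete description.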
Evaluating the Representational Hub of Language and Vision Models
The multimodal models used in the emerging field at the intersection of
computational linguistics and computer vision implement the bottom-up
processing of the `Hub and Spoke' architecture proposed in cognitive science to
represent how the brain processes and combines multi-sensory inputs. In
particular, the Hub is implemented as a neural network encoder. We investigate
the effect on this encoder of various vision-and-language tasks proposed in the
literature: visual question answering, visual reference resolution, and
visually grounded dialogue. To measure the quality of the representations
learned by the encoder, we use two kinds of analyses. First, we evaluate the
encoder pre-trained on the different vision-and-language tasks on an existing
diagnostic task designed to assess multimodal semantic understanding. Second,
we carry out a battery of analyses aimed at studying how the encoder merges and
exploits the two modalities.
Comment: Accepted to IWCS 2019