Enhancing Document Information Analysis with Multi-Task Pre-training: A Robust Approach for Information Extraction in Visually-Rich Documents
This paper introduces a deep learning model tailored for document information
analysis, emphasizing document classification, entity relation extraction, and
document visual question answering. The proposed model leverages
transformer-based models to encode all the information present in a document
image, including textual, visual, and layout information. The model is
pre-trained and subsequently fine-tuned for various document image analysis
tasks. The proposed model incorporates three additional tasks during the
pre-training phase: identifying the reading order of layout segments in a
document image, categorizing layout segments according to the PubLayNet
classes, and generating the text sequence within a given layout segment (text
block). The model also uses a collective pre-training scheme in which the
losses of all tasks, both pre-training and fine-tuning, across all datasets
are optimized jointly. Additional encoder and decoder blocks are
added to the RoBERTa network to generate results for all tasks. The proposed
model achieved impressive results across all tasks, with an accuracy of 95.87%
on the RVL-CDIP dataset for document classification, F1 scores of 0.9306,
0.9804, 0.9794, and 0.8742 on the FUNSD, CORD, SROIE, and Kleister-NDA datasets
respectively for entity relation extraction, and an ANLS score of 0.8468 on the
DocVQA dataset for visual question answering. The results highlight the
effectiveness of the proposed model in understanding and interpreting complex
document layouts and content, making it a promising tool for document analysis
tasks.
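
The collective pre-training scheme described above can be pictured as a single joint objective that sums weighted per-task losses. The following Python sketch is a hypothetical illustration of such a combined loss; the task names, weights, and values are assumptions, not the paper's actual configuration.

# Hypothetical sketch of a collective multi-task objective: losses from the
# pre-training tasks (reading order, segment categorization, text generation)
# and the fine-tuning tasks are summed into one scalar that is optimized
# jointly. Task names, weights, and values below are illustrative only.
import torch

def collective_loss(task_losses, task_weights=None):
    """Combine per-task losses into a single scalar for joint optimization."""
    total = torch.zeros(())
    for name, loss in task_losses.items():
        weight = 1.0 if task_weights is None else task_weights.get(name, 1.0)
        total = total + weight * loss
    return total

# Dummy example with one loss per task considered during collective pre-training.
losses = {
    "reading_order": torch.tensor(0.42),
    "segment_category": torch.tensor(0.31),   # PubLayNet-style categories
    "text_generation": torch.tensor(1.05),
    "doc_classification": torch.tensor(0.27),  # e.g. RVL-CDIP fine-tuning task
}
print(collective_loss(losses))  # tensor(2.0500)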
Vision-Enhanced Semantic Entity Recognition in Document Images via Visually-Asymmetric Consistency Learning
Extracting meaningful entities belonging to predefined categories from
Visually-rich Form-like Documents (VFDs) is a challenging task. Visual and
layout features such as font, background, color, and bounding box location and
size provide important cues for identifying entities of the same type. However,
existing models commonly train a visual encoder with weak cross-modal
supervision signals, resulting in a limited capacity to capture these
non-textual features and suboptimal performance. In this paper, we propose a
novel Visually-Asymmetric coNsistenCy Learning (VANCL) approach that addresses the above limitation
by enhancing the model's ability to capture fine-grained visual and layout
features through the incorporation of color priors. Experimental results on
benchmark datasets show that our approach substantially outperforms the strong
LayoutLM series baseline, demonstrating its effectiveness.
Additionally, we investigate the effects of different color schemes on our
approach, providing insights for optimizing model performance. We believe our
work will inspire future research on multimodal information extraction.
Comment: 14 pages, 6 figures, accepted by EMNLP 2023
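
One way to read the visually-asymmetric consistency idea is as a loss that keeps predictions from a plain document view consistent with those from a view enhanced with color priors. The sketch below is a minimal, hypothetical illustration of such a consistency term; the teacher/student roles, the KL form, and the tensor shapes are assumptions rather than the paper's exact design.

# Hypothetical sketch of visually-asymmetric consistency learning: one view of
# the document image is augmented with color priors (e.g. colored boxes over
# entity regions) and the model is trained so that predictions from the plain
# view stay consistent with those from the enhanced view.
import torch
import torch.nn.functional as F

def consistency_loss(logits_plain, logits_enhanced, temperature=1.0):
    """KL divergence pulling plain-view predictions toward the color-prior view."""
    teacher = F.softmax(logits_enhanced / temperature, dim=-1).detach()
    student_log = F.log_softmax(logits_plain / temperature, dim=-1)
    return F.kl_div(student_log, teacher, reduction="batchmean")

# Dummy token-level entity logits: 2 documents x 4 tokens flattened, 5 labels.
plain = torch.randn(2 * 4, 5)
enhanced = torch.randn(2 * 4, 5)
print(consistency_loss(plain, enhanced))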
DocParser: End-to-end OCR-free Information Extraction from Visually Rich Documents
Information Extraction from visually rich documents is a challenging task
that has gained a lot of attention in recent years due to its importance in
several document-control based applications and its widespread commercial
value. Most of the research conducted on this topic to date follows a
two-step pipeline: first, the text is read using an off-the-shelf Optical
Character Recognition (OCR) engine; then, the fields of interest are extracted
from the obtained text. The main drawback of these approaches is their
dependence on an external OCR system, which can negatively impact both
performance and computational speed. Recent OCR-free methods were proposed to
address the previous issues. Inspired by their promising results, we propose in
this paper an OCR-free end-to-end information extraction model named DocParser.
It differs from prior end-to-end approaches by its ability to better extract
discriminative character features. DocParser achieves state-of-the-art results
on various datasets, while still being faster than previous works.
Comment: The 17th International Conference on Document Analysis and Recognition (ICDAR 2023)
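
An OCR-free extractor of this kind can be thought of as a visual encoder feeding an autoregressive decoder that emits the target fields directly from pixels, with no external OCR step. The following minimal sketch illustrates that interface under assumed layer sizes and a toy vocabulary; it is not DocParser's actual architecture.

# Hypothetical sketch of an OCR-free end-to-end extractor: a visual encoder
# maps the document image to features, and an autoregressive decoder emits a
# serialized field sequence directly. Layer sizes and vocabulary are toy values.
import torch
import torch.nn as nn

class OcrFreeExtractor(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256):
        super().__init__()
        # Simple convolutional encoder standing in for the visual backbone.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, d_model, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image, target_tokens):
        feats = self.encoder(image)                # (B, C, H', W')
        memory = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C)
        tgt = self.embed(target_tokens)            # (B, T, C)
        out = self.decoder(tgt, memory)
        return self.lm_head(out)                   # (B, T, vocab)

# Example: one 64x64 "document" image, decoding a 6-token field sequence.
model = OcrFreeExtractor()
logits = model(torch.randn(1, 3, 64, 64), torch.randint(0, 1000, (1, 6)))
print(logits.shape)  # torch.Size([1, 6, 1000])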