TextFormer: A Query-based End-to-End Text Spotter with Mixed Supervision
End-to-end text spotting is a vital computer vision task that aims to
integrate scene text detection and recognition into a unified framework.
Typical methods heavily rely on Region-of-Interest (RoI) operations to extract
local features and complex post-processing steps to produce final predictions.
To address these limitations, we propose TextFormer, a query-based end-to-end
text spotter with a Transformer architecture. Specifically, using a query embedding
per text instance, TextFormer builds upon an image encoder and a text decoder
to learn a joint semantic understanding for multi-task modeling. It allows for
mutual training and optimization of classification, segmentation, and
recognition branches, resulting in deeper feature sharing without sacrificing
flexibility or simplicity. Additionally, we design an Adaptive Global
aGgregation (AGG) module to transfer global features into sequential features
for reading arbitrarily-shaped texts, which overcomes the sub-optimization
problem of RoI operations. Furthermore, mixed supervision exploits potential
corpus information ranging from weak annotations to full labels, further
improving text detection and end-to-end text spotting results.
Extensive experiments on various bilingual (i.e., English and Chinese)
benchmarks demonstrate the superiority of our method. In particular, on the
TDA-ReCTS dataset, TextFormer surpasses the state-of-the-art method by 13.2% in
terms of 1-NED.
Comment: MIR 2023, 15 pages
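To make the query-based decoding idea concrete, the sketch below is a minimal, hypothetical PyTorch module: one learnable query per text instance cross-attends to the flattened encoder features and is expanded into a per-character sequence, so no RoI cropping is needed. All module names, dimensions, and the global-to-sequential aggregation scheme are illustrative assumptions, not the authors' AGG implementation.

```python
# Hypothetical sketch of query-based text decoding without RoI operations.
import torch
import torch.nn as nn

class QueryTextDecoder(nn.Module):
    def __init__(self, num_queries=25, d_model=256, vocab_size=97, max_len=32):
        super().__init__()
        self.queries = nn.Embedding(num_queries, d_model)   # one query per text instance
        self.cross_attn = nn.MultiheadAttention(d_model, num_heads=8, batch_first=True)
        self.pos = nn.Embedding(max_len, d_model)            # character positions within a word
        self.char_head = nn.Linear(d_model, vocab_size)
        self.max_len = max_len

    def forward(self, image_feats):
        # image_feats: (B, HW, d_model) flattened global features from the image encoder
        b = image_feats.size(0)
        q = self.queries.weight.unsqueeze(0).expand(b, -1, -1)        # (B, Q, d)
        inst, _ = self.cross_attn(q, image_feats, image_feats)        # instance-level features
        # turn each instance feature into a per-character sequence that re-attends globally
        steps = self.pos.weight.unsqueeze(0).unsqueeze(0)             # (1, 1, T, d)
        seq = inst.unsqueeze(2) + steps                               # (B, Q, T, d)
        seq, _ = self.cross_attn(seq.flatten(1, 2), image_feats, image_feats)
        return self.char_head(seq.view(b, -1, self.max_len, seq.size(-1)))  # (B, Q, T, vocab)

feats = torch.randn(2, 64 * 64, 256)        # dummy encoder output
logits = QueryTextDecoder()(feats)
print(logits.shape)                         # torch.Size([2, 25, 32, 97])
```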
Towards Robust Visual Information Extraction in Real World: New Dataset and Novel Solution
Visual information extraction (VIE) has attracted considerable attention
recently owing to its various advanced applications such as document
understanding, automatic marking and intelligent education. Most existing works
decouple this problem into several independent sub-tasks of text spotting
(text detection and recognition) and information extraction, completely
ignoring the high correlation among them during optimization. In this paper, we
propose a robust visual information extraction system (VIES) towards real-world
scenarios, which is a unified end-to-end trainable framework for simultaneous
text detection, recognition and information extraction by taking a single
document image as input and outputting the structured information.
Specifically, the information extraction branch collects abundant visual and
semantic representations from text spotting for multimodal feature fusion and
conversely, provides higher-level semantic clues to contribute to the
optimization of text spotting. Moreover, regarding the shortage of public
benchmarks, we construct a fully-annotated dataset called EPHOIE
(https://github.com/HCIILAB/EPHOIE), which is the first Chinese benchmark for
both text spotting and visual information extraction. EPHOIE consists of 1,494
images of examination paper heads with complex layouts and backgrounds, including
a total of 15,771 Chinese handwritten or printed text instances. Compared with
the state-of-the-art methods, our VIES shows significantly superior performance
on the EPHOIE dataset and achieves a 9.01% F-score gain on the widely used
SROIE dataset under the end-to-end scenario.
Comment: 8 pages, 5 figures, to be published in AAAI 202
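As a rough illustration of the fusion described above, the sketch below combines a per-instance visual feature from the spotting branch with a pooled embedding of the recognized string and classifies the fused vector into entity fields. The dimensions, the field count, and the fusion design are assumptions for illustration only, not the actual VIES architecture.

```python
# Hypothetical sketch of multimodal fusion for the information extraction branch.
import torch
import torch.nn as nn

class FusionIEHead(nn.Module):
    def __init__(self, visual_dim=256, text_dim=128, num_fields=11):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, visual_dim)      # align text features to visual dim
        self.fuse = nn.Sequential(
            nn.Linear(visual_dim * 2, visual_dim), nn.ReLU(),
            nn.Linear(visual_dim, num_fields),                 # entity-field logits per instance
        )

    def forward(self, visual_feats, text_feats):
        # visual_feats: (N, visual_dim) one vector per detected text instance
        # text_feats:   (N, text_dim)   pooled embedding of the recognized string
        fused = torch.cat([visual_feats, self.text_proj(text_feats)], dim=-1)
        return self.fuse(fused)                                # (N, num_fields)

head = FusionIEHead()
logits = head(torch.randn(7, 256), torch.randn(7, 128))
print(logits.shape)                                            # torch.Size([7, 11])
```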
A large-scale dataset for end-to-end table recognition in the wild
Table recognition (TR) is one of the research hotspots in pattern
recognition, which aims to extract information from tables in an image. Common
table recognition tasks include table detection (TD), table structure
recognition (TSR) and table content recognition (TCR). TD is to locate tables
in the image, TCR recognizes text content, and TSR recognizes the spatial and
logical structure. Currently, end-to-end TR in real scenarios, accomplishing the
three sub-tasks simultaneously, remains an unexplored research area. One major
factor that inhibits researchers is the lack of a benchmark dataset. To this
end, we propose a new large-scale dataset named Table Recognition Set
(TabRecSet) with diverse table forms sourced from multiple scenarios in the
wild, providing complete annotation dedicated to end-to-end TR research. It is
the first and largest bi-lingual dataset for end-to-end TR, with 38.1K tables,
of which 20.4K are in English and 17.7K are in Chinese. The samples have
diverse forms, such as border-complete and border-incomplete tables, and regular
and irregular tables (rotated, distorted, etc.). The scenarios are diverse and in
the wild, varying from scanned to camera-taken images, documents to Excel tables,
and educational test papers to financial invoices. The annotations are complete,
consisting of table body spatial annotation, cell spatial and logical
annotation, and text content for TD, TSR and TCR, respectively. The spatial
annotation utilizes polygons instead of the bounding boxes or quadrilaterals
adopted by most datasets. Polygon spatial annotation is more suitable for the
irregular tables that are common in wild scenarios. Additionally, we propose a
visualized and interactive annotation tool named TableMe to improve the
efficiency and quality of table annotation.
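The following sketch illustrates why polygon-based spatial annotation suits irregular tables: a cell is stored as an arbitrary polygon plus its logical grid location, and collapsing it to an axis-aligned box discards the shape of rotated or distorted cells. The JSON field names here are assumptions for illustration, not the actual TabRecSet annotation schema.

```python
# Hypothetical loader for polygon-based cell annotations.
import json
from dataclasses import dataclass

@dataclass
class CellAnnotation:
    polygon: list[tuple[float, float]]   # arbitrary-shaped cell boundary (spatial annotation)
    start_row: int                        # logical location in the table grid
    end_row: int
    start_col: int
    end_col: int
    text: str                             # content for the TCR sub-task

def load_cells(path: str) -> list[CellAnnotation]:
    with open(path, encoding="utf-8") as f:
        raw = json.load(f)
    return [
        CellAnnotation(
            polygon=[tuple(p) for p in cell["polygon"]],
            start_row=cell["start_row"], end_row=cell["end_row"],
            start_col=cell["start_col"], end_col=cell["end_col"],
            text=cell["text"],
        )
        for cell in raw["cells"]
    ]

def axis_aligned_box(cell: CellAnnotation) -> tuple[float, float, float, float]:
    # A rectangle loses the outline of rotated/distorted cells; the polygon keeps it.
    xs = [x for x, _ in cell.polygon]
    ys = [y for _, y in cell.polygon]
    return min(xs), min(ys), max(xs), max(ys)
```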