Masked Vision-Language Transformers for Scene Text Recognition
Scene text recognition (STR) enables computers to recognize and read the text
in various real-world scenes. Recent STR models benefit from taking linguistic
information in addition to visual cues into consideration. We propose a novel
model, Masked Vision-Language Transformers (MVLT), to capture both explicit
and implicit linguistic information. Our encoder is a Vision Transformer, and our
decoder is a multi-modal Transformer. MVLT is trained in two stages: in the
first stage, we design an STR-tailored pretraining method based on a masking
strategy; in the second stage, we fine-tune our model and adopt an iterative
correction method to improve the performance. MVLT attains superior results
compared to state-of-the-art STR models on several benchmarks. Our code and
model are available at https://github.com/onealwj/MVLT.
Comment: The paper is accepted by the 33rd British Machine Vision Conference (BMVC 2022).
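The first training stage hinges on a masking strategy over the visual input. As a rough illustration only (the paper's exact STR-tailored scheme is not reproduced here; the mask ratio and the zero-vector stand-in for a learnable [MASK] embedding are assumptions), masking a subset of image-patch embeddings before the encoder might look like:

```python
import numpy as np

def mask_patches(patch_embeddings, mask_ratio=0.3, rng=None):
    """Randomly mask a fraction of image-patch embeddings.

    Toy sketch of masked pretraining for STR: a random subset of
    patches is replaced by a mask vector (zeros here, standing in
    for a learnable [MASK] embedding).
    """
    rng = rng or np.random.default_rng(0)
    n_patches, _ = patch_embeddings.shape
    n_masked = max(1, int(round(n_patches * mask_ratio)))
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    out = patch_embeddings.copy()
    out[masked_idx] = 0.0          # stand-in for the [MASK] embedding
    return out, np.sort(masked_idx)

patches = np.ones((16, 8))         # 16 patches, 8-dim embeddings
masked, idx = mask_patches(patches, mask_ratio=0.25)
print(len(idx))                    # 4 of 16 patches masked
```

During pretraining, the decoder would be asked to reconstruct the characters (or patch content) behind the masked positions, forcing it to exploit both visual context and linguistic regularities.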
PGNet: Real-time Arbitrarily-Shaped Text Spotting with Point Gathering Network
The reading of arbitrarily-shaped text has received increasing research
attention. However, existing text spotters are mostly built on two-stage
frameworks or character-based methods, which suffer from either Non-Maximum
Suppression (NMS), Region-of-Interest (RoI) operations, or character-level
annotations. In this paper, to address the above problems, we propose a novel
fully convolutional Point Gathering Network (PGNet) for reading
arbitrarily-shaped text in real-time. The PGNet is a single-shot text spotter,
where the pixel-level character classification map is learned with the proposed
PG-CTC loss, avoiding the use of character-level annotations. With a PG-CTC
decoder, we gather high-level character classification vectors from
two-dimensional space and decode them into text symbols without NMS and RoI
operations involved, which guarantees high efficiency. Additionally, a graph
refinement module (GRM) is proposed to reason about the relations between each
character and its neighbors, optimizing the coarse recognition and improving
the end-to-end performance. Experiments show that the proposed method achieves
competitive accuracy while significantly improving the running speed. In
particular, on Total-Text it runs at 46.7 FPS, surpassing previous spotters by
a large margin.
Comment: 10 pages, 8 figures, AAAI 202
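The PG-CTC decoding step, gathering character classification vectors from the 2-D map along a text center line and collapsing them without NMS or RoI operations, can be sketched as follows. The toy alphabet, the point-wise sampling of the center line, and the greedy CTC collapse are simplifying assumptions, not the paper's exact implementation:

```python
import numpy as np

def ctc_greedy_decode(char_probs, blank=0):
    """Greedy CTC collapse: argmax per step, merge repeats, drop blanks."""
    best = char_probs.argmax(axis=1)
    decoded, prev = [], blank
    for c in best:
        if c != prev and c != blank:
            decoded.append(int(c))
        prev = c
    return decoded

def gather_and_decode(char_map, center_points, blank=0):
    """Gather classification vectors from a (H, W, C) character map
    along a text center line, then CTC-decode them into symbols."""
    seq = np.stack([char_map[y, x] for (y, x) in center_points])
    return ctc_greedy_decode(seq, blank=blank)

# toy alphabet: 0 = blank, 1 = 'A', 2 = 'B'
char_map = np.zeros((2, 4, 3))
char_map[0, 0, 1] = 1.0   # 'A'
char_map[0, 1, 1] = 1.0   # repeated 'A' (collapsed by CTC)
char_map[0, 2, 0] = 1.0   # blank
char_map[0, 3, 2] = 1.0   # 'B'
labels = gather_and_decode(char_map, [(0, 0), (0, 1), (0, 2), (0, 3)])
print(labels)  # [1, 2] -> "AB"
```

Because decoding is just a gather plus a linear scan over the sampled points, no per-character boxes, NMS, or RoI cropping are needed, which is what makes the single-shot pipeline fast.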
A joint study of deep learning-based methods for identity document image binarization and its influence on attribute recognition
Text recognition has benefited considerably from deep learning research, as have the preprocessing methods included in its workflow. Identity documents are critical in the field of document analysis and should be thoroughly researched in relation to this workflow. We propose to examine the link between deep learning-based binarization and recognition algorithms for this sort of document on the MIDV-500 and MIDV-2020 datasets. We present a series of experiments illustrating how the quality of the captured images affects binarization results, and how the binarization output in turn influences final recognition performance. We show that deep learning-based binarization solutions are sensitive to capture quality, which implies that they still need significant improvement. We also show that proper binarization can improve the performance of many recognition methods. Our retrained U-Net-bin outperformed all other binarization methods, and the best recognition result was obtained by PaddlePaddle OCR v2.
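As a minimal stand-in for the binarization stage studied here (the paper evaluates deep networks such as the retrained U-Net-bin, not classical thresholding), Otsu's method illustrates the kind of black-and-white map a binarizer hands to the recognizer:

```python
import numpy as np

def otsu_binarize(gray):
    """Classical Otsu thresholding as a simple binarization baseline.

    Picks the threshold maximizing between-class variance over the
    grayscale histogram, then splits pixels into two classes.
    """
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = gray.size
    cum_w = np.cumsum(hist)                       # class-0 pixel counts
    cum_mean = np.cumsum(hist * np.arange(256))   # class-0 intensity sums
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0, w1 = cum_w[t], total - cum_w[t]
        if w0 == 0 or w1 == 0:
            continue
        m0 = cum_mean[t] / w0
        m1 = (cum_mean[-1] - cum_mean[t]) / w1
        var = w0 * w1 * (m0 - m1) ** 2            # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    # 1 = brighter-than-threshold pixels (background on a dark-ink scan)
    return (gray > best_t).astype(np.uint8)

gray = np.full((8, 8), 200, dtype=np.uint8)
gray[:, :4] = 10                 # dark "ink" half of a toy image
binary = otsu_binarize(gray)
```

On clean, well-lit captures such a split is trivial; the paper's point is that real identity-document photos degrade this step, which is why the quality of the binarization output measurably moves downstream recognition accuracy.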