Due to the enormous technical challenges and wide range of applications,
scene text recognition (STR) has been an active research topic in computer
vision for years. To tackle this challenging problem, numerous innovative methods
have been proposed, and incorporating linguistic knowledge into
STR models has recently become a prominent trend. In this work, we first draw
inspiration from the recent progress in Vision Transformer (ViT) to construct a
conceptually simple yet functionally powerful vision STR model, which is built
upon ViT and a tailored Adaptive Addressing and Aggregation (A3) module. It
already outperforms most previous state-of-the-art models for scene text
recognition, including both pure vision models and language-augmented methods.
To integrate linguistic knowledge, we further propose a Multi-Granularity
Prediction strategy to inject information from the language modality into the
model in an implicit way, \ie, subword representations (BPE and WordPiece)
widely used in NLP are introduced into the output space, in addition to the
conventional character-level representation, without adopting an independent
language model (LM). To produce the final recognition results, two strategies
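As a concrete illustration of the multi-granularity output spaces, the sketch below segments a scene-text word at the character level and at the subword level with a greedy longest-match-first WordPiece tokenizer; the mini-vocabulary and the tokenizer itself are simplified assumptions for exposition, not the actual MGP-STR implementation.

```python
# Toy illustration of character-level vs. subword-level label spaces.
# The vocabulary below is hypothetical, chosen only for this example.

def wordpiece_tokenize(word, vocab):
    """Greedy longest-match-first WordPiece segmentation."""
    tokens, start = [], 0
    while start < len(word):
        end, piece = len(word), None
        while start < end:
            candidate = word[start:end]
            if start > 0:
                candidate = "##" + candidate  # continuation-piece marker
            if candidate in vocab:
                piece = candidate
                break
            end -= 1
        if piece is None:
            return ["[UNK]"]  # no valid segmentation found
        tokens.append(piece)
        start = end
    return tokens

# Hypothetical mini-vocabulary for demonstration only.
vocab = {"coffee", "##fee", "shop", "##sh", "##op"}

word = "coffeeshop"
char_level = list(word)                       # conventional character-level labels
subword_level = wordpiece_tokenize(word, vocab)

print(char_level)     # ['c', 'o', 'f', 'f', 'e', 'e', 's', 'h', 'o', 'p']
print(subword_level)  # ['coffee', '##sh', '##op']
```

A subword prediction head thus emits far fewer, semantically richer tokens than a character head, which is how linguistic structure enters the model without a separate LM.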
for effectively fusing the multi-granularity predictions are devised. The
resultant algorithm (termed MGP-STR) is able to push the performance envelope
of STR to an even higher level. Specifically, MGP-STR achieves an average
recognition accuracy of 94\% on standard benchmarks for scene text
recognition. Moreover, it also achieves state-of-the-art results on widely-used
handwritten benchmarks as well as more challenging scene text datasets,
demonstrating the generality of the proposed MGP-STR algorithm. The source code
and models will be available at:
\url{https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/OCR/MGP-STR}.

Comment: submitted to TPAMI; an extension to our previous ECCV 2022 paper
arXiv:2209.0359