Document pre-trained models and grid-based models have proven to be very
effective on various tasks in Document AI. However, for the document layout
analysis (DLA) task, existing document pre-trained models, even those
pre-trained in a multi-modal fashion, usually rely on either textual features
or visual features. Grid-based models for DLA are multi-modal but largely
neglect the effect of pre-training. To fully leverage multi-modal information
and exploit pre-training techniques to learn better representations for DLA, in
this paper, we present VGT, a two-stream Vision Grid Transformer, in which Grid
Transformer (GiT) is proposed and pre-trained for 2D token-level and
segment-level semantic understanding. Furthermore, a new dataset named D4LA,
which is so far the most diverse and detailed manually-annotated benchmark for
document layout analysis, is curated and released. Experimental results
demonstrate that the proposed VGT model achieves new state-of-the-art results
on DLA tasks, e.g., PubLayNet (95.7% → 96.2%), DocBank
(79.6% → 84.1%), and D4LA (67.7% → 68.8%).
The code and models, as well as the D4LA dataset, will be made publicly
available at \url{https://github.com/AlibabaResearch/AdvancedLiterateMachinery}.
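For readers who want a concrete picture of the two-stream idea sketched in the abstract, the following PyTorch snippet pairs a toy convolutional vision stream with a Transformer encoder over a 2D token grid and fuses the two feature maps for per-cell layout classification. This is an illustrative sketch only, not the released VGT implementation; all module names, dimensions, and the class count are hypothetical placeholders.

    # Illustrative two-stream layout model (hypothetical, not the VGT release).
    import torch
    import torch.nn as nn

    class GridStream(nn.Module):
        """Toy grid stream: Transformer encoder over a 2D grid of token ids."""
        def __init__(self, vocab_size=30522, dim=256, depth=4, heads=8, grid=32):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, dim)
            self.pos = nn.Parameter(torch.zeros(1, grid * grid, dim))
            layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, depth)

        def forward(self, token_grid):              # (B, H, W) token ids
            b, h, w = token_grid.shape
            x = self.embed(token_grid.flatten(1))   # (B, H*W, dim)
            x = self.encoder(x + self.pos[:, : h * w])
            return x.transpose(1, 2).reshape(b, -1, h, w)  # back to a 2D map

    class VisionStream(nn.Module):
        """Toy vision stream: small conv backbone producing a feature map."""
        def __init__(self, dim=256):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(64, dim, 3, stride=2, padding=1), nn.ReLU(),
            )

        def forward(self, image):                   # (B, 3, H, W)
            return self.net(image)

    class TwoStreamLayoutModel(nn.Module):
        """Fuses both streams and predicts per-cell layout-category logits."""
        def __init__(self, dim=256, num_classes=11):   # class count is a placeholder
            super().__init__()
            self.vision = VisionStream(dim)
            self.grid = GridStream(dim=dim)
            self.head = nn.Conv2d(2 * dim, num_classes, 1)

        def forward(self, image, token_grid):
            v = self.vision(image)                              # (B, C, h, w)
            g = self.grid(token_grid)                           # (B, C, H, W)
            g = nn.functional.interpolate(g, size=v.shape[-2:]) # align spatial sizes
            return self.head(torch.cat([v, g], dim=1))          # (B, K, h, w)

    if __name__ == "__main__":
        model = TwoStreamLayoutModel()
        img = torch.randn(2, 3, 128, 128)
        tok = torch.randint(0, 30522, (2, 32, 32))
        print(model(img, tok).shape)                 # torch.Size([2, 11, 32, 32])

The sketch only conveys the structural point made in the abstract: visual features and grid-organized textual features are encoded in separate streams and then combined, so the layout predictor can draw on both modalities.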