Vision Grid Transformer for Document Layout Analysis

Abstract

Document pre-trained models and grid-based models have proven effective on a variety of tasks in Document AI. However, for the document layout analysis (DLA) task, existing document pre-trained models, even those pre-trained in a multi-modal fashion, usually rely on either textual features or visual features alone. Grid-based models for DLA are multi-modal but largely neglect the effect of pre-training. To fully leverage multi-modal information and exploit pre-training techniques to learn better representations for DLA, in this paper we present VGT, a two-stream Vision Grid Transformer, in which a Grid Transformer (GiT) is proposed and pre-trained for 2D token-level and segment-level semantic understanding. Furthermore, we curate and release D⁴LA, the most diverse and detailed manually-annotated benchmark for document layout analysis to date. Experimental results show that the proposed VGT model achieves new state-of-the-art results on DLA tasks, e.g. PubLayNet (95.7% → 96.2%), DocBank (79.6% → 84.1%), and D⁴LA (67.7% → 68.8%). The code and models, as well as the D⁴LA dataset, will be made publicly available at https://github.com/AlibabaResearch/AdvancedLiterateMachinery

Comment: Accepted by ICCV 2023
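The abstract's two-stream design pairs a vision backbone with a Grid Transformer operating on a 2D grid of token embeddings, and fuses the two modalities into a shared feature map for a downstream detection head. Below is a minimal PyTorch sketch of that two-stream idea; the specific module choices (patch-embedding stand-ins for the two streams, bilinear resizing, additive fusion, and the grid embedding size) are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoStreamVGTSketch(nn.Module):
    """Illustrative two-stream sketch: a vision stream and a grid (text)
    stream whose feature maps are fused for a detection head.
    All internals here are assumptions for exposition, not VGT itself."""

    def __init__(self, img_channels=3, grid_channels=768, dim=256):
        super().__init__()
        # Vision stream: a patchify convolution standing in for a ViT backbone.
        self.vision_stream = nn.Sequential(
            nn.Conv2d(img_channels, dim, kernel_size=16, stride=16),
            nn.GELU(),
        )
        # Grid stream: a 1x1 projection standing in for the Grid Transformer (GiT)
        # over a 2D grid of token embeddings.
        self.grid_stream = nn.Sequential(
            nn.Conv2d(grid_channels, dim, kernel_size=1),
            nn.GELU(),
        )
        self.fuse = nn.Conv2d(dim, dim, kernel_size=3, padding=1)

    def forward(self, image, char_grid):
        v = self.vision_stream(image)    # (B, dim, H/16, W/16)
        g = self.grid_stream(char_grid)  # (B, dim, Hg, Wg)
        # Resize grid features to the vision feature map and fuse by addition
        # (one simple fusion choice among several plausible ones).
        g = F.interpolate(g, size=v.shape[-2:], mode="bilinear",
                          align_corners=False)
        return self.fuse(v + g)  # fused multi-modal features for a detector


# Usage with dummy inputs (the grid tensor stands in for token embeddings
# rasterized onto the page layout).
model = TwoStreamVGTSketch()
image = torch.randn(1, 3, 512, 512)
char_grid = torch.randn(1, 768, 64, 64)
feats = model(image, char_grid)  # (1, 256, 32, 32)
```

The key design point the sketch illustrates is that layout cues from the visual stream and semantics from the text grid are kept in spatial register, so a standard detection head can consume the fused map directly.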
