Visual information extraction (VIE) plays an important role in Document
Intelligence. Generally, it is divided into two tasks: semantic entity
recognition (SER) and relation extraction (RE). Recently, pre-trained models
for documents have achieved substantial progress in VIE, particularly in SER.
However, most existing models learn geometric representations only implicitly,
which has proven insufficient for the RE task, where geometric information is
especially crucial. Moreover, we reveal another factor limiting RE performance:
the objective gap between the pre-training phase and the RE fine-tuning phase.
To tackle these issues, we
propose in this paper a multi-modal framework, named GeoLayoutLM, for VIE.
GeoLayoutLM explicitly models the geometric relations in pre-training, which we
call geometric pre-training. Geometric pre-training is achieved by three
specially designed geometry-related pre-training tasks. Additionally, novel
relation heads, which are pre-trained by the geometric pre-training tasks and
fine-tuned for RE, are elaborately designed to enrich and enhance the feature
representation. According to extensive experiments on standard VIE benchmarks,
GeoLayoutLM achieves highly competitive scores on the SER task and
significantly outperforms the previous state of the art on RE (e.g., the F1
score of RE on FUNSD is boosted from 80.35% to 89.45%). The code and models
are publicly available at
https://github.com/AlibabaResearch/AdvancedLiterateMachinery/tree/main/DocumentUnderstanding/GeoLayoutLM

Comment: CVPR 2023 Highlight