Self-Supervised Learning for Semantic Segmentation of Archaeological Monuments in DTMs

Abstract

Deep learning models typically require large amounts of labeled data to perform well. In this study, we apply a Self-Supervised Learning (SSL) method to the semantic segmentation of archaeological monuments in Digital Terrain Models (DTMs). The method first pretrains a model on unlabeled data (the pretext task) and then fine-tunes it on a small labeled dataset (the downstream task). In the pretext task, we use unlabeled DTMs and Relief Visualizations (RVs) to train an encoder-decoder and a Generative Adversarial Network (GAN); in the downstream task, we fine-tune a semantic segmentation model on an annotated DTM dataset. Experiments indicate that this approach outperforms both training from scratch and models pretrained on natural-image data such as ImageNet. The code and pretrained weights for the encoder-decoder and GAN models are available on GitHub.
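The two-stage workflow described above can be sketched numerically. The toy code below is only an illustration of the pretrain-then-fine-tune idea, not the paper's implementation: it uses a linear "encoder" pretrained to reconstruct a surrogate relief visualization from unlabeled patches, then reuses those encoder weights for a small labeled per-pixel prediction task. All shapes, data, learning rates, and the linear stand-in for relief visualization are assumptions made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def mse(pred, target):
    """Mean squared error between two arrays."""
    return float(np.mean((pred - target) ** 2))

# Toy linear encoder/decoder acting on flattened 8x8 "patches".
D, H = 64, 16
enc = rng.normal(scale=0.1, size=(D, H))       # shared encoder weights
dec = rng.normal(scale=0.1, size=(H, D))       # pretext-task decoder

# Pretext task: predict a relief visualization from the raw DTM patch.
# A fixed random linear map stands in for a real RV such as hillshading.
dtm = rng.normal(size=(256, D))                # unlabeled DTM patches
rv_transform = rng.normal(scale=0.1, size=(D, D))
rv = dtm @ rv_transform                        # surrogate RV "targets"

lr = 0.01
init_pretext_loss = mse(dtm @ enc @ dec, rv)
for _ in range(200):
    z = dtm @ enc                              # encode
    err = z @ dec - rv                         # reconstruction error
    dec -= lr * z.T @ err / len(dtm)           # gradient step on decoder
    enc -= lr * dtm.T @ (err @ dec.T) / len(dtm)   # and on encoder
pretext_loss = mse(dtm @ enc @ dec, rv)

# Downstream task: attach a fresh per-pixel head to the pretrained
# encoder and fine-tune both on a small labeled set.
seg_head = rng.normal(scale=0.1, size=(H, D))  # per-pixel prediction head
x_lab = rng.normal(size=(32, D))               # small labeled set
y_lab = (x_lab > 0).astype(float)              # toy binary masks

init_down_loss = mse(x_lab @ enc @ seg_head, y_lab)
for _ in range(200):
    z = x_lab @ enc
    err = z @ seg_head - y_lab
    seg_head -= lr * z.T @ err / len(x_lab)
    enc -= lr * x_lab.T @ (err @ seg_head.T) / len(x_lab)
downstream_loss = mse(x_lab @ enc @ seg_head, y_lab)

print(pretext_loss, downstream_loss)
```

The key design point mirrored here is that only the encoder weights carry over from the pretext task; the pretext decoder is discarded and a task-specific head is trained in the downstream stage.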