Transformer-based semantic segmentation for large-scale building footprint extraction from very-high resolution satellite images
Authors
Mohamed Barakat A. Gibril
Rami Al-Ruzouq
Jan Bolcek
Omid Ghorbanzadeh
Ratiranjan Jena
Helmi Zulhaidi Mohd Shafri
Abdallah Shanableh
Publication date
1 January 2024
Publisher
Elsevier
DOI
Abstract
Extracting building footprints from extensive very-high spatial resolution (VHSR) remote sensing data is crucial for diverse applications, including surveying, urban studies, population estimation, identification of informal settlements, and disaster management. Although convolutional neural networks (CNNs) are commonly utilized for this purpose, their effectiveness is constrained by limitations in capturing long-range relationships and contextual details due to the localized nature of convolution operations. This study introduces the masked-attention mask transformer (Mask2Former), based on the Swin Transformer, for building footprint extraction from large-scale satellite imagery. To enhance the capture of large-scale semantic information and extract multiscale features, a hierarchical vision transformer with shifted windows (Swin Transformer) serves as the backbone network. An extensive analysis compares the efficiency and generalizability of Mask2Former with four CNN models (PSPNet, DeepLabV3+, UpperNet-ConvNext, and SegNeXt) and two transformer-based models (UpperNet-Swin and SegFormer) featuring different complexities. Results reveal superior performance of transformer-based models over CNN-based counterparts, showcasing exceptional generalization across diverse testing areas with varying building structures, heights, and sizes. Specifically, Mask2Former with the Swin transformer backbone achieves a mean intersection over union between 88% and 93%, along with a mean F-score (mF-score) ranging from 91% to 96.35% across various urban landscapes. © 2024 COSPA
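The abstract reports results as mean intersection over union (mIoU) and mean F-score. As a point of reference, here is a minimal NumPy sketch of how these per-class metrics are computed for binary building-footprint masks (an illustration of the standard definitions, not the authors' evaluation code; mIoU averages the per-class IoU over all classes):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union for binary masks (building vs. background)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

def f_score(pred, gt, beta=1.0):
    """F-measure from pixel-wise precision and recall (beta=1 gives F1)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    b2 = beta ** 2
    return (1 + b2) * precision * recall / (b2 * precision + recall)

# Toy 4x4 masks: prediction has one extra building pixel at (1, 1).
pred = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
gt   = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
print(iou(pred, gt))      # 3 / 4 = 0.75
print(f_score(pred, gt))  # 2*tp / (2*tp + fp + fn) = 6/7 ≈ 0.857
```

A reported mIoU of 88–93% thus means the predicted and reference building masks overlap by that fraction of their union, averaged over classes and test tiles.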
Available Versions
Digital library of Brno University of Technology
oai:https://dspace.vut.cz:1101...
Last time updated on 04/07/2024
Universiti Putra Malaysia Institutional Repository
oai:psasir.upm.edu.my:112078
Last time updated on 05/02/2025
Digital library of Brno University of Technology
oai:dspace.vut.cz:11012/245513
Last time updated on 23/12/2025