Self-attention is of vital importance in semantic segmentation as it enables
modeling of long-range context, which translates into improved performance. We
argue that it is equally important to model short-range context, especially
in cases where the regions of interest are small and ambiguous, or where the
semantic classes are imbalanced. To this
end, we propose Masked Supervised Learning (MaskSup), an effective single-stage
learning paradigm that models both short- and long-range context, capturing the
contextual relationships between pixels via random masking. Experimental
results demonstrate the competitive performance of MaskSup against strong
baselines in both binary and multi-class segmentation tasks on three standard
benchmark datasets, particularly in handling ambiguous regions and in better
preserving the segmentation of minority classes, with no added inference cost. In
addition to segmenting target regions even when large portions of the input are
masked, MaskSup is also generic and can be easily integrated into a variety of
semantic segmentation methods. We also show that the proposed method is
computationally efficient, improving the mean intersection-over-union (mIoU)
by 10\% while requiring 3× fewer learnable
parameters.
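The random-masking idea underlying MaskSup can be illustrated with a small sketch. The snippet below zeroes out a random subset of non-overlapping patches in an input image before the segmentation loss is computed against the full ground-truth labels; the patch size and mask ratio are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def random_patch_mask(image, patch_size=16, mask_ratio=0.5, rng=None):
    """Zero out a random subset of non-overlapping patches.

    Hedged sketch of the random-masking idea from the abstract; the
    specific patch size and mask ratio here are assumptions.
    Returns the masked image and a boolean visibility map
    (True = pixel still visible).
    """
    rng = np.random.default_rng() if rng is None else rng
    c, h, w = image.shape
    gh, gw = h // patch_size, w // patch_size
    n_patches = gh * gw
    n_masked = int(round(mask_ratio * n_patches))
    # Choose which patches to hide.
    masked_idx = rng.choice(n_patches, size=n_masked, replace=False)
    masked = image.copy()
    visible = np.ones((h, w), dtype=bool)
    for idx in masked_idx:
        row, col = divmod(idx, gw)
        ys, xs = row * patch_size, col * patch_size
        masked[:, ys:ys + patch_size, xs:xs + patch_size] = 0.0
        visible[ys:ys + patch_size, xs:xs + patch_size] = False
    return masked, visible
```

During training, the model would predict a segmentation map from the masked image while being supervised by the complete ground-truth labels, which forces it to infer occluded regions from both nearby (short-range) and distant (long-range) context.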