Domain adaptation is one of the prominent strategies for handling both domain
shift, which is widely encountered in large-scale land use/land cover mapping,
and the scarcity of pixel-level ground truth that is crucial for supervised
semantic segmentation. Studies focusing on adversarial domain adaptation via
re-styling source domain samples, commonly through generative adversarial
networks, have reported varying levels of success, yet they suffer from
semantic inconsistencies and visual corruptions, and often require a large
number of target domain samples. In this letter, we propose a new unsupervised
domain adaptation method for the semantic segmentation of very high resolution
images that i) leads to semantically consistent and noise-free images, ii)
operates with a single target domain sample (i.e., one-shot), and iii)
requires only a fraction of the parameters of state-of-the-art methods.
More specifically, an image-to-image translation paradigm is proposed, based
on an encoder-decoder principle in which latent content representations are
mixed across domains, and a perceptual network module and loss function are
further introduced to enforce semantic consistency.
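To make the translation step concrete, the minimal PyTorch sketch below
illustrates the two ingredients just described: an encoder-decoder pair whose
latent content codes are mixed across domains, and a perceptual consistency
loss. Every architecture, layer size, the convex mixing rule, and the choice
of a VGG16 backbone for the perceptual network are assumptions made for
illustration only, not the letter's released implementation.

\begin{verbatim}
import torch
import torch.nn as nn
from torchvision.models import vgg16

class Encoder(nn.Module):
    """Maps an image to a latent content representation (illustrative)."""
    def __init__(self, in_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, dim, 7, 1, 3), nn.ReLU(inplace=True),
            nn.Conv2d(dim, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim * 2, dim * 4, 4, 2, 1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Reconstructs an image from a (possibly mixed) latent code."""
    def __init__(self, out_ch=3, dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(dim * 4, dim * 2, 4, 2, 1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(dim * 2, dim, 4, 2, 1), nn.ReLU(inplace=True),
            nn.Conv2d(dim, out_ch, 7, 1, 3), nn.Tanh(),
        )
    def forward(self, z):
        return self.net(z)

def mix_latents(z_src, z_tgt, alpha=0.5):
    # Convex combination of content codes across domains;
    # alpha is an assumed mixing weight, not a value from the letter.
    return alpha * z_src + (1.0 - alpha) * z_tgt

# A frozen ImageNet-pretrained VGG16 (up to relu3_3) stands in for the
# perceptual network module; input normalization omitted for brevity.
vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def perceptual_loss(x, y):
    return nn.functional.l1_loss(vgg(x), vgg(y))

enc, dec = Encoder(), Decoder()
x_src = torch.randn(1, 3, 256, 256)  # source domain patch
x_tgt = torch.randn(1, 3, 256, 256)  # the single target sample (one-shot)
x_fake = dec(mix_latents(enc(x_src), enc(x_tgt)))
loss_sem = perceptual_loss(x_fake, x_src)  # penalize semantic drift
\end{verbatim}

In this sketch the target domain contributes only a single image, whose latent
code alone enters the mixing, mirroring the one-shot setting described above.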
Cross-city comparative experiments show that the proposed method outperforms
state-of-the-art domain adaptation methods. Our source code will be available
at \url{https://github.com/Sarmadfismael/LRM_I2I}.