Synthetic aperture radar (SAR) data is becoming increasingly available to a
wide range of users through commercial service providers, with resolutions
reaching 0.5m/px. Segmenting SAR data still requires skilled personnel,
limiting the potential for large-scale use. We show that it is possible to
automatically and reliably perform urban scene segmentation from
next-generation SAR data (0.15m/px) using deep neural networks (DNNs),
achieving a pixel accuracy of 95.19% and a mean IoU of 74.67% with data
collected over a region of merely 2.2 km². The presented DNN is not only
effective but also very small, with only 63k parameters, and computationally
simple enough to achieve a throughput of around 500Mpx/s using a single GPU.
We further identify
that additional SAR receive antennas and data from multiple flights
substantially improve the segmentation accuracy. We describe a procedure for generating a
high-quality segmentation ground truth from multiple inaccurate building and
road annotations, which has been crucial to achieving these segmentation
results.