Convolutional Neural Networks (CNNs) have shown remarkable progress in
medical image segmentation. However, lesion segmentation remains a challenge
for state-of-the-art CNN-based algorithms due to the large variation in lesion
scales and shapes. On the one hand, tiny lesions are hard to delineate
precisely from medical images, which are often of low resolution. On the other
hand, segmenting large lesions requires large receptive fields, which
exacerbates the first challenge. In this paper, we present a scale-aware
super-resolution network to adaptively segment lesions of various sizes from
low-resolution medical images. Our proposed network contains dual branches
that simultaneously conduct lesion mask super-resolution and lesion image
super-resolution. The image super-resolution branch provides more detailed
features to the segmentation branch (i.e., the mask super-resolution branch)
for fine-grained segmentation. Meanwhile, we introduce scale-aware dilated
convolution blocks into the multi-task decoders to adaptively adjust the
receptive fields of the convolutional kernels according to the lesion sizes. To
guide the segmentation branch to learn from richer high-resolution features, we
propose a feature affinity module and a scale affinity module to enhance the
multi-task learning of the dual branches. On multiple challenging lesion
segmentation datasets, our proposed network achieved consistent improvements
compared to other state-of-the-art methods.

Comment: Journal paper under review. 10 pages. The first two authors
contributed equally.
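The abstract's scale-aware dilated convolutions rest on a standard property of
dilation: a kernel of size k with dilation rate d covers an effective extent of
d*(k-1)+1 pixels, so widening d enlarges the receptive field without adding
parameters. The paper's exact block design is not given here; the sketch below
only illustrates this receptive-field arithmetic, with the dilation rates (1,
2, 4) chosen as hypothetical examples.

```python
def effective_kernel_size(k: int, d: int) -> int:
    """Effective spatial extent of a k-wide kernel with dilation rate d.

    A dilated kernel inserts d-1 gaps between adjacent taps, so it
    spans d * (k - 1) + 1 input positions per axis.
    """
    return d * (k - 1) + 1


# A 3x3 kernel at dilation rates 1, 2, and 4 spans 3, 5, and 9 pixels
# per axis, respectively -- same parameter count, growing receptive field.
for d in (1, 2, 4):
    print(f"dilation {d}: effective extent {effective_kernel_size(3, d)}")
```

This is why a scale-aware block can match small lesions with low dilation rates and large lesions with high ones while keeping the kernel weights fixed in number.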