Few-shot semantic segmentation aims to segment query images given only a few
annotated support samples. Current few-shot segmentation methods focus mainly
on foreground information and fail to fully exploit the rich background
information, which can cause false activation of foreground-like background
regions and poor adaptation to dramatic scene changes between support and
query image pairs. Meanwhile, the lack of a detail-mining mechanism can yield
coarse parsing results that miss semantic components or edge areas, since
prototypes have limited capacity to cope with large variations in object
appearance. To tackle these problems, we propose a progressive dual-prior
guided few-shot semantic segmentation network.
Specifically, a dual prior mask generation (DPMG) module is first designed to
suppress false activation through a foreground-background comparison, treating
the background as auxiliary refinement information. With the dual prior masks
refining the location of the foreground area, we further propose a progressive
semantic detail enrichment (PSDE) module, which forces the parsing model to
capture hidden semantic details by iteratively erasing the high-confidence
foreground region and activating details in the remaining region through a
hierarchical structure. The collaboration of DPMG and PSDE formulates a novel
few-shot segmentation network that can be learned in an end-to-end manner.
Comprehensive experiments on PASCAL-5i and MS COCO demonstrate that our
proposed algorithm achieves strong performance.
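To make the two mechanisms concrete, the following is a rough illustrative sketch (not the authors' implementation): the foreground/background prototypes, cosine-similarity matching, the confidence threshold, and the threshold-relaxation schedule are all assumptions introduced here, intended only to show the comparison-based prior mask and the iterative erase-then-reactivate idea.

```python
import numpy as np

def cosine(feat, proto):
    """Cosine similarity between each H x W feature vector and a prototype."""
    return feat @ proto / (np.linalg.norm(feat, axis=-1) * np.linalg.norm(proto) + 1e-8)

def dual_prior_mask(feat, fg_proto, bg_proto):
    """Prior mask via foreground-background comparison: a pixel is foreground
    only if it is closer to the foreground prototype than to the background one."""
    fg_sim = cosine(feat, fg_proto)
    bg_sim = cosine(feat, bg_proto)
    return (fg_sim > bg_sim).astype(np.float32), fg_sim

def progressive_erase(feat, fg_proto, bg_proto, steps=3, thresh=0.9):
    """Iteratively accept the high-confidence foreground, erase it, and relax
    the threshold so lower-confidence details in the rest region are recovered."""
    prior, fg_sim = dual_prior_mask(feat, fg_proto, bg_proto)
    remaining = prior.copy()          # region still to be parsed
    pred = np.zeros_like(prior)       # accumulated prediction
    for _ in range(steps):
        conf = fg_sim * remaining
        if conf.max() > 0:
            high = (conf >= thresh * conf.max()) & (remaining > 0)
        else:
            high = np.zeros_like(remaining, dtype=bool)
        pred[high] = 1.0              # activate high-confidence region
        remaining[high] = 0.0         # erase it from the search region
        thresh *= 0.8                 # relax threshold at the next level
    return pred
```

In this toy form, pixels whose background similarity dominates are suppressed before any thresholding, and each erasing round can only add pixels from the shrinking remainder, mimicking the hierarchical detail enrichment described above.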