Semi-supervised learning (SSL) has proven beneficial for mitigating the
issue of limited labeled data, especially for volumetric medical
image segmentation. Unlike previous SSL methods, which focus on exploring highly
confident pseudo-labels or developing consistency regularization schemes, our
empirical findings suggest that inconsistent decoder features emerge naturally
when two decoders strive to generate consistent predictions. Motivated by this
observation, we first analyze the value of such discrepancy for learning toward
consistency, under both pseudo-labeling and consistency-regularization
settings, and then propose a novel SSL method, LeFeD, which
learns from the feature-level discrepancy between two decoders by feeding the
discrepancy back to the encoder as a feedback signal. The core design of LeFeD is to
enlarge the difference by training differentiated decoders, and then learn from
the inconsistent information iteratively. We evaluate LeFeD against eight
state-of-the-art (SOTA) methods on three public datasets. Experiments show
that LeFeD surpasses its competitors without any bells and whistles such as
uncertainty estimation or strong constraints, and sets a new state of the art
for semi-supervised medical image segmentation. Code is available at
\textcolor{cyan}{https://github.com/maxwell0027/LeFeD}.
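The feedback loop described above can be sketched in a few lines. The following is a minimal, hypothetical PyTorch illustration, not the authors' implementation: all module names and hyperparameters (channel counts, kernel sizes, number of iterations) are illustrative assumptions. It shows an encoder paired with two decoders that are differentiated (here, by kernel size), with the absolute difference of their outputs fed back to the encoder as an extra input channel over a few refinement passes.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Illustrative sketch of decoder-discrepancy feedback (not the paper's code)."""

    def __init__(self, ch=8):
        super().__init__()
        # The encoder consumes the image plus a discrepancy map (2 input channels).
        self.encoder = nn.Conv2d(2, ch, kernel_size=3, padding=1)
        # Two decoders differentiated by kernel size to encourage disagreement.
        self.dec_a = nn.Conv2d(ch, 1, kernel_size=3, padding=1)
        self.dec_b = nn.Conv2d(ch, 1, kernel_size=5, padding=2)

    def forward(self, x, iters=3):
        # Start with a zero discrepancy map on the first pass.
        disc = torch.zeros_like(x)
        for _ in range(iters):
            feat = torch.relu(self.encoder(torch.cat([x, disc], dim=1)))
            pa, pb = self.dec_a(feat), self.dec_b(feat)
            # The decoder discrepancy becomes the feedback signal for the next pass.
            disc = (pa - pb).abs()
        # Final prediction: average of the two decoder outputs.
        return (pa + pb) / 2, disc

net = TinySegNet()
img = torch.randn(1, 1, 16, 16)   # toy 2D input; the paper targets 3D volumes
pred, disc = net(img)
```

A real instantiation would use a 3D segmentation backbone (e.g. a V-Net-style encoder-decoder) and inject the discrepancy at feature level rather than at the raw input, but the iterative encode-decode-feedback structure is the same.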