Deep learning-based medical segmentation remains challenging due to the lack of large-scale datasets in the medical domain, and the existing publicly available datasets vary significantly in imaging modalities and target anatomies. This paper presents a novel guided latent diffusion model for universal medical segmentation, capable of segmenting diverse anatomical structures with a single, unified architecture. Conditioned on a Contrastive Language-Image Pretraining (CLIP) embedding that specifies the target anatomical structure, the proposed model is trained on a collection of datasets covering diverse structures and can segment any anatomical target present in the aggregated data. By performing diffusion entirely in latent space, we achieve results comparable to pixel-space diffusion at significantly lower computational cost. The proposed method demonstrates competitive performance against existing deep learning-based discriminative approaches on several benchmarks and shows strong generalization to unseen datasets.
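To illustrate the overall idea, the following is a minimal, self-contained sketch of text-conditioned reverse diffusion in latent space. Everything here is a toy stand-in: `clip_like_embedding` replaces a real CLIP text encoder, the FiLM-style `denoiser` replaces the paper's trained latent denoising network, and the final latent would in practice be decoded to a pixel-space mask by a VAE decoder. None of these names or choices come from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical; the paper uses real CLIP embeddings
# and a learned latent-space denoising network).
EMB_DIM, LATENT_DIM, T = 16, 32, 50

def clip_like_embedding(prompt: str) -> np.ndarray:
    """Deterministic stand-in for a CLIP text embedding of the target anatomy."""
    seed = sum(ord(c) for c in prompt)
    return np.random.default_rng(seed).standard_normal(EMB_DIM)

# Hypothetical conditioning: FiLM-style scale/shift derived from the embedding.
W_scale = rng.standard_normal((EMB_DIM, LATENT_DIM)) * 0.1
W_shift = rng.standard_normal((EMB_DIM, LATENT_DIM)) * 0.1

def denoiser(z_t: np.ndarray, t: int, cond: np.ndarray) -> np.ndarray:
    """Toy noise predictor conditioned on the anatomy embedding
    (placeholder for a trained U-Net operating on latents)."""
    scale = 1.0 + cond @ W_scale
    shift = cond @ W_shift
    return np.tanh(z_t * scale + shift)

# Standard DDPM-style linear noise schedule.
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def sample_segmentation_latent(prompt: str) -> np.ndarray:
    """Schematic reverse diffusion loop, fully in latent space."""
    cond = clip_like_embedding(prompt)
    z = rng.standard_normal(LATENT_DIM)  # start from pure noise
    for t in reversed(range(T)):
        eps_hat = denoiser(z, t, cond)
        # Posterior mean update (stochastic noise term omitted for brevity).
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
    return z  # would be decoded to a pixel-space mask by the VAE decoder

z_liver = sample_segmentation_latent("liver")
z_spleen = sample_segmentation_latent("spleen")
```

The key point the sketch conveys is that the same denoiser serves every anatomical target: only the text-derived conditioning vector changes between prompts, which is what makes a single unified architecture possible across aggregated datasets.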