Minimizing the need for pixel-level annotated data for training PET anomaly
segmentation networks is crucial, particularly due to time and cost constraints
related to expert annotations. Current un-/weakly-supervised anomaly detection
methods rely on autoencoders or generative adversarial networks trained only on
healthy data, although such models are often challenging to train. In this work, we
present a weakly supervised and Implicitly guided COuNterfactual diffusion
model for Detecting Anomalies in PET images, termed IgCONDA-PET. The
training is conditioned on image class labels (healthy vs. unhealthy), and
implicit guidance is used to generate counterfactuals for unhealthy images with
anomalies. The counterfactual generation process synthesizes the healthy
counterpart for a given unhealthy image, and the difference between the two
facilitates the identification of anomaly locations. The code is available at:
https://github.com/igcondapet/IgCONDA-PET.git
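
To make the counterfactual idea concrete, the following is a minimal, hypothetical sketch (not the authors' implementation; the denoiser, label tokens, and guidance scale `w` are illustrative assumptions). It shows how an implicitly (classifier-free) guided noise prediction toward the "healthy" class could be combined with an anomaly map computed as the absolute difference between the unhealthy input and its healthy counterfactual.

```python
import torch
import torch.nn as nn


class DummyDenoiser(nn.Module):
    """Stand-in for a class-conditional diffusion denoiser eps_theta(x_t, t, y).
    Labels are assumed to be: 0 = unconditional/null, 1 = healthy, 2 = unhealthy."""

    def __init__(self, channels: int = 1):
        super().__init__()
        self.net = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.class_emb = nn.Embedding(3, channels)

    def forward(self, x_t, t, y):
        # Add a per-class bias to the noisy input before the (toy) denoising conv.
        emb = self.class_emb(y).view(x_t.shape[0], -1, 1, 1)
        return self.net(x_t + emb)


@torch.no_grad()
def guided_noise_pred(model, x_t, t, y_target, y_null, w: float = 3.0):
    """Implicit (classifier-free) guidance: interpolate between unconditional
    and class-conditional noise predictions with guidance scale w."""
    eps_cond = model(x_t, t, y_target)
    eps_uncond = model(x_t, t, y_null)
    return eps_uncond + w * (eps_cond - eps_uncond)


# Illustrative usage: one guided denoising call toward the "healthy" class.
model = DummyDenoiser()
x_t = torch.randn(1, 1, 64, 64)                 # noisy PET slice at timestep t
t = torch.tensor([500])
eps = guided_noise_pred(
    model, x_t, t,
    y_target=torch.tensor([1]),                 # condition on "healthy"
    y_null=torch.tensor([0]),                   # unconditional/null token
)

# After running the full reverse diffusion with the healthy condition, the
# anomaly map is the voxel-wise difference between the unhealthy input and
# its healthy counterfactual (random tensors used here as placeholders).
x_unhealthy = torch.rand(1, 1, 64, 64)
x_counterfactual = torch.rand(1, 1, 64, 64)
anomaly_map = (x_unhealthy - x_counterfactual).abs()
```
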