Star-convex shapes arise across bio-microscopy and radiology in the form of
nuclei, nodules, metastases, and other units. Existing instance segmentation
networks for such structures train on densely labeled instances for each
dataset, which requires substantial and often impractical manual annotation
effort. Further, significant reengineering or finetuning is needed when
presented with new datasets and imaging modalities due to changes in contrast,
shape, orientation, resolution, and density. We present AnyStar, a
domain-randomized generative model that simulates synthetic training data of
blob-like objects with randomized appearance, environments, and imaging physics
to train general-purpose star-convex instance segmentation networks. As a
result, networks trained using our generative model do not require annotated
images from unseen datasets. A single network trained on our synthesized data
accurately 3D segments C. elegans and P. dumerilii nuclei in fluorescence
microscopy, mouse cortical nuclei in micro-CT, zebrafish brain nuclei in EM,
and placental cotyledons in human fetal MRI, all without any retraining,
finetuning, transfer learning, or domain adaptation. Code is available at
https://github.com/neel-dey/AnyStar.
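Since the abstract describes applying a single pretrained star-convex (StarDist-style) 3D network to unseen volumes without any finetuning, a minimal inference sketch may help illustrate the intended workflow. This is an illustrative assumption, not the repository's documented interface: the model directory name ("anystar"), weights location ("models/"), input file ("volume.tif"), and normalization percentiles are all hypothetical placeholders.

```python
# Hypothetical sketch: off-the-shelf 3D instance segmentation with a
# pretrained StarDist-3D model, assuming AnyStar-style weights are
# available locally. Paths and names below are placeholders.
from tifffile import imread
from csbdeep.utils import normalize
from stardist.models import StarDist3D

# Load a pretrained model from disk (config=None loads an existing model).
model = StarDist3D(None, name="anystar", basedir="models")

# Read an unseen 3D volume and apply percentile intensity normalization.
img = imread("volume.tif")
img = normalize(img, 1, 99.8)

# Predict instance labels directly; no retraining or finetuning step.
labels, details = model.predict_instances(img)
print(labels.shape, labels.max(), "instances")
```

The key point the abstract makes is that the same weights are reused across fluorescence microscopy, micro-CT, EM, and MRI volumes; only generic preprocessing such as intensity normalization would differ per dataset.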