Despite their impressive performance in various surgical scene understanding
tasks, deep learning-based methods are often hindered from being deployed in
real-world surgical applications for several reasons. In particular, data
collection, annotation, and domain shift between sites and patients are the
most common obstacles. In this work, we mitigate these data-related issues by
efficiently leveraging minimal source images to generate synthetic surgical
instrument segmentation datasets and achieve outstanding generalization
performance on unseen real domains. Specifically, in our framework, only one
background tissue image and at most three images of each foreground instrument
are taken as the seed images. These source images are extensively transformed
and used to build foreground and background image pools, from which randomly
sampled tissue and instrument images are composited with multiple blending
techniques to generate new surgical scene images. In addition, we
introduce hybrid training-time augmentations to diversify the training data
further. Extensive evaluation on three real-world datasets, i.e., Endo2017,
Endo2018, and RoboTool, demonstrates that our one-to-many synthetic surgical
instrument dataset generation and segmentation framework can achieve
encouraging performance compared with training on real data. Notably, on the
RoboTool dataset, where a larger domain gap exists, our framework demonstrates
superior generalization by a considerable margin. We expect that our promising
results will attract research attention to improving model generalization
through data synthesis.

Comment: First two authors contributed equally. Accepted by IROS202
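
To make the compositing step described above concrete, below is a minimal, illustrative Python sketch of how instrument cut-outs sampled from a foreground pool might be blended onto a sampled background tissue image. The pool structures, the function name compose_scene, and the simple alpha blend are assumptions made for illustration; they are not the paper's implementation, which composes images using multiple blending techniques.

```python
import random
import numpy as np

def compose_scene(background_pool, foreground_pool, max_instruments=3):
    """Blend randomly sampled instrument cut-outs onto a sampled tissue background."""
    # Sample one background tissue image (H x W x 3, uint8); cut-outs in the
    # foreground pool are assumed to be pre-transformed to the same resolution.
    bg = random.choice(background_pool).astype(np.float32)
    scene = bg.copy()
    mask = np.zeros(bg.shape[:2], dtype=np.uint8)  # 0 = tissue background

    # Paste between 1 and max_instruments instruments onto the scene.
    for label in range(1, random.randint(1, max_instruments) + 1):
        fg_rgb, fg_alpha = random.choice(foreground_pool)  # (RGB image, alpha matte)
        alpha = fg_alpha.astype(np.float32)[..., None] / 255.0
        # Simple alpha blending; the actual framework may use other blending schemes.
        scene = alpha * fg_rgb.astype(np.float32) + (1.0 - alpha) * scene
        mask[fg_alpha > 127] = label  # per-instrument label for supervision

    return scene.astype(np.uint8), mask
```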