47,526 research outputs found
Cats or CAT scans: transfer learning from natural or medical image source datasets?
Transfer learning is a widely used strategy in medical image analysis.
Instead of only training a network with a limited amount of data from the
target task of interest, we can first train the network with other, potentially
larger source datasets, creating a more robust model. The source datasets do
not have to be related to the target task. For a classification task in lung CT
images, we could use either head CT images or images of cats as the source.
While head CT images appear more similar to lung CT images, the number and
diversity of cat images might lead to a better model overall. In this survey we
review a number of papers that have performed similar comparisons. Although the
answer to which strategy is best seems to be "it depends", we discuss a number
of research directions we need to take as a community, to gain more
understanding of this topic.
Comment: Accepted to Current Opinion in Biomedical Engineering
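The two-stage recipe this abstract describes — pre-train on a large source dataset, then fine-tune the same parameters on the small target set — can be sketched with a toy logistic model. This is a minimal pure-Python illustration; the `train` helper, the synthetic data, and all learning rates are invented for this example and are not from the paper:

```python
import math
import random

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(w, b, data, lr=0.5, epochs=200):
    # Plain stochastic gradient descent on the logistic loss.
    # `data` is a list of (features, label) pairs with label in {0, 1}.
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the logit
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def accuracy(w, b, data):
    hits = sum(
        (sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) > 0.5) == (y == 1)
        for x, y in data
    )
    return hits / len(data)

random.seed(0)
# Source task: many labelled examples of a related decision rule.
source = [((x1, x2), int(x1 + x2 > 0))
          for x1, x2 in [(random.uniform(-1, 1), random.uniform(-1, 1))
                         for _ in range(200)]]
# Target task: only a handful of labelled examples are available.
target = [((x1, x2), int(x1 + x2 > 0))
          for x1, x2 in [(random.uniform(-1, 1), random.uniform(-1, 1))
                         for _ in range(8)]]

# Stage 1: pre-train on the large source dataset.
w, b = train([0.0, 0.0], 0.0, source)
# Stage 2: fine-tune the same parameters on the small target set,
# instead of training from scratch on eight examples alone.
w, b = train(w, b, target, lr=0.05, epochs=50)
```

The open question the survey examines is which `source` to choose: one that looks like the target (head CT for lung CT) or one that is larger and more diverse (natural images such as cats).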
Error Corrective Boosting for Learning Fully Convolutional Networks with Limited Data
Training deep fully convolutional neural networks (F-CNNs) for semantic image
segmentation requires access to abundant labeled data. While large datasets of
unlabeled image data are available in medical applications, access to manually
labeled data is very limited. We propose to automatically create auxiliary
labels on initially unlabeled data with existing tools and to use them for
pre-training. For the subsequent fine-tuning of the network with manually
labeled data, we introduce error corrective boosting (ECB), which emphasizes
parameter updates on classes with lower accuracy. Furthermore, we introduce
SkipDeconv-Net (SD-Net), a new F-CNN architecture for brain segmentation that
combines skip connections with the unpooling strategy for upsampling. The
SD-Net addresses challenges of severe class imbalance and errors along
boundaries. With application to whole-brain MRI T1 scan segmentation, we
generate auxiliary labels on a large dataset with FreeSurfer and fine-tune on
two datasets with manual annotations. Our results show that the inclusion of
auxiliary labels and ECB yields significant improvements. SD-Net segments a 3D
scan in 7 seconds, compared with 30 hours for the closest multi-atlas
segmentation method, while reaching similar performance. It also outperforms
the latest state-of-the-art F-CNN models.
Comment: Accepted at MICCAI 201
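Error corrective boosting, as the abstract describes it, reweights the fine-tuning loss so that classes with lower accuracy receive larger parameter updates. The following pure-Python sketch shows one simple instantiation of that idea; the weighting formula, the `eps` smoothing term, and the class names are illustrative assumptions, not the paper's exact ECB definition:

```python
def ecb_class_weights(class_acc, eps=1e-6):
    # Weight each class by how far its current accuracy falls below the
    # best-performing class, then normalise so the weights sum to one.
    # Classes segmented worst therefore dominate the fine-tuning loss.
    a_max = max(class_acc.values())
    raw = {c: (a_max - a) + eps for c, a in class_acc.items()}
    total = sum(raw.values())
    return {c: r / total for c, r in raw.items()}

# Hypothetical per-class accuracies after pre-training on auxiliary
# labels (e.g. FreeSurfer-generated segmentations).
acc = {"background": 0.98, "white_matter": 0.91, "hippocampus": 0.62}
weights = ecb_class_weights(acc)
# The under-performing class ("hippocampus") receives the largest
# weight, scaling its terms in the loss during fine-tuning.
```

In the paper's setting these weights would multiply the per-class loss terms of the F-CNN during fine-tuning on the manually annotated datasets.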
- …