Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations
Deep learning has proved in recent years to be a powerful tool for image
analysis and is now widely used to segment both 2D and 3D medical images.
Deep-learning segmentation frameworks rely not only on the choice of network
architecture but also on the choice of loss function. When the segmentation
process targets rare observations, a severe class imbalance is likely to occur
between candidate labels, resulting in sub-optimal performance. To mitigate
this issue, strategies such as the weighted cross-entropy function, the
sensitivity function, and the Dice loss function have been proposed. In this
work, we investigate the behavior of these loss functions and their sensitivity
to learning-rate tuning in the presence of different rates of label imbalance
across 2D and 3D segmentation tasks. We also propose to use the class
re-balancing properties of the Generalized Dice overlap, a known metric for
segmentation assessment, as a robust and accurate deep-learning loss function
for unbalanced tasks.
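The class re-balancing idea in this abstract can be sketched in a few lines: each class's contribution is weighted by the inverse square of its ground-truth volume, so rare labels count as much as common ones. A minimal NumPy sketch follows; the function name and the `eps` smoothing term are illustrative choices, not taken from the paper.

```python
import numpy as np

def generalized_dice_loss(probs, onehot, eps=1e-6):
    """Generalized Dice loss for class-imbalanced segmentation.

    probs:  (N, C) predicted class probabilities per pixel/voxel
    onehot: (N, C) one-hot ground-truth labels

    Each class c is weighted by 1 / |G_c|^2 (inverse squared label
    volume), which re-balances the loss toward rare classes.
    """
    w = 1.0 / (onehot.sum(axis=0) ** 2 + eps)           # per-class weights
    intersect = (w * (probs * onehot).sum(axis=0)).sum()
    union = (w * (probs + onehot).sum(axis=0)).sum()
    return 1.0 - 2.0 * intersect / (union + eps)
```

With a perfect prediction the loss is close to 0; a degenerate uniform prediction scores strictly worse, regardless of how rare the foreground class is.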
J Regularization Improves Imbalanced Multiclass Segmentation
We propose a new loss formulation to further advance the multiclass segmentation of cluttered cells under weakly supervised conditions. By adding a Youden's J statistic regularization term to the cross-entropy loss, we improve the separation of touching and immediately adjacent cells, obtaining sharp segmentation boundaries with high adequacy. This regularization intrinsically accounts for class imbalance, eliminating the need to use explicit weights to balance training. Simulations demonstrate this capability and show how the regularization leads to correct results by helping to advance the optimization when cross entropy stagnates. We build upon our previous work on multiclass segmentation by adding yet another training class representing gaps between adjacent cells. This addition helps the classifier identify narrow gaps as background rather than as touching regions. We present results of our methods for 2D and 3D images, from bright-field images to confocal stacks containing different types of cells, and we show that they accurately segment individual cells after training with a limited number of images, some of which are poorly annotated.
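The regularization described above can be sketched as a soft, differentiable version of Youden's J statistic (sensitivity + specificity − 1) subtracted from the cross-entropy term. The sketch below is an assumption about the general shape of such a loss, not the paper's exact formulation; the weight `lam` and all names are illustrative.

```python
import numpy as np

def j_regularized_ce(probs, onehot, lam=0.5, eps=1e-7):
    """Cross entropy minus a soft Youden's J regularization term.

    probs:  (N, C) predicted class probabilities
    onehot: (N, C) one-hot ground-truth labels
    lam:    weight of the J term (illustrative value)
    """
    ce = -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))

    # Soft confusion-matrix entries per class.
    tp = (probs * onehot).sum(axis=0)
    fn = ((1 - probs) * onehot).sum(axis=0)
    fp = (probs * (1 - onehot)).sum(axis=0)
    tn = ((1 - probs) * (1 - onehot)).sum(axis=0)

    sens = tp / (tp + fn + eps)          # per-class sensitivity
    spec = tn / (tn + fp + eps)          # per-class specificity
    j = np.mean(sens + spec - 1.0)       # Youden's J, averaged over classes

    # Minimizing CE while maximizing J.
    return ce - lam * j
```

Because J averages sensitivity and specificity per class, a rare class contributes as much to the gradient as a dominant one, which is the intuition behind the claim that the term intrinsically handles imbalance.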
DefectNET: multi-class fault detection on highly-imbalanced datasets
As a data-driven method, the performance of deep convolutional neural
networks (CNNs) relies heavily on training data. The predictions of
traditional networks are biased toward larger classes, which tend to be the
background in semantic segmentation tasks. This becomes a major problem for
fault detection, where the targets appear very small in the images and vary in
both type and size. In this paper we propose a new network architecture,
DefectNet, that performs multi-class defect detection on highly-imbalanced
datasets. DefectNet consists of two parallel paths, a fully convolutional
network and a dilated convolutional network, which detect large and small
objects respectively. We propose a hybrid loss that combines the strengths of
a Dice loss and a cross-entropy loss, and we also employ the leaky rectified
linear unit (ReLU) to deal with the rare occurrence of some targets in
training batches. The prediction results show that our DefectNet outperforms
state-of-the-art networks for detecting multi-class defects, with an average
accuracy improvement of approximately 10% on a wind turbine dataset.
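A common way to realize the hybrid loss this abstract describes is a weighted sum of a cross-entropy term and a soft Dice term; the sketch below takes that form as an assumption about the general approach, not DefectNet's exact formulation. The mixing weight `alpha` and all names are illustrative.

```python
import numpy as np

def hybrid_loss(probs, onehot, alpha=0.5, eps=1e-6):
    """Weighted sum of cross entropy and soft Dice loss.

    probs:  (N, C) predicted class probabilities
    onehot: (N, C) one-hot ground-truth labels
    alpha:  mixing weight between CE and Dice (illustrative value)
    """
    # Cross entropy: strong per-pixel gradients, but background-dominated.
    ce = -np.mean(np.sum(onehot * np.log(probs + eps), axis=1))

    # Soft Dice per class: overlap-based, less sensitive to class imbalance.
    intersect = (probs * onehot).sum(axis=0)
    union = probs.sum(axis=0) + onehot.sum(axis=0)
    dice = np.mean(2.0 * intersect / (union + eps))

    return alpha * ce + (1.0 - alpha) * (1.0 - dice)
```

The design rationale for such hybrids is that cross entropy gives well-behaved gradients everywhere while the Dice term keeps small foreground classes from being drowned out by the background.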