Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations
Deep-learning has proved in recent years to be a powerful tool for image
analysis and is now widely used to segment both 2D and 3D medical images.
Deep-learning segmentation frameworks rely not only on the choice of network
architecture but also on the choice of loss function. When the segmentation
process targets rare observations, a severe class imbalance is likely to occur
between candidate labels, thus resulting in sub-optimal performance. In order
to mitigate this issue, strategies such as the weighted cross-entropy function,
the sensitivity function, or the Dice loss function have been proposed. In this
work, we investigate the behavior of these loss functions and their sensitivity
to learning rate tuning in the presence of different rates of label imbalance
across 2D and 3D segmentation tasks. We also propose to use the class
re-balancing properties of the Generalized Dice overlap, a known metric for
segmentation assessment, as a robust and accurate deep-learning loss function
for unbalanced tasks.
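The class re-balancing idea above can be sketched in NumPy: each class is weighted by the inverse of its squared label volume, so rare classes contribute as much to the overlap measure as dominant ones. This is an illustrative sketch, not the authors' implementation; the array shapes and the `eps` smoothing term are assumptions.

```python
import numpy as np

def generalized_dice_loss(probs, labels, eps=1e-6):
    """Generalized Dice loss for unbalanced segmentation.

    probs:  (N, C) predicted class probabilities, one row per voxel
    labels: (N, C) one-hot ground-truth labels
    """
    # Per-class weight: inverse squared label volume re-balances rare classes.
    w = 1.0 / (labels.sum(axis=0) ** 2 + eps)
    intersect = np.sum(w * np.sum(probs * labels, axis=0))
    union = np.sum(w * np.sum(probs + labels, axis=0))
    return 1.0 - 2.0 * intersect / (union + eps)
```

A perfect prediction drives the loss toward 0; a maximally uncertain one yields a clearly larger value, regardless of how imbalanced the classes are.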
Stable Feature Selection from Brain sMRI
Neuroimage analysis usually involves learning thousands or even millions of
variables using only a limited number of samples. In this regard, sparse
models, e.g. the lasso, are applied to select the optimal features and achieve
high diagnosis accuracy. The lasso, however, treats features independently and
usually yields unstable selections. Stability, a manifestation of the
reproducibility of statistical results under reasonable perturbations of the
data and the model, is an important concern in statistics, especially in the
analysis of high-dimensional data. In this paper, we explore a nonnegative
generalized fused lasso model for
stable feature selection in the diagnosis of Alzheimer's disease. In addition
to sparsity, our model incorporates two important pathological priors: the
spatial cohesion of lesion voxels and the positive correlation between the
features and the disease labels. To optimize the model, we propose an efficient
algorithm by proving a novel link between total variation and fast network flow
algorithms via conic duality. Experiments show that the proposed nonnegative
model uncovers the intrinsic structure of the data and selects stable features
much better than other state-of-the-art methods.
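The paper's actual solver exploits conic duality and fast network-flow algorithms; as a much simpler illustrative stand-in, the three ingredients of the model (sparsity via an l1 term, spatial cohesion via a total-variation term on a 1-D chain, and nonnegativity via projection) can be sketched with projected subgradient descent. All parameter names and values here are hypothetical:

```python
import numpy as np

def nonneg_fused_lasso(X, y, lam1=0.01, lam2=0.01, lr=0.1, iters=3000):
    """Projected subgradient descent on
        0.5/n * ||y - X b||^2 + lam1 * ||b||_1 + lam2 * TV(b),  b >= 0
    where TV(b) = sum_i |b[i+1] - b[i]| encodes spatial cohesion
    between neighboring features on a 1-D chain.
    """
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ beta - y) / n           # smooth squared-loss part
        grad += lam1 * np.sign(beta)              # subgradient of the l1 term
        s = np.sign(np.diff(beta))                # subgradient of the TV term
        grad[:-1] -= lam2 * s
        grad[1:] += lam2 * s
        beta = np.maximum(beta - lr * grad, 0.0)  # project onto b >= 0
    return beta
```

On a toy problem with a nonnegative, piecewise-constant ground truth, this recovers a nonnegative, spatially cohesive coefficient vector; the real solver in the paper handles the 3-D voxel grid and much larger problem sizes.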