Automated skin lesion segmentation using multi-scale feature extraction scheme and dual-attention mechanism
Segmenting skin lesions from dermoscopic images is essential for diagnosing
skin cancer, but automatic segmentation is complicated by the poor contrast
between the lesion and the background, image artifacts, and unclear lesion
boundaries. In this work, we present a deep learning model for segmenting
skin lesions from dermoscopic images. To address the challenges posed by
skin lesion characteristics, we design a multi-scale feature extraction
module that extracts discriminative features. We further develop two
attention mechanisms to refine the post-upsampled features and the features
extracted by the encoder. The model is evaluated on the ISIC2018 and
ISBI2017 datasets, where it outperforms existing works and the top-ranked
models in both competitions.
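The two ideas in this abstract, extracting features at several scales and then reweighting them with an attention map, can be illustrated with a minimal 1-D sketch. The window sizes and the sigmoid gate below are illustrative assumptions, not the paper's actual architecture:

```python
import math

def moving_average(signal, window):
    """Features at one scale: mean over a sliding window (same length out)."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def multi_scale_features(signal, windows=(1, 3, 5)):
    """Concatenate per-position features computed at several scales."""
    scales = [moving_average(signal, w) for w in windows]
    return [list(feats) for feats in zip(*scales)]

def attention_refine(features):
    """Gate each position by a sigmoid of its mean feature value
    (a stand-in for a learned attention map)."""
    refined = []
    for feats in features:
        gate = 1.0 / (1.0 + math.exp(-sum(feats) / len(feats)))
        refined.append([gate * f for f in feats])
    return refined

feats = multi_scale_features([0.0, 1.0, 1.0, 0.0, 0.0, 1.0])
print(attention_refine(feats)[0])
```

In a real segmentation network the sliding-window means would be learned convolutions at different receptive fields, and the gate would be predicted by attention layers, but the data flow is the same: parallel per-scale features, concatenation, then element-wise reweighting.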
Supervised Versus Unsupervised Deep Learning Based Methods for Skin Lesion Segmentation in Dermoscopy Images
Image segmentation is considered a crucial step in automatic dermoscopic image analysis, as it affects the accuracy of subsequent steps. Recent progress in deep learning has revolutionized the image recognition and computer vision domains. In this paper, we compare a supervised deep learning based approach with an unsupervised deep learning based approach for the task of skin lesion segmentation in dermoscopy images. Results show that, using the default parameter settings and network configurations proposed in the original approaches, although the unsupervised approach could detect fine structures of skin lesions on some occasions, the supervised approach achieves much higher accuracy in terms of Dice coefficient and Jaccard index: 77.7% vs. 40% and 67.2% vs. 30.4%, respectively. With a proposed modification to the unsupervised approach, the Dice and Jaccard values improved to 54.3% and 44%, respectively.
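The Dice coefficient and Jaccard index reported above are standard overlap metrics for binary segmentation masks. A minimal sketch (not the paper's evaluation code; masks are flat lists of 0/1 pixel labels):

```python
def dice(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|)."""
    inter = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    return 2.0 * inter / total if total else 1.0

def jaccard(pred, target):
    """Jaccard = |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return inter / union if union else 1.0

pred   = [1, 1, 0, 1, 0, 0]
target = [1, 0, 0, 1, 1, 0]
print(dice(pred, target))     # 2*2 / (3+3) ≈ 0.667
print(jaccard(pred, target))  # 2 / 4 = 0.5
```

The identity J = D / (2 − D) explains why the two reported scores move together: 77.7% Dice corresponds to roughly 63.5% Jaccard on a single mask, though averaging over a dataset (as in the paper) breaks the exact relation.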