Automatic Detection of Blue-White Veil and Related Structures in Dermoscopy Images
Dermoscopy is a non-invasive skin imaging technique, which permits
visualization of features of pigmented melanocytic neoplasms that are not
discernible by examination with the naked eye. One of the most important
features for the diagnosis of melanoma in dermoscopy images is the blue-white
veil (irregular, structureless areas of confluent blue pigmentation with an
overlying white "ground-glass" film). In this article, we present a machine
learning approach to the detection of blue-white veil and related structures in
dermoscopy images. The method involves contextual pixel classification using a
decision tree classifier. The percentage of blue-white areas detected in a
lesion combined with a simple shape descriptor yielded a sensitivity of 69.35%
and a specificity of 89.97% on a set of 545 dermoscopy images. The sensitivity
rises to 78.20% for detection of the blue-white veil in cases where it is a
primary feature for melanoma recognition.
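The pixel-classification idea above can be sketched in a few lines. The toy below trains a scikit-learn `DecisionTreeClassifier` on synthetic "veil" vs. "non-veil" RGB pixels and reports the percentage of a toy image labelled as veil, the kind of quantity the abstract combines with a shape descriptor. The color means, tree depth, and raw-RGB features are illustrative assumptions; the paper uses contextual features and dermatologist-labelled training pixels.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic training pixels (RGB in [0, 1]); a real system would use
# dermatologist-labelled veil / non-veil regions instead.
veil = rng.normal([0.45, 0.55, 0.75], 0.05, size=(500, 3))   # bluish, hazy
other = rng.normal([0.55, 0.40, 0.30], 0.05, size=(500, 3))  # brownish tissue
X = np.clip(np.vstack([veil, other]), 0, 1)
y = np.repeat([1, 0], 500)

clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)

# Classify every pixel of a toy 32x32 "image" and report the veil percentage.
img = np.clip(rng.normal([0.5, 0.45, 0.5], 0.15, size=(32, 32, 3)), 0, 1)
labels = clf.predict(img.reshape(-1, 3)).reshape(32, 32)
veil_pct = 100.0 * labels.mean()
```

A shallow tree suffices here because the two synthetic color clusters are well separated; real dermoscopy pixels overlap far more, which is why the paper adds contextual features around each pixel.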
Approximate Lesion Localization in Dermoscopy Images
Background: Dermoscopy is one of the major imaging modalities used in the
diagnosis of melanoma and other pigmented skin lesions. Due to the difficulty
and subjectivity of human interpretation, automated analysis of dermoscopy
images has become an important research area. Border detection is often the
first step in this analysis. Methods: In this article, we present an
approximate lesion localization method that serves as a preprocessing step for
detecting borders in dermoscopy images. In this method, first the black frame
around the image is removed using an iterative algorithm. The approximate
location of the lesion is then determined using an ensemble of thresholding
algorithms. Results: The method is tested on a set of 428 dermoscopy images.
The localization error is quantified by a metric that uses
dermatologist-determined borders as the ground truth. Conclusion: The results
demonstrate
that the method presented here achieves both fast and accurate localization of
lesions in dermoscopy images.
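The ensemble-of-thresholds step can be illustrated compactly. The sketch below fuses three global thresholds (an Otsu implementation plus the mean and median, stand-ins for whatever algorithms the paper's ensemble actually uses) by majority vote and returns the bounding box of the fused foreground as the approximate lesion location:

```python
import numpy as np

def otsu_threshold(gray, bins=256):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, edges = np.histogram(gray, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)            # cumulative class probability
    mu = np.cumsum(p * centers)  # cumulative class mean mass
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * w0 - mu) ** 2 / (w0 * (1 - w0))
    return centers[np.argmax(np.nan_to_num(sigma_b))]

def approximate_lesion_box(gray):
    """Fuse several global thresholds by majority vote and return the
    bounding box of the fused foreground (lesions are darker than skin)."""
    thresholds = [otsu_threshold(gray), gray.mean(), np.median(gray)]
    votes = sum((gray < t).astype(int) for t in thresholds)
    rows, cols = np.nonzero(votes >= 2)
    return rows.min(), rows.max(), cols.min(), cols.max()

# Toy image: dark circular "lesion" on a brighter background.
yy, xx = np.mgrid[0:64, 0:64]
gray = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 0.2, 0.8)
box = approximate_lesion_box(gray)
```

The majority vote makes the localization robust to any single threshold failing, which is the motivation for using an ensemble rather than one algorithm.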
Improving Skin Lesion Segmentation via Stacked Adversarial Learning
Segmentation of skin lesions is an essential step in computer-aided diagnosis (CAD) systems for automated melanoma diagnosis. Recently, segmentation methods based on fully convolutional networks (FCNs) have achieved great success on general images. This success is primarily due to FCNs leveraging large labelled datasets to learn features that capture both the shallow appearance and the deep semantics of the images. Such large labelled datasets, however, are usually not available for medical images, so researchers have resorted to specific cost functions and post-processing algorithms that refine the coarse boundaries of the results to improve FCN performance in skin lesion segmentation. These methods rely heavily on tuning many parameters and on post-processing techniques. In this paper, we adopt generative adversarial networks (GANs), given their inherent ability to produce consistent and realistic image features through deep neural networks and adversarial learning. We build upon the GAN with a novel stacked adversarial learning architecture such that skin lesion features can be learned, iteratively, in a class-specific manner. The outputs of our method are then added to the existing FCN training data, increasing the overall feature diversity. We evaluated our method on the ISIC 2017 skin lesion segmentation challenge dataset and show that it is more accurate and robust than the existing state-of-the-art methods.
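The adversarial-learning concept at the core of the abstract can be shown on a toy problem. The sketch below plays the GAN minimax game in one dimension with manually derived gradients: a linear generator `x = w*z + b` tries to match real samples from N(3, 0.5) while a logistic discriminator `D(x) = sigmoid(a*x + c)` tries to tell them apart. All parameters and the 1-D setting are illustrative; the paper's stacked, class-specific architecture built on deep networks is far beyond this toy.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w, b = 1.0, 0.0      # generator parameters: fake = w*z + b
a, c = 0.1, 0.0      # discriminator parameters: D(x) = sigmoid(a*x + c)
lr, batch = 0.05, 64

for step in range(2000):
    real = rng.normal(3.0, 0.5, batch)
    z = rng.normal(0.0, 1.0, batch)
    fake = w * z + b

    # Discriminator: gradient ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(a * real + c), sigmoid(a * fake + c)
    grad_a = np.mean((1 - d_real) * real) - np.mean(d_fake * fake)
    grad_c = np.mean(1 - d_real) - np.mean(d_fake)
    a, c = a + lr * grad_a, c + lr * grad_c

    # Generator: gradient descent on -log D(fake) (non-saturating loss).
    d_fake = sigmoid(a * (w * z + b) + c)
    g_common = (1 - d_fake) * a          # chain term dL/d(fake)
    w += lr * np.mean(g_common * z)
    b += lr * np.mean(g_common)

samples = w * rng.normal(0.0, 1.0, 1000) + b
```

In the paper's setting, the "samples" are lesion-like image features rather than scalars, and the generator's outputs augment the FCN's training data instead of being an end in themselves.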
Sector Expansion and Elliptical Modeling of Blue-Gray Ovoids for Basal Cell Carcinoma Discrimination in Dermoscopy Images
Background: Blue-gray ovoids (B-GOs), a critical dermoscopic structure for basal cell carcinoma (BCC), offer an opportunity for automatic detection of BCC. Due to variation in size and color, B-GOs can be easily mistaken for similar structures in benign lesions. Analysis of these structures could afford accurate characterization and automatic recognition of B-GOs, furthering the goal of automatic BCC detection. This study utilizes a novel segmentation method to discriminate B-GOs from their benign mimics.
Methods: Contact dermoscopy images of 68 confirmed BCCs with B-GOs were obtained. Another set of 131 contact dermoscopy images of benign lesions possessing B-GO mimics provided a competing benign set. A total of 22 B-GO features were analyzed for all structures: 21 color features and one size feature. Regarding segmentation, this study utilized a novel sector-based, non-recursive segmentation method to expand the masks applied to the B-GOs and mimicking structures.
Results: Logistic regression analysis determined that blue chromaticity was the best feature for discriminating true B-GOs in BCC from benign, mimicking structures. Discrimination of malignant structures was optimal when the final B-GO border was approximated by a best-fit ellipse. Using this optimal configuration, logistic regression analysis discriminated the expanded and fitted malignant structures from similar benign structures with a classification rate as high as 96.5%.
Conclusions: Experimental results show that color features allow accurate expansion and localization of structures from seed areas. Modeling these structures as ellipses allows high discrimination of B-GOs in BCCs from similar structures in benign images.
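Two of the ingredients above are easy to make concrete: the blue-chromaticity color feature, and approximating a segmented structure by a best-fit ellipse. The sketch below computes blue chromaticity as B/(R+G+B) and fits an ellipse to a binary mask via second-order image moments; moment-based fitting is one common way to realize a best-fit ellipse, and the paper's exact fitting procedure may differ.

```python
import numpy as np

def blue_chromaticity(rgb):
    """B / (R + G + B): the color feature the study found most discriminative."""
    s = rgb.sum(axis=-1)
    return np.where(s > 0, rgb[..., 2] / np.maximum(s, 1e-12), 0.0)

def best_fit_ellipse(mask):
    """Fit an ellipse to a binary mask via second-order moments."""
    rows, cols = np.nonzero(mask)
    cy, cx = rows.mean(), cols.mean()
    cov = np.cov(np.vstack([cols - cx, rows - cy]))
    evals, evecs = np.linalg.eigh(cov)
    # For a solid ellipse, variance along a principal axis is (semi-axis)^2/4,
    # so the semi-axis length is 2*sqrt(eigenvalue).
    axes = 2.0 * np.sqrt(evals)
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])  # major-axis orientation
    return (cx, cy), axes, angle

# Toy mask: a disk of radius 10 centered at (20, 20); the fitted "ellipse"
# should recover roughly equal semi-axes of ~10 and the true center.
yy, xx = np.mgrid[0:40, 0:40]
mask = (yy - 20) ** 2 + (xx - 20) ** 2 <= 100
center, axes, angle = best_fit_ellipse(mask)
```

In the study's pipeline, features like blue chromaticity drive the sector-based region expansion, and the fitted ellipse parameters then feed the logistic regression that separates true B-GOs from benign mimics.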