A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked introductory section on common deep architectures. Added missed papers from before Feb 1st 201
A Review on Skin Disease Classification and Detection Using Deep Learning Techniques
Skin cancer ranks among the most dangerous cancers, and its deadliest form is melanoma. Melanoma is brought on by genetic faults or mutations in skin cells caused by unrepaired deoxyribonucleic acid (DNA) damage. It is essential to detect skin cancer in its infancy, since it is far more curable in its initial phases; left untreated, it typically spreads to other regions of the body. Owing to the disease's increasing frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because these disorders are so hazardous, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, and shape are often utilised to detect skin cancer and to distinguish benign skin lesions from melanoma. This study provides an in-depth investigation of deep learning techniques for the early detection of melanoma and discusses traditional feature extraction-based machine learning approaches for the segmentation and classification of skin lesions. Comparison-oriented research has been conducted to demonstrate the significance of various deep learning-based segmentation and classification approaches.
Hyperspectral Imaging Reveals Spectral Differences and Can Distinguish Malignant Melanoma from Pigmented Basal Cell Carcinomas: A Pilot Study
Pigmented basal cell carcinomas can be difficult to distinguish from melanocytic tumours. Hyperspectral imaging is a non-invasive imaging technique that measures the reflectance spectra of skin in vivo. The aim of this prospective pilot study was to use a convolutional neural network classifier on hyperspectral images for differential diagnosis between pigmented basal cell carcinomas and melanoma. A total of 26 pigmented lesions (10 pigmented basal cell carcinomas, 12 melanomas in situ, 4 invasive melanomas) were imaged with hyperspectral imaging and excised for histopathological diagnosis. For the 2-class classifier (melanocytic tumours vs pigmented basal cell carcinomas), using the majority of the pixels to predict the class of the whole lesion, the results showed a sensitivity of 100% (95% confidence interval 81-100%), specificity of 90% (95% confidence interval 60-98%) and positive predictive value of 94% (95% confidence interval 73-99%). These results indicate that a convolutional neural network classifier can differentiate melanocytic tumours from pigmented basal cell carcinomas in hyperspectral images. Further studies are warranted to confirm these preliminary results, using larger samples and multiple tumour types, including all types of melanocytic lesions.
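The pixel-majority aggregation and the diagnostic metrics reported in the abstract above can be illustrated with a minimal numpy sketch. The function names, the 0.5 probability cutoff, and the vote rule shown are illustrative assumptions, not details taken from the study:

```python
import numpy as np

def lesion_class(pixel_probs, threshold=0.5):
    """Aggregate per-pixel class probabilities into one lesion label by
    majority vote: the lesion takes the class assigned to most of its pixels."""
    votes = (np.asarray(pixel_probs) >= threshold).astype(int)  # 1 = melanocytic, 0 = BCC
    return int(votes.mean() >= 0.5)

def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and positive predictive value from binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp)}
```

The confidence intervals in the abstract would additionally require an interval method for proportions, which is omitted here for brevity.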
Few-Shot Learning in Histopathological Images: Reducing the Need for Labeled Data on Biological Datasets
Although deep learning pathology diagnostic algorithms are achieving results comparable to those of human experts in a wide variety of tasks, they still require a huge amount of well-annotated data for training. Generating such extensive, well-labelled datasets is time-consuming and not feasible for certain tasks, so most of the medical datasets available are scarce in images and therefore insufficient for training. In this work we validate that few-shot learning techniques can transfer knowledge from a well-defined source domain of colon tissue to a more generic domain composed of colon, lung and breast tissue using very few training images. Our results show that our few-shot approach obtains a balanced accuracy (BAC) of 90% with just 60 training images, even for the lung and breast tissues that were not present in the training set. This outperforms the fine-tuning transfer learning approach, which obtains 73% BAC with 60 images and requires 600 images to reach 81% BAC. This study has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 732111 (PICCOLO project).
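The abstract above does not specify which few-shot technique was used, so purely as an illustration, a nearest-centroid (prototypical-style) classifier and the balanced accuracy (BAC) metric it reports might be sketched as follows; the function names and the Euclidean-distance choice are assumptions, not the authors' method:

```python
import numpy as np

def nearest_centroid_predict(support_x, support_y, query_x):
    """Few-shot classification: each class is represented by the mean
    (prototype) of its few labelled support embeddings; each query takes
    the label of the nearest prototype under Euclidean distance."""
    classes = np.unique(support_y)
    protos = np.stack([support_x[support_y == c].mean(axis=0) for c in classes])
    d = np.linalg.norm(query_x[:, None, :] - protos[None, :, :], axis=-1)
    return classes[d.argmin(axis=1)]

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls, i.e. the BAC metric quoted in the abstract."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))
```

In a real pipeline the embeddings would come from a CNN trained on the source domain; here they are just feature vectors.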
Artificial Intelligence in Cutaneous Oncology
Skin cancer, previously known as a common disease in Western countries, is becoming more common in Asian countries. Skin cancer differs from other carcinomas in that it is visible to the naked eye. Although a skin biopsy is essential for the diagnosis of skin cancer, the decision of whether or not to conduct a biopsy is made by an experienced dermatologist. From this perspective, photos are easy to obtain and store using a smartphone, and artificial intelligence technologies developed to analyze these photos can be a useful tool to complement the dermatologist's knowledge. In addition, the universal use of dermoscopy, which allows non-invasive inspection of the upper dermal level of skin lesions, typically at 10-fold magnification, adds to these image storage and analysis techniques, foreshadowing breakthroughs in skin cancer diagnosis. Current problems include the inaccuracy of the available technology and the resulting legal liabilities. This paper presents a comprehensive review of the clinical applications of artificial intelligence and a discussion of how it can be implemented in the field of cutaneous oncology.
Systematic literature review of dermoscopic pigmented skin lesions classification using convolutional neural network (CNN)
The occurrence of pigmented skin lesions (PSL), including melanoma, is rising, and early detection is crucial for reducing mortality. To aid dermatologists in early detection, computational techniques have been developed. This research conducted a systematic literature review (SLR) to identify the research goals, datasets, methodologies, and performance evaluation methods used in categorizing dermoscopic lesions, focusing on the use of convolutional neural networks (CNNs) in analyzing PSL. Based on specific inclusion and exclusion criteria, the review included 54 primary studies published on Scopus and PubMed between 2018 and 2022. The results showed that ResNet and self-developed CNNs were used in 22% of the studies, followed by ensembles at 20% and DenseNet at 9%. Public datasets such as ISIC 2019 were predominantly used, and 85% of the classifiers were softmax. The findings suggest that input, architecture, and output/feature modifications can enhance model performance, although improving sensitivity in multiclass classification remains a challenge. While there is no single model approach that solves the problem in this area, we recommend modifying all three clusters simultaneously to improve model performance.
Saliency-Enhanced Content-Based Image Retrieval for Diagnosis Support in Dermatology Consultation: Reader Study.
BACKGROUND
Previous research studies have demonstrated that medical content image retrieval can play an important role by assisting dermatologists in skin lesion diagnosis. However, current state-of-the-art approaches have not been adopted in routine consultation, partly due to the lack of interpretability limiting trust by clinical users.
OBJECTIVE
This study developed a new image retrieval architecture for polarized or dermoscopic imaging guided by interpretable saliency maps. This approach provides better feature extraction, leading to better quantitative retrieval performance as well as providing interpretability for an eventual real-world implementation.
METHODS
Content-based image retrieval (CBIR) algorithms rely on comparing image features embedded by a convolutional neural network (CNN) against a labeled data set. Saliency maps are interpretability methods from computer vision that highlight the regions most relevant to a neural network's prediction. By introducing a fine-tuning stage that uses saliency maps to guide feature extraction, the accuracy of image retrieval is optimized. We refer to this approach as saliency-enhanced CBIR (SE-CBIR). A reader study was designed at the University Hospital Zurich Dermatology Clinic to evaluate SE-CBIR's retrieval accuracy as well as its impact on participants' confidence in the diagnosis.
RESULTS
SE-CBIR improved retrieval accuracy by 7 percentage points (77% vs 84%) for single-lesion retrieval against traditional CBIR. The reader study showed an overall increase in classification accuracy of 22 percentage points (62% vs 84%) when participants were provided with SE-CBIR-retrieved images. In addition, overall confidence in the lesion's diagnosis increased by 24%. Finally, using SE-CBIR as a support tool helped participants reduce the number of nonmelanoma lesions previously diagnosed as melanoma (overdiagnosis) by 53%.
CONCLUSIONS
SE-CBIR presents better retrieval accuracy than traditional CNN-based CBIR approaches. Furthermore, we have shown how these support tools can help dermatologists and residents improve diagnostic accuracy and confidence. Additionally, by introducing interpretable methods, we should expect increased acceptance and use of these tools in routine consultation.
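As a rough illustration of the SE-CBIR idea described in this abstract (saliency-guided feature extraction followed by similarity search against a labelled set), the numpy sketch below weights a CNN feature map by a saliency map before cosine-similarity retrieval. The pooling scheme and all names are assumptions, not the authors' implementation:

```python
import numpy as np

def saliency_pooled_embedding(feature_map, saliency):
    """Pool a CNN feature map of shape (H, W, C) into one C-dim vector,
    weighting each spatial location by its saliency so that the regions
    the network found relevant dominate the embedding."""
    w = saliency / (saliency.sum() + 1e-8)          # normalise weights to sum to 1
    return (feature_map * w[..., None]).sum(axis=(0, 1))

def retrieve(query_emb, gallery_embs, k=3):
    """Return indices of the k gallery images most similar to the query
    under cosine similarity, i.e. a CBIR lookup against a labelled set."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                                    # cosine similarity per gallery item
    return np.argsort(-sims)[:k]
```

The diagnosis labels of the retrieved neighbours would then be shown to the clinician as supporting evidence, which is the decision-support setting the reader study evaluates.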
Artificial intelligence for breast cancer precision pathology
Breast cancer is the most common cancer type in women globally but is associated with a
continuous decline in mortality rates. The improved prognosis can be partially attributed to
effective treatments developed for subgroups of patients. However, nowadays, it remains
challenging to optimise treatment plans for each individual. To improve disease outcome and
to decrease the burden associated with unnecessary treatment and adverse drug effects, the
current thesis aimed to develop artificial intelligence-based tools to improve individualised
medicine for breast cancer patients.
In study I, we developed a deep learning-based model (DeepGrade) to stratify patients
associated with intermediate risk. The model was optimised with haematoxylin and eosin
(HE) stained whole slide images (WSIs) with grade 1 and 3 tumours and applied to stratify
grade 2 tumours into grade 1-like (DG2-low) and grade 3-like (DG2-high) subgroups. The
efficacy of the DeepGrade model was validated using recurrence-free survival, where the
dichotomised groups exhibited an adjusted hazard ratio (HR) of 2.94 (95% confidence interval
[CI] 1.24-6.97, P = 0.015). The observation was further confirmed in the external test cohort
with an adjusted HR of 1.91 (95% CI: 1.11-3.29, P = 0.019).
In study II, we investigated whether deep learning models were capable of predicting gene
expression levels using the morphological patterns from tumours. We optimised convolutional
neural networks (CNNs) to predict mRNA expression for 17,695 genes using HE stained WSIs
from the training set. An initial evaluation on the validation set showed a significant
correlation between the RNA-seq measurements and the model predictions for 52.75% of
the genes. The models were further tested in the internal and external test sets.
In addition, we compared the models' efficacy in predicting RNA-seq-based proliferation scores.
Lastly, the ability of the optimised CNNs to capture spatial gene expression variation was
evaluated and confirmed using spatial transcriptomics profiling.
In study III, we investigated the relationship between intra-tumour gene expression
heterogeneity and patient survival outcomes. Deep learning models optimised from study II
were applied to generate spatial gene expression predictions for the PAM50 gene panel. A set
of 11 texture based features and one slide average gene expression feature per gene were
extracted as input to train a Cox proportional hazards regression model with elastic net
regularisation to predict patient risk of recurrence. Through nested cross-validation, the model
dichotomised the training cohort into low and high risk groups with an adjusted HR of 2.1
(95% CI: 1.30-3.30, P = 0.002). The model was further validated on two external cohorts.
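Study III's risk model, a Cox proportional hazards regression with elastic-net regularisation, can be sketched from first principles. The proximal-gradient routine below maximises the Breslow partial likelihood with an elastic-net penalty; it is a minimal illustration under assumed hyperparameters, not the thesis's actual pipeline:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 penalty (lasso part of the elastic net)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fit_cox_elastic_net(X, time, event, lam=0.01, alpha=0.5, lr=0.5, n_iter=1000):
    """Fit Cox proportional hazards coefficients with an elastic-net penalty
    lam * (alpha * ||b||_1 + (1 - alpha) / 2 * ||b||^2) by proximal gradient
    ascent on the (mean) Breslow partial log-likelihood. Assumes no tied times."""
    n, p = X.shape
    order = np.argsort(-np.asarray(time))   # descending time: risk sets are prefixes
    Xs, es = X[order], np.asarray(event, float)[order]
    beta = np.zeros(p)
    for _ in range(n_iter):
        eta = Xs @ beta
        w = np.exp(eta - eta.max())         # shift cancels in the ratio below
        cw = np.cumsum(w)                   # risk-set totals sum_{t_j >= t_i} exp(eta_j)
        cwx = np.cumsum(w[:, None] * Xs, axis=0)
        # gradient of the partial log-likelihood: events only
        grad = (es[:, None] * (Xs - cwx / cw[:, None])).sum(axis=0) / n
        grad -= lam * (1 - alpha) * beta    # ridge part of the penalty
        beta = soft_threshold(beta + lr * grad, lr * lam * alpha)  # lasso prox step
    return beta
```

In the study the inputs would be the texture-based and slide-average gene expression features, and model selection would use nested cross-validation; both are outside this sketch.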
In study IV, we investigated the agreement between Stratipath Breast, which is the
modified, commercialised DeepGrade model developed in study I, and the Prosigna® test.
Both tests seek to stratify patients with distinct prognoses. The outputs from Stratipath Breast
comprise a risk score and a two-level risk stratification, whereas the outputs from Prosigna®
include the risk-of-recurrence score and a three-tier risk stratification. By comparing the number
of patients assigned to 'low' or 'high' risk groups, we found an overall moderate agreement
(76.09%) between the two tests. In addition, the risk scores from the two tests showed a good
correlation (Spearman's rho = 0.59, P = 1.16E-08), and a good correlation was also observed
between the risk score from each test and the Ki67 index. The comparison was also carried out
in the subgroup of patients with grade 2 tumours, where similar but slightly lower correlations
were found.
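The two statistics underlying study IV's comparison, overall percent agreement between categorical risk calls and Spearman's rank correlation between continuous risk scores, can be sketched in plain numpy (illustrative only; library routines such as scipy's would normally be used):

```python
import numpy as np

def percent_agreement(risk_a, risk_b):
    """Overall percent agreement: share of cases where two tests assign
    the same categorical risk group, expressed as a percentage."""
    risk_a, risk_b = np.asarray(risk_a), np.asarray(risk_b)
    return 100.0 * np.mean(risk_a == risk_b)

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks,
    with average ranks assigned to tied values."""
    def rank(v):
        order = np.argsort(v)
        r = np.empty(len(v))
        r[order] = np.arange(len(v), dtype=float)
        for val in np.unique(v):            # average ranks for ties
            m = v == val
            r[m] = r[m].mean()
        return r
    rx = rank(np.asarray(x, dtype=float))
    ry = rank(np.asarray(y, dtype=float))
    return float(np.corrcoef(rx, ry)[0, 1])
```

Because Prosigna® uses a three-tier stratification and Stratipath Breast a two-level one, the study's agreement figure implies some mapping between the scales; that mapping is not reproduced here.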
- …