51 research outputs found

    Generative Adversarial Networks based Skin Lesion Segmentation

    Skin cancer is a serious condition that requires accurate identification and treatment. One way to assist clinicians in this task is with computer-aided diagnosis (CAD) tools that can automatically segment skin lesions in dermoscopic images. To this end, a new adversarial learning-based framework called EGAN has been developed. This framework uses an unsupervised generative network to generate accurate lesion masks. It consists of a generator module with a top-down squeeze-and-excitation-based compound scaled path and an asymmetric lateral connection-based bottom-up path, and a discriminator module that distinguishes between original and synthetic masks. Additionally, a morphology-based smoothing loss is implemented to encourage the network to create smooth semantic boundaries of lesions. The framework is evaluated on the International Skin Imaging Collaboration (ISIC) Lesion Dataset 2018 and outperforms the current state-of-the-art skin lesion segmentation approaches with a Dice coefficient, Jaccard similarity, and accuracy of 90.1%, 83.6%, and 94.5%, respectively, representing increases of 2% in Dice coefficient, 1% in Jaccard similarity, and 1% in accuracy.
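    The abstract does not give the exact form of the morphology-based smoothing loss, but one plausible way such a penalty can work is to measure how much a predicted binary mask changes under a morphological opening followed by a closing: a smooth, compact mask is nearly invariant, while speckle and jagged boundaries are not. A minimal illustrative sketch (the structuring element and the opening-then-closing choice are assumptions, not the paper's definition):

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def morphological_smoothing_loss(mask, structure=np.ones((3, 3))):
    """Illustrative smoothing penalty: fraction of pixels that change when
    the binary mask is morphologically opened then closed. A compact,
    smooth mask is (nearly) invariant under this operation, so the loss
    approaches zero; speckle and jagged boundaries raise it."""
    smoothed = binary_closing(binary_opening(mask, structure), structure)
    return np.mean(mask.astype(bool) ^ smoothed)

# A clean square is stable under opening/closing...
clean = np.zeros((32, 32), dtype=bool)
clean[8:24, 8:24] = True
# ...while isolated salt noise around it is removed, raising the loss.
noisy = clean.copy()
noisy[::7, ::7] = True
print(morphological_smoothing_loss(clean))   # low
print(morphological_smoothing_loss(noisy))   # higher
```

In a training loop such a term would be added to the adversarial loss with some weight; here it is shown only as a standalone measure on binary masks.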

    Transition region based approach for skin lesion segmentation

    Skin melanoma is a skin disease that affects nearly 40% of people globally. Manual detection of the affected area is a time-consuming process and requires expert knowledge; the application of computer vision techniques can simplify it. In this article, a novel unsupervised transition-region-based approach to skin lesion segmentation for melanoma detection is proposed. The method starts with Gaussian blurring of the green channel of the dermoscopic image. The transition region is then extracted using local variance features and a global thresholding operation, and the region of interest (a binary mask) is obtained using various morphological operations. Finally, the melanoma regions are separated from normal skin regions using the binary mask. The proposed method is tested on the DermQuest and ISIC 2017 datasets, and it achieves better results than other state-of-the-art methods in effectively segmenting melanoma regions from normal skin.
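    The pipeline described above (blur the green channel, extract the transition region from local variance, threshold globally, then clean up with morphology) can be sketched in a few lines. The smoothing scale, window size, and mean-based threshold below are illustrative assumptions; the paper's exact parameters and thresholding rule may differ:

```python
import numpy as np
from scipy.ndimage import (gaussian_filter, uniform_filter,
                           binary_fill_holes, binary_opening)

def lesion_mask(green_channel, sigma=2.0, win=5):
    """Sketch of a transition-region segmentation pipeline.
    1. Gaussian-blur the green channel.
    2. Local variance via E[x^2] - E[x]^2 over a sliding window.
    3. Global threshold on the variance map (mean-based here).
    4. Morphology to obtain a clean binary region of interest."""
    g = gaussian_filter(green_channel.astype(float), sigma)
    local_mean = uniform_filter(g, win)
    local_var = uniform_filter(g * g, win) - local_mean ** 2
    transition = local_var > local_var.mean()      # global threshold
    mask = binary_fill_holes(transition)           # fill lesion interior
    mask = binary_opening(mask, np.ones((3, 3)))   # drop small speckle
    return mask
```

On a dark lesion against brighter skin, the variance map is high along the lesion boundary, so filling the resulting ring yields the lesion region.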

    Does a Previous Segmentation Improve the Automatic Detection of Basal Cell Carcinoma Using Deep Neural Networks?

    This article belongs to the Special Issue "Image Processing and Analysis for Preclinical and Clinical Applications". Basal Cell Carcinoma (BCC) is the most frequent skin cancer, and its increasing incidence is placing a heavy load on dermatology services. It is therefore important to help physicians detect it early. Thus, in this paper, we propose a tool for the detection of BCC to support prioritization in teledermatology consultations. Firstly, we analyze whether a prior segmentation of the lesion improves its subsequent classification. Secondly, we analyze three deep neural networks and ensemble architectures to distinguish between BCC and nevus, and between BCC and other skin lesions. The best segmentation results are obtained with a SegNet deep neural network. An accuracy of 98% for distinguishing BCC from nevus and of 95% for classifying BCC vs. all lesions has been obtained. The proposed algorithm outperforms the winner of the ISIC 2019 challenge in almost all the metrics. Finally, we conclude that when deep neural networks are used to classify, a prior segmentation of the lesion does not improve the classification results. Likewise, an ensemble of different neural network configurations improves classification performance compared with individual neural network classifiers. Regarding the segmentation step, supervised deep learning-based methods outperform unsupervised ones. Funding: Ministerio de Economía y Competitividad DPI2016-81103-R; FEDER-US and Junta de Andalucía US-1381640; Fondo Social Europeo, Iniciativa de Empleo Juvenil EJ3-83-

    COMPARATIVE STUDY FOR MELANOMA SEGMENTATION IN SKIN LESION IMAGES

    Melanoma is the leading cause of fatalities among skin cancers, and discovering the pathology in its early stages is essential to increase the chances of cure. Computational methods based on medical imaging are being developed to facilitate the detection of melanoma. To interpret the information in these images efficiently, it is necessary to isolate the affected region. In our research, a comparison was made between segmentation techniques: firstly a method based on the Otsu algorithm, secondly the K-means clustering algorithm, and finally a U-Net deep learning model. The tests performed on the PH2 image base had promising results, especially for U-Net.
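    The two classical baselines compared above are both intensity-based and compact enough to sketch. Below are minimal, self-contained versions of Otsu's threshold (maximizing between-class variance over a histogram) and 1-D K-means on pixel intensities; they illustrate the techniques, not the study's exact implementations:

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Otsu's method: choose the threshold maximizing between-class variance."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    w0 = np.cumsum(p)                  # class-0 weight per candidate threshold
    m = np.cumsum(p * centers)         # cumulative mean
    mg = m[-1]                         # global mean
    w1 = 1 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (mg * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(between)]

def kmeans_1d(img, k=2, iters=20):
    """K-means on pixel intensities; returns sorted cluster centers."""
    x = img.ravel().astype(float)
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)
```

For segmentation, pixels above the Otsu threshold (or assigned to the brighter K-means cluster) form one region; U-Net, by contrast, learns the mapping from image to mask directly.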

    Deep Learning Models For Biomedical Data Analysis

    The field of biomedical data analysis is a vibrant area of research dedicated to extracting valuable insights from a wide range of biomedical data sources, including biomedical images and genomics data. The emergence of deep learning, an artificial intelligence approach, presents significant prospects for enhancing biomedical data analysis and knowledge discovery. This dissertation focused on exploring innovative deep-learning methods for biomedical image processing and gene data analysis. During the COVID-19 pandemic, biomedical imaging data, including CT scans and chest x-rays, played a pivotal role in identifying COVID-19 cases by categorizing patient chest x-ray outcomes as COVID-19-positive or negative. While supervised deep learning methods have effectively recognized COVID-19 patterns in chest x-ray datasets, the availability of annotated training data remains limited. To address this challenge, the thesis introduced a semi-supervised deep learning model named ssResNet, built upon the Residual Neural Network (ResNet) architecture. The model combines supervised and unsupervised paths, incorporating a weighted supervised loss function to manage data imbalance. Strategies to diminish prediction uncertainty in deep learning models for critical applications like medical image processing are also explored, through an ensemble deep learning model integrating bagging and model calibration techniques. This ensemble model not only boosts biomedical image segmentation accuracy but also reduces prediction uncertainty, as validated on a comprehensive chest x-ray image segmentation dataset. Furthermore, the thesis introduced an ensemble model integrating Proformer and ensemble learning methodologies. This model constructs multiple independent Proformers for predicting gene expression, and their predictions are combined through weighted averaging to generate the final predictions.
    Experimental outcomes underscore the efficacy of this ensemble model in enhancing prediction performance across various metrics. In conclusion, this dissertation advances biomedical data analysis by harnessing the potential of deep learning techniques. It devises innovative approaches for processing biomedical images and gene data. By leveraging deep learning's capabilities, this work paves the way for further progress in biomedical data analytics and its applications within clinical contexts. Index Terms: biomedical data analysis, COVID-19, deep learning, ensemble learning, gene data analytics, medical image segmentation, prediction uncertainty, Proformer, Residual Neural Network (ResNet), semi-supervised learning
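    The "weighted supervised loss function to manage data imbalance" mentioned above can take several forms; one common choice is a class-weighted binary cross-entropy that up-weights the rare positive class. The sketch below illustrates that idea only; the dissertation's actual loss may differ:

```python
import numpy as np

def weighted_bce(p, y, pos_weight):
    """Class-weighted binary cross-entropy: positive examples (y = 1)
    contribute pos_weight times as much as negatives, so errors on a
    rare positive class dominate the average. p are predicted
    probabilities, y are 0/1 labels."""
    eps = 1e-12
    p = np.clip(p, eps, 1 - eps)       # avoid log(0)
    return -np.mean(pos_weight * y * np.log(p) + (1 - y) * np.log(1 - p))
```

With pos_weight = 1 this reduces to ordinary binary cross-entropy; raising it makes a misclassified positive costlier, shifting the decision boundary toward the minority class.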

    U-Net and its variants for medical image segmentation: theory and applications

    U-net is an image segmentation technique developed primarily for medical image analysis that can precisely segment images using a scarce amount of training data. These traits provide U-net with a very high utility within the medical imaging community and have resulted in its extensive adoption as the primary tool for segmentation tasks in medical imaging. The success of U-net is evident in its widespread use across all major imaging modalities, from CT scans and MRI to X-rays and microscopy. Furthermore, while U-net is largely a segmentation tool, there have been instances of its use in other applications. As the potential of U-net continues to grow, in this review we look at the various developments that have been made in the U-net architecture and provide observations on recent trends. We examine the various innovations that have been made in deep learning and discuss how these tools facilitate U-net. Furthermore, we look at the image modalities and application areas where U-net has been applied. Comment: 42 pages, in IEEE Access
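    The defining ideas of the U-net architecture are a contracting path, an expanding path, and skip connections that concatenate encoder feature maps into the decoder. A shape-only numpy sketch of one level (no learned weights; real U-nets interleave learned convolutions at every step) makes the data flow concrete:

```python
import numpy as np

def down(x):
    """Contracting step: 2x2 max pooling halves the spatial resolution."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    """Expanding step: nearest-neighbour upsampling doubles the resolution."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def unet_level(x):
    """Shape-only walk through one U-net level: the decoder concatenates
    the upsampled bottleneck with the matching encoder feature map (the
    skip connection), doubling the channel count before a convolution
    would fuse them."""
    skip = x                    # encoder feature map kept for the skip
    bottleneck = down(x)        # contracting path
    upsampled = up(bottleneck)  # expanding path
    return np.concatenate([upsampled, skip], axis=-1)  # skip connection

x = np.random.rand(64, 64, 8)
print(unet_level(x).shape)  # spatial size restored, channels doubled
```

The skip connection is what lets the decoder recover fine spatial detail lost by pooling, which is why U-net segments precisely even with little training data.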

    Risk prediction analysis for post-surgical complications in cardiothoracic surgery

    Cardiothoracic surgery patients have the risk of developing surgical site infections (SSIs), which cause hospital readmissions, increase healthcare costs and may lead to mortality. The first 30 days after hospital discharge are crucial for preventing these kinds of infections. As an alternative to a hospital-based diagnosis, an automatic digital monitoring system can help with the early detection of SSIs by analyzing daily images of patients' wounds. However, analyzing a wound automatically is one of the biggest challenges in medical image analysis. The proposed system is integrated into a research project called CardioFollowAI, which developed a digital telemonitoring service to follow up the recovery of cardiothoracic surgery patients. The present work aims to tackle the problem of SSIs by predicting the existence of worrying alterations in wound images taken by patients, with the help of machine learning and deep learning algorithms. The developed system is divided into a segmentation model, which detects the wound region and categorizes the wound type, and a classification model, which predicts the occurrence of alterations in the wounds. The dataset consists of 1337 images with chest wounds (WC), drainage wounds (WD) and leg wounds (WL) from 34 cardiothoracic surgery patients. For segmenting the images, an architecture with a Mobilenet encoder and a Unet decoder was used to obtain the regions of interest (ROI) and attribute the wound class. The classification model was divided into three sub-classifiers, one per wound type, in order to improve performance. Color and textural features were extracted from the wounds' ROIs to feed one of three machine learning classifiers (Random Forest, Support Vector Machine and K-Nearest Neighbors), which predict the final output. The segmentation model achieved a final mean IoU of 89.9%, a dice coefficient of 94.6% and a mean average precision of 90.1%, showing good results.
    As for the algorithms that performed classification, the WL classifier exhibited the best results, with 87.6% recall and 52.6% precision, while the WC classifier achieved 71.4% recall and 36.0% precision. The WD classifier had the worst performance, with 68.4% recall and 33.2% precision. The obtained results demonstrate the feasibility of this solution, which can be a starting point for preventing SSIs through image analysis with artificial intelligence.
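    The hand-crafted stage of the pipeline above, extracting color and texture features from a wound ROI and feeding them to a classical classifier, can be illustrated with a toy sketch. The specific descriptors (per-channel mean/std plus gradient statistics) and the minimal nearest-neighbour vote below are hypothetical stand-ins; the thesis's exact features and its Random Forest/SVM/KNN models are not reproduced here:

```python
import numpy as np

def color_texture_features(roi):
    """Hypothetical feature vector in the spirit of the pipeline above:
    per-channel colour statistics plus simple gradient-based texture
    statistics for an H x W x 3 region of interest."""
    feats = []
    for c in range(roi.shape[-1]):                 # colour features
        ch = roi[..., c].astype(float)
        feats += [ch.mean(), ch.std()]
    gray = roi.astype(float).mean(axis=-1)         # texture features
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    feats += [mag.mean(), mag.std()]
    return np.array(feats)

def knn_predict(train_X, train_y, x, k=3):
    """Minimal k-nearest-neighbours vote over the feature vectors."""
    d = np.linalg.norm(train_X - x, axis=1)
    votes = train_y[np.argsort(d)[:k]]
    return np.bincount(votes).argmax()
```

Smooth versus textured patches separate cleanly in this feature space, which is the intuition behind using such descriptors to flag wound alterations.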

    Segmentation of Benign and Malign lesions on skin images using U-Net

    One of the types of cancer that requires early diagnosis is skin cancer, and melanoma is a deadly type of skin cancer. Computer-aided systems can detect findings in medical examinations that human perception cannot recognize, and these findings can help clinicians to make an early diagnosis; therefore, the need for computer-aided systems has increased. In this study, a deep learning-based method that segments melanoma in color images taken from dermoscopy devices is proposed. For this method, the ISIC 2017 (International Skin Imaging Collaboration) database is used. It contains 1403 training and 597 test images. The method is based on preprocessing and the U-Net architecture. Gaussian and Difference of Gaussians (DoG) filters are used in the preprocessing stage, with the aim of making skin images more suitable before U-Net. As a result of the segmentation performed with these data, the training success rate reached 95-96%, and a high similarity coefficient was obtained. On the other hand, as a result of training on the preprocessed data, the accuracy rate reached 85-86%.
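    The preprocessing step described above, a Gaussian blur plus a Difference of Gaussians, is easy to sketch. The DoG subtracts a more-blurred copy from a less-blurred one, acting as a band-pass filter that suppresses flat skin regions and emphasises lesion boundaries before the image reaches U-Net. The sigma values below are illustrative assumptions, not the study's parameters:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess(img, sigma=1.0, sigma_lo=1.0, sigma_hi=2.0):
    """Sketch of the described preprocessing: a Gaussian blur to suppress
    noise, and a Difference of Gaussians (narrow blur minus wide blur)
    that responds strongly near edges and weakly on flat regions."""
    img = img.astype(float)
    blurred = gaussian_filter(img, sigma)
    dog = gaussian_filter(img, sigma_lo) - gaussian_filter(img, sigma_hi)
    return blurred, dog
```

On a constant patch the DoG response is zero; across an intensity step it peaks, which is exactly the boundary-highlighting behaviour the abstract relies on.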

    Artificial Intelligence for Skin Lesion Analysis based on Computer Vision and Deep Learning

    Skin lesions appear in various sizes and forms and can be localised in one place or spread across the whole body due to different conditions. Dermatologists typically undertake physical examinations to diagnose skin lesions. However, this task is time-consuming, requires considerable effort, and can be inconsistent. Depending on the type of lesion and whether or not malignancy is present, additional diagnostic testing, such as imaging or biopsy, may be needed. Computer-aided diagnosis (CAD) systems, using clinical and dermoscopic images, could provide a quantitative assessment tool to help clinicians identify skin lesions and evaluate their severity. The recent progress in computer vision and deep learning has encouraged researchers to harness medical imaging data to develop powerful tools which could provide better diagnosis, treatment and prediction of skin conditions. By leveraging artificial intelligence techniques, including computer vision and deep learning, this work introduces intelligent computerised approaches using dermoscopic and clinical images to analyse and identify two types of skin lesions, producing enhanced medical information. This thesis designed, realised, and evaluated the benefit of features learned automatically from images through the stacked layers of convolution filters in convolutional neural network (CNN) models. The final objective of the research conducted in this thesis is to benefit patients with skin lesion condition assessment and skin cancer identification without adding to the already high medical costs. An automated regression-based method has been developed in this thesis for acne counting and severity grading from clinical facial images. In addition to acne lesions, another type of skin lesion has been considered, represented by melanoma-related lesions. Two pipelines have been presented in this thesis to identify melanoma lesions.
    The first framework benchmarks and evaluates several CNN models for melanoma and non-melanoma classification from only dermoscopic images, while the second developed model for melanoma detection integrates the seven-point checklist scheme with a CNN using both clinical and dermoscopic images. The experimental results of the work presented in this thesis manifest improved or competitive performance compared to the state-of-the-art skin analysis methods across several evaluation metrics. The findings of the developed approaches demonstrated effective analysis of skin lesions with high accuracy, reducing the risk of misdiagnosis and providing a more efficient means of detecting melanoma and automating acne lesion severity grading. Additionally, the application of computational intelligence allows for cost savings by reducing the need for manual analysis and enabling the automation of grading support, resulting in a more reliable and consistent process. Overall, the new automated methods based on computational intelligence demonstrate the benefits of developing computer vision and deep learning techniques for skin lesion analysis towards early skin cancer identification and cost-effective, robust grading support.