Knowledge-aware Deep Framework for Collaborative Skin Lesion Segmentation and Melanoma Recognition
Deep learning techniques have shown superior performance in supporting
dermatologists' clinical inspection. Nevertheless, melanoma diagnosis remains a
challenging task due to the difficulty of incorporating useful clinical
knowledge from dermatologists into the learning process. In this paper, we
propose a novel knowledge-aware deep framework that incorporates clinical
knowledge into collaborative learning of two important melanoma diagnosis
tasks, i.e., skin lesion segmentation and melanoma recognition. Specifically,
to exploit the knowledge of morphological expressions of the lesion region and
also the periphery region for melanoma identification, a lesion-based pooling
and shape extraction (LPSE) scheme is designed, which transfers the structure
information obtained from skin lesion segmentation into melanoma recognition.
Meanwhile, to pass the skin lesion diagnosis knowledge from melanoma
recognition to skin lesion segmentation, an effective diagnosis guided feature
fusion (DGFF) strategy is designed. Moreover, we propose a recursive mutual
learning mechanism that further promotes the inter-task cooperation, and thus
iteratively improves the joint learning capability of the model for both skin
lesion segmentation and melanoma recognition. Experimental results on two
publicly available skin lesion datasets show the effectiveness of the proposed
method for melanoma analysis.
Comment: Pattern Recognition
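The lesion-based pooling idea can be illustrated in a simplified form. The sketch below is not the authors' LPSE scheme; it is a minimal, hypothetical illustration of how a binary segmentation mask could be used to pool a feature map separately over the lesion region and its periphery before recognition (function names and the NumPy setting are assumptions):

```python
import numpy as np

def masked_average_pool(features, mask):
    """Pool a feature map over a region given by a binary mask.

    features: (C, H, W) feature map; mask: (H, W) binary region mask.
    Returns a (C,) descriptor of that region only.
    """
    region = mask.astype(features.dtype)
    denom = region.sum() + 1e-6  # guard against empty masks
    return (features * region[None, :, :]).sum(axis=(1, 2)) / denom

def lesion_and_periphery_descriptor(features, mask):
    """Concatenate pooled features from inside and around the lesion,
    loosely mirroring the idea of exploiting both the lesion region and
    its periphery for recognition."""
    lesion = masked_average_pool(features, mask)
    periphery = masked_average_pool(features, 1 - mask)
    return np.concatenate([lesion, periphery])
```

In a real model the mask would come from the segmentation branch and the pooled descriptor would feed the recognition head.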
A Survey on Deep Learning in Medical Image Analysis
Deep learning algorithms, in particular convolutional networks, have rapidly
become a methodology of choice for analyzing medical images. This paper reviews
the major deep learning concepts pertinent to medical image analysis and
summarizes over 300 contributions to the field, most of which appeared in the
last year. We survey the use of deep learning for image classification, object
detection, segmentation, registration, and other tasks and provide concise
overviews of studies per application area. Open challenges and directions for
future research are discussed.
Comment: Revised survey includes expanded discussion section and reworked
introductory section on common deep architectures. Added missed papers from
before Feb 1st 2017
Cancer diagnosis using deep learning: A bibliographic review
In this paper, we first describe the basics of the field of cancer diagnosis, covering the steps of cancer diagnosis and the typical classification methods used by doctors, giving readers a historical view of cancer classification techniques. These methods include the Asymmetry, Border, Color and Diameter (ABCD) method, the seven-point detection method, the Menzies method, and pattern analysis. They are used regularly by doctors for cancer diagnosis, although they are not considered very efficient. Moreover, to accommodate all types of audience, the basic evaluation criteria are also discussed: the receiver operating characteristic (ROC) curve, area under the ROC curve (AUC), F1 score, accuracy, specificity, sensitivity, precision, Dice coefficient, average accuracy, and Jaccard index. The inefficiency of previously used methods calls for better and smarter approaches to cancer diagnosis, and artificial intelligence is therefore gaining attention as a way to build better diagnostic tools. In particular, deep neural networks can be used successfully for intelligent image analysis. The basic framework of how such machine learning operates on medical imaging, i.e., pre-processing, image segmentation, and post-processing, is provided in this study.
The second part of this manuscript describes the different deep learning techniques, such as convolutional neural networks (CNNs), generative adversarial networks (GANs), deep autoencoders (DANs), restricted Boltzmann machines (RBMs), stacked autoencoders (SAEs), convolutional autoencoders (CAEs), recurrent neural networks (RNNs), long short-term memory (LSTM), the multi-scale convolutional neural network (M-CNN), and the multi-instance learning convolutional neural network (MIL-CNN). For each technique, we provide Python code, allowing interested readers to experiment with the cited algorithms on their own diagnostic problems.
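The evaluation criteria listed above all derive from the binary confusion matrix. As a minimal sketch (not taken from the reviewed papers; the helper name is hypothetical), they can be computed as follows:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute the common evaluation criteria from binary ground-truth
    and predicted label arrays (values 0 or 1)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    eps = 1e-12  # guard against division by zero
    sensitivity = tp / (tp + fn + eps)   # a.k.a. recall
    specificity = tn / (tn + fp + eps)
    precision = tp / (tp + fp + eps)
    accuracy = (tp + tn) / (tp + tn + fp + fn + eps)
    f1 = 2 * precision * sensitivity / (precision + sensitivity + eps)
    dice = 2 * tp / (2 * tp + fp + fn + eps)  # equals F1 in the binary case
    jaccard = tp / (tp + fp + fn + eps)
    return dict(accuracy=accuracy, sensitivity=sensitivity,
                specificity=specificity, precision=precision,
                f1=f1, dice=dice, jaccard=jaccard)
```

The AUC additionally requires sweeping a decision threshold over continuous scores rather than a single hard prediction.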
The third part of this manuscript compiles the successfully applied deep learning models for different types of cancers. Given the length of the manuscript, we restrict ourselves to breast cancer, lung cancer, brain cancer, and skin cancer. The purpose of this bibliographic review is to provide researchers who opt to implement deep learning and artificial neural networks for cancer diagnosis with a from-scratch overview of the state-of-the-art achievements.
A survey, review, and future trends of skin lesion segmentation and classification
The Computer-aided Diagnosis or Detection (CAD) approach for skin lesion analysis is an emerging field of research that has the potential to alleviate the burden and cost of skin cancer screening. Researchers have recently shown increasing interest in developing such CAD systems, with the intention of providing dermatologists with a user-friendly tool that reduces the challenges associated with manual inspection. This article provides a comprehensive literature survey and review of a total of 594 publications (356 on skin lesion segmentation and 238 on skin lesion classification) published between 2011 and 2022. These articles are analyzed and summarized in a number of different ways to contribute vital information regarding the development of CAD systems, including: relevant and essential definitions and theories, input data (dataset utilization, preprocessing, augmentation, and fixing imbalance problems), method configuration (techniques, architectures, module frameworks, and losses), training tactics (hyperparameter settings), and evaluation criteria. We also investigate a variety of performance-enhancing approaches, including ensembling and post-processing, and discuss these dimensions to reveal current trends based on utilization frequency. In addition, we highlight the primary difficulties of evaluating skin lesion segmentation and classification systems using minimal datasets, as well as potential solutions to these difficulties. Findings, recommendations, and trends are disclosed to inform future research on developing an automated and robust CAD system for skin lesion analysis.
HST-MRF: Heterogeneous Swin Transformer with Multi-Receptive Field for Medical Image Segmentation
The Transformer has been successfully used in medical image segmentation due
to its excellent long-range modeling capabilities. However, partitioning the
image into patches is necessary when building a Transformer-based model. This
process may disrupt the tissue structure in medical images, resulting in the
loss of relevant information. In this study, we propose a Heterogeneous Swin
Transformer with Multi-Receptive Field (HST-MRF) model based on U-shaped
networks for medical image segmentation. The main purpose is to address the
loss of structural information caused by patch partitioning by fusing patch
information under different receptive fields. The heterogeneous Swin
Transformer (HST) is the core module, which achieves the interaction of
multi-receptive field patch information through heterogeneous attention and
passes it to the next stage for progressive learning. We also designed a
two-stage fusion module, multimodal bilinear pooling (MBP), to assist HST in
further fusing multi-receptive field information and combining low-level and
high-level semantic information for accurate localization of lesion regions. In
addition, we developed adaptive patch embedding (APE) and soft channel
attention (SCA) modules to retain more valuable information when acquiring
patch embedding and filtering channel features, respectively, thereby improving
model segmentation quality. We evaluated HST-MRF on multiple datasets for polyp
and skin lesion segmentation tasks. Experimental results show that our proposed
method outperforms state-of-the-art models and can achieve superior
performance. Furthermore, we verified the effectiveness of each module and the
benefits of multi-receptive field segmentation in reducing the loss of
structural information through ablation experiments
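To make the notion of multi-receptive field patch information concrete, the following is a minimal sketch (not the authors' APE or HST modules; function names and patch sizes are assumptions) of tokenizing the same image at several patch sizes, producing one token stream per receptive field that a model could then fuse:

```python
import numpy as np

def patch_embed(image, patch_size):
    """Flatten non-overlapping patches of an (H, W, C) image into tokens.

    Returns an array of shape (num_patches, patch_size * patch_size * C).
    """
    h, w, c = image.shape
    assert h % patch_size == 0 and w % patch_size == 0
    return (image
            .reshape(h // patch_size, patch_size, w // patch_size, patch_size, c)
            .transpose(0, 2, 1, 3, 4)   # group by (block_row, block_col)
            .reshape(-1, patch_size * patch_size * c))

def multi_receptive_field_tokens(image, patch_sizes=(4, 8)):
    """Tokenize one image at several patch sizes; each patch size yields a
    separate token sequence, i.e., a separate receptive field."""
    return {p: patch_embed(image, p) for p in patch_sizes}
```

Smaller patches preserve fine tissue structure at the cost of longer sequences; larger patches give coarser but cheaper context, which is why fusing the streams can recover information lost by any single partitioning.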