
    Breast cancer detection using infrared thermal imaging and a deep learning model

    Women’s breasts are susceptible to developing cancer; this is supported by a recent study from 2016 showing that 2.8 million women worldwide had already been diagnosed with breast cancer that year. The medical care of a patient with breast cancer is costly and, given this cost and the value of preserving people’s health, the prevention of breast cancer has become a public-health priority. Over the past 20 years several techniques have been proposed for this purpose, such as mammography, which is frequently used for breast cancer diagnosis. However, mammography can produce false positives, in which a positive mammographic diagnosis is contradicted by another technique. Additionally, the potential side effects of mammography may encourage patients and physicians to look for other diagnostic techniques. Our review of the literature first explored infrared digital imaging, which assumes that a basic thermal comparison between a healthy breast and a breast with cancer always shows an increase in thermal activity in the precancerous tissues and the areas surrounding developing breast cancer. Furthermore, through our research, we realized that Computer-Aided Diagnosis (CAD) based on infrared image processing could not be achieved without a model such as the well-known hemispheric model. The novel contribution of this paper is a comparative study of several breast cancer detection techniques using powerful computer vision techniques and deep learning models.
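
    As an illustration only (this is not the model evaluated in the paper), the sketch below shows how a small convolutional network could be set up to classify breast thermograms as healthy versus suspicious; the input resolution, layer sizes and training settings are assumptions made for the example.

```python
# Illustrative sketch only: a small CNN for thermogram classification.
# The architecture, 64x64 grayscale input and binary "suspicious" label
# are assumptions for the example, not the model from the paper.
from tensorflow.keras import layers, models

def build_thermogram_cnn(input_shape=(64, 64, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(16, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # probability of a suspicious thermogram
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```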

    Breast pectoral muscle segmentation in mammograms using a modified holistically-nested edge detection network

    This paper presents a method for automatic breast pectoral muscle segmentation in mediolateral oblique mammograms using a Convolutional Neural Network (CNN) inspired by the Holistically-nested Edge Detection (HED) network. Most of the existing methods in the literature are based on hand-crafted models such as straight-line or curve-based techniques, or a combination of both. Unfortunately, such models are insufficient when dealing with complex shape variations of the pectoral muscle boundary and when the boundary is unclear due to overlapping breast tissue. To compensate for these issues, we propose a neural network framework that incorporates multi-scale and multi-level learning, capable of learning complex hierarchical features to resolve spatial ambiguity in estimating the pectoral muscle boundary. For this purpose, we modified the HED network architecture to specifically find ‘contour-like’ objects in mammograms. The proposed framework produces a probability map that can be used to estimate the initial pectoral muscle boundary. These maps are then processed by extracting morphological properties, and finally two different post-processing steps are applied to recover the actual pectoral muscle boundary. Quantitative evaluation shows that the proposed method is comparable with alternative state-of-the-art methods, producing average values of 94.8 ± 8.5% and 97.5 ± 6.3% for the Jaccard and Dice similarity metrics, respectively, across four different databases.
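
    For reference, the sketch below (not the authors' evaluation code) shows how the two similarity metrics quoted above can be computed from a predicted binary pectoral muscle mask and its ground-truth mask.

```python
# Minimal sketch: Jaccard and Dice similarity between two binary masks,
# the metrics used to report the 94.8% / 97.5% figures above.
import numpy as np

def jaccard_and_dice(pred_mask: np.ndarray, gt_mask: np.ndarray):
    """Return (Jaccard, Dice) coefficients for two binary masks."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    total = pred.sum() + gt.sum()
    jaccard = intersection / union if union else 1.0
    dice = 2 * intersection / total if total else 1.0
    return jaccard, dice
```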

    Improved Texture Feature Extraction and Selection Methods for Image Classification Applications

    Classification is an important process in image processing applications, and image texture is a preferred source of information for image classification, especially in the context of real-world applications. However, the output of a typical texture feature descriptor often does not represent a wide range of different texture characteristics. Many research studies have contributed different descriptors to improve the extraction of features from texture. Among these, the Local Binary Patterns (LBP) descriptor produces powerful information from texture through a simple comparison between a central pixel and its neighbouring pixels. In addition, to obtain sufficient information from texture, many studies have proposed solutions based on combining complementary features. Although feature-level fusion produces satisfactory results for certain applications, it suffers from an inherent and well-known problem called the “curse of dimensionality”. Feature selection deals with this problem effectively by reducing the feature dimensions and selecting only the relevant features. However, large feature spaces often make the process of seeking optimum features complicated. This research introduces improved feature extraction methods by adopting a new approach based on new texture descriptors called Local Zone Binary Patterns (LZBP) and Local Multiple Patterns (LMP), both of which are based on the LBP descriptor. The produced feature descriptors are combined with other complementary features to yield a unified vector. Furthermore, the combined features are processed by a new hybrid selection approach based on the Artificial Bee Colony and Neighbourhood Rough Set (ABC-NRS) to efficiently reduce the dimensionality of the features resulting from the fusion stage. Comprehensive experimental testing and evaluation has been carried out for the different components of the proposed approach, and the novelty and limitations of the proposed approach are demonstrated. The results prove the ability of the LZBP and LMP texture descriptors to improve feature extraction compared with the conventional LBP descriptor. In addition, the use of the hybrid ABC-NRS selection method on the proposed combined features is shown to improve classification performance while achieving the shortest feature length. The overall proposed approach provides improved texture-based image classification performance compared with previous methods on benchmarks based on outdoor scene images. These research contributions thus represent significant advances in the field of texture-based image classification.
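
    For context, the sketch below implements the conventional LBP descriptor that LZBP and LMP build on (the new descriptors themselves are not reproduced here): each pixel is compared with its eight neighbours, the comparison bits form an 8-bit code, and the codes are pooled into a 256-bin histogram that serves as the texture feature.

```python
# Minimal sketch of the basic 3x3 Local Binary Pattern descriptor.
# Each interior pixel is compared with its 8 neighbours; the resulting
# 8-bit codes are accumulated into a normalised 256-bin histogram.
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Basic LBP histogram for a 2-D grayscale image (H, W), H, W >= 3."""
    g = gray.astype(np.int32)
    center = g[1:-1, 1:-1]
    # Neighbour offsets, clockwise from the top-left; each sets one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = g[1 + dy : g.shape[0] - 1 + dy,
                      1 + dx : g.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.int32) << bit
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)  # normalised texture feature vector
```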