    Multi-fractal dimension features by enhancing and segmenting mammogram images of breast cancer

    Breast cancer is a leading cause of cancer death in women. Early detection using mammographic images can help reduce the mortality rate and the probability of recurrence. Through mammographic examination, breast lesions can be detected and classified. Breast lesions can also be detected with other popular modalities such as Magnetic Resonance Imaging (MRI) and ultrasonography. Although mammography is very useful in the diagnosis of breast cancer, the pattern similarities between normal and pathologic cases make the diagnostic process difficult. Therefore, this thesis develops Computer-Aided Diagnosis (CAD) systems to help doctors and technicians detect lesions. The thesis aims to increase the accuracy of breast cancer diagnosis for optimal classification, achieved using Machine Learning (ML) and image processing techniques on mammogram images. It also proposes an improved automated extraction of powerful texture signatures for classification by enhancing and segmenting breast cancer mammogram images. The proposed CAD system consists of five stages: pre-processing, segmentation, feature extraction, feature selection, and classification. The first stage, pre-processing, reduces the noise present in mammogram images. To this end, the thesis employs a frequency-domain wavelet transform to enhance mammogram images for two purposes: to highlight the borders of the breast for the segmentation stage, and to enhance the region of interest (ROI) using adaptive thresholding for the feature extraction stage. The second stage, segmentation, identifies the ROI in mammogram images. This is a difficult task because of several landmarks, such as the breast boundary and artifacts, as well as the pectoral muscle in Medio-Lateral Oblique (MLO) views.
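The abstract does not specify which wavelet or enhancement rule the pre-processing stage uses. A minimal sketch of the general idea, using a one-level Haar transform in plain NumPy with amplified detail sub-bands, might look like the following (the function name, Haar wavelet, and gain value are illustrative assumptions, not the thesis's actual method):

```python
import numpy as np

def haar_enhance(image, gain=1.5):
    """Sketch of wavelet-domain enhancement: one-level 2-D Haar
    transform, detail sub-bands amplified before reconstruction to
    sharpen edges. Assumes a gray-level image; odd dimensions are
    cropped to even for simplicity."""
    img = image[:image.shape[0] // 2 * 2,
                :image.shape[1] // 2 * 2].astype(float)
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    # Forward one-level Haar: approximation plus three detail sub-bands.
    LL = (a + b + c + d) / 4.0   # approximation
    LH = (a - b + c - d) / 4.0   # horizontal detail
    HL = (a + b - c - d) / 4.0   # vertical detail
    HH = (a - b - c + d) / 4.0   # diagonal detail
    # Amplify the detail sub-bands, then invert the transform.
    LH, HL, HH = gain * LH, gain * HL, gain * HH
    out = np.empty_like(img)
    out[0::2, 0::2] = LL + LH + HL + HH
    out[0::2, 1::2] = LL - LH + HL - HH
    out[1::2, 0::2] = LL + LH - HL - HH
    out[1::2, 1::2] = LL - LH - HL + HH
    return out
```

With `gain=1.0` the round trip is the identity; values above 1 emphasize edges and fine texture, which is the effect the pre-processing stage is after.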
To address these difficulties, the thesis presents an automatic segmentation algorithm based on a new thresholding method combined with image processing techniques. Experimental results demonstrate that the proposed model increases the accuracy of segmenting the ROI from the breast background, landmarks, and pectoral muscle. The third stage is feature extraction, where an enhancement model based on fractal dimension is proposed to derive significant texture features from mammogram images. Based on the proposed model, powerful texture signatures for classification are extracted. The fourth stage is feature selection, where a Genetic Algorithm (GA) is used to select the most important features. In the final classification stage, an Artificial Neural Network (ANN) differentiates between benign and malignant classes using the most relevant texture features. In conclusion, the classification accuracy, sensitivity, and specificity obtained by the proposed CAD system improve on previous studies. The thesis makes a practical contribution to the identification of breast cancer in mammogram images and to more accurate classification of benign and malignant lesions using ML and image processing techniques.
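Fractal-dimension texture features of the kind the feature extraction stage relies on typically start from the classical box-counting estimate. The following is a generic sketch of that baseline (the function name is an assumption, and the thesis's multi-fractal enhancement model is more elaborate than plain box counting):

```python
import numpy as np

def box_counting_dimension(binary_img):
    """Estimate the box-counting (fractal) dimension of a binary ROI
    mask: count boxes of decreasing size that contain at least one set
    pixel, then fit the slope of log(count) vs log(1/size)."""
    sizes, counts = [], []
    size = min(binary_img.shape) // 2
    while size >= 1:
        count = 0
        for i in range(0, binary_img.shape[0], size):
            for j in range(0, binary_img.shape[1], size):
                if binary_img[i:i + size, j:j + size].any():
                    count += 1
        sizes.append(size)
        counts.append(count)
        size //= 2
    # The slope of log(count) against log(1/size) is the dimension.
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)),
                          np.log(np.array(counts)), 1)
    return slope
```

A filled square yields a dimension near 2 and a single line near 1; irregular lesion boundaries fall in between, which is what makes the measure useful as a texture signature.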

    Radon Projections as Image Descriptors for Content-Based Retrieval of Medical Images

    Clinical analysis and medical diagnosis of diverse diseases adopt medical imaging techniques that empower specialists by visualizing internal body organs and tissues, so that diseases can be classified and treated at an early stage. Content-Based Image Retrieval (CBIR) systems are a set of computer vision techniques for retrieving similar images from a large database based on proper image representations. Particularly in radiology and histopathology, CBIR is a promising approach to effectively screen, understand, and retrieve images with a similar level of semantic description from a database of previously diagnosed cases, providing physicians with reliable assistance for diagnosis, treatment planning, and research. Over the past decade, the development of CBIR systems in medical imaging has accelerated due to the increase in digitized modalities, greater computational efficiency (e.g., the availability of GPUs), and progress in computer vision and artificial intelligence algorithms. Hence, medical specialists may use CBIR prototypes to query similar cases from a large image database based solely on the image content (and no text). Understanding the semantics of an image requires an expressive descriptor that can capture and represent the unique and invariant features of the image. The Radon transform, one of the oldest techniques widely used in medical imaging, can capture the shape of organs in the form of a one-dimensional profile by projecting parallel rays through a two-dimensional object of concern at a specific angle. In this work, the Radon transform is re-designed to (i) extract features and (ii) generate a descriptor for content-based retrieval of medical images. The Radon projections, rather than the raw images, are fed to a deep neural network in order to improve the network's generalization.
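Extracting Radon projections as network input can be sketched as follows. This is a simplified illustration only: the angle set is an assumption, and the rotate-and-sum loop is a coarse approximation of the Radon transform using SciPy, not the exact pipeline of the work:

```python
import numpy as np
from scipy.ndimage import rotate

def radon_projections(image, angles=(0, 45, 90, 135)):
    """Rotate the image to each angle and sum along columns, giving a
    1-D parallel-ray projection profile per angle. The concatenated
    profiles form a compact feature vector that can replace raw pixels
    as input to a deep autoencoder."""
    profiles = []
    for theta in angles:
        rotated = rotate(image.astype(float), theta,
                         reshape=False, order=1)
        profiles.append(rotated.sum(axis=0))  # line integrals (ray sums)
    return np.concatenate(profiles)
```

For a 32x32 image and four angles, the vector has 128 entries instead of 1,024 pixels, which is one reason projection input can train faster than raw-image input.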
Specifically, the framework provides the Radon projections of an image to a deep autoencoder, from which the deepest layer is isolated and fed into a multi-layer perceptron for classification. This approach enables the network to (a) train much faster, as Radon projections are computationally inexpensive compared to raw input images, and (b) perform more accurately, as Radon projections present more pronounced and salient features to the network than raw images do. The framework is validated on a publicly available radiography data set called "Image Retrieval in Medical Applications" (IRMA), consisting of 12,677 training and 1,733 test images, on which a classification accuracy of approximately 82% is achieved, outperforming all autoencoder strategies reported on the IRMA data set. The classification accuracy is calculated by dividing the total IRMA error, a measure defined by the authors of the data set, by the total number of test images. Finally, this work also designs a compact handcrafted image descriptor based on the Radon transform, called "Forming Local Intersections of Projections" (FLIP). The FLIP descriptor was developed through numerous experiments for representing histopathology images. It applies parallel Radon projections in local 3x3 neighborhoods, with a 2-pixel overlap, of gray-level images (the staining of histopathology images is ignored). Using four equidistant projection directions in each window, the characteristics of the neighborhood are quantified by taking an element-wise minimum between each pair of adjacent projections. Thereafter, the FLIP histogram (descriptor) for each image is constructed. A multi-resolution FLIP (mFLIP) scheme is also proposed, which is observed to outperform many state-of-the-art methods, deep features among them, when applied to the histopathology data set KIMIA Path24.
Experiments show a total classification accuracy of approximately 72% using an SVM classifier, which surpasses the previous benchmark of approximately 66% on the KIMIA Path24 data set.
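The FLIP construction described above can be illustrated with a toy version. Note the simplifying assumptions, which differ from the published descriptor: the windows slide with stride 1 rather than the stated 2-pixel overlap, the diagonal projections are truncated to their three central rays so all four projections align, and the histogram size is arbitrary:

```python
import numpy as np

def flip_descriptor(image, bins=16):
    """Toy FLIP-style descriptor: in each local 3x3 window, take four
    directional projections (0, 45, 90, 135 degrees), combine adjacent
    projections with an element-wise minimum, and accumulate the values
    into a normalized histogram over the whole image."""
    img = image.astype(float)
    h, w = img.shape
    values = []
    for i in range(h - 2):
        for j in range(w - 2):
            win = img[i:i + 3, j:j + 3]
            p0 = win.sum(axis=0)   # 0 deg: column sums
            p90 = win.sum(axis=1)  # 90 deg: row sums
            # Diagonal ray sums, truncated to the three central rays.
            p45 = np.array([win[1, 0] + win[2, 1],
                            win[0, 0] + win[1, 1] + win[2, 2],
                            win[0, 1] + win[1, 2]])
            p135 = np.array([win[0, 1] + win[1, 0],
                             win[0, 2] + win[1, 1] + win[2, 0],
                             win[1, 2] + win[2, 1]])
            # Element-wise minimum between each adjacent projection pair.
            for a, b in ((p0, p45), (p45, p90), (p90, p135), (p135, p0)):
                values.extend(np.minimum(a, b))
    hist, _ = np.histogram(values, bins=bins)
    return hist / max(hist.sum(), 1)  # normalized descriptor
```

The resulting fixed-length histogram can be compared across images with any standard distance, which is what makes it usable for retrieval; the multi-resolution mFLIP variant repeats the idea at several scales.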