28 research outputs found

    RADON‐WAVELET BASED NOVEL IMAGE DESCRIPTOR FOR MAMMOGRAM MASS CLASSIFICATION

    Mammography-based breast cancer screening is very popular because of its low cost and ready availability. Machine learning techniques are used for the automated classification of mammogram images as benign or malignant. In this paper, a novel image descriptor based on the Radon and wavelet transforms is proposed. The method is efficient in that it performs well without any clinical information. Its performance is evaluated using six different classifiers, namely Bayesian network (BN), linear discriminant analysis (LDA), logistic regression, support vector machine (SVM), multilayer perceptron (MLP) and random forest (RF), to choose the best performer. Within the present experimental framework, we found that, in terms of area under the ROC curve (AUC), the proposed image descriptor outperforms, to some extent, previously reported experiments using histogram-based hand-crafted methods, namely the Histogram of Oriented Gradients (HOG) and the Histogram of Gradient Divergence (HGD), as well as a convolutional neural network (CNN). Our experiments yield the highest AUC value of 0.986 when using only the craniocaudal (CC) view, compared with using only the mediolateral oblique (MLO) view (0.738) or combining both views (0.838). These results thus demonstrate the effectiveness of the CC view over the MLO view for mammogram mass classification.
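
The abstract does not spell out how the descriptor is built. Purely as a hedged illustration of combining Radon projections with a wavelet transform, the sketch below takes two axis-aligned projections (a minimal stand-in for a full multi-angle Radon transform), applies a single-level Haar wavelet to each projection, and summarises the coefficients as simple statistics. Every function name and the choice of statistics are assumptions, not the authors' method.

```python
import numpy as np

def radon_projections(img):
    """Axis-aligned Radon projections (0 and 90 degrees) of a 2-D image.
    A full Radon transform would sample many angles; two suffice for a sketch."""
    return img.sum(axis=0), img.sum(axis=1)

def haar_dwt1d(signal):
    """Single-level 1-D Haar wavelet transform: approximation + detail bands."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                          # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def radon_wavelet_descriptor(img):
    """Statistics of the wavelet coefficients of each projection -> feature vector."""
    feats = []
    for proj in radon_projections(img):
        for band in haar_dwt1d(proj):
            feats += [band.mean(), band.std(), np.abs(band).max()]
    return np.array(feats)

roi = np.random.default_rng(0).random((32, 32))   # stand-in for a mass ROI
print(radon_wavelet_descriptor(roi).shape)        # (12,)
```

The resulting fixed-length vector is what would be fed to the classifiers listed above.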

    Segmentation, Super-resolution and Fusion for Digital Mammogram Classification

    Mammography is one of the most common and effective techniques used by radiologists for the early detection of breast cancer. Recently, computer-aided detection/diagnosis (CAD) has become a major research topic in medical imaging and has been widely applied in clinical settings. According to statistics, early detection of cancer can reduce mortality rates by 30% to 70%, so detection and diagnosis at an early stage are very important. CAD systems are designed primarily to assist radiologists in detecting and classifying abnormalities in medical scan images, but the main challenge hindering their wider deployment is the difficulty of achieving accuracy rates that improve radiologists' performance. The detection and diagnosis of breast cancer face two main issues: the accuracy of the CAD system, and the radiologists' performance in reading and diagnosing mammograms. This thesis focuses on the accuracy of CAD systems. In particular, we investigated two main stages of CAD systems: pre-processing (enhancement and segmentation), and feature extraction and classification. Through this investigation, we make five main contributions to the field of automatic mammogram analysis. In automated mammogram analysis, image segmentation techniques are employed for breast boundary or region-of-interest (ROI) extraction. In most Medio-Lateral Oblique (MLO) views of mammograms, the pectoral muscle is a predominant density region, and it is important to detect and segment out this muscle region during pre-processing because it could bias the detection of breast cancer. An important reason for breast border extraction is that it limits the search zone for abnormalities to the region of the breast, without undue influence from the background of the mammogram. Therefore, we propose a new scheme for breast border extraction, artifact removal and removal of annotations found in the background of mammograms.
This was achieved using a local adaptive threshold that creates a binary mask for the images, followed by morphological operations. Furthermore, an adaptive algorithm is proposed to detect and remove the pectoral muscle automatically. Feature extraction is another important step of any image-based pattern classification system, and the performance of the resulting classification depends very much on how well the extracted features represent the object of interest. We investigated a range of texture feature sets such as the Local Binary Pattern Histogram (LBPH), the Histogram of Oriented Gradients (HOG) descriptor, and the Gray Level Co-occurrence Matrix (GLCM). We propose the use of multi-scale features based on wavelets and local binary patterns for mammogram classification: we extract histograms of LBP codes from the original image as well as from the wavelet sub-bands, and combine the extracted features into a single feature set. Experimental results show that combining LBPH features obtained from the original image with LBPH features obtained from the wavelet domain increases classification accuracy (sensitivity and specificity) compared with LBPH features extracted from the original image alone. The feature vector can be large for some feature extraction schemes and may contain redundant features that harm classification accuracy, so feature vector size reduction is needed to achieve higher accuracy as well as efficiency (in processing and storage). We reduced the size of the feature set by applying principal component analysis (PCA) and keeping only a small number of eigencomponents to represent the features. Experimental results showed improved mammogram classification accuracy with this small set of features compared with the original feature vector.
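
The PCA reduction step just described can be sketched in a few lines; the eigendecomposition route and the choice of k retained components below are standard PCA, not details taken from the thesis.

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors (rows of X) onto the top-k principal components."""
    Xc = X - X.mean(axis=0)                  # centre each feature
    cov = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)         # eigh: covariance is symmetric
    order = np.argsort(vals)[::-1]           # largest variance first
    components = vecs[:, order[:k]]
    return Xc @ components

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))               # 100 samples, 50 features
Z = pca_reduce(X, 10)
print(Z.shape)                               # (100, 10)
```

By construction, the first retained component carries at least as much variance as the second, and so on.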
We then investigated and propose the use of feature-level and decision-level fusion in mammogram classification. In feature-level fusion, two or more extracted feature sets of the same mammogram are concatenated into a single larger fused feature vector to represent the mammogram, whereas in decision-level fusion the results of individual classifiers, each based on distinct features extracted from the same mammogram, are combined into a single decision; in this case the final decision is made by majority voting among the results of the individual classifiers. Finally, we investigated the use of super-resolution as a pre-processing step to enhance the mammograms prior to extracting features. From the preliminary experimental results we conclude that using enhanced mammograms has a positive effect on the performance of the system. Overall, our combination of proposals outperforms several existing schemes published in the literature.
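
Both fusion schemes have minimal, almost one-line implementations; the function names here are illustrative, not the thesis's own.

```python
import numpy as np
from collections import Counter

def fuse_features(*feature_sets):
    """Feature-level fusion: concatenate feature vectors of the same mammogram."""
    return np.concatenate(feature_sets)

def majority_vote(decisions):
    """Decision-level fusion: majority vote over per-classifier labels."""
    return Counter(decisions).most_common(1)[0][0]

# three classifiers, each trained on a distinct feature set (e.g. LBPH, HOG, GLCM)
print(majority_vote(["malignant", "benign", "malignant"]))   # malignant
print(fuse_features(np.array([1., 2.]), np.array([3.])))     # [1. 2. 3.]
```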

    Ensemble Boosted Tree based Mammogram image classification using Texture features and extracted smart features of Deep Neural Network

    This work proposes a technique for breast cancer detection from mammogram images. It is a multistage process that classifies mammogram images into the benign or malignant category. During pre-processing, images from the Mammographic Image Analysis Society (MIAS) database are passed through a couple of filters for noise removal, thresholding and cropping techniques are applied to extract the region of interest, and an augmentation process is then applied to enlarge the database. Features from a Deep Convolutional Neural Network (DCNN) are merged with texture features to form the final feature vector. Using transfer learning, deep features are extracted from a modified DCNN trained on 69% of the database images, selected at random from both categories. Grey Level Co-occurrence Matrix (GLCM) and Local Binary Pattern (LBP) features are merged to form the texture features; the mean and variance of four GLCM parameters (contrast, correlation, homogeneity and entropy) are computed in four angular directions, at ten distances. An Ensemble Boosted Tree classifier, using five-fold cross-validation, achieved accuracy, sensitivity and specificity of 98.8%, 100% and 92.55%, respectively, on this feature vector.
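
As a rough sketch of GLCM texture statistics of the kind described, the code below builds a co-occurrence matrix for one displacement and averages contrast, homogeneity and entropy over the four angular directions (correlation is omitted for brevity). The grey-level count, distances and toy image are assumptions.

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Grey-level co-occurrence matrix for one displacement, normalised."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < h and 0 <= x2 < w:
                m[img[y, x], img[y2, x2]] += 1
    return m / m.sum()

def glcm_stats(p):
    """Contrast, homogeneity and entropy of a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    homogeneity = np.sum(p / (1.0 + (i - j) ** 2))
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    return contrast, homogeneity, entropy

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(16, 16))       # toy 8-level grey image
# average each statistic over the four angular directions (0, 45, 90, 135 degrees)
offsets = [(0, 1), (-1, 1), (-1, 0), (-1, -1)]
stats = np.mean([glcm_stats(glcm(img, dy, dx, 8)) for dy, dx in offsets], axis=0)
print(stats.shape)   # (3,)
```

Repeating this at several distances and keeping the mean and variance of each statistic yields a texture vector in the spirit of the one described above.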

    Pixel N-grams for Mammographic Image Classification

    X-ray screening for breast cancer is an important public health initiative in the management of a leading cause of death for women. However, screening is expensive if mammograms must be assessed manually by radiologists. Moreover, manual screening is subject to perception and interpretation errors. Computer-aided detection/diagnosis (CAD) systems can help radiologists, as computer algorithms are good at performing image analysis consistently and repetitively. However, image features that enhance CAD classification accuracy are necessary for CAD systems to be deployed. Many CAD systems have been developed, but their specificity and sensitivity are not high, in part because of challenges inherent in identifying effective features to be extracted from raw images. Existing feature extraction techniques can be grouped under three main approaches: statistical, spectral and structural. Statistical and spectral techniques provide global image features but often fail to distinguish between local pattern variations within an image. On the other hand, the structural approach has given rise to the Bag-of-Visual-Words (BoVW) model, which captures local variations in an image but typically does not consider spatial relationships between the visual "words". Moreover, statistical features and features based on BoVW models are computationally very expensive. Similarly, structural feature computation methods other than BoVW are also computationally expensive and depend strongly upon algorithms that can segment an image to localize a region of interest likely to contain the tumour. Thus, classification algorithms using structural features require high-resource computers. For a radiologist to classify lesions on low-resource computers such as iPads, tablets and mobile phones in a remote location, it is necessary to develop computationally inexpensive classification algorithms.
Therefore, the overarching aim of this research is to discover a feature extraction/image representation model which can be used to classify mammographic lesions with high accuracy, sensitivity and specificity along with low computational cost. For this purpose a novel feature extraction technique called 'Pixel N-grams' is proposed. The Pixel N-grams approach is inspired by the character N-gram concept in text categorization. Here, N consecutive pixel intensities are considered in a particular direction, and the image is then represented by a histogram of the occurrences of Pixel N-grams in the image. The shape and texture of mammographic lesions play an important role in determining the malignancy of a lesion, and it was hypothesized that Pixel N-grams would be able to distinguish between various textures and shapes. Experiments carried out on benchmark texture databases and a binary basic-shapes database demonstrated that the hypothesis was correct. Moreover, the Pixel N-grams were able to distinguish between various shapes irrespective of the size and location of the shape in an image. The efficacy of the Pixel N-gram technique was tested on a mammographic database of primary digital mammograms sourced from a radiological facility in Australia (LakeImaging Pty Ltd) and on secondary digital mammograms (the benchmark miniMIAS database). A senior radiologist from LakeImaging provided de-identified high-resolution mammogram images with annotated regions of interest (used as ground truth), together with valuable radiological diagnostic knowledge. Two types of classification were performed on these two datasets: normal/abnormal classification, useful for automated screening, and circumscribed/spiculated/normal classification, useful for automated diagnosis of breast cancer. The classification results on both mammography datasets using Pixel N-grams were promising.
Classification performance (F-score, sensitivity and specificity) using the Pixel N-gram technique was observed to be significantly better than existing techniques such as the intensity histogram and co-occurrence matrix based features, and comparable with BoVW features. Further, Pixel N-gram features are computationally less complex than both co-occurrence matrix based features and BoVW features, paving the way for mammogram classification on low-resource computers. Although the Pixel N-gram technique was designed for mammographic classification, it could be applied to other image classification applications such as diabetic retinopathy, histopathological image classification, lung tumour detection using CT images, brain tumour detection using MRI images, wound image classification and tooth decay classification using dentistry x-ray images. Further, texture and shape classification is also useful for classifying real-world images outside the medical domain; the Pixel N-gram technique could therefore be extended to applications such as classification of satellite imagery and other object detection tasks.
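
A minimal sketch of the Pixel N-gram histogram for the horizontal direction is given below. The crude intensity quantisation and the toy image are assumptions; the thesis's exact quantisation and choice of directions are not given here.

```python
import numpy as np
from collections import Counter

def pixel_ngrams(img, n, levels):
    """Histogram of horizontal Pixel N-grams: every run of n consecutive
    pixel intensities (quantised to `levels` grey levels) along each row."""
    q = (img * levels // (img.max() + 1)).astype(int)   # crude quantisation
    counts = Counter()
    for row in q:
        for i in range(len(row) - n + 1):
            counts[tuple(row[i:i + n])] += 1
    # fixed-length histogram over all possible n-grams
    hist = np.zeros(levels ** n)
    for gram, c in counts.items():
        idx = 0
        for v in gram:
            idx = idx * levels + v
        hist[idx] = c
    return hist / hist.sum()

img = np.arange(16).reshape(4, 4)      # toy 4x4 "image", integer intensities
h = pixel_ngrams(img, n=2, levels=4)
print(h.shape)                          # (16,)
```

Like a character N-gram model in text categorization, the descriptor counts short local sequences rather than isolated values, which is what lets it capture local shape and texture cheaply.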

    Texture Feature Abstraction Based on Assessment of HOG and GLDM Features for Diagnosing Brain Abnormalities in MRI Images

    Recognition of vehicles has always been a desired technology for curbing crimes committed with the help of vehicles. The numbers imprinted on the plates of cars and motorbikes consist of numerals and alphabetic characters, and these plates can be recognized easily. The uniqueness of each combination of characters and numbers can be exploited for multiple purposes: for instance, fines can be imposed automatically for wrong parking, and toll fees can be collected automatically just by recognizing the number plate; beyond these two, several other uses can be accommodated. Computer vision is understood as a subfield of artificial intelligence and computer science; the areas most closely related to it are image processing, image analysis and machine vision. As a scientific discipline, computer vision is concerned with artificial systems that extract information from images and videos; the image data can take many forms, for instance segments of video taken from several cameras. This thesis presents a training-based approach for the recognition of vehicle number plates. The whole process is divided into three stages, i.e. capturing the image, plate localization, and recognition of the digits on the plate. HOG features have been used for training and an SVM has been adopted for classification during recognition. The algorithm has been tested on more than 100 pictures.
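
A simplified version of the HOG features used for training can be sketched as below: per-cell orientation histograms of gradient magnitude, without the block normalisation of full HOG. All parameter choices (cell size, bin count, the toy plate crop) are illustrative assumptions.

```python
import numpy as np

def hog_cell_histograms(img, cell=8, bins=9):
    """Simplified HOG: gradient-orientation histograms per cell
    (unsigned gradients over 0-180 degrees, no block normalisation)."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0
    h, w = img.shape
    feats = []
    for y in range(0, h - cell + 1, cell):
        for x in range(0, w - cell + 1, cell):
            m = mag[y:y + cell, x:x + cell].ravel()
            a = ang[y:y + cell, x:x + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

plate = np.random.default_rng(3).random((16, 32))   # stand-in for a plate crop
print(hog_cell_histograms(plate).shape)             # (72,)
```

The concatenated histograms form the fixed-length vector that an SVM would then be trained on.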

    Combination of texture feature extraction and forward selection for one-class support vector machine improvement in self-portrait classification

    This study aims to validate self-portraits using a one-class support vector machine (OCSVM). To validate accurately, we build a model by combining two texture feature extraction methods, Haralick features and the local binary pattern (LBP). We also remove irrelevant features using forward selection (FS). OCSVM was selected because it can handle the problem caused by inadequate variation in the negative-class population: we only need to feed the algorithm the true-class data, and data whose pattern does not match will be classified as false. However, combining the two feature extractions produces many features, leading to the curse of dimensionality. The FS method overcomes this problem by selecting the best features. In the experiments carried out, the Haralick+LBP+FS+OCSVM model outperformed the other models, with an accuracy of 95.25% on validation data and 91.75% on test data.
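
Forward selection itself is a simple greedy loop; the sketch below uses a toy scoring function in place of the cross-validated OCSVM accuracy the study would use, and all names are illustrative.

```python
def forward_selection(features, score):
    """Greedy forward selection: repeatedly add the feature that most
    improves `score`, stopping when no single addition helps."""
    selected = []
    best = float("-inf")
    remaining = list(features)
    while remaining:
        cand, cand_score = None, best
        for f in remaining:
            s = score(selected + [f])
            if s > cand_score:
                cand, cand_score = f, s
        if cand is None:        # no feature improves the score
            break
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected

# toy score: rewards features "a" and "b", penalises every extra feature
useful = {"a", "b"}
score = lambda subset: len(set(subset) & useful) - 0.1 * len(subset)
print(forward_selection(["a", "b", "c", "d"], score))   # ['a', 'b']
```

With a real model, `score` would wrap training an OCSVM on the candidate subset and returning its validation accuracy.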

    TEXTURAL CLASSIFICATION OF MULTIPLE SCLEROSIS LESIONS IN MULTIMODAL MRI VOLUMES

    Background and objectives: Multiple Sclerosis is a common relapsing demyelinating disease causing significant degradation of cognitive and motor skills, and contributes towards a reduced life expectancy of 5 to 10 years. The identification of Multiple Sclerosis lesions at early stages of a patient's life can play a significant role in the diagnosis, treatment and prognosis for that individual. In recent years the process of disease detection has been aided through the implementation of radiomic pipelines for texture extraction and classification, utilising computer vision and machine learning techniques. Eight Multiple Sclerosis patient datasets have been supplied, each containing one standard clinical T2 MRI sequence and four diffusion-weighted sequences (T2, FA, ADC, AD, RD). This work proposes a multimodal Multiple Sclerosis lesion segmentation methodology utilising supervised texture analysis, feature selection and classification. Three machine learning models were applied to multimodal MRI data and tested using unseen patient datasets to evaluate the classification performance of various extracted features, feature selection algorithms and classifiers on MRI volumes uncommonly applied to MS lesion detection. Method: First Order Statistics, Haralick Texture Features, Gray-Level Run-Lengths, Histogram of Oriented Gradients and Local Binary Patterns were extracted from MRI volumes which were minimally pre-processed using a skull-stripping and background removal algorithm. mRMR and LASSO feature selection algorithms were applied to identify a subset of rankings for use in machine learning with Support Vector Machine, Random Forests and Extreme Learning Machine classification. Results: ELM achieved a top slice-classification accuracy of 85%, while SVM achieved 79% and RF 78%. It was found that combining information from all MRI sequences increased the classification performance when analysing unseen T2 scans in almost all cases. LASSO and mRMR feature selection methods failed to increase accuracy, and the highest-scoring group of features were the Haralick Texture Features, derived from Grey-Level Co-occurrence Matrices.
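
The mRMR criterion mentioned above ranks features by relevance to the label minus average redundancy with features already chosen. As a hedged sketch, the code below substitutes absolute Pearson correlation for the mutual information used in the original mRMR formulation; the synthetic data and function name are assumptions.

```python
import numpy as np

def mrmr_rank(X, y, k):
    """mRMR-style greedy ranking: pick features with high relevance to the
    label and low average redundancy with already-chosen features.
    Absolute Pearson correlation stands in for mutual information here."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    chosen = [int(np.argmax(rel))]           # most relevant feature first
    while len(chosen) < k:
        best_j, best_score = None, float("-inf")
        for j in range(n_feat):
            if j in chosen:
                continue
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, c])[0, 1]) for c in chosen])
            s = rel[j] - red                 # relevance minus redundancy
            if s > best_score:
                best_j, best_score = j, s
        chosen.append(best_j)
    return chosen

rng = np.random.default_rng(4)
a, b, e = rng.normal(size=(3, 500))
y = a + b
X = np.column_stack([a, a + 0.5 * e, b])     # relevant, redundant, complementary
print(sorted(mrmr_rank(X, y, 2)))            # [0, 2]
```

Note how the redundant copy of the first feature is skipped in favour of the complementary one, which is the whole point of the criterion.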