
    Ensemble Learning of Tissue Components for Prostate Histopathology Image Grading

    Ensemble learning is an effective machine learning approach that improves prediction performance by fusing several single-classifier models. In computer-aided diagnosis (CAD) systems, machine learning has become one of the dominant solutions for tissue image diagnosis and grading. One problem with a single classifier model that combines the features of multiple tissue components into dense feature vectors is overfitting. In this paper, an ensemble learning approach for the classification of multi-component tissue images is proposed. Prostate cancer Hematoxylin and Eosin (H&E) histopathology images from HUKM were used to test the proposed ensemble approach for diagnosis and Gleason grading. Experimental results on several prostate classification tasks, namely benign vs. Grade 3, benign vs. Grade 4, and Grade 3 vs. Grade 4, show that the proposed ensemble significantly outperforms the previous typical CAD approach and the naïve approach that directly combines the texture features of all tissue components into dense feature vectors for a single classifier.
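The abstract's core idea, training one classifier per tissue component and fusing their decisions rather than concatenating all component features into one dense vector, can be sketched as follows. This is a minimal illustration with hypothetical nearest-centroid base learners and majority-vote fusion; the paper's actual base classifiers and fusion rule may differ.

```python
import numpy as np

def nearest_centroid_fit(X, y):
    """Toy base learner: one centroid per class."""
    classes = np.unique(y)
    centroids = np.array([X[y == c].mean(axis=0) for c in classes])
    return classes, centroids

def nearest_centroid_predict(model, X):
    classes, centroids = model
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return classes[np.argmin(dists, axis=1)]

def ensemble_predict(models, component_features):
    """Fuse per-component classifiers by majority vote instead of
    concatenating all component features into one dense vector."""
    votes = np.stack([nearest_centroid_predict(m, X)
                      for m, X in zip(models, component_features)])
    fused = []
    for col in votes.T:                      # one column of votes per image
        vals, counts = np.unique(col, return_counts=True)
        fused.append(vals[np.argmax(counts)])
    return np.array(fused)

# toy data: two tissue "components", each with its own feature vector
rng = np.random.default_rng(0)
X1 = np.vstack([rng.normal(0, 0.1, (10, 3)), rng.normal(5, 0.1, (10, 3))])
X2 = np.vstack([rng.normal(0, 0.1, (10, 4)), rng.normal(5, 0.1, (10, 4))])
y = np.array([0] * 10 + [1] * 10)
models = [nearest_centroid_fit(X1, y), nearest_centroid_fit(X2, y)]
pred = ensemble_predict(models, [X1, X2])
```

Because each base classifier sees only one component's features, no single dense feature vector has to be fit, which is the overfitting risk the abstract points to.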

    A hierarchical classifier for multiclass prostate histopathology image gleason grading

    Automated classification of prostate histopathology images includes the identification of multiple classes, such as benign and cancerous (grades 3 & 4). To address the multiclass classification problem in prostate histopathology images, decomposition approaches such as one-versus-one (OVO) and one-versus-all (OVALL) are utilized. In these approaches, the multiclass problem is decomposed into numerous binary subtasks, which are addressed separately. However, OVALL introduces an artificial class imbalance, which degrades the classification performance, while in the case of OVO, the correlation between different classes is disregarded because the multiclass problem is broken into multiple independent binary problems. This paper proposes a new multiclass approach called the multi-level (hierarchical) learning architecture (MLA). It addresses the binary classification tasks within the framework of a hierarchical strategy, accounting for the interaction between the classes and for domain knowledge. The proposed approach relies on the ‘divide-and-conquer’ principle, dividing each binary task into two separate subtasks, strong and weak, based on the power of the samples in each binary task; the strong samples carry more information about the considered task and therefore drive the final prediction. Experimental results on prostate histopathological images illustrate that the MLA significantly outperforms the OVALL and OVO approaches when applied within the ensemble framework. The results also confirm the high efficiency of the ensemble framework with the MLA scheme in dealing with the multiclass classification problem.
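As a rough illustration of the hierarchical idea (not the paper's exact MLA, which additionally splits each binary task into strong and weak subtasks), a two-level scheme might first separate benign from cancerous tissue and only then discriminate grade 3 from grade 4. The threshold classifiers below are made-up stand-ins:

```python
import numpy as np

def hierarchical_predict(X, level1, level2):
    """Level 1 separates benign from cancerous tissue; only the
    cancerous samples are passed to level 2 for grade 3 vs. grade 4."""
    labels = np.empty(len(X), dtype=object)
    cancerous = level1(X)                  # boolean mask from level 1
    labels[~cancerous] = "benign"
    if cancerous.any():
        labels[cancerous] = level2(X[cancerous])
    return labels

# toy stand-ins for the two binary classifiers (thresholds are illustrative)
level1 = lambda X: X[:, 0] > 0.5
level2 = lambda X: np.where(X[:, 1] > 0.5, "grade4", "grade3")

X = np.array([[0.1, 0.9], [0.9, 0.2], [0.8, 0.8]])
pred = hierarchical_predict(X, level1, level2)
```

Unlike OVO/OVALL, each binary task here only ever sees the samples that are relevant to it, so no artificial imbalance is introduced and the class structure is respected.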

    Computer-Aided Cancer Diagnosis and Grading via Sparse Directional Image Representations

    Prostate cancer and breast cancer are the second leading cause of cancer death among males and females, respectively. If not diagnosed early, prostate and breast cancers can spread and metastasize to other organs and bones, making treatment impossible. Hence, early diagnosis of cancer is vital for patient survival. Histopathological evaluation of the tissue is used for cancer diagnosis. The tissue is taken during biopsy and stained using hematoxylin and eosin (H&E) stain. A pathologist then looks for abnormal changes in the tissue to diagnose and grade the cancer. This process can be time-consuming and subjective. A reliable and reproducible automatic cancer diagnosis method can greatly reduce the time required while producing more consistent results. The scope of this dissertation is developing computer vision and machine learning algorithms for automatic cancer diagnosis and grading with accuracy acceptable to expert pathologists. Automatic image classification relies on feature representation methods. In this dissertation we developed methods utilizing sparse directional multiscale transforms, specifically the shearlet transform, for medical image analysis. We particularly designed these computer vision-based algorithms and methods to work with H&E images and MRI images. Traditional signal processing methods (e.g. the Fourier transform, the wavelet transform) are not suitable for detecting carcinoma cells due to their lack of directional sensitivity. The shearlet transform, however, has an inherent directional sensitivity and multiscale framework that enable it to detect different edges in tissue images. We developed techniques for extracting holistic and local texture features from the histological and MRI images using the histogram and co-occurrence of shearlet coefficients, respectively. Then we combined these features with color and morphological features using the multiple kernel learning (MKL) algorithm and employed support vector machines (SVM) with MKL to classify the medical images. We further investigated the impact of deep neural networks in representing the medical images for cancer detection. The aforementioned engineered features have a few limitations: they lack generalizability because they are tailored to the specific texture and structure of the tissues, they are time-consuming and expensive to compute and require preprocessing, and it is sometimes difficult to extract discriminative features from the images. Feature learning techniques, on the other hand, use multiple processing layers and learn feature representations directly from the data. To address these issues, we developed a deep neural network containing multiple convolution, max-pooling, and fully connected layers, trained on the Red, Green, and Blue (RGB) images along with the magnitude and phase of the shearlet coefficients. We then developed a weighted decision fusion deep neural network that assigns weights to the output probabilities and updates those weights via backpropagation. The final decision was a weighted sum of the decisions from the RGB network and the magnitude and phase shearlet networks. We used the trained networks for classification of benign and malignant H&E images and for Gleason grading. Our experimental results show that the proposed methods based on feature engineering and feature learning outperform the state of the art, and are even near perfect (100%) for some databases in terms of classification accuracy, sensitivity, specificity, F1 score, and area under the curve (AUC); they are hence promising computer-based methods for cancer diagnosis and grading using images.
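The weighted decision fusion described above, a weighted sum of per-stream output probabilities with the weights themselves trained, can be sketched in simplified form. Here softmax-normalized weights are fitted by numerical-gradient descent on a cross-entropy loss, standing in for the dissertation's backpropagation; the stream probabilities are synthetic:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def fuse(stream_probs, weight_logits):
    """Weighted sum of per-stream class probabilities; softmax keeps
    the fusion weights positive and summing to one."""
    w = softmax(weight_logits)
    return sum(wi * p for wi, p in zip(w, stream_probs))

def ce_loss(stream_probs, y, weight_logits):
    fused = fuse(stream_probs, weight_logits)
    return -np.mean(np.log(fused[np.arange(len(y)), y] + 1e-12))

def fit_weights(stream_probs, y, steps=40, lr=1.0, eps=1e-5):
    """Numerical-gradient descent on the fusion weights (a stand-in
    for the backpropagation update used in the dissertation)."""
    logits = np.zeros(len(stream_probs))
    for _ in range(steps):
        grad = np.zeros_like(logits)
        for k in range(len(logits)):
            lp, lm = logits.copy(), logits.copy()
            lp[k] += eps
            lm[k] -= eps
            grad[k] = (ce_loss(stream_probs, y, lp)
                       - ce_loss(stream_probs, y, lm)) / (2 * eps)
        logits -= lr * grad
    return softmax(logits)

# stream A is accurate, stream B is uninformative: A should be upweighted
y = np.array([0, 1, 0, 1])
pA = np.array([[0.9, 0.1], [0.1, 0.9], [0.8, 0.2], [0.2, 0.8]])
pB = np.array([[0.4, 0.6], [0.6, 0.4], [0.5, 0.5], [0.5, 0.5]])
weights = fit_weights([pA, pB], y)
```

Training drives the fusion toward the more reliable stream, which is the mechanism that lets the RGB, magnitude, and phase networks contribute in proportion to their usefulness.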

    Curvelet-Based Texture Classification in Computerized Critical Gleason Grading of Prostate Cancer Histological Images

    Classical multi-resolution image processing using wavelets provides an efficient analysis of image characteristics represented in terms of pixel-based singularities, such as connected edge pixels of objects and texture elements given by pixel intensity statistics. The curvelet transform is a more recently developed approach based on curved singularities that provides a sparser representation for a variety of directional multi-resolution image processing tasks such as denoising and texture analysis. The objective of this research is to develop a multi-class classifier for the automated classification of Gleason patterns in prostate cancer histological images utilizing curvelet-based texture analysis. This problem of computer-aided recognition of four pattern classes between Gleason score 6 (primary Gleason grade 3 plus secondary Gleason grade 3) and Gleason score 8 (both primary and secondary grades 4) is of critical importance, affecting treatment decisions and patients’ quality of life. Multiple spatial samples within each histological image are examined through the curvelet transform. The significant curvelet coefficient at each location of an image patch is obtained by maximization over all curvelet orientations at that location; it represents the apparent curve-based singularity, such as a short edge segment, in the image structure. This sparser representation greatly reduces the redundancy in the original set of curvelet coefficients. Statistical texture features are then extracted from these curvelet coefficients at multiple scales. We have designed a 2-level, 4-class classification scheme that attempts to mimic the human expert’s decision process. It consists of two Gaussian-kernel support vector machines, one at each level, each incorporating a voting mechanism over the classifications of multiple windowed patches in an image to reach the final decision for that image. At level 1, the support vector machine with voting is trained to ascertain the classification of Gleason grade 3 versus grade 4, and thus Gleason score 6 versus score 8, assigning an image to one of the two classes on a unanimous vote, while mixed votes inside the margin between decision boundaries are assigned to a third class for consideration at level 2. The support vector machine at level 2, with supplemental features, is trained to classify an image patch as Gleason grade 3+4 or 4+3, and the majority decision from multiple patches consolidates the two-class discrimination of the image within Gleason score 7; otherwise, the image is assigned to an Indecision category. The developed tree classifier with voting over sampled image patches is distinct from traditional voting by multiple machines. With a database of TMA prostate histological images from the Urology/Pathology Laboratory of the Johns Hopkins Medical Center, the classifier using curvelet-based statistical texture features for recognition of the 4-class critical Gleason scores was successfully trained and tested, achieving a remarkable performance of 97.91% overall 4-class validation accuracy and 95.83% testing accuracy. These results motivate further testing and improvement toward a practical implementation.
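The two-level patch-voting logic described above can be sketched as two small decision rules, separate from the SVMs that produce the per-patch labels. This is a simplified reading of the abstract: the `min_majority` threshold is an illustrative assumption, not a value taken from the paper.

```python
from collections import Counter

def level1_decision(patch_preds):
    """Unanimous patch votes decide the image at level 1 (score 6 or 8);
    any mixture of votes defers the image to the level-2 classifier."""
    return patch_preds[0] if len(set(patch_preds)) == 1 else "level2"

def level2_decision(patch_preds, min_majority=0.6):
    """Majority vote over patches discriminates 3+4 from 4+3 within
    Gleason score 7; a weak majority falls into an Indecision category.
    (min_majority is an illustrative threshold, not from the paper.)"""
    label, n = Counter(patch_preds).most_common(1)[0]
    return label if n / len(patch_preds) >= min_majority else "indecision"

d1 = level1_decision(["score6", "score6", "score6"])   # unanimous
d2 = level1_decision(["score6", "score8", "score6"])   # mixed -> level 2
d3 = level2_decision(["3+4", "3+4", "4+3"])            # clear majority
d4 = level2_decision(["3+4", "4+3"])                   # tie -> indecision
```

Voting over patches of one image is what distinguishes this scheme from the traditional voting over multiple independent machines.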

    Automated classification of cancer tissues using multispectral imagery

    Automated classification of medical images for colorectal and prostate cancer diagnosis is a crucial tool for improving routine diagnostic decisions. Therefore, in the last few decades, there has been an increasing interest in refining and adapting machine learning algorithms to classify microscopic images of tumour biopsies. Recently, multispectral imagery has received significant interest from the research community due to the fast-growing development of high-performance computers. This thesis investigates novel algorithms for the automatic classification of colorectal and prostate cancer using multispectral imagery, in order to propose a system outperforming the state-of-the-art techniques in the field. To achieve this objective, several feature extraction methods based on image texture have been investigated, analysed and evaluated. A novel texture feature for multispectral images is also constructed as an adaptation of the local binary pattern texture feature, expanding the pixel neighbourhood into the spectral dimension. It has the advantage of capturing the multispectral information with a limited feature vector size, and it has demonstrated improved classification results compared with traditional texture features. To further enhance the system's performance, advanced classification schemes such as bag-of-features (to better capture local information) and stacked generalisation (to select the most discriminative texture features) are explored and evaluated. Finally, recent years have seen an accelerated and exponential rise of deep learning, boosted by advances in hardware, and more specifically in graphics processing units. Such models have demonstrated excellent results for supervised learning in multiple applications. This observation motivated the employment in this thesis of deep neural network architectures, namely convolutional neural networks. Experiments were also carried out to evaluate and compare the performance obtained with features extracted using convolutional neural networks with random initialisation against features extracted with models pre-trained on the ImageNet dataset. The analysis of the classification accuracy achieved with deep learning models reveals that the latter outperform the previously proposed texture extraction methods. In this thesis, the algorithms are assessed using two separate multiclass datasets: the first consists of prostate tumour multispectral images, and the second contains multispectral images of colorectal tumours. The colorectal dataset was acquired over a wide domain of the light spectrum, ranging from visible to infrared wavelengths, and was used to demonstrate the improved results produced using infrared as well as visible light.
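The multispectral extension of the local binary pattern, expanding the pixel neighbourhood into the spectral dimension, might be sketched as follows. This is a simplified variant with 4 spatial plus 2 spectral neighbours and band-edge clamping; the thesis's exact neighbourhood and encoding may differ.

```python
import numpy as np

def multispectral_lbp(cube):
    """cube: (bands, H, W). Each interior pixel is compared with its
    4 spatial neighbours in the same band plus the same pixel in the
    two spectrally adjacent bands (clamped at the band edges),
    yielding a 6-bit code; the code histogram is the texture feature."""
    B, H, W = cube.shape
    codes = np.zeros((B, H - 2, W - 2), dtype=int)
    for b in range(B):
        centre = cube[b, 1:-1, 1:-1]
        neighbours = [
            cube[b, :-2, 1:-1], cube[b, 2:, 1:-1],   # up, down
            cube[b, 1:-1, :-2], cube[b, 1:-1, 2:],   # left, right
            cube[max(b - 1, 0), 1:-1, 1:-1],         # previous band
            cube[min(b + 1, B - 1), 1:-1, 1:-1],     # next band
        ]
        for i, n in enumerate(neighbours):
            codes[b] |= (n >= centre).astype(int) << i
    # the 64-bin histogram is the compact feature vector the thesis
    # highlights: its size does not grow with the number of bands
    return np.bincount(codes.ravel(), minlength=64)

# a constant cube: every comparison succeeds, so every code is 0b111111
feat = multispectral_lbp(np.ones((3, 5, 5)))
```

Whatever the band count, the descriptor stays at 64 bins here, which is the "limited feature vector size" advantage the abstract mentions.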