    Pattern Recognition-Based Analysis of COPD in CT


    An automated system for the classification and segmentation of brain tumours in MRI images based on the modified grey level co-occurrence matrix

    The development of an automated system for the classification and segmentation of brain tumours in MRI scans remains challenging due to the high variability and complexity of brain tumours. Visual examination of MRI scans to diagnose brain tumours is the accepted standard; however, because of the large number of MRI slices produced for each patient, it is a slow, error-prone process. This study explores an automated system for the classification and segmentation of brain tumours in MRI scans based on texture feature extraction. The research investigates an appropriate technique for feature extraction and the development of a three-dimensional segmentation method, achieved by investigating and integrating several image processing methods related to texture features and the segmentation of MRI brain scans. First, the MRI brain scans were pre-processed by image enhancement, intensity normalization, background segmentation and correction of the mid-sagittal plane (MSP) of the brain for any skewness of the patient's head. Second, texture features were extracted using a modified grey level co-occurrence matrix (MGLCM) from T2-weighted (T2-w) MRI slices and classified into normal and abnormal using a multi-layer perceptron (MLP) neural network. The texture feature extraction method starts from the observation that the human brain is approximately symmetric about the MSP. The extracted features measure the degree of symmetry between the left and right hemispheres of the brain and are used to detect abnormalities. This enables clinicians to quickly rule out the scans of patients with normal brains and to focus on those with pathological features.
Finally, a bounding 3D-boxes based genetic algorithm (BBBGA) was used to identify the location of the brain tumour and segment it automatically using the three-dimensional active contour without edges (3DACWE) method. The research was validated on two datasets: a real dataset collected from the MRI Unit of Al-Kadhimiya Teaching Hospital in Iraq in 2014, and the standard benchmark multimodal brain tumour segmentation (BRATS 2013) dataset. The experimental results on both datasets demonstrated the efficacy of the proposed system in classifying and segmenting brain tumours in MRI scans. The achieved classification accuracies were 97.8% for the collected dataset and 98.6% for the standard dataset, while the segmentation Dice scores were 89% for the collected dataset and 89.3% for the standard dataset.
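
The hemispheric-symmetry idea behind the MGLCM features can be illustrated with a minimal sketch: compute an ordinary GLCM for each hemisphere of a slice (the right one mirrored) and compare Haralick-style statistics. This is an illustrative numpy-only approximation, not the thesis's MGLCM implementation; the function names and the choice of contrast and energy statistics are assumptions.

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Grey level co-occurrence matrix for one pixel offset, normalised."""
    # Quantise intensities into `levels` bins.
    q = np.minimum((img.astype(float) / (img.max() + 1e-9) * levels).astype(int),
                   levels - 1)
    P = np.zeros((levels, levels))
    h, w = q.shape
    for y in range(h - dy):
        for x in range(w - dx):
            P[q[y, x], q[y + dy, x + dx]] += 1
    return P / P.sum()

def symmetry_features(slice2d):
    """Compare GLCM contrast/energy of the left vs mirrored right hemisphere."""
    h, w = slice2d.shape
    left = slice2d[:, : w // 2]
    right = slice2d[:, w - w // 2:][:, ::-1]  # mirror about the mid-sagittal line
    feats = []
    for hemi in (left, right):
        P = glcm(hemi)
        i, j = np.indices(P.shape)
        feats.append([np.sum(P * (i - j) ** 2), np.sum(P ** 2)])  # contrast, energy
    l, r = np.asarray(feats)
    return np.abs(l - r)  # near-zero -> symmetric -> likely normal
```

Near-zero differences suggest a symmetric, likely normal slice; large differences flag the scan for closer inspection.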

    Multifractal techniques for analysis and classification of emphysema images

    This thesis proposes, develops and evaluates different multifractal methods for the detection, segmentation and classification of medical images. This is achieved by studying the structures of the image and extracting statistical self-similarity measures characterized by the Hölder exponent, and using them to develop texture features for segmentation and classification. The theoretical framework for fulfilling these goals is based on the efficient computation of fractal dimension, which has been explored and extended in this work. The thesis investigates different ways of computing the fractal dimension of digital images and validates the accuracy of each method on fractal images of known fractal dimension. The box counting and Higuchi methods are used for the estimation of fractal dimensions. A prototype system based on the Higuchi fractal dimension of computed tomography (CT) images is used to identify and detect regions of the image showing the presence of emphysema. The box counting method is also used to develop the multifractal spectrum and applied to detect and identify emphysema patterns. We propose a multifractal-based approach for the classification of emphysema patterns that calculates the local singularity coefficients of an image using four multifractal intensity measures. One of the primary statistical measures of self-similarity used in the processing of tissue images is the Hölder exponent (α-value), which represents the power law that the intensity distribution satisfies in local pixel neighbourhoods. The fractal dimension corresponding to each α-value gives a multifractal spectrum f(α) that was used as a feature descriptor for classification. A feature selection technique is introduced and implemented to extract the features that most increase the discriminating capability of the descriptors and maximise the classification accuracy of the emphysema patterns.
We propose to further improve the classification accuracy of emphysema CT patterns by combining the features extracted from the alpha-histograms with the multifractal descriptors to generate a new descriptor. The performance of the classifiers is measured using the error matrix and the area under the receiver operating characteristic curve (AUC). The results at this stage demonstrated that the proposed cascaded approach significantly improves classification accuracy. Another multifractal approach, based on direct determination, is investigated to demonstrate how multifractal characteristic parameters can be used for the identification of emphysema patterns in HRCT images. This further analysis reveals the multi-scale structures and characteristic properties of the emphysema images through the generalized dimensions. The results obtained confirm that this approach can also be used effectively for detecting and identifying emphysema patterns in CT images. Two new descriptors are proposed for accurate classification of emphysema patterns by hybrid concatenation of the local features extracted from local binary patterns (LBP) and the global features obtained from the multifractal images. The proposed combined feature descriptors of the LBP and f(α) produced very good performance, with an overall classification accuracy of 98%. These results outperform other state-of-the-art methods for emphysema pattern classification and demonstrate the discriminating power and robustness of the combined features for accurate classification of emphysema CT images. Overall, the experimental results show that multifractal analysis can be used effectively for the classification and detection of emphysema patterns in HRCT images.
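
As a concrete illustration of the box-counting step, the fractal dimension of a binary pattern can be estimated by counting occupied boxes at dyadic scales and fitting the log-log slope. This is a minimal sketch (assuming a square image whose side is a power of two), not the thesis's implementation; the multifractal spectrum generalises this by applying the same fit to local intensity measures.

```python
import numpy as np

def box_counting_dimension(mask):
    """Estimate the fractal dimension of a binary 2-D pattern by box counting.

    Counts occupied boxes at dyadic scales s and fits the slope of
    log N(s) against log(1/s).
    """
    mask = np.asarray(mask, bool)
    n = mask.shape[0]  # assumed square, side a power of two
    sizes, counts = [], []
    s = n
    while s >= 1:
        # Tile the image into (s x s) boxes and test each for occupancy.
        blocks = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(blocks.sum())
        s //= 2
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes, float)),
                          np.log(counts), 1)
    return slope
```

A filled square yields a dimension of 2 and a straight line yields 1, which is a quick sanity check before applying the estimator to segmented emphysema regions.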

    Computational methods for the analysis of functional 4D-CT chest images.

    Medical imaging is an important emerging technology that has been used intensively over the last few decades for disease diagnosis and monitoring, as well as for assessing treatment effectiveness. Medical images provide a very large amount of valuable information, too much to be fully exploited by radiologists and physicians. The design of computer-aided diagnostic (CAD) systems, which can serve as assistive tools for the medical community, is therefore of great importance. This dissertation deals with the development of a complete CAD system for patients with lung cancer, which remains the leading cause of cancer-related death in the USA: in 2014, there were approximately 224,210 new cases of lung cancer and 159,260 related deaths. The process begins with the detection of lung cancer through the diagnosis of lung nodules, a manifestation of lung cancer. These nodules are approximately spherical regions of primarily high-density tissue that are visible in computed tomography (CT) images of the lung. Treatment of lung cancer is complex: nearly 70% of lung cancer patients require radiation therapy as part of their treatment. Radiation-induced lung injury is a limiting toxicity that may decrease cure rates and increase treatment-related morbidity and mortality. Finding ways to accurately detect lung injury at an early stage, and hence prevent it, would therefore have significant positive consequences for lung cancer patients. The ultimate goal of this dissertation is to develop a clinically usable CAD system that can improve the sensitivity and specificity of early detection of radiation-induced lung injury, based on the hypothesis that irradiated lung tissue may be damaged and lose functionality as a side effect of radiation therapy.
This hypothesis has been validated by demonstrating that automatic segmentation of the lung regions, followed by registration of consecutive respiratory phases to estimate their elasticity, ventilation and texture features, provides discriminatory descriptors that can be used for early detection of radiation-induced lung injury. The proposed methodologies lead to novel indexes for distinguishing normal/healthy and injured lung tissues in clinical decision-making. To achieve this goal, a CAD system for accurate detection of radiation-induced lung injury has been developed from three basic components: lung field segmentation, lung registration, and feature extraction with tissue classification. The dissertation starts with an exploration of the available medical imaging modalities to present the importance of medical imaging in today's clinical applications. Secondly, the methodologies, challenges and limitations of recent CAD systems for lung cancer detection are covered. This is followed by an accurate segmentation methodology for the lung parenchyma, with a focus on pathological lungs, to extract the volume of interest (VOI) to be analyzed for potential lung injuries stemming from the radiation therapy. After segmentation of the VOI, a lung registration framework is introduced to perform the crucial step of co-aligning the intra-patient scans. This step eliminates the effects of orientation differences, motion, breathing, heartbeat and differences in scanning parameters, so that the functionality features of the lung fields can be extracted accurately. The developed registration framework also helps in the evaluation and gated control of the radiotherapy through motion estimation analysis before and after the therapy dose.
Finally, the detection of radiation-induced lung injury is introduced, which combines the previous two medical image processing and analysis steps with a feature estimation and classification step. This framework estimates and combines both texture and functional features. The texture features are modeled using a novel 7th-order Markov-Gibbs random field (MGRF) model that accurately models the texture of healthy and injured lung tissues by simultaneously accounting for both vertical and horizontal relative dependencies between voxel-wise signals. The functionality features are computed from the deformation fields, obtained from the 4D-CT lung registration, that map lung voxels between successive CT scans in the respiratory cycle. These features describe the ventilation (air flow rate) of the lung tissue using the Jacobian of the deformation field, and the tissue's elasticity using the strain components calculated from the gradient of the deformation field. Finally, these features are combined in the classification model to detect the injured parts of the lung at an early stage and enable earlier intervention.
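
The ventilation measure described above is, in essence, the Jacobian determinant of the deformation that registration recovers between respiratory phases. The following is a hedged numpy sketch, not the dissertation's code; the displacement-field layout is an assumption.

```python
import numpy as np

def jacobian_ventilation(disp):
    """Voxel-wise Jacobian determinant of a 3-D displacement field.

    disp has shape (3, Z, Y, X): components (dz, dy, dx) of the mapping
    between successive respiratory phases.  det F > 1 indicates local
    expansion (air inflow); det F < 1 indicates local contraction.
    """
    # d(disp_c)/d(axis_a) for every component c and axis a
    grads = np.array([np.gradient(disp[c], axis=(0, 1, 2)) for c in range(3)])
    # Deformation gradient F = I + du/dx, assembled voxel-wise.
    F = np.transpose(grads, (2, 3, 4, 0, 1)) + np.eye(3)
    # Elasticity would come from the strain tensor 0.5 * (F^T F - I).
    return np.linalg.det(F)
```

For a uniform 10% expansion the determinant is 1.1³ ≈ 1.331 everywhere, which is a convenient analytic check of the implementation.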

    Machine Intelligence for Advanced Medical Data Analysis: Manifold Learning Approach

    In the current work, linear and non-linear manifold learning techniques, specifically Principal Component Analysis (PCA) and Laplacian Eigenmaps, are studied in detail, and their applications in medical image and shape analysis are investigated. In the first contribution, a manifold learning-based multi-modal image registration technique is developed, which produces a unified intensity system through intensity transformation between the reference and sensed images. The transformation eliminates intensity variations in multi-modal medical scans and hence allows well-studied mono-modal registration techniques to be employed. The method can be used for registering multi-modal images with full and partial data. Next, a manifold learning-based scale-invariant global shape descriptor is introduced. The proposed descriptor benefits from the capability of Laplacian Eigenmaps in dealing with high-dimensional data by introducing an exponential weighting scheme. It eliminates the limitations of the well-known cotangent weighting scheme, namely its dependency on a triangular mesh representation and on high intra-class quality of the 3D models. Finally, a novel descriptive model for the diagnostic classification of pulmonary nodules is presented. The model exploits structural differences between benign and malignant nodules for automatic and accurate prediction for a candidate nodule. It automatically extracts concise and discriminative features from the 3D surface structure of a nodule, using the spectral features studied in the previous work combined with a point cloud-based deep learning network. Extensive experiments have shown that the proposed algorithms based on manifold learning outperform several state-of-the-art methods.
Advanced computational techniques combining manifold learning and deep networks can play a vital role in effective healthcare delivery by providing a framework for several fundamental tasks in image and shape processing, namely registration, classification, and detection of features of interest.
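
A minimal sketch of the Laplacian Eigenmaps step with a heat-kernel (exponential) weighting scheme, as opposed to cotangent weights, is shown below. It is illustrative only; the k-NN graph construction and the median-based kernel scale are assumptions rather than the thesis's exact choices.

```python
import numpy as np

def laplacian_eigenmap(X, dim=2, k=6, t=None):
    """Non-linear embedding of the rows of X via Laplacian Eigenmaps.

    Builds a k-NN graph with heat-kernel weights w_ij = exp(-||xi-xj||^2 / t)
    and embeds with the eigenvectors of the normalised graph Laplacian
    associated with the smallest non-zero eigenvalues.
    """
    n = len(X)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if t is None:
        t = np.median(d2[d2 > 0])  # common heuristic for the kernel scale
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1 : k + 1]  # skip the point itself
        W[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    W = np.maximum(W, W.T)  # symmetrise the graph
    deg = W.sum(1)
    Dinv = np.diag(1.0 / np.sqrt(deg))
    L_sym = np.eye(n) - Dinv @ W @ Dinv  # normalised Laplacian
    vals, vecs = np.linalg.eigh(L_sym)
    # Drop the trivial constant eigenvector; keep the next `dim`.
    return Dinv @ vecs[:, 1 : dim + 1]
```

The same machinery applies whether the rows of X are image patches, shape samples, or nodule surface points.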

    Advances in video motion analysis research for mature and emerging application areas


    Investigation of intra-tumour heterogeneity to identify texture features to characterise and quantify neoplastic lesions on imaging

    The aim of this work was to further our knowledge of using imaging data to discover image-derived biomarkers and other information about the imaged tumour. Using scans obtained from multiple centres to discover and validate the models has advanced earlier research and provided a platform for larger prospective multi-centre studies. This work consists of two major studies, which are described separately. STUDY 1: NSCLC. Purpose: The aim of this multi-centre study was to discover and validate radiomics classifiers as image-derived biomarkers for risk stratification of non-small-cell lung cancer (NSCLC). Patients and methods: Pre-therapy PET scans from 358 Stage I-III NSCLC patients scheduled for radical radiotherapy/chemoradiotherapy, acquired between October 2008 and December 2013, were included in this seven-institution study. Using a semiautomatic threshold method to segment the primary tumours, radiomics predictive classifiers were derived from a training set of 133 scans using TexLAB v2. Least absolute shrinkage and selection operator (LASSO) regression analysis allowed data dimension reduction and radiomics feature vector (FV) discovery. Multivariable analysis was performed to establish the relationship between FV, stage and overall survival (OS). Performance of the optimal FV was tested in an independent validation set of 204 patients and in a further independent set of 21 patients (TESTI). Results: Of the 358 patients, 249 died within the follow-up period [median 22 (range 0-85) months]. From each primary tumour, 665 three-dimensional radiomics features were extracted at each of seven grey levels. The most predictive feature vector discovered (FVX) was independent of known prognostic factors, such as stage and tumour volume, and, of interest to multi-centre studies, invariant to the type of PET/CT manufacturer. Using the median cut-off, FVX predicted a 14-month survival difference in the validation cohort (N = 204, p = 0.00465; HR = 1.61, 95% CI 1.16-2.24).
In the TESTI cohort, a smaller cohort that presented with unusually poor survival of stage I cancers, FVX correctly indicated a lack of survival difference (N = 21, p = 0.501). In contrast to the radiomics classifier, clinically routine PET variables including SUVmax, SUVmean and SUVpeak lacked any prognostic information. Conclusion: PET-based radiomics classifiers derived from routine pre-treatment imaging possess intrinsic prognostic information for risk stratification of NSCLC patients to radiotherapy/chemoradiotherapy. STUDY 2: Ovarian Cancer. Purpose: The 5-year survival of epithelial ovarian cancer (EOC) is approximately 35-40%, prompting the need to develop additional methods, such as biomarkers, for personalised treatment. Patients and methods: 657 texture features were extracted from the CT scans of 364 untreated EOC patients. A 4-texture-feature 'Radiomic Prognostic Vector (RPV)' was developed using machine learning methods on the training set. Results: The RPV was able to identify the 5% of patients with the worst prognosis, significantly improving on established prognostic methods, and was further validated in two independent multi-centre cohorts. In addition, genetic, transcriptomic and proteomic analysis of two independent datasets demonstrated that stromal and DNA damage response pathways are activated in RPV-stratified tumours. Conclusion: RPV could be used to guide personalised therapy of EOC. Overall, the two large datasets of different imaging modalities have increased our knowledge of texture analysis, improving the models currently available and identifying further areas in which to implement these tools in the clinical setting.
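
The LASSO feature-selection and median cut-off stratification steps can be sketched in a few lines of numpy. This is a generic illustration (an ISTA solver on made-up data), not the TexLAB pipeline.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.1, n_iter=2000):
    """LASSO by iterative soft thresholding (ISTA).

    Minimises 0.5/n * ||y - Xw||^2 + lam * ||w||_1.  Features whose
    weight is driven to exactly zero drop out of the feature vector.
    """
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1 / Lipschitz constant of the gradient
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n
        w = soft_threshold(w - step * grad, step * lam)
    return w

def median_stratify(scores):
    """High-risk / low-risk split at the median score (a median cut-off
    like the one applied to FVX above)."""
    return scores > np.median(scores)
```

On synthetic data with two informative features out of ten, the L1 penalty recovers exactly those two, which is the dimension-reduction behaviour the study relies on.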

    Classification of Medical Data Based On Sparse Representation Using Dictionary Learning

    Due to the increase in the sources of image acquisition and in storage capacity, the search for relevant information in large medical image databases has become more challenging. Classification of medical data into different categories is an important task that enables efficient cataloging and retrieval of large image collections. The medical image classification systems available today classify medical images based on modality, body part, disease or orientation. Recent work in this direction seeks to use the semantics of medical data to achieve better classification. However, representing semantics is a challenging task, and sparse representation is explored in this thesis for that purpose.

    Texture and Colour in Image Analysis

    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. The volume also features benchmarks, comparative evaluations and reviews.