5 research outputs found

    Local object patterns for tissue image representation and cancer classification

    Ankara: The Department of Computer Engineering and the Graduate School of Engineering and Science of Bilkent University, 2013. Thesis (Master's) -- Bilkent University, 2013. Includes bibliographical references.
    Histopathological examination of a tissue is the routine practice for the diagnosis and grading of cancer. However, this examination is subjective, since it requires the visual interpretation of a pathologist, which mainly depends on his/her experience and expertise. To reduce this subjectivity, automated cancer diagnosis and grading systems have been proposed that represent a tissue image with quantitative features and use these features to classify and grade the tissue. In this thesis, we present a new approach for the effective representation and classification of histopathological tissue images. In this approach, we propose to decompose a tissue image into its histological components and introduce a set of new texture descriptors, which we call local object patterns, on these components to model their composition within a tissue. We define these descriptors using the idea of local binary patterns. However, we define our local object pattern descriptors at the component level to quantify a component, as opposed to pixel-level local binary patterns, which quantify a pixel by constructing a binary string based on the relative intensities of its neighbors. To this end, we specify neighborhoods with different locality ranges and encode the spatial arrangements of the components within the specified local neighborhoods by generating strings. We then extract our texture descriptors from these strings to characterize histological components and construct the bag-of-words representation of an image from the characterized components. In this thesis, we use two approaches for the selection of the components: the first approach uses all components to construct a bag-of-words representation, whereas the second uses graph walking to select multiple subsets of the components and constructs multiple bag-of-words representations from these subsets. Working with microscopic images of histopathological colon tissues, our experiments show that the proposed component-level texture descriptors lead to higher classification accuracies than previous textural approaches.
    Olgun, Gülden. M.S.
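    As a rough illustration of the component-level encoding summarised in this abstract (not the thesis's exact algorithm), the sketch below builds a local-binary-pattern-style code for each component by comparing its histological type with those of its k nearest neighbouring components, then pools the codes into a bag-of-words histogram. The neighbourhood size, the comparison rule, and the component attributes are assumptions made for the example.

```python
# Hedged sketch: component-level "local object pattern" style encoding.
# The neighborhood definition, comparison rule, and histogram pooling are
# illustrative assumptions, not the thesis's exact formulation.
import numpy as np
from scipy.spatial import cKDTree
from collections import Counter

def local_object_codes(centroids, labels, k=8):
    """Encode each component by comparing its label with its k nearest neighbors."""
    tree = cKDTree(centroids)
    codes = []
    for i, (c, lab) in enumerate(zip(centroids, labels)):
        # query k+1 points because the nearest point is the component itself
        _, idx = tree.query(c, k=k + 1)
        neighbors = [j for j in idx if j != i][:k]
        # binary string: 1 if the neighbor has the same histological type
        # as the center component, 0 otherwise
        bits = "".join("1" if labels[j] == lab else "0" for j in neighbors)
        codes.append(int(bits, 2))
    return codes

def bag_of_words(codes, vocabulary_size=256):
    """Pool per-component codes into a normalized histogram (bag of words)."""
    hist = np.zeros(vocabulary_size)
    for code, count in Counter(codes).items():
        hist[code % vocabulary_size] = count
    return hist / max(hist.sum(), 1.0)

# toy usage: 200 components with random positions and 3 histological types
rng = np.random.default_rng(0)
centroids = rng.random((200, 2))
labels = rng.integers(0, 3, size=200)
image_vector = bag_of_words(local_object_codes(centroids, labels))
```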

    Investigation of intra-tumour heterogeneity to identify texture features to characterise and quantify neoplastic lesions on imaging

    The aim of this work was to further our knowledge of using imaging data to discover image-derived biomarkers and other information about the imaged tumour. Using scans obtained from multiple centres to discover and validate the models has advanced earlier research and provided a platform for further, larger prospective multi-centre studies. This work consists of two major studies, which are described separately.
    STUDY 1: NSCLC. Purpose: The aim of this multi-centre study was to discover and validate radiomics classifiers as image-derived biomarkers for risk stratification of non-small-cell lung cancer (NSCLC). Patients and methods: Pre-therapy PET scans from 358 Stage I–III NSCLC patients scheduled for radical radiotherapy/chemoradiotherapy, acquired between October 2008 and December 2013, were included in this seven-institution study. Using a semiautomatic threshold method to segment the primary tumours, radiomics predictive classifiers were derived from a training set of 133 scans using TexLAB v2. Least absolute shrinkage and selection operator (LASSO) regression analysis allowed data dimension reduction and radiomics feature vector (FV) discovery. Multivariable analysis was performed to establish the relationship between FV, stage and overall survival (OS). Performance of the optimal FV was tested in an independent validation set of 204 patients, and a further independent set of 21 patients (TESTI). Results: Of 358 patients, 249 died within the follow-up period [median 22 (range 0–85) months]. From each primary tumour, 665 three-dimensional radiomics features were extracted at each of seven gray levels. The most predictive feature vector discovered (FVX) was independent of known prognostic factors, such as stage and tumour volume, and, of interest to multi-centre studies, invariant to the type of PET/CT manufacturer. Using the median cut-off, FVX predicted a 14-month survival difference in the validation cohort (N = 204, p = 0.00465; HR = 1.61, 95% CI 1.16–2.24). In the TESTI cohort, a smaller cohort that presented with unusually poor survival of stage I cancers, FVX correctly indicated a lack of survival difference (N = 21, p = 0.501). In contrast to the radiomics classifier, clinically routine PET variables including SUVmax, SUVmean and SUVpeak lacked any prognostic information. Conclusion: PET-based radiomics classifiers derived from routine pre-treatment imaging possess intrinsic prognostic information for risk stratification of NSCLC patients to radiotherapy/chemoradiotherapy.
    STUDY 2: Ovarian Cancer. Purpose: The 5-year survival of epithelial ovarian cancer (EOC) is approximately 35-40%, prompting the need to develop additional methods, such as biomarkers, for personalised treatment. Patients and methods: 657 texture features were extracted from the CT scans of 364 untreated EOC patients. A 4-texture-feature 'Radiomic Prognostic Vector' (RPV) was developed using machine learning methods on the training set. Results: The RPV was able to identify the 5% of patients with the worst prognosis, significantly improving on established prognostic methods, and was further validated in two independent, multi-centre cohorts. In addition, genetic, transcriptomic and proteomic analysis of two independent datasets demonstrated that stromal and DNA damage response pathways are activated in RPV-stratified tumours. Conclusion: RPV could be used to guide personalised therapy of EOC.
    Overall, the two large datasets of different imaging modalities have increased our knowledge of texture analysis, improving the models currently available and providing more areas in which to implement these tools in the clinical setting.
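    The sketch below illustrates the general radiomics workflow outlined in this abstract: LASSO-based feature reduction, a linear risk score, and a median cut-off for survival stratification. It is a generic illustration under assumed data layouts, not the TexLAB/RPV pipeline itself; it assumes the lifelines package for the logrank comparison, and it regresses survival time directly, which ignores censoring (a penalised Cox model would be more appropriate in practice).

```python
# Hedged sketch of a generic radiomics workflow: LASSO feature selection,
# a linear risk score, and a median-split survival comparison.
# Feature names, data layout, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.preprocessing import StandardScaler
from lifelines.statistics import logrank_test

def fit_feature_vector(features, survival_months):
    """Select a sparse feature vector by regressing survival time with LASSO.

    Simplification: plain LASSO on survival time ignores censoring; a
    penalised Cox model would be more appropriate in practice.
    """
    scaler = StandardScaler().fit(features)
    lasso = LassoCV(cv=5).fit(scaler.transform(features), survival_months)
    selected = np.flatnonzero(lasso.coef_)  # indices of retained features
    return scaler, lasso, selected

def risk_scores(scaler, lasso, features):
    """Linear risk score (lower predicted survival = higher risk)."""
    return -lasso.predict(scaler.transform(features))

def median_split_logrank(scores, time, event):
    """Stratify a cohort at the median risk score and compare survival curves."""
    high = scores >= np.median(scores)
    result = logrank_test(time[high], time[~high],
                          event_observed_A=event[high],
                          event_observed_B=event[~high])
    return result.p_value

# toy usage with random data standing in for training/validation cohorts
rng = np.random.default_rng(1)
X_train, t_train = rng.random((133, 50)), rng.exponential(22, 133)
X_valid, t_valid = rng.random((204, 50)), rng.exponential(22, 204)
e_valid = rng.integers(0, 2, 204).astype(bool)
scaler, lasso, selected = fit_feature_vector(X_train, t_train)
p = median_split_logrank(risk_scores(scaler, lasso, X_valid), t_valid, e_valid)
```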

    An automatic system for classification of breast cancer lesions in ultrasound images

    Breast cancer is the most common of all cancers and the second most deadly cancer in women in developed countries. Mammography and ultrasound imaging are the standard techniques used in cancer screening. Mammography is widely used as the primary screening tool; however, it is an invasive technique because of the radiation used. Ultrasound appears to be good at picking up many cancers missed by mammography. In addition, ultrasound is non-invasive, as no radiation is used, as well as portable and versatile. However, ultrasound images usually have poor quality because of multiplicative speckle noise that results in artifacts. Because of this noise, segmentation of suspected areas in ultrasound images is a challenging task that remains an open problem despite many years of research. In this research, a new method for the automatic detection of suspected breast cancer lesions in ultrasound is proposed. In this fully automated method, new de-noising and segmentation techniques are introduced, and a high-accuracy classifier using a combination of morphological and textural features is employed. We use a combination of fuzzy logic and compounding to de-noise ultrasound images and reduce shadows. We introduce a new method to identify the seed points and then use a region-growing method to perform segmentation. For preliminary classification we use three classifiers (ANN, AdaBoost, FSVM), and we then use majority voting to obtain the final result. We demonstrate that our automated system performs better than other state-of-the-art systems. On our database containing ultrasound images for 80 patients, we reached an accuracy of 98.75%, versus the ABUS method with 88.75% accuracy and the Hybrid Filtering method with 92.50% accuracy. Future work will involve a larger dataset of ultrasound images, and we will extend our system to handle colour ultrasound images. We will also study the impact of a larger number of texture and morphological features, as well as of the weighting scheme, on the performance of our classifier. We will also develop an automated method to identify the "wall thickness" of a mass in breast ultrasound images; presently, the wall thickness is extracted manually with the help of a physician.
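    To illustrate the majority-voting stage described in this abstract, the sketch below combines three classifiers with a hard vote over extracted lesion features. It is a generic illustration, not the authors' system: scikit-learn's MLPClassifier stands in for the ANN and a standard SVC stands in for the fuzzy SVM (FSVM), which scikit-learn does not provide, and the feature matrix is a placeholder for the morphological and textural descriptors.

```python
# Hedged sketch of the majority-voting classification stage.
# MLPClassifier stands in for the ANN and SVC for the fuzzy SVM (FSVM);
# the feature matrix is assumed to hold morphological + textural descriptors
# extracted from the segmented lesions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# hard (majority) vote over the three preliminary classifiers
ensemble = VotingClassifier(
    estimators=[
        ("ann", make_pipeline(StandardScaler(),
                              MLPClassifier(max_iter=1000, random_state=0))),
        ("adaboost", AdaBoostClassifier(random_state=0)),
        ("svm", make_pipeline(StandardScaler(), SVC(random_state=0))),
    ],
    voting="hard",
)

# toy usage: random features standing in for the morphological/textural vectors
rng = np.random.default_rng(0)
X = rng.random((80, 12))            # 80 lesions, 12 assumed features
y = rng.integers(0, 2, 80)          # 0 = benign, 1 = malignant
scores = cross_val_score(ensemble, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.3f}")
```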