
    Exploring variability in medical imaging

    Although recent successes of deep learning and novel machine learning techniques have improved the performance of classification and (anomaly) detection in computer vision problems, the application of these methods in medical imaging pipelines remains a very challenging task. One of the main reasons for this is the amount of variability that is encountered and encapsulated in human anatomy and subsequently reflected in medical images. This fundamental factor impacts most stages in modern medical image processing pipelines. The variability of human anatomy makes it virtually impossible to build large datasets for each disease with labels and annotations for fully supervised machine learning. An efficient way to cope with this is to learn only from normal samples, since such data are much easier to collect. A case study of such an automatic anomaly detection system based on normative learning is presented in this work. We present a framework for detecting fetal cardiac anomalies during ultrasound screening using generative models trained only on normal/healthy subjects. However, despite significant improvements in automatic abnormality detection systems, clinical routine continues to rely exclusively on overburdened medical experts to diagnose and localise abnormalities. Integrating human expert knowledge into the medical imaging processing pipeline entails uncertainty that is mainly correlated with inter-observer variability. From the perspective of building an automated medical imaging system, it is still an open question to what extent this kind of variability and the resulting uncertainty are introduced during the training of a model and how they affect the final performance of the task. Consequently, it is very important to explore the effect of inter-observer variability both on the reliable estimation of a model's uncertainty and on the model's performance in a specific machine learning task. A thorough investigation of this issue is presented in this work by leveraging automated estimates of machine learning model uncertainty, inter-observer variability and segmentation task performance in lung CT scan images. Finally, an overview of existing anomaly detection methods in medical imaging is presented. This state-of-the-art survey includes both conventional pattern recognition methods and deep learning based methods, and is one of the first literature surveys in this specific research area.
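As a hedged illustration of the normative-learning idea described above (training only on normal samples and flagging deviations), the sketch below trains a small autoencoder on healthy images and scores test images by reconstruction error. The architecture, image size and thresholding strategy are illustrative assumptions, not the fetal-ultrasound framework of the thesis.

```python
# Sketch of normative (anomaly) learning: an autoencoder fitted only on
# "normal" images; a high reconstruction error at test time flags a potential
# anomaly. Sizes and thresholds below are assumptions for illustration.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, img_pixels: int = 64 * 64, latent_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(img_pixels, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_on_normals(model, normal_batches, epochs=10, lr=1e-3):
    """Fit the autoencoder using only normal/healthy samples."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        for batch in normal_batches:          # batch: (N, img_pixels), values in [0, 1]
            opt.zero_grad()
            loss = loss_fn(model(batch), batch)
            loss.backward()
            opt.step()

def anomaly_scores(model, images):
    """Per-image reconstruction error; larger values suggest abnormality."""
    with torch.no_grad():
        recon = model(images)
        return ((recon - images) ** 2).mean(dim=1)

# Usage sketch: flag images whose score exceeds a percentile of the scores
# observed on held-out normal data (the threshold choice is an assumption).
```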

    IMAGE PROCESSING, SEGMENTATION AND MACHINE LEARNING MODELS TO CLASSIFY AND DELINEATE TUMOR VOLUMES TO SUPPORT MEDICAL DECISION

    Techniques for processing and analysing images and medical data have become central to translational applications and research in clinical and pre-clinical environments. These techniques improve diagnostic accuracy and allow efficient assessment of treatment response by means of quantitative biomarkers. In the era of personalized medicine, early and accurate prediction of therapy response in patients is still a critical issue. In radiation therapy planning, Magnetic Resonance Imaging (MRI) provides high quality detailed images and excellent soft-tissue contrast, while Computerized Tomography (CT) images provide attenuation maps and very good hard-tissue contrast. In this context, Positron Emission Tomography (PET) is a non-invasive imaging technique which has the advantage, over morphological imaging techniques, of providing functional information about the patient's disease. In the last few years, several criteria to assess therapy response in oncological patients have been proposed, ranging from anatomical to functional assessments. Changes in tumour size are not necessarily correlated with changes in tumour viability and outcome. In addition, morphological changes resulting from therapy occur more slowly than functional changes. Inclusion of PET images in radiotherapy protocols is desirable because PET is predictive of treatment response and provides crucial information to accurately target the oncological lesion and to escalate the radiation dose without increasing normal tissue injury. For this reason, PET may be used for improving the Planning Treatment Volume (PTV). Nevertheless, due to the nature of PET images (low spatial resolution, high noise and weak boundaries), metabolic image processing is a critical task. The aim of this Ph.D. thesis is to develop smart methodologies in the medical imaging field to address different kinds of problems related to medical images and data analysis, working closely with radiologist physicians. Various issues in the clinical environment have been addressed and improvements have been produced in several areas, such as organ and tissue segmentation and classification to delineate tumour volumes using machine learning techniques to support medical decisions. In particular, the following topics have been the object of this study:
• Technique for Crohn's Disease Classification using Kernel Support Vector Machines;
• Automatic Multi-Seed Detection For MR Breast Image Segmentation;
• Tissue Classification in PET Oncological Studies;
• KSVM-Based System for the Definition, Validation and Identification of the Incisional Hernia Recurrence Risk Factors;
• A smart and operator-independent system to delineate tumours in Positron Emission Tomography scans;
• Active Contour Algorithm with Discriminant Analysis for Delineating Tumors in Positron Emission Tomography;
• K-Nearest Neighbor driving Active Contours to Delineate Biological Tumor Volumes;
• Tissue Classification to Support Local Active Delineation of Brain Tumors;
• A fully automatic system for Positron Emission Tomography study segmentation.
This work has been developed in collaboration with the medical staff and colleagues at:
• Dipartimento di Biopatologia e Biotecnologie Mediche e Forensi (DIBIMED), University of Palermo;
• Cannizzaro Hospital of Catania;
• Istituto di Bioimmagini e Fisiologia Molecolare (IBFM), Centro Nazionale delle Ricerche (CNR) of Cefalù;
• School of Electrical and Computer Engineering at Georgia Institute of Technology.
The proposed contributions have produced scientific publications in indexed computer science and medical journals and conferences. They are very useful for PET and MRI image segmentation and may be used daily as Medical Decision Support Systems to enhance the current methodology performed by healthcare operators in radiotherapy treatments. Future developments of this research concern the integration of data acquired by image analysis with the managing and processing of big data coming from a wide range of heterogeneous sources.
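Several of the topics listed above rely on kernel support vector machine (KSVM) classification of hand-crafted features. The following sketch shows a generic RBF-kernel SVM pipeline of that kind with scikit-learn; the feature matrix and labels are synthetic placeholders, not the thesis' actual data, feature sets or validation protocol.

```python
# Generic KSVM tissue/lesion classifier sketch with scikit-learn.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))        # e.g. intensity/texture features per ROI (synthetic)
y = rng.integers(0, 2, size=200)      # e.g. lesion vs. healthy tissue labels (synthetic)

# RBF-kernel SVM with feature standardisation, evaluated by cross-validation.
ksvm = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(ksvm, X, y, cv=5)
print(f"5-fold accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```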

    Histopathological image analysis: a review

    Over the past decade, dramatic increases in computational power and improvements in image analysis algorithms have allowed the development of powerful computer-assisted analytical approaches to radiological data. With the recent advent of whole slide digital scanners, tissue histopathology slides can now be digitized and stored in digital image form. Consequently, digitized tissue histopathology has now become amenable to the application of computerized image analysis and machine learning techniques. Analogous to the role of computer-aided diagnosis (CAD) algorithms in medical imaging to complement the opinion of a radiologist, CAD algorithms have begun to be developed for disease detection, diagnosis, and prognosis prediction to complement the opinion of the pathologist. In this paper, we review the recent state-of-the-art CAD technology for digitized histopathology. This paper also briefly describes the development and application of novel image analysis technology for a few specific histopathology-related problems being pursued in the United States and Europe.

    Quantitative analysis with machine learning models for multi-parametric brain imaging data

    Gliomas are considered to be the most common primary adult malignant brain tumor. With the dramatic increases in computational power and improvements in image analysis algorithms, computer-aided medical image analysis has been introduced into clinical applications. Precise tumor grading and genotyping play an indispensable role in clinical diagnosis, treatment and prognosis. Glioma diagnostic procedures include histopathological imaging tests, molecular imaging scans and tumor grading. Pathologic review of tumor morphology in histologic sections is the traditional method for cancer classification and grading, yet human review has limitations that can result in low reproducibility and poor inter-observer agreement. Compared with histopathological images, Magnetic Resonance (MR) imaging presents different structural and functional features, which might serve as noninvasive surrogates for tumor genotypes. Therefore, computer-aided image analysis has been adopted in clinical applications and might partially overcome these shortcomings, owing to its capacity to quantitatively and reproducibly measure multilevel features from multi-parametric medical information. Imaging features obtained from a single modality do not fully represent the disease, so quantitative imaging features, including morphological, structural, cellular and molecular level features derived from multi-modality medical images, should be integrated into computer-aided medical image analysis. Differences in image quality between modalities are a further challenge in this field. In this thesis, we aim to integrate the quantitative imaging data obtained from multiple modalities into mathematical models of tumor response prediction to achieve additional insights into practical predictive value. Our major contributions in this thesis are:
1. To resolve image quality differences and observer dependence in histological image diagnosis, we proposed an automated machine-learning brain tumor-grading platform to investigate the contributions of multiple parameters from multimodal data, including imaging parameters or features from Whole Slide Images (WSI) and the proliferation marker KI-67. For each WSI, we extract both visual parameters, such as morphology parameters, and sub-visual parameters, including first-order and second-order features. A quantitative, interpretable machine learning approach (Local Interpretable Model-Agnostic Explanations) was used to measure the contribution of features for each single case. Most grading systems based on machine learning models are considered "black boxes," whereas with this system the clinically trusted reasoning can be revealed. The quantitative analysis and explanation may assist clinicians to better understand the disease and accordingly choose optimal treatments for improving clinical outcomes.
2. Building on the automated brain tumor-grading platform we propose, multimodal Magnetic Resonance Images (MRIs) were introduced into our research. A new imaging–tissue correlation based approach called RA-PA-Thomics was proposed to predict the IDH genotype. Inspired by the concept of image fusion, we integrate multimodal MRIs and the scans of histopathological images for indirect, fast, and cost-saving IDH genotyping. The proposed model has been verified by multiple evaluation criteria on the integrated data set and compared to results in the prior art. The experimental data set includes public data sets and image information from two hospitals. Experimental results indicate that the proposed model improves the accuracy of glioma grading and genotyping.
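A hedged sketch of the per-case interpretability step mentioned in contribution 1: a tabular classifier is explained with LIME (Local Interpretable Model-Agnostic Explanations), so the weight of each feature on a single prediction can be inspected. The model, feature names and data below are illustrative assumptions, not the WSI/KI-67 pipeline of the thesis.

```python
# Per-case feature attribution with LIME for a tumour-grading classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = [f"feature_{i}" for i in range(10)]   # e.g. morphology + texture (placeholders)
X_train = rng.normal(size=(300, 10))
y_train = rng.integers(0, 2, size=300)                # e.g. low- vs. high-grade (synthetic)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["low grade", "high grade"],
    mode="classification",
)
# Explain a single case: which features pushed the prediction, and by how much.
explanation = explainer.explain_instance(X_train[0], clf.predict_proba, num_features=5)
print(explanation.as_list())
```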

    Deep learning applications in the prostate cancer diagnostic pathway

    Prostate cancer (PCa) is the second most frequently diagnosed cancer in men worldwide and the fifth leading cause of cancer death in men, with an estimated 1.4 million new cases in 2020 and 375,000 deaths. The risk factors most strongly associated with PCa are advancing age, family history, race, and mutations of the BRCA genes. Since the aforementioned risk factors are not preventable, early and accurate diagnosis is a key objective of the PCa diagnostic pathway. In the UK, clinical guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to detect, score, and stage lesions that may correspond to clinically significant PCa (CSPCa), prior to confirmatory biopsy and histopathological grading. Computer-aided diagnosis (CAD) of PCa using artificial intelligence algorithms holds a currently unrealized potential to improve upon the diagnostic accuracy achievable by radiologist assessment of mpMRI, improve reporting consistency between radiologists, and reduce reporting time. In this thesis, we build and evaluate deep learning-based CAD systems for the PCa diagnostic pathway, which address gaps identified in the literature. First, we introduce a novel patient-level classification framework, PCF, which uses a stacked ensemble of convolutional neural networks (CNNs) and support vector machines (SVMs) to assign a probability of having CSPCa to patients, using mpMRI and clinical features. Second, we introduce AutoProstate, a deep-learning powered framework for automated PCa assessment and reporting; AutoProstate utilizes biparametric MRI and clinical data to populate an automatic diagnostic report containing segmentations of the whole prostate, prostatic zones, and candidate CSPCa lesions, as well as several derived characteristics that are clinically valuable. Finally, as automatic segmentation algorithms have not yet reached the desired robustness for clinical use, we introduce interactive click-based segmentation applications for the whole prostate and prostatic lesions, with potential uses in diagnosis, active surveillance progression monitoring, and treatment planning.
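To illustrate the patient-level stacking idea behind PCF in a hedged way, the sketch below treats per-patient CNN outputs as precomputed features, concatenates them with clinical variables, and trains an SVM meta-classifier to output a CSPCa probability. All inputs, feature names and dimensions are assumptions; the actual PCF architecture and training protocol are not reproduced here.

```python
# Patient-level stacking sketch: CNN branch probabilities + clinical features -> SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_patients = 400
cnn_probs = rng.uniform(size=(n_patients, 3))   # simulated outputs of 3 CNN branches (assumed)
clinical = rng.normal(size=(n_patients, 4))     # e.g. age, PSA, prostate volume, PSA density (assumed)
y = rng.integers(0, 2, size=n_patients)         # clinically significant PCa yes/no (synthetic)

X = np.hstack([cnn_probs, clinical])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

meta = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
meta.fit(X_tr, y_tr)
risk = meta.predict_proba(X_te)[:, 1]           # per-patient probability of CSPCa
print("Mean predicted risk on held-out patients:", risk.mean().round(3))
```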

    Brain tumor classification using the diffusion tensor image segmentation (D-SEG) technique.

    BACKGROUND: There is an increasing demand for noninvasive brain tumor biomarkers to guide surgery and subsequent oncotherapy. We present a novel whole-brain diffusion tensor imaging (DTI) segmentation (D-SEG) to delineate tumor volumes of interest (VOIs) for subsequent classification of tumor type. D-SEG uses isotropic (p) and anisotropic (q) components of the diffusion tensor to segment regions with similar diffusion characteristics. METHODS: DTI scans were acquired from 95 patients with low- and high-grade glioma, metastases, and meningioma and from 29 healthy subjects. D-SEG uses k-means clustering of the 2D (p,q) space to generate segments with different isotropic and anisotropic diffusion characteristics. RESULTS: Our results are visualized using a novel RGB color scheme incorporating p, q and T2-weighted information within each segment. The volumetric contribution of each segment to gray matter, white matter, and cerebrospinal fluid spaces was used to generate healthy tissue D-SEG spectra. Tumor VOIs were extracted using a semiautomated flood-filling technique and D-SEG spectra were computed within the VOI. Classification of tumor type using D-SEG spectra was performed using support vector machines. D-SEG was computationally fast and stable and delineated regions of healthy tissue from tumor and edema. D-SEG spectra were consistent for each tumor type, with constituent diffusion characteristics potentially reflecting regional differences in tissue microstructure. Support vector machines classified tumor type with an overall accuracy of 94.7%, providing better classification than previously reported. CONCLUSIONS: D-SEG presents a user-friendly, semiautomated biomarker that may provide a valuable adjunct in noninvasive brain tumor diagnosis and treatment planning
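The core D-SEG step described above, k-means clustering of voxels in the 2D (p, q) space followed by per-VOI segment spectra, can be sketched as follows. The synthetic (p, q) values, the number of clusters and the VOI selection are assumptions for illustration only.

```python
# D-SEG-style sketch: cluster voxels in (p, q) diffusion space with k-means,
# then express a volume of interest (VOI) as a "spectrum" of cluster fractions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# One row per voxel: isotropic (p) and anisotropic (q) diffusion components (synthetic).
pq_brain = rng.normal(loc=[1.0, 0.4], scale=[0.3, 0.15], size=(50_000, 2))

k = 16                                        # number of diffusion segments (assumed)
kmeans = KMeans(n_clusters=k, n_init=10, random_state=0).fit(pq_brain)

def dseg_spectrum(pq_voi: np.ndarray) -> np.ndarray:
    """Fraction of VOI voxels assigned to each (p, q) segment."""
    labels = kmeans.predict(pq_voi)
    return np.bincount(labels, minlength=k) / len(labels)

# A tumour VOI's spectrum could then be fed to an SVM classifier of tumour type.
spectrum = dseg_spectrum(pq_brain[:2_000])
print(spectrum.round(3))
```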

    Texture analysis of multimodal magnetic resonance images in support of diagnostic classification of childhood brain tumours

    Primary brain tumours are recognised as the most common form of solid tumours in children, with pilocytic astrocytoma, medulloblastoma and ependymoma being found most frequently. Despite their high mortality rate, early detection can be facilitated through the use of Magnetic Resonance Imaging (MRI), which is the preferred scanning technique for paediatric patients. MRI offers a variety of imaging sequences through structural and functional imaging, as well as providing complementary tissue information. However, visual examination of MR images provides limited ability to characterise distinct histological types of brain tumours. In order to improve diagnostic classification, we explore the use of a computer-aided system based on texture analysis (TA) methods. TA has been applied to conventional MRI but has been less commonly studied on diffusion MRI of brain-related pathology. Furthermore, the combination of textural features derived from both imaging approaches has not yet been widely studied. In this thesis, the aim of the research is to investigate TA based on multi-centre multimodal MRI, in order to provide more comprehensive information and to develop an automated processing framework for the classification of childhood brain tumours.
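As a hedged example of the texture analysis (TA) features referred to above, the sketch below computes grey-level co-occurrence matrix (GLCM) properties from a quantised MR region of interest using scikit-image; the image, quantisation level and property choices are illustrative assumptions rather than the thesis' actual feature set.

```python
# GLCM texture feature sketch for a quantised MR region of interest.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(0)
roi = rng.integers(0, 64, size=(128, 128), dtype=np.uint8)   # synthetic quantised MR ROI

glcm = graycomatrix(
    roi,
    distances=[1, 2],
    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
    levels=64,
    symmetric=True,
    normed=True,
)

# Classic Haralick-style properties, averaged over distances and angles.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```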