1,014 research outputs found

    A graph-based approach for the retrieval of multi-modality medical images

    Medical imaging has revolutionised modern medicine and is now an integral aspect of diagnosis and patient monitoring. The development of new imaging devices for a wide variety of clinical cases has spurred an increase in the data volume acquired in hospitals. These large data collections offer opportunities for search-based applications in evidence-based diagnosis, education, and biomedical research. However, conventional search methods that rely on manual annotations are not feasible at this data volume. Content-based image retrieval (CBIR) is an image search technique that uses automatically derived visual features as search criteria and has demonstrable clinical benefits. However, very few studies have investigated the CBIR of multi-modality medical images, which are having a major impact in healthcare, e.g., combined positron emission tomography and computed tomography (PET-CT) for cancer diagnosis. In this thesis, we propose a new graph-based method for the CBIR of multi-modality medical images. We derive a graph representation that emphasises the spatial relationships between modalities by structurally constraining the graph based on image features, e.g., the spatial proximity of tumours and organs. We also introduce a graph similarity calculation algorithm that prioritises the relationships between tumours and related organs. To enable effective human interpretation of retrieved multi-modality images, we also present a user interface that displays graph abstractions alongside complex multi-modality images. Our results demonstrated that our method achieved high precision when retrieving images based on tumour location within organs. User surveys evaluating our proposed UI design showed that it improved users' ability to interpret and understand the similarity between retrieved PET-CT images. The work in this thesis advances the state-of-the-art by enabling a novel approach for the retrieval of multi-modality medical images.
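As an illustration of the graph-based retrieval idea described above, the sketch below builds proximity-constrained graphs from organ and tumour centroids and scores similarity with tumour-organ relationships weighted above plain label overlap. The `Node`/`ImageGraph` classes, the proximity threshold, and the weighting scheme are illustrative assumptions, not the thesis's actual algorithm.

```python
from dataclasses import dataclass, field
from math import dist

@dataclass
class Node:
    label: str        # e.g. "lung" or "tumour" (hypothetical labels)
    centroid: tuple   # (x, y, z) in image coordinates

@dataclass
class ImageGraph:
    nodes: list
    edges: list = field(default_factory=list)  # (i, j) index pairs

    def connect_by_proximity(self, threshold):
        """Structurally constrain the graph: link nodes whose centroids
        lie closer than `threshold` (the spatial-proximity assumption)."""
        self.edges = [
            (i, j)
            for i in range(len(self.nodes))
            for j in range(i + 1, len(self.nodes))
            if dist(self.nodes[i].centroid, self.nodes[j].centroid) < threshold
        ]

def tumour_organ_relations(g):
    """Organs connected to a tumour by a graph edge."""
    rels = set()
    for i, j in g.edges:
        a, b = g.nodes[i], g.nodes[j]
        if "tumour" in (a.label, b.label):
            rels.add(b.label if a.label == "tumour" else a.label)
    return rels

def similarity(g1, g2, w_relation=2.0, w_node=1.0):
    """Toy similarity: tumour-organ relations are weighted above plain
    node-label overlap, echoing the prioritisation described above."""
    l1, l2 = {n.label for n in g1.nodes}, {n.label for n in g2.nodes}
    node_score = len(l1 & l2) / max(len(l1 | l2), 1)
    r1, r2 = tumour_organ_relations(g1), tumour_organ_relations(g2)
    rel_score = len(r1 & r2) / max(len(r1 | r2), 1)
    return (w_relation * rel_score + w_node * node_score) / (w_relation + w_node)
```

Two images whose tumours sit in the same organ thus score far higher than two images that merely contain the same organs.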

    Pattern Recognition-Based Analysis of COPD in CT


    Sparse feature learning for image analysis in segmentation, classification, and disease diagnosis.

    The success of machine learning algorithms generally depends on an intermediate data representation, called features, that disentangles the hidden factors of variation in the data. Moreover, machine learning models must generalize, reducing specificity to, or bias toward, the training dataset. Unsupervised feature learning is useful for taking advantage of the large amounts of unlabeled data available to capture these variations. However, the learned features must capture the variational patterns in the data space. In this dissertation, unsupervised feature learning with sparsity is investigated for sparse and local feature extraction, with applications to lung segmentation, interpretable deep models, and Alzheimer's disease classification. Nonnegative Matrix Factorization, autoencoders, and 3D convolutional autoencoders are used as architectures for unsupervised feature learning. They are investigated along with nonnegativity, sparsity, and part-based representation constraints for generalized and transferable feature extraction.
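A minimal sketch of one of the techniques named above: Nonnegative Matrix Factorization with a sparsity constraint, using the classic Lee-Seung multiplicative updates plus a simple L1 penalty on the encoding matrix. The penalty form and hyperparameters are assumptions, not the dissertation's exact formulation.

```python
import numpy as np

def sparse_nmf(V, k, sparsity=0.01, iters=300, seed=0):
    """Factorize a nonnegative matrix V ~ W @ H with an L1 sparsity
    penalty on H. Multiplicative updates keep W and H nonnegative,
    which yields the part-based representations mentioned above."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, k)) + 1e-4
    H = rng.random((k, m)) + 1e-4
    eps = 1e-9
    for _ in range(iters):
        # Standard update for W; sparsity-penalised update for H
        # (the L1 term adds a constant to the denominator).
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ V) / (W.T @ W @ H + sparsity + eps)
    return W, H
```

With `k` matching the true rank of a nonnegative product, the reconstruction error drops quickly while both factors stay elementwise nonnegative.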

    Deep Domain Adaptation Learning Framework for Associating Image Features to Tumour Gene Profile

    While medical imaging and general pathology are routine in cancer diagnosis, genetic sequencing is not always accessible due to the strong phenotypic and genetic heterogeneity of human cancers. Image-genomics integrates medical imaging and genetics to provide a complementary approach to optimise cancer diagnosis by associating tumour imaging traits with clinical data, and has demonstrated its potential in identifying imaging surrogates for tumour biomarkers. However, existing image-genomics research has focused on quantifying tumour visual traits according to human understanding, which may not be optimal across different cancer types. The challenge hence lies in extracting optimised imaging representations in an objective, data-driven manner. Such an approach requires large volumes of annotated image data that are difficult to acquire. We propose a deep domain adaptation learning framework for associating image features with tumour genetic information, exploiting the ability of domain adaptation techniques to learn relevant image features from close knowledge domains. Our framework leverages the current state-of-the-art in image object recognition to provide image features that encode subtle variations of tumour phenotypic characteristics via domain adaptation. The framework was evaluated against the current state-of-the-art in (i) tumour histopathology image classification and (ii) image-genomics association. It demonstrated improved tumour classification accuracy, as well as providing additional data-derived representations of tumour phenotypic characteristics that exhibit strong image-genomics associations. This thesis advances image-genomics research and indicates its potential to reveal additional imaging surrogates for genetic biomarkers, which may facilitate cancer diagnosis.
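Deep domain adaptation is hard to show compactly, but the core idea of adapting features across close knowledge domains can be illustrated with a classical shallow technique: CORAL (correlation alignment), which whitens source-domain features and re-colours them with the target-domain covariance. This is a stand-in for, not a reproduction of, the deep framework described above; the added mean matching and the regularisation constant are assumptions.

```python
import numpy as np

def _eig_pow(C, p):
    """Matrix power of a symmetric PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(C)
    vals = np.clip(vals, 1e-12, None)
    return (vecs * vals**p) @ vecs.T

def coral_align(source, target, reg=1e-6):
    """CORAL-style alignment: whiten the source features, then
    re-colour them with the target covariance, so that second-order
    statistics match across the two domains."""
    Cs = np.cov(source, rowvar=False) + reg * np.eye(source.shape[1])
    Ct = np.cov(target, rowvar=False) + reg * np.eye(target.shape[1])
    aligned = (source - source.mean(0)) @ _eig_pow(Cs, -0.5) @ _eig_pow(Ct, 0.5)
    return aligned + target.mean(0)  # also match the target mean
```

After alignment, a classifier trained on the (scarce, e.g. tumour-annotated) target domain can be applied to source-domain features whose distribution now matches it.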

    Image Registration to Map Endoscopic Video to Computed Tomography for Head and Neck Radiotherapy Patients

    The purpose of this work was to explore the feasibility of registering endoscopic video to radiotherapy treatment plans for patients with head and neck cancer without physical tracking of the endoscope during the examination. Endoscopy-CT registration would provide a clinical tool that could be used to enhance the treatment planning process and would allow for new methods to study the incidence of radiation-related toxicity. Endoscopic video frames were registered to CT by optimizing virtual endoscope placement to maximize the similarity between the frame and the virtual image. Virtual endoscopic images were rendered using a polygonal mesh created by segmenting the airways of the head and neck with a density threshold. The optical properties of the virtual endoscope were matched to a calibrated model of the real endoscope. A novel registration algorithm was developed that takes advantage of physical constraints on the endoscope to effectively search the airways of the head and neck for the desired virtual endoscope coordinates. This algorithm was tested on rigid phantoms with embedded point markers and protruding bolus material. In these tests, the median registration accuracy was 3.0 mm for point measurements and 3.5 mm for surface measurements. The algorithm was also tested on four endoscopic examinations of three patients, in which it achieved a median registration accuracy of 9.9 mm. The uncertainties caused by the non-rigid anatomy of the head and neck and differences in patient positioning between endoscopic examinations and CT scans were examined by taking repeated measurements after placing the virtual endoscope in surface meshes created from different CT scans. Non-rigid anatomy introduced errors on the order of 1-3 mm. Patient positioning had a larger impact, introducing errors on the order of 3.5-4.5 mm. Endoscopy-CT registration in the head and neck is possible, but large registration errors were found in patients. 
The uncertainty analyses suggest a lower limit of 3-5 mm on achievable registration accuracy. Further development is required to achieve an accuracy suitable for clinical use.
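The pose-search idea above, placing a virtual endoscope and maximizing the similarity between the real frame and the rendered view, can be caricatured in a few lines. Here the "rendering" is just a patch crop from a 2D array and the similarity is normalized cross-correlation; the real system renders from a polygonal airway mesh with a calibrated camera model, and the candidate set would be constrained by the physics of the endoscope.

```python
import numpy as np

def render(scene, x, y, size=16):
    """Toy stand-in for virtual endoscopy: crop a patch of the
    density volume at camera position (x, y)."""
    return scene[y:y + size, x:x + size]

def ncc(a, b):
    """Normalized cross-correlation between two equal-sized patches."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def register(frame, scene, candidates):
    """Search the candidate poses for the one whose rendering best
    matches the frame -- a coarse version of the constrained search
    described in the abstract."""
    return max(candidates, key=lambda p: ncc(frame, render(scene, *p)))
```

With an exhaustive grid of candidate poses, the pose that generated the frame is recovered exactly in this noiseless toy setting.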

    Assessing emphysema in CT scans of the lungs: Using machine learning, crowdsourcing and visual similarity


    Computed-Tomography (CT) Scan

    A computed tomography (CT) scan uses X-rays and a computer to create detailed images of the inside of the body. CT scanners measure X-ray attenuation through the body's tissues from many different angles by rotating both the X-ray tube and a row of X-ray detectors mounted in the gantry. These measurements are then processed by computer algorithms to reconstruct tomographic (cross-sectional) images. CT can produce detailed images of many structures inside the body, including the internal organs, blood vessels, and bones. This book presents a comprehensive overview of CT scanning. Chapters address such topics as instrumental basics, CT imaging in coronavirus disease, radiation and risk assessment in chest imaging, positron emission tomography (PET), and feature extraction.
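The measurement model described above can be made concrete with the Beer-Lambert law: the detector records an exponentially attenuated intensity along each ray, and taking the negative log recovers the line integral of the attenuation coefficient that reconstruction algorithms operate on. The voxel values and step size below are arbitrary illustrations.

```python
import math

I0 = 1.0  # unattenuated X-ray intensity at the source

def detected_intensity(mus, step=1.0):
    """Beer-Lambert law along one ray: each voxel contributes its
    linear attenuation coefficient mu over a path length `step`."""
    return I0 * math.exp(-sum(mu * step for mu in mus))

def projection_value(I):
    """What reconstruction works from: the negative log of the
    transmitted fraction equals the line integral of mu."""
    return -math.log(I / I0)
```

Collecting `projection_value` for many rays at many gantry angles yields the sinogram from which cross-sectional images are reconstructed.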

    Imaging intact human organs with local resolution of cellular structures using hierarchical phase-contrast tomography

    Imaging intact human organs from the organ to the cellular scale in three dimensions is a goal of biomedical imaging. To meet this challenge, we developed hierarchical phase-contrast tomography (HiP-CT), an X-ray phase propagation technique using the European Synchrotron Radiation Facility (ESRF)'s Extremely Brilliant Source (EBS). The spatial coherence of the ESRF-EBS, combined with our beamline equipment, sample preparation and scanning developments, enabled us to perform non-destructive, three-dimensional (3D) scans with hierarchically increasing resolution at any location in whole human organs. We applied HiP-CT to image five intact human organ types: brain, lung, heart, kidney and spleen. HiP-CT provided a structural overview of each whole organ followed by multiple higher-resolution volumes of interest, capturing organotypic functional units and certain individual specialized cells within intact human organs. We demonstrate the potential applications of HiP-CT through quantification and morphometry of glomeruli in an intact human kidney and identification of regional changes in the tissue architecture in a lung from a deceased donor with coronavirus disease 2019 (COVID-19).