
    Collaborative Artificial Intelligence Algorithms for Medical Imaging Applications

    In this dissertation, we propose novel machine learning algorithms for high-risk medical imaging applications. Specifically, we tackle current challenges in the radiology screening process and introduce cutting-edge methods for image-based diagnosis, detection, and segmentation. We incorporate expert knowledge through eye-tracking, making the whole process human-centered. This dissertation contributes to machine learning, computer vision, and medical imaging research by: 1) introducing a mathematical formulation of radiologists' level of attention and sparsifying their gaze data for better extraction and comparison of search patterns; 2) proposing novel local and global image analysis algorithms. Image-based diagnosis and pattern analysis are high-risk Artificial Intelligence applications. A standard radiology screening procedure includes detection, diagnosis, and measurement (often done with segmentation) of abnormalities. We hypothesize that a true collaboration is essential for a better control mechanism in such applications. In this regard, we propose to form a collaboration medium between radiologists and machine learning algorithms through eye-tracking. Further, we build a generic platform consisting of novel machine learning algorithms for each of these tasks. Our collaborative algorithm utilizes eye tracking and includes an attention model and gaze-pattern analysis based on data clustering and graph sparsification (a minimal clustering sketch follows below). We then present a semi-supervised multi-task network for local analysis of images in radiologists' ROIs, extracted in the previous step. To address missed tumors and analyze regions that are completely overlooked by radiologists during screening, we introduce a detection framework, S4ND: Single Shot Single Scale Lung Nodule Detection. Our proposed detection algorithm is specifically designed to handle tiny abnormalities in the lungs, which are easy for radiologists to miss. Finally, we introduce a novel projective adversarial framework, PAN: Projective Adversarial Network for Medical Image Segmentation, for segmenting complex 3D structures/organs, which can benefit the screening process by guiding radiologists' search areas through segmentation of the desired structure/organ.
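
    As an illustration of the gaze-sparsification idea described above, the sketch below clusters raw eye-tracking samples into a small set of attention regions using DBSCAN. This is a minimal, hypothetical example under assumed parameters, not the dissertation's actual attention model or graph-sparsification method; the function name sparsify_gaze and the eps_px/min_samples values are illustrative assumptions.

```python
# Minimal sketch (not the dissertation's algorithm): reduce raw gaze samples to a
# sparse set of fixation-like attention regions that can serve as candidate ROIs
# for downstream local image analysis. Parameter values are illustrative.
import numpy as np
from sklearn.cluster import DBSCAN

def sparsify_gaze(gaze_xy: np.ndarray, eps_px: float = 30.0, min_samples: int = 10):
    """Group raw gaze samples (N x 2 pixel coordinates) into cluster centroids."""
    labels = DBSCAN(eps=eps_px, min_samples=min_samples).fit_predict(gaze_xy)
    centroids = []
    for k in set(labels):
        if k == -1:  # DBSCAN noise label: stray samples / saccades, not fixations
            continue
        centroids.append(gaze_xy[labels == k].mean(axis=0))
    return np.array(centroids)

# Example: synthetic gaze samples concentrated around two image regions.
rng = np.random.default_rng(0)
gaze = np.vstack([
    rng.normal(loc=(256, 300), scale=15, size=(1000, 2)),
    rng.normal(loc=(420, 180), scale=15, size=(1000, 2)),
])
print(sparsify_gaze(gaze))  # approximately [[256, 300], [420, 180]]
```

    In a full pipeline, each returned centroid would seed an ROI that is then passed to the local analysis network; how clusters are linked into a search-pattern graph is specific to the dissertation and not reproduced here.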

    Computed tomography reading strategies in lung cancer screening


    Visual expertise as embodied practice

    This study looks at the practice of thoracic radiology and follows a group of radiologists and radiophysicists in their efforts to find, discuss, and formulate issues or troubles arising from the implementation of a new radiographic imaging technology. Based in the theoretical tradition of ethnomethodology, it examines the local, endogenous practices pertaining to the radiologists’ expertise in the interpretation of visual representations and tries to explicate the ways in which they draw upon various resources in order to accomplish their professional tasks. As the study addresses the topic of visual expertise, it also aims to do so in terms that acknowledge that all expertise is rooted in embodied practices. The analysis follows a case of what is called the enacted production of radiological reasoning. One of the central features of the described work is the manner in which it is carried out by way of the living present body of an expert. The experienced radiologist interweaves anatomical and technological terminology with visual representations and gestures in such a way that none of these components can be said to be superfluous to the argumentation. As a consequence, we should appreciate gestures and embodied actions as important means through which expertise becomes organised. These are parts of a repertoire of methods through which experts learn their profession. In addition, gestures can also become enrolled in the re-negotiation of expertise in the face of new challenges.

    Quantitative Analysis of Radiation-Associated Parenchymal Lung Change

    Radiation-induced lung damage (RILD) is a common consequence of thoracic radiotherapy (RT). We present here a novel classification of the parenchymal features of RILD. We developed a deep learning algorithm (DLA) to automate the delineation of 5 classes of parenchymal texture of increasing density. 200 scans were used to train and validate the network, and the remaining 30 scans were used as a hold-out test set. The DLA automatically labelled the data with Dice scores of 0.98, 0.43, 0.26, 0.47 and 0.92 for the 5 respective classes. Qualitative evaluation showed that the automated labels were acceptable in over 80% of cases for all tissue classes, and achieved similar ratings to the manual labels. Lung registration was performed, and the effect of radiation dose on each tissue class and its correlation with respiratory outcomes were assessed. The change in volume of each tissue class over time, generated by manual and automated segmentation, was calculated. The 5 parenchymal classes showed distinct temporal patterns. We quantified the volumetric change in textures after radiotherapy and correlated these changes with radiotherapy dose and respiratory outcomes. The effect of local dose on tissue class revealed a strong dose-dependent relationship. We have developed a novel classification of parenchymal changes associated with RILD that shows a convincing dose relationship. The tissue classes are related to both global and local dose metrics, and have a distinct evolution over time. Although less strong, there is a relationship between the radiological texture changes we can measure and respiratory outcomes, particularly the MRC score, which directly represents a patient’s functional status. We have demonstrated the potential of using our approach to analyse and understand the morphological and functional evolution of RILD in greater detail than previously possible.
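
    For readers unfamiliar with the reported metrics, the sketch below shows how per-class Dice scores and per-class volumes can be computed from a manual and an automated multi-class label map. It is a minimal, generic example, not the paper's pipeline; the function names, array shapes, and voxel-spacing argument are illustrative assumptions.

```python
# Minimal sketch: per-class Dice overlap between manual and automated multi-class
# segmentations, plus per-class volume in millilitres for tracking change over time.
import numpy as np

def dice_per_class(manual: np.ndarray, auto: np.ndarray, n_classes: int = 5):
    """Dice = 2|A∩B| / (|A|+|B|) for each tissue-class label 1..n_classes."""
    scores = {}
    for c in range(1, n_classes + 1):
        a, b = manual == c, auto == c
        denom = a.sum() + b.sum()
        scores[c] = 2.0 * np.logical_and(a, b).sum() / denom if denom else float("nan")
    return scores

def class_volumes_ml(labels: np.ndarray, voxel_spacing_mm=(1.0, 1.0, 1.0), n_classes: int = 5):
    """Volume of each class in ml, given voxel spacing in mm (1 ml = 1000 mm^3)."""
    voxel_ml = np.prod(voxel_spacing_mm) / 1000.0
    return {c: (labels == c).sum() * voxel_ml for c in range(1, n_classes + 1)}
```

    Applied to registered scans at successive time points, differences in the per-class volumes give the kind of volumetric-change trajectories that the study relates to dose and respiratory outcomes.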