
    Quantitative lung CT analysis for the study and diagnosis of Chronic Obstructive Pulmonary Disease

    The importance of medical imaging in the research of Chronic Obstructive Pulmonary Disease (COPD) has risen over the last decades. COPD affects the pulmonary system through two competing mechanisms: emphysema and small airways disease. The relative contribution of each component varies widely across patients, and the two can also evolve regionally within the lung. Patients can also be susceptible to exacerbations, which can dramatically accelerate lung function decline. Diagnosis of COPD is based on lung function tests, which measure airflow limitation; there is a growing consensus that this is inadequate in view of the complexities of COPD. Computed Tomography (CT) facilitates direct quantification of the pathological changes that lead to airflow limitation and can add to our understanding of COPD progression. There is a need to better capture lung pathophysiology whilst understanding regional aspects of disease progression, which motivated the work presented in this thesis. Two novel methods are proposed to quantify the severity of COPD from CT by analysing the global distribution of features sampled locally in the lung. They can be exploited in the classification of lung CT images or to uncover potential trajectories of disease progression. A novel lobe segmentation algorithm is presented that is based on a probabilistic segmentation of the fissures whilst also constructing a groupwise fissure prior. In combination with the local sampling methods, a pipeline of analysis was developed that permits a regional analysis of lung disease. This was applied to study exacerbation-susceptible COPD. Lastly, the applicability of disease progression modelling to the study of COPD has been demonstrated. Two main subgroups of COPD were found, which are consistent with current clinical knowledge of COPD subtypes.
This research may facilitate precise phenotypic characterisation of COPD from CT, which will increase our understanding of its natural history and associated heterogeneities. This will be instrumental in the precision medicine of COPD.
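The thesis quantifies COPD severity from features sampled locally in lung CT. As a minimal illustrative sketch (not the thesis's actual method), the widely used densitometric score %LAA-950 measures emphysema extent as the fraction of lung voxels below -950 Hounsfield units; the function and variable names here are assumptions:

```python
import numpy as np

def emphysema_score(hu_volume, lung_mask, threshold=-950):
    """Percentage of lung voxels below the attenuation threshold
    (%LAA-950), a standard CT measure of emphysema extent."""
    lung_voxels = hu_volume[lung_mask]
    if lung_voxels.size == 0:
        return 0.0
    return 100.0 * np.mean(lung_voxels < threshold)

# Toy example: four voxels, three inside the lung mask, one emphysematous.
hu = np.array([-980.0, -870.0, -910.0, -100.0])
mask = np.array([True, True, True, False])  # exclude the non-lung voxel
print(round(emphysema_score(hu, mask), 1))  # one of three lung voxels < -950
```

Local-sampling approaches such as those in the thesis typically build distributions of measures like this over lung regions, rather than a single global number.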

    Stochastic Filter Groups for Multi-Task CNNs: Learning Specialist and Generalist Convolution Kernels

    The performance of multi-task learning in Convolutional Neural Networks (CNNs) hinges on the design of feature sharing between tasks within the architecture. The number of possible sharing patterns is combinatorial in the depth of the network and the number of tasks, so hand-crafting an architecture purely from human intuitions of task relationships can be time-consuming and suboptimal. In this paper, we present a probabilistic approach to learning task-specific and shared representations in CNNs for multi-task learning. Specifically, we propose "stochastic filter groups" (SFG), a mechanism to assign the convolution kernels in each layer to "specialist" or "generalist" groups, which are specific to or shared across different tasks, respectively. The SFG modules determine the connectivity between layers and the structures of the task-specific and shared representations in the network. We employ variational inference to learn the posterior distribution over the possible groupings of kernels and the network parameters. Experiments demonstrate that the proposed method generalises across multiple tasks and shows improved performance over baseline methods.
    Comment: Accepted for oral presentation at ICCV 201
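The paper learns a posterior over kernel-to-group assignments with variational inference. A minimal sketch of the core mechanism, assuming a Gumbel-softmax relaxation to draw soft per-kernel assignments to (task-1 specialist, shared generalist, task-2 specialist) groups; all names are illustrative and NumPy stands in for a deep learning framework:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_filter_groups(logits, tau=1.0):
    """Draw a soft one-hot group assignment per kernel via the
    Gumbel-softmax trick; logits would be learned during training."""
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
    y = (logits + gumbel) / tau
    y = np.exp(y - y.max(axis=-1, keepdims=True))  # stable softmax
    return y / y.sum(axis=-1, keepdims=True)

# 4 kernels, 3 groups: columns = (task-1 specialist, shared, task-2 specialist).
logits = np.zeros((4, 3))
assign = sample_filter_groups(logits, tau=0.5)
print(assign.shape)  # (4, 3); each row sums to 1

# The task-1 branch would use kernels weighted by specialist + shared mass.
task1_mask = assign[:, 0] + assign[:, 1]
```

Lowering `tau` pushes each row towards a hard one-hot assignment, recovering a discrete grouping of kernels.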

    Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning

    Multi-task neural network architectures provide a mechanism to jointly integrate information from distinct sources. They are ideal in the context of MR-only radiotherapy planning, as they can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with a better estimation of the variance of the errors, state-of-the-art results in OAR segmentation, and a methodology for quality assurance in radiotherapy treatment planning.
    Comment: Early-accept at MICCAI 2018, 8 pages, 4 figures
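The heteroscedastic task weighting described above can be illustrated by its simpler global form, uncertainty-based loss weighting in the style of Kendall et al., where each task loss L_i is scaled by a learned log-variance s_i. This is a hedged simplification, not the paper's spatially-adaptive model:

```python
import math

def multitask_loss(task_losses, log_vars):
    """Combine per-task losses with learned uncertainty weights:
    sum_i exp(-s_i) * L_i + s_i.  A large s_i downweights a noisy
    task, while the +s_i term penalises unbounded growth of s_i."""
    return sum(math.exp(-s) * L + s for L, s in zip(task_losses, log_vars))

# e.g. a synCT regression loss and an OAR segmentation loss.
print(multitask_loss([0.8, 1.2], [0.0, 0.0]))  # 2.0: both weights are exp(0)=1
```

In the paper's spatially-adaptive version, s_i becomes a predicted per-voxel map rather than a single scalar per task.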

    Learning task-specific and shared representations in medical imaging

    The performance of multi-task learning hinges on the design of feature sharing between tasks, a process which is combinatorial in the network depth and task count. Hand-crafting an architecture based on human intuitions of task relationships is therefore suboptimal. In this paper, we present a probabilistic approach to learning task-specific and shared representations in Convolutional Neural Networks (CNNs) for multi-task learning of semantic tasks. We introduce Stochastic Filter Groups, a mechanism that groups convolutional kernels into task-specific and shared groups to learn an optimal kernel allocation, thereby facilitating the learning of optimal shared and task-specific representations. We employ variational inference to learn the posterior distribution over the possible groupings of kernels and the CNN weights. Experiments on MRI-based prostate radiotherapy organ segmentation and CT synthesis demonstrate that the proposed method learns task allocations that are in line with human-optimised networks whilst improving performance over competing baselines.
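One way to read "learning a posterior over kernel groupings" is that, after training, each kernel carries an approximate posterior over the three groups, from which an expected kernel allocation per group can be estimated by Monte Carlo sampling. A small sketch under that assumption; the probabilities and names are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def expected_allocation(probs, n_samples=10000):
    """Monte Carlo estimate of how many kernels land in each group
    when each kernel's group is drawn from its posterior `probs[k]`."""
    n_kernels, n_groups = probs.shape
    counts = np.zeros(n_groups)
    for _ in range(n_samples):
        draws = [rng.choice(n_groups, p=p) for p in probs]
        counts += np.bincount(draws, minlength=n_groups)
    return counts / n_samples

# Columns: (task-1 specialist, shared, task-2 specialist); kernels 2-3
# have posteriors favouring the shared group.
probs = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
alloc = expected_allocation(probs, n_samples=2000)
print(alloc)  # roughly the column sums of probs: ~[1.1, 1.8, 1.1]
```

In expectation the allocation converges to the column sums of the posterior matrix, which is what "human-optimised" architectures fix by hand.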

    A spatio-temporal network for video semantic segmentation in surgical videos

    PURPOSE: Semantic segmentation in surgical videos has applications in intra-operative guidance, post-operative analytics and surgical education. Models need to provide accurate predictions, since temporally inconsistent identification of anatomy can compromise patient safety. We propose a novel architecture for modelling temporal relationships in videos to address these issues. METHODS: We developed a temporal segmentation model that includes a static encoder and a spatio-temporal decoder. The encoder processes individual frames whilst the decoder learns spatio-temporal relationships from frame sequences. The decoder can be used with any suitable encoder to improve temporal consistency. RESULTS: Model performance was evaluated on the CholecSeg8k dataset and a private dataset of robotic partial nephrectomy procedures. Mean Intersection over Union improved by 1.30% and 4.27% respectively on the two datasets when the temporal decoder was applied. Our model also displayed improvements in temporal consistency of up to 7.23%. CONCLUSIONS: This work demonstrates an advance in video segmentation of surgical scenes with potential applications in surgery, with a view to improving patient outcomes. The proposed decoder can extend state-of-the-art static models, and it is shown to improve per-frame segmentation output and video temporal consistency.
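The two reported quantities can be sketched as follows: mean Intersection over Union is the standard per-class IoU average, and temporal consistency is approximated here by consecutive-frame label agreement (an assumption; the abstract does not specify the paper's exact consistency metric):

```python
import numpy as np

def miou(pred, gt, n_classes):
    """Mean Intersection over Union over classes present in pred or gt."""
    ious = []
    for c in range(n_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))

def temporal_consistency(preds):
    """Fraction of pixels whose predicted label agrees between
    consecutive frames; a simple proxy for temporal stability."""
    agree = [np.mean(preds[t] == preds[t + 1]) for t in range(len(preds) - 1)]
    return float(np.mean(agree))

frame_a = np.array([[0, 1], [1, 1]])  # 2x2 predicted label maps
frame_b = np.array([[0, 1], [0, 1]])
print(miou(frame_a, frame_b, n_classes=2))       # (1/2 + 2/3) / 2
print(temporal_consistency([frame_a, frame_b]))  # 3 of 4 pixels agree: 0.75
```

A temporally consistent model raises the second number without sacrificing the first, which is the trade-off the spatio-temporal decoder targets.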