
    Segmentation and Deformable Modelling Techniques for a Virtual Reality Surgical Simulator in Hepatic Oncology

    Liver surgical resection is one of the most frequently used curative therapies. However, assessing resectability is difficult. There is a need for a computer-assisted surgical planning and simulation system that can accurately and efficiently simulate the liver, vessels and tumours of actual patients. The present project describes the development of the core segmentation and deformable modelling techniques for such a system. For precise detection of irregularly shaped areas with indistinct boundaries, the segmentation incorporated active contours: gradient vector flow (GVF) snakes and level sets. To improve efficiency, a chessboard distance transform was used to replace part of the GVF computation. To automatically initialize the liver volume detection process, a rotating template was introduced to locate the starting slice. To maintain shape during the segmentation process, a simplified object-shape learning step was introduced to avoid occasional significant errors. Skeletonization with fuzzy connectedness was used for vessel segmentation. To achieve real-time interactivity, the deformation regime of the system was based on a single-organ mass-spring system (MSS), which introduced on-the-fly local mesh refinement to raise the deformation accuracy and mesh control quality. This method was then extended to a multiple soft-tissue constraint system by supplementing it with adaptive constraint mesh generation. A mesh quality measure was tailored based on a wide comparison of classic measures. Adjustable feature and parameter settings were thus provided to make tissues of interest distinct from adjacent structures, keeping the mesh suitable for on-line topological transformation and deformation. More than 20 actual patient CT and 2 magnetic resonance imaging (MRI) liver datasets were used to evaluate the performance of the segmentation method. Instrument manipulations of probing, grasping, and simple cutting were successfully simulated on deformable constraint liver tissue models. This project was implemented in conjunction with the Division of Surgery, Hammersmith Hospital, London; the preliminary reality effect was judged satisfactory by the consultant hepatic surgeon.
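The chessboard distance transform mentioned above can be computed with a classic two-pass sweep over the binary segmentation mask. The sketch below is a minimal pure-NumPy illustration, not the project's implementation (in practice `scipy.ndimage.distance_transform_cdt` with `metric='chessboard'` provides the same result); the function and variable names are my own.

```python
import numpy as np

def chessboard_distance_transform(mask):
    """Two-pass chessboard (Chebyshev) distance transform.

    mask: 2D boolean array; True marks object pixels.
    Returns, for each object pixel, the chessboard distance to the
    nearest background pixel (background pixels get distance 0).
    """
    h, w = mask.shape
    inf = h + w  # larger than any possible chessboard distance here
    d = np.where(mask, inf, 0).astype(int)

    # Forward pass: propagate distances from the already-visited
    # top-left neighbours (up-left, up, up-right, left).
    for y in range(h):
        for x in range(w):
            if d[y, x] == 0:
                continue
            best = inf
            for dy, dx in ((-1, -1), (-1, 0), (-1, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    best = min(best, d[ny, nx])
            d[y, x] = min(d[y, x], best + 1)

    # Backward pass: propagate from the bottom-right neighbours.
    for y in range(h - 1, -1, -1):
        for x in range(w - 1, -1, -1):
            if d[y, x] == 0:
                continue
            best = inf
            for dy, dx in ((1, 1), (1, 0), (1, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    best = min(best, d[ny, nx])
            d[y, x] = min(d[y, x], best + 1)
    return d
```

Because the chessboard metric needs only two raster sweeps, it is far cheaper than iterating a GVF field to convergence, which is presumably why it could replace part of that effort.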

    Incorporating Cardiac Substructures Into Radiation Therapy For Improved Cardiac Sparing

    Growing evidence suggests that radiation therapy (RT) doses to the heart and cardiac substructures (CS) are strongly linked to cardiac toxicities, though only the whole heart is considered clinically. This work aimed to utilize the superior soft-tissue contrast of magnetic resonance (MR) to segment CS, quantify uncertainties in their position, and assess their effect on treatment planning, both conventionally and in an MR-guided environment. Automatic segmentation of 12 CS was completed using a novel hybrid MR/computed tomography (CT) atlas method and was improved upon using a 3-dimensional neural network (U-Net) from deep learning. Intra-fraction motion due to respiration was then quantified. The inter-fraction setup uncertainties on a novel MR-linear accelerator were also quantified. Treatment planning comparisons were performed with and without substructure inclusion, and methods to reduce radiation dose to sensitive CS were evaluated. Lastly, these technologies (the deep learning U-Net) were translated to an MR-linear accelerator and a segmentation pipeline was created. Automatic segmentations from the hybrid MR/CT atlas were accurate for the chambers and great vessels (Dice similarity coefficient (DSC) > 0.75) but unsuccessful for the coronary arteries (DSC < 0.3). After implementing deep learning, DSC for the chambers and great vessels was ≥ 0.85, along with an improvement in the coronary arteries (DSC > 0.5). Similar accuracy was achieved when implementing deep learning for MR-guided RT. On average, atlas-based automatic segmentations required ~10 minutes per patient, whereas deep learning required only 14 seconds. The inclusion of CS in the treatment planning process did not yield statistically significant changes in plan complexity, PTV, or OAR dose. Automatic segmentation results from deep learning offer major efficiency and accuracy gains for CS segmentation, with high potential for rapid implementation into radiation therapy planning for improved cardiac sparing. Introducing CS into RT planning for MR-guided RT presented an opportunity for more effective sparing with a limited increase in plan complexity.
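The Dice similarity coefficient (DSC) used throughout the evaluation above has a simple closed form, DSC = 2|A ∩ B| / (|A| + |B|), ranging from 0 (no overlap) to 1 (perfect overlap). A minimal sketch, with my own function name rather than the thesis code:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks.

    Returns 2|A ∩ B| / (|A| + |B|), and 1.0 by convention when
    both masks are empty.
    """
    a = np.asarray(a, bool)
    b = np.asarray(b, bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(a, b).sum() / total
```

Thresholds such as DSC > 0.75 for large structures and DSC < 0.3 for failed coronary segmentations can then be checked directly against this score.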

    Multidimensional image analysis of cardiac function in MRI

    Cardiac morphology is a key indicator of cardiac health. Important metrics currently in clinical use are left-ventricle ejection fraction, cardiac muscle (myocardium) mass, myocardium thickness, and myocardium thickening over the cardiac cycle. Advances in imaging technologies have led to an increase in temporal and spatial resolution; such an increase in data presents a laborious analysis task for medical practitioners. In this thesis, measurement of left-ventricle cardiac function is achieved by developing novel methods for the automatic segmentation of the left-ventricle blood-pool and myocardium boundaries. A preliminary challenge in this task is the removal of noise from Magnetic Resonance Imaging (MRI) data, which is addressed using advanced data-filtering procedures. Two mechanisms for left-ventricle segmentation are employed. First, segmentation of the left-ventricle blood-pool for the measurement of ejection fraction is undertaken in the signal-intensity domain. Utilising the high discrimination between blood and tissue, a novel methodology based on a statistical partitioning method succeeds in localising and segmenting the blood pool of the left ventricle. From this initialisation, the outer wall (epicardium) of the left ventricle is estimated using gradient information and prior knowledge. Second, a more involved method for extracting the myocardium of the left ventricle is developed that performs better in higher dimensions. Spatial information is incorporated in the segmentation by employing a gradient-based boundary evolution. A level-set scheme is implemented and a novel formulation for the extraction of the cardiac muscle is introduced. Two surfaces, representing the inner and outer boundaries of the left ventricle, are simultaneously evolved using a coupling function and supervised with a probabilistic model built from expert-assisted manual segmentations.
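Left-ventricle ejection fraction, the headline metric above, follows directly from the segmented blood-pool volumes at end-diastole (EDV) and end-systole (ESV): EF = 100 × (EDV − ESV) / EDV. A minimal sketch, with hypothetical function names of my own:

```python
def volume_ml(voxel_count, voxel_volume_mm3):
    """Blood-pool volume in millilitres from the number of segmented
    voxels and the scan's per-voxel volume in cubic millimetres."""
    return voxel_count * voxel_volume_mm3 / 1000.0

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic
    left-ventricle blood-pool volumes in millilitres."""
    if edv_ml <= 0:
        raise ValueError("end-diastolic volume must be positive")
    return 100.0 * (edv_ml - esv_ml) / edv_ml
```

This is why accurate blood-pool segmentation at just two cardiac phases already yields a clinically meaningful functional measure.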

    Medical Image Analysis: Progress over two decades and the challenges ahead

    The analysis of medical images has been woven into the fabric of the pattern analysis and machine intelligence (PAMI) community since the earliest days of these Transactions. Initially, efforts in this area were seen as applying pattern analysis and computer vision techniques to just another interesting dataset. Over the last two to three decades, however, the unique nature of the problems presented within this area of study has led to the development of a new discipline in its own right. Examples of these problems include: the types of image information that are acquired, the fully three-dimensional image data, the nonrigid nature of object motion and deformation, and the statistical variation of both the underlying normal and abnormal ground truth. In this paper, we look at progress in the field over the last 20 years and suggest some of the challenges that remain for the years to come.

    Deep learning applications in the prostate cancer diagnostic pathway

    Prostate cancer (PCa) is the second most frequently diagnosed cancer in men worldwide and the fifth leading cause of cancer death in men, with an estimated 1.4 million new cases and 375,000 deaths in 2020. The risk factors most strongly associated with PCa are advancing age, family history, race, and mutations of the BRCA genes. Since these risk factors are not preventable, early and accurate diagnosis is a key objective of the PCa diagnostic pathway. In the UK, clinical guidelines recommend multiparametric magnetic resonance imaging (mpMRI) of the prostate for use by radiologists to detect, score, and stage lesions that may correspond to clinically significant PCa (CSPCa), prior to confirmatory biopsy and histopathological grading. Computer-aided diagnosis (CAD) of PCa using artificial intelligence algorithms holds currently unrealized potential to improve upon the diagnostic accuracy achievable by radiologist assessment of mpMRI, improve reporting consistency between radiologists, and reduce reporting time. In this thesis, we build and evaluate deep learning-based CAD systems for the PCa diagnostic pathway, addressing gaps identified in the literature. First, we introduce a novel patient-level classification framework, PCF, which uses a stacked ensemble of convolutional neural networks (CNNs) and support vector machines (SVMs) to assign each patient a probability of having CSPCa, using mpMRI and clinical features. Second, we introduce AutoProstate, a deep learning-powered framework for automated PCa assessment and reporting; AutoProstate utilizes biparametric MRI and clinical data to populate an automatic diagnostic report containing segmentations of the whole prostate, prostatic zones, and candidate CSPCa lesions, as well as several clinically valuable derived characteristics. Finally, as automatic segmentation algorithms have not yet reached the robustness required for clinical use, we introduce interactive click-based segmentation applications for the whole prostate and prostatic lesions, with potential uses in diagnosis, active surveillance progression monitoring, and treatment planning.
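Interactive click-based segmentation of the kind described can be illustrated with seeded region growing: the user's click supplies a seed pixel, and the region expands to connected neighbours of similar intensity. This is a generic sketch under my own names and thresholding rule, not the thesis's (deep learning-based) algorithm:

```python
from collections import deque
import numpy as np

def click_segment(image, seed, tolerance):
    """Grow a region from a user-clicked seed pixel.

    image: 2D array of intensities; seed: (row, col) of the click;
    tolerance: maximum absolute intensity difference from the seed.
    Returns a boolean mask of the 4-connected grown region.
    """
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(float(image[ny, nx]) - seed_val) <= tolerance):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```

Each additional click can refine the mask by adding or removing regions, which is what makes such tools usable when fully automatic segmentation is not yet robust enough.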

    3D shape instantiation for intra-operative navigation from a single 2D projection

    Unlike traditional open surgery, where surgeons can see the operation area clearly, in robot-assisted Minimally Invasive Surgery (MIS) a surgeon's view of the region of interest is usually limited. Currently, 2D images from fluoroscopy, Magnetic Resonance Imaging (MRI), endoscopy or ultrasound are used for intra-operative guidance, as real-time 3D volumetric acquisition is not always possible due to acquisition speed or exposure constraints. 3D reconstruction, however, is key to navigation in complex in vivo geometries and can help resolve this issue. Novel 3D shape instantiation schemes are developed in this thesis, which can reconstruct the high-resolution 3D shape of a target from limited 2D views, especially a single 2D projection or slice. To achieve a complete and automatic 3D shape instantiation pipeline, segmentation schemes based on deep learning are also investigated. These include normalization schemes for training U-Nets and the network architecture design of Atrous Convolutional Neural Networks (ACNNs). For U-Net normalization, four popular normalization methods are reviewed, and Instance-Layer Normalization (ILN) is then proposed: a sigmoid function linearly weights the feature maps produced by instance normalization and layer normalization, and group normalization is cascaded after the weighted feature map. Detailed validation results demonstrate the practical advantages of the proposed ILN for effective and robust segmentation of different anatomies. For network architecture design in training Deep Convolutional Neural Networks (DCNNs), the newly proposed ACNN is compared to the traditional U-Net, in which max-pooling and deconvolutional layers are essential. Only convolutional layers are used in the proposed ACNN, with different atrous rates, and the method is shown to provide a fully-covered receptive field with a minimum number of atrous convolutional layers. The ACNN enhances the robustness and generalizability of the analysis scheme by cascading multiple atrous blocks. Validation results show that the proposed method achieves results comparable to the U-Net for medical image segmentation whilst reducing the number of trainable parameters, thus improving convergence and real-time instantiation speed. For 3D shape instantiation of soft, deforming organs during MIS, Sparse Principal Component Analysis (SPCA) is used to analyse a 3D Statistical Shape Model (SSM) and to determine the most informative scan plane. Synchronized 2D images are then scanned at the most informative scan plane and are expressed in a 2D SSM. Kernel Partial Least Squares Regression (KPLSR) is applied to learn the relationship between the 2D and 3D SSMs. The KPLSR-learned model developed in this thesis is shown to predict the intra-operative 3D target shape from a single 2D projection or slice, thus permitting real-time 3D navigation. Validation results show the intrinsic accuracy achieved and the potential clinical value of the technique. The proposed 3D shape instantiation scheme is further applied to intra-operative stent-graft deployment for the robot-assisted treatment of aortic aneurysms. Mathematical modelling is first used to simulate the stent-graft characteristics. This is followed by the Robust Perspective-n-Point (RPnP) method to instantiate the 3D pose of the graft's fiducial markers. Here, an Equally-weighted Focal U-Net is proposed with a cross-entropy and an additional focal loss function. Detailed validation has been performed on patient-specific stent grafts, with an accuracy between 1 and 3 mm. Finally, the relative merits and potential pitfalls of all the methods developed in this thesis are discussed, followed by potential future research directions and additional challenges that need to be tackled.
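The claim that stacked atrous convolutions cover a large receptive field with few layers can be checked with the standard stride-1 recurrence RF_n = RF_{n-1} + (k − 1) · rate_n. A small sketch of the arithmetic (the specific rates below are illustrative, not those used in the thesis):

```python
def atrous_receptive_field(kernel_size, atrous_rates):
    """Receptive field of stacked stride-1 atrous (dilated) convolutions.

    Each layer with kernel size k and dilation rate r enlarges the
    receptive field by (k - 1) * r, starting from a single pixel.
    """
    return 1 + sum((kernel_size - 1) * r for r in atrous_rates)
```

For example, three 3x3 layers with rates 1, 2 and 4 span a 15-pixel extent, whereas three undilated 3x3 layers span only 7, which is how an ACNN can cover its field without pooling or deconvolution and with fewer trainable parameters.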