
    Multi-Contrast Computed Tomography Atlas of Healthy Pancreas

    With the substantial diversity in population demographics, such as differences in age and body composition, the volumetric morphology of the pancreas varies greatly, resulting in distinctive variations in shape and appearance. Such variations increase the difficulty of generalizing population-wide pancreas features. A volumetric spatial reference is needed to accommodate this morphological variability for organ-specific analysis. Here, we propose a high-resolution computed tomography (CT) atlas framework specifically optimized for the pancreas across multi-contrast CT. We introduce a deep learning-based pre-processing technique to extract the abdominal regions of interest (ROIs) and leverage a hierarchical registration pipeline to align the pancreas anatomy across populations. Briefly, DEEDs affine and non-rigid registration are performed to transfer patient abdominal volumes to a fixed high-resolution atlas template. To generate and evaluate the pancreas atlas template, multi-contrast CT scans of 443 subjects (without reported history of pancreatic disease, age: 15-50 years old) are processed. Compared with other state-of-the-art registration tools, the combination of DEEDs affine and non-rigid registration achieves the best performance for pancreas label transfer across all contrast phases. We further perform an external evaluation on another research cohort of 100 de-identified portal venous scans with 13 labeled organs, achieving the best label transfer performance with a Dice score of 0.504 in the unsupervised setting. The qualitative representation (e.g., average mapping) of each phase yields a clear pancreas boundary and its distinctive contrast appearance. The deformation surface renderings across scales (e.g., small to large volume) further illustrate the generalizability of the proposed atlas template.
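    Label-transfer quality here is reported as a Dice score. A minimal sketch of how such an overlap score is computed between a warped atlas label and a subject's ground-truth label (illustrative toy masks, not the authors' code):

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

# Toy example: atlas pancreas label warped onto a subject vs. the
# subject's ground-truth label (partially overlapping squares).
warped = np.zeros((8, 8), dtype=bool); warped[2:6, 2:6] = True
truth  = np.zeros((8, 8), dtype=bool); truth[3:7, 3:7] = True
print(round(dice(warped, truth), 3))  # 0.562
```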

    Meshes Meet Voxels: Abdominal Organ Segmentation via Diffeomorphic Deformations

    Abdominal multi-organ segmentation from CT and MRI is an essential prerequisite for surgical planning and computer-aided navigation systems. Three-dimensional numeric representations of abdominal shapes are further important for quantitative and statistical analyses thereof. Existing methods in the field, however, are unable to extract highly accurate 3D representations that are smooth, topologically correct, and match points on a template. In this work, we present UNetFlow, a novel diffeomorphic shape deformation approach for abdominal organs. UNetFlow combines the advantages of voxel-based and mesh-based approaches for 3D shape extraction. Our results demonstrate high accuracy with respect to manually annotated CT data and better topological correctness compared to previous methods. In addition, we show the generalization of UNetFlow to MRI. Comment: Preprint.
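    A diffeomorphic deformation such as the one UNetFlow learns can be thought of as advecting template mesh vertices through a smooth velocity field; integrating in small steps keeps the mapping invertible in practice. A hypothetical numpy sketch of that integration step (the rotation field and step count are illustrative assumptions, not the paper's method):

```python
import numpy as np

def integrate_flow(vertices, velocity, n_steps=32):
    """Advect mesh vertices through a stationary velocity field with
    small Euler steps over unit time."""
    v = vertices.astype(float).copy()
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        v += dt * velocity(v)
    return v

# Toy velocity field: rigid rotation about the z-axis, which is
# trivially diffeomorphic (angle omega after unit time).
omega = np.pi / 2
vel = lambda p: omega * np.stack([-p[:, 1], p[:, 0], np.zeros(len(p))], axis=1)

verts = np.array([[1.0, 0.0, 0.0]])
moved = integrate_flow(verts, vel, n_steps=2000)  # ~[0, 1, 0]
print(np.round(moved, 3))
```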

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
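    One family of optical techniques covered by such reviews is passive stereo, where depth follows from disparity between a rectified image pair via z = f·B/d. A minimal sketch with illustrative (not instrument-specific) numbers:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Triangulated depth for a rectified stereo pair: z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_mm / disparity_px

# f = 800 px, baseline = 5 mm (stereo laparoscopes have baselines of a
# few millimetres; these values are illustrative assumptions).
print(depth_from_disparity(800, 5.0, 40.0))  # 100.0 (mm)
```

Larger disparity means a closer surface point; the small baseline of a laparoscope is one reason depth accuracy degrades quickly with distance.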

    Statistical deformation reconstruction using multi-organ shape features for pancreatic cancer localization

    Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single-organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching their region labels defined on four-dimensional computed tomography images. A total of 250 volumes were measured from 25 pancreatic cancer patients. This paper also proposes per-region deformation learning using a non-linear kernel model to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed concept estimates deformations better than general per-patient learning models and achieves a clinically acceptable estimation error, with a mean distance of 1.2 ± 0.7 mm and a Hausdorff distance of 4.2 ± 2.3 mm throughout the respiratory motion.
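    The two error metrics quoted above, mean (surface) distance and Hausdorff distance, can be sketched for point sets as follows (toy points, not the paper's evaluation code):

```python
import numpy as np

def pairwise_dists(a, b):
    """Euclidean distances between point sets of shape (n, 3) and (m, 3)."""
    return np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)

def mean_and_hausdorff(a, b):
    """Symmetric mean closest-point distance and Hausdorff distance."""
    d = pairwise_dists(a, b)
    fwd, bwd = d.min(axis=1), d.min(axis=0)   # closest-point distances each way
    mean_dist = (fwd.mean() + bwd.mean()) / 2.0
    hausdorff = max(fwd.max(), bwd.max())     # worst-case disagreement
    return mean_dist, hausdorff

est  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
true = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 1.0]])
print(mean_and_hausdorff(est, true))  # (0.5, 1.0)
```

The Hausdorff distance is sensitive to single outlier points, which is why both metrics are usually reported together.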

    Liver Shape Analysis using Statistical Parametric Maps at Population Scale

    Background: Morphometric image analysis enables the quantification of differences in the shape and size of organs between individuals. Methods: Here we have applied morphometric methods to the study of the liver by constructing surface meshes from liver segmentations of abdominal MRI images in 33,434 participants in the UK Biobank. Based on these three-dimensional mesh vertices, we evaluated local shape variations and modelled their association with anthropometric, phenotypic and clinical conditions, including liver disease and type-2 diabetes. Results: We found that age, body mass index, hepatic fat and iron content, as well as health traits, were significantly associated with regional liver shape and size. Interaction models in groups with specific clinical conditions showed that the presence of type-2 diabetes accelerates age-related changes in the liver, while the presence of liver fat further increased shape variations in both type-2 diabetes and liver disease. Conclusions: The results suggest that this novel approach may greatly benefit studies aiming at better categorisation of pathologies associated with acute and chronic clinical conditions.
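    Statistical parametric mapping of this kind amounts to a mass-univariate regression: one linear model per mesh vertex relating local shape to covariates such as age and BMI. A hypothetical sketch on synthetic data (covariate names and effect sizes are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_vert = 200, 50

# Design matrix: intercept, age, BMI (standardised, synthetic).
X = np.column_stack([np.ones(n_subj),
                     rng.standard_normal(n_subj),
                     rng.standard_normal(n_subj)])

# Synthetic per-vertex outward displacement; vertex 0 depends on age.
Y = rng.standard_normal((n_subj, n_vert)) * 0.1
Y[:, 0] += 0.8 * X[:, 1]

# One least-squares fit per vertex, computed for all vertices at once.
beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
print(round(beta[1, 0], 1))  # recovered age coefficient at vertex 0, ~0.8
```

In a real analysis the per-vertex p-values would then be corrected for multiple comparisons across the mesh.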

    Image-to-Graph Convolutional Network for 2D/3D Deformable Model Registration of Low-Contrast Organs

    Organ shape reconstruction based on a single-projection image during treatment has wide clinical scope, e.g., in image-guided radiotherapy and surgical guidance. We propose an image-to-graph convolutional network that achieves deformable registration of a three-dimensional (3D) organ mesh from a low-contrast two-dimensional (2D) projection image. This framework enables simultaneous training of two types of transformation: from the 2D projection image to a displacement map, and from the sampled per-vertex feature to a 3D displacement that satisfies the geometrical constraint of the mesh structure. Assuming application to radiation therapy, the 2D/3D deformable registration performance is verified for multiple abdominal organs that have not been targeted to date, i.e., the liver, stomach, duodenum, and kidney, and for pancreatic cancer. The experimental results show that shape prediction considering relationships among multiple organs can be used to predict respiratory motion and deformation from digitally reconstructed radiographs with clinically acceptable accuracy.
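    The "sampled per-vertex feature" step implies reading a 2D feature/displacement map at the continuous image location of each projected mesh vertex, which is typically done with bilinear interpolation. A hypothetical sketch of that sampling step (toy map and vertex location; not the paper's implementation):

```python
import numpy as np

def bilinear_sample(fmap, xy):
    """Sample a 2D feature map (H, W) at continuous (x, y) locations."""
    x, y = xy[:, 0], xy[:, 1]
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    x1, y1 = x0 + 1, y0 + 1
    wx, wy = x - x0, y - y0  # fractional offsets within the cell
    return (fmap[y0, x0] * (1 - wx) * (1 - wy)
            + fmap[y0, x1] * wx * (1 - wy)
            + fmap[y1, x0] * (1 - wx) * wy
            + fmap[y1, x1] * wx * wy)

fmap = np.arange(16.0).reshape(4, 4)   # toy displacement-map channel
pts = np.array([[1.5, 2.5]])           # projected vertex location (x, y)
print(bilinear_sample(fmap, pts))      # [11.5]
```

Because bilinear sampling is differentiable in the vertex coordinates, gradients can flow from the mesh branch back into the 2D image branch during joint training.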

    NiftyNet: a deep-learning platform for medical imaging

    Medical image analysis and computer-assisted intervention problems are increasingly being addressed with deep-learning-based solutions. Established deep-learning platforms are flexible but do not provide specific functionality for medical image analysis, and adapting them for this application requires substantial implementation effort. Thus, there has been substantial duplication of effort and incompatible infrastructure developed across many research groups. This work presents the open-source NiftyNet platform for deep learning in medical imaging. The ambition of NiftyNet is to accelerate and simplify the development of these solutions, and to provide a common mechanism for disseminating research outputs for the community to use, adapt and build upon. NiftyNet provides a modular deep-learning pipeline for a range of medical imaging applications, including segmentation, regression, image generation and representation learning. Components of the NiftyNet pipeline, including data loading, data augmentation, network architectures, loss functions and evaluation metrics, are tailored to, and take advantage of, the idiosyncrasies of medical image analysis and computer-assisted intervention. NiftyNet is built on TensorFlow and supports TensorBoard visualization of 2D and 3D images and computational graphs by default. We present three illustrative medical image analysis applications built using NiftyNet: (1) segmentation of multiple abdominal organs from computed tomography; (2) image regression to predict computed tomography attenuation maps from brain magnetic resonance images; and (3) generation of simulated ultrasound images for specified anatomical poses. NiftyNet enables researchers to rapidly develop and distribute deep learning solutions for segmentation, regression, image generation and representation learning, or to extend the platform to new applications. Comment: Wenqi Li and Eli Gibson contributed equally to this work. M. Jorge Cardoso and Tom Vercauteren contributed equally to this work. 26 pages, 6 figures; update includes additional applications, updated author list and formatting for journal submission.
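    Among the loss functions such segmentation pipelines ship, the soft Dice loss is a common choice for medical images with small foreground structures. A framework-agnostic numpy sketch (NiftyNet itself is built on TensorFlow; this is illustrative only):

```python
import numpy as np

def soft_dice_loss(probs, labels, eps=1e-6):
    """Soft Dice loss for a probabilistic segmentation map.
    Returns 0 when the prediction matches the label exactly."""
    inter = (probs * labels).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + labels.sum() + eps)

labels = np.zeros((4, 4)); labels[1:3, 1:3] = 1.0
perfect = labels.copy()   # ideal prediction
halved  = labels * 0.5    # under-confident prediction

print(round(soft_dice_loss(perfect, labels), 3))  # 0.0
print(round(soft_dice_loss(halved, labels), 3))   # ~0.333
```

Unlike a voxel-wise cross-entropy, the Dice loss normalises by the structure size, so a small organ contributes as strongly to the gradient as a large one.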

    Atlas construction and spatial normalisation to facilitate radiation-induced late effects research in childhood cancer

    Reducing radiation-induced side effects is one of the most important challenges in paediatric cancer treatment. Recently, there has been growing interest in using spatial normalisation to enable voxel-based analysis of radiation-induced toxicities in a variety of patient groups. Considering the three-dimensional distribution of dose, rather than dose-volume histograms, is desirable but has not yet been explored in paediatric populations. In this paper, we investigate the feasibility of atlas construction and spatial normalisation in paediatric radiotherapy. We used planning computed tomography (CT) scans from twenty paediatric patients historically treated with craniospinal irradiation to generate a template CT suitable for spatial normalisation. This template, representative of a childhood cancer population, was constructed using groupwise image registration. An independent set of 53 subjects with a variety of childhood malignancies was then used to assess the quality of the propagation of new subjects to this common reference space using deformable image registration (i.e., spatial normalisation). The method was evaluated in terms of overall image similarity metrics, contour similarity and preservation of dose-volume properties. After spatial normalisation, we report Dice similarity coefficients of 0.95±0.05, 0.85±0.04, 0.96±0.01, 0.91±0.03, 0.83±0.06 and 0.65±0.16 for the brain and spinal canal, ocular globes, lungs, liver, kidneys and bladder, respectively. We then demonstrated the potential advantages of an atlas-based approach to study the risk of second malignant neoplasms after radiotherapy. Our findings indicate satisfactory mapping between a heterogeneous group of patients and the template CT. The poorest performance was for organs in the abdominal and pelvic region, likely due to respiratory and physiological motion and to the highly deformable nature of abdominal organs. More specialised algorithms should be explored in the future to improve mapping in these regions. This study is a first step toward voxel-based analysis of radiation-induced toxicities following paediatric radiotherapy.
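    Spatial normalisation ultimately means resampling each patient's dose grid at the locations a deformation field assigns to every template voxel. A minimal sketch using scipy's `map_coordinates` with a pure-translation "deformation" (the field and grid are toy assumptions, not the study's registration output):

```python
import numpy as np
from scipy.ndimage import map_coordinates

# Toy dose grid with a single hotspot, and a deformation that maps
# template-space voxel (i, j) to subject-space voxel (i + 1, j).
dose = np.zeros((5, 5)); dose[2, 2] = 10.0

ii, jj = np.meshgrid(np.arange(5), np.arange(5), indexing="ij")
coords = np.stack([ii + 1.0, jj + 0.0])  # sample locations in subject space

warped = map_coordinates(dose, coords, order=1, mode="constant", cval=0.0)
print(warped[1, 2])  # 10.0 — the hotspot now sits at template voxel (1, 2)
```

Once every patient's dose is expressed on the same template grid, voxel-wise statistics across the cohort become straightforward.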