7 research outputs found

    A cone-beam X-ray computed tomography data collection designed for machine learning

    Unlike previous works, this open data collection consists of X-ray cone-beam (CB) computed tomography (CT) datasets specifically designed for machine learning applications and high cone-angle artefact reduction. Forty-two walnuts were scanned with a laboratory X-ray set-up to provide not just data from a single object but from a class of objects with natural variability. For each walnut, CB projections were acquired on three different source orbits, providing CB data with different cone angles and enabling the computation of artefact-free, high-quality ground-truth images from the combined data for supervised learning. We provide the complete image reconstruction pipeline: raw projection data, a description of the scanning geometry, pre-processing and reconstruction scripts using open software, and the reconstructed volumes. As a result, the dataset can be used not only for high cone-angle artefact reduction but also for algorithm development and evaluation in other tasks, such as image reconstruction from limited-angle or sparse-angle (low-dose) scanning, super-resolution, or segmentation.
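    The pre-processing step mentioned above typically amounts to flat-/dark-field correction followed by a negative-log transform of the measured intensities (Beer–Lambert law). A minimal numpy sketch of that standard step; the function name and toy values are illustrative, not the dataset's actual scripts:

    ```python
    import numpy as np

    def preprocess_projection(raw, flat, dark, eps=1e-6):
        """Flat-/dark-field correction and negative-log transform,
        the standard CT pre-processing step (Beer-Lambert law)."""
        transmission = (raw - dark) / np.maximum(flat - dark, eps)
        transmission = np.clip(transmission, eps, None)
        return -np.log(transmission)

    # Toy example: pixels behind thicker material transmit less intensity,
    # so their corrected value (line integral of attenuation) is larger.
    dark = np.full((2, 2), 100.0)
    flat = np.full((2, 2), 1100.0)
    raw = np.array([[1100.0, 600.0],
                    [350.0, 100.0 + 1000.0 * np.exp(-2.0)]])
    sino = preprocess_projection(raw, flat, dark)
    # sino[0, 0] is 0 (full transmission); sino[1, 1] is 2 (attenuation exp(-2))
    ```

    The resulting log-corrected projections are what a reconstruction algorithm (e.g. FDK for cone-beam data) takes as input.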

    Parallel-beam X-ray CT datasets of apples with internal defects and label balancing for machine learning

    We present three parallel-beam tomographic datasets of 94 apples with internal defects, along with defect label files. The datasets are prepared for the development and testing of data-driven, learning-based image reconstruction, segmentation, and post-processing methods. The three versions are a noiseless simulation, a simulation with added Gaussian noise, and a simulation with scattering noise. The datasets are based on real 3D X-ray CT data and their subsequent volume reconstructions. The ground-truth images, based on the volume reconstructions, are also available through this project. Apples contain various defects, which naturally introduce a label bias. We tackle this by formulating the bias as an optimization
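    The abstract does not spell out the optimization formulation, but a common, simpler way to counteract label bias in learning-based segmentation is to weight classes inversely to their frequency. A hypothetical sketch (function name and data are illustrative only):

    ```python
    import numpy as np

    def inverse_frequency_weights(labels, n_classes):
        """Per-class weights inversely proportional to label counts, a common
        way to counteract class imbalance when training a segmentation model."""
        counts = np.bincount(labels.ravel(), minlength=n_classes).astype(float)
        counts = np.maximum(counts, 1.0)           # avoid division by zero
        return counts.sum() / (n_classes * counts)

    labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])  # heavily skewed toward class 0
    w = inverse_frequency_weights(labels, 3)
    # w = [0.5, 1.5, 3.0]: each class contributes equal total weight (6*0.5 = 2*1.5 = 1*3.0)
    ```

    Such weights would typically be passed to a weighted loss function during training.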

    Quantitative comparison of deep learning-based image reconstruction methods for low-dose and sparse-angle CT applications

    The reconstruction of computed tomography (CT) images is an active area of research. Following the rise of deep learning methods, many data-driven models have been proposed in recent years. In this work, we present the results of a data challenge that we organized, bringing together algorithm experts from different institutes to jointly work on the quantitative evaluation of several data-driven methods on two large, public datasets during a ten-day sprint. We focus on two applications of CT, namely low-dose CT and sparse-angle CT. This enables us to fairly compare different methods using standardized settings. As a general result, we observe that the deep learning-based methods are able to improve the reconstruction quality metrics in both CT applications, while the top-performing methods show only minor differences in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). We further discuss a number of other important criteria that should be taken into account when selecting a method, such as the availability of training data, knowledge of the physical measurement model, and the reconstruction speed.
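    PSNR, one of the two quality metrics used in the comparison, has a standard definition; a generic numpy sketch (this is not the challenge's evaluation code):

    ```python
    import numpy as np

    def psnr(reference, reconstruction, data_range=None):
        """Peak signal-to-noise ratio in dB; higher means the reconstruction
        is closer to the reference image."""
        if data_range is None:
            data_range = reference.max() - reference.min()
        mse = np.mean((reference - reconstruction) ** 2)
        if mse == 0:
            return np.inf
        return 10.0 * np.log10(data_range ** 2 / mse)

    ref = np.zeros((4, 4))
    ref[0, 0] = 1.0           # data_range = 1
    noisy = ref + 0.01        # uniform error of 0.01 -> MSE = 1e-4
    print(psnr(ref, noisy))   # ≈ 40 dB
    ```

    In practice one would use an established implementation (e.g. scikit-image's metrics module), but the formula itself is this simple.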

    Call for proposals for the Nuit européenne des Chercheur.e.s (European Researchers' Night)

    The French programme for the European Researchers' Night (Nuit européenne des Chercheur.e.s) is launching a call for proposals for a new Large-Scale Participatory Experiment (Grande Expérience Participative, GEP) as part of its Creativity project. The participation form must be returned by 30 November 2016 to: [email protected]. It can be downloaded, together with the call for projects, from this page. The experiment will be selected on the basis of the following criteria: the experiment must demonstrate scientific soundness. It will be sel..

    Deep-learning-based joint rigid and deformable contour propagation for magnetic resonance imaging-guided prostate radiotherapy

    Background: Deep learning-based unsupervised image registration has recently been proposed, promising fast registration. However, it has yet to be adopted in the online adaptive magnetic resonance imaging-guided radiotherapy (MRgRT) workflow. Purpose: In this paper, we design an unsupervised joint rigid and deformable registration framework for contour propagation in MRgRT of prostate cancer. Methods: Three-dimensional pelvic T2-weighted MRIs of 143 prostate cancer patients undergoing radiotherapy were collected and divided into 110, 13, and 20 patients for training, validation, and testing. We designed a framework using convolutional neural networks (CNNs) for rigid and deformable registration. We selected the deformable registration network architecture among U-Net, MS-D Net, and LapIRN and optimized the training strategy (end-to-end vs. sequential). The framework was compared against an iterative baseline registration. We evaluated registration accuracy (the Dice and Hausdorff distance of the prostate and bladder contours), structural similarity index, and folding percentage to compare the methods. We also evaluated the framework's robustness to rigid and elastic deformations and bias field perturbations. Results: The end-to-end trained framework comprising LapIRN for the deformable component achieved the best median (interquartile range) prostate and bladder Dice of 0.89 (0.85–0.91) and 0.86 (0.80–0.91), respectively. This accuracy was comparable to the iterative baseline registration: prostate and bladder Dice of 0.91 (0.88–0.93) and 0.86 (0.80–0.92). The best models complete rigid and deformable registration in 0.002 (0.0005) and 0.74 (0.43) s (Nvidia Tesla V100-PCIe 32 GB GPU), respectively. We found that the models are robust to translations up to 52 mm, rotations up to 15°, elastic deformations up to 40 mm, and bias fields.
Conclusions: Our proposed unsupervised, deep learning-based registration framework can perform rigid and deformable registration in less than a second, with contour propagation accuracy comparable to iterative registration.
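The Dice coefficient used to report contour-propagation accuracy above is straightforward to compute for binary masks; a generic sketch, not the study's evaluation code:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), from 0 (disjoint) to 1 (identical)."""
    a = mask_a.astype(bool)
    b = mask_b.astype(bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0          # two empty masks are, by convention, identical
    return 2.0 * np.logical_and(a, b).sum() / denom

a = np.array([1, 1, 1, 0])
b = np.array([1, 1, 0, 0])
print(dice(a, b))   # 2*2 / (3+2) = 0.8
```

In a registration study, `a` would be the propagated contour and `b` the reference contour drawn on the target image.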

    Efficient high cone-angle artifact reduction in circular cone-beam CT using deep learning with geometry-aware dimension reduction

    No full text
    High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that reduces the three-dimensional (3D) nature of HCAAs to two-dimensional (2D) problems in an efficient way. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) using image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based artifact reduction deep learning approach in which a CNN was trained to remove the HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAA reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
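    The core idea of the geometry-aware dimension reduction is to sample 2D slices radially, i.e. through the rotation axis, rather than along Cartesian planes. A simplified nearest-neighbour sketch of such radial resampling (illustrative only; the paper's actual sampling scheme may differ):

    ```python
    import numpy as np

    def radial_slice(volume, angle_rad):
        """Extract a vertical 2D slice through the rotation axis of a
        (z, y, x) volume at a given in-plane angle, using nearest-neighbour
        sampling. Returns an array of shape (nz, nx)."""
        nz, ny, nx = volume.shape
        cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
        radius = min(cy, cx)
        r = np.linspace(-radius, radius, nx)   # signed radial coordinate
        ys = np.clip(np.rint(cy + r * np.sin(angle_rad)).astype(int), 0, ny - 1)
        xs = np.clip(np.rint(cx + r * np.cos(angle_rad)).astype(int), 0, nx - 1)
        return volume[:, ys, xs]

    vol = np.zeros((2, 5, 5))
    vol[:, 2, :] = 1.0            # a bright line through the centre, along x
    s0 = radial_slice(vol, 0.0)   # slice aligned with that line: all ones
    ```

    Training a 2D CNN on such slices keeps the artifact orientation consistent with the scanning geometry, which is what makes the 2D formulation effective.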

    Virtual forensic anthropology: The accuracy of osteometric analysis of 3D bone models derived from clinical computed tomography (CT) scans

    No full text
    Clinical radiology is increasingly used as a source of data to test or develop forensic anthropological methods, especially in countries where contemporary skeletal collections are not available. Naturally, this requires analysis of the error that results from the limited accuracy of the modality (i.e. the accuracy of the segmentation) and the error that arises from difficulties in landmark recognition in virtual models. The cumulative effect of these errors ultimately determines whether virtual and dry bone measurements can be used interchangeably. To test this interchangeability, 13 male and 14 female intact cadavers from the body donation program of the Amsterdam UMC were CT scanned using a standard patient scanning protocol and processed to obtain the dry os coxae. These were again CT scanned using the same scanning protocol. All CT scans were segmented to create 3D virtual bone models of the os coxae (‘dry’ CT models and ‘clinical’ CT models). An Artec Spider 3D optical scanner was used to produce gold-standard ‘optical 3D models’ of ten dry os coxae. The deviation of the surfaces of the 3D virtual bone models from the gold standard was used to calculate the accuracy of the CT models, both for the overall os coxae and for selected landmarks. Landmark recognition was studied by comparing the TEM and %TEM of nine traditional inter-landmark distances (ILDs). The percentage difference in the various ILDs between modalities was used to gauge the practical implications of both errors combined. Results showed that ‘dry’ CT models were 0.36–0.45 mm larger than the ‘optical 3D models’ (deviations −0.27 mm to 2.86 mm). ‘Clinical’ CT models were 0.64–0.88 mm larger than the ‘optical 3D models’ (deviations −4.99 mm to 5.00 mm). The accuracies of the ROIs were variable and larger for ‘clinical’ CT models than for ‘dry’ CT models.
TEM and %TEM were generally within acceptable ranges for all ILDs, and no single modality was obviously more or less reliable than the others. For almost all ILDs, the average percentage difference between modalities was substantially larger than the average percentage difference between observers in ‘dry bone’ measurements alone. Our results show that the combined segmentation and landmark-recognition error can be substantial, which may preclude the use of ‘clinical’ CT scans as an alternative source of forensic anthropological reference data.
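TEM and %TEM follow standard definitions for paired measurements: TEM = sqrt(Σd²/2n), where d is the per-specimen difference between the two measurement series, and %TEM expresses TEM as a percentage of the grand mean. A sketch with hypothetical ILD values (the measurements below are invented for illustration):

```python
import numpy as np

def tem(m1, m2):
    """Technical error of measurement for two paired measurement series:
    sqrt(sum(d^2) / (2n)), with d the per-specimen difference."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return np.sqrt(np.sum(d ** 2) / (2 * d.size))

def percent_tem(m1, m2):
    """Relative TEM, as a percentage of the grand mean of all measurements."""
    grand_mean = np.mean(np.concatenate([np.asarray(m1, float),
                                         np.asarray(m2, float)]))
    return 100.0 * tem(m1, m2) / grand_mean

dry     = np.array([100.0, 150.0, 200.0])   # hypothetical ILDs in mm
virtual = np.array([101.0, 149.0, 202.0])
print(tem(dry, virtual))          # sqrt((1 + 1 + 4) / 6) = 1.0 mm
print(percent_tem(dry, virtual))  # ≈ 0.665 %
```

Comparing %TEM across modalities, as the study does, makes the measurement error interpretable independently of the absolute size of each ILD.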