Deep Boosted Regression for MR to CT Synthesis
Attenuation correction is an essential requirement of positron emission
tomography (PET) image reconstruction to allow for accurate quantification.
However, attenuation correction is particularly challenging for PET-MRI as
neither PET nor magnetic resonance imaging (MRI) can directly image tissue
attenuation properties. MRI-based computed tomography (CT) synthesis has been
proposed as an alternative to physics based and segmentation-based approaches
that assign a population-based tissue density value in order to generate an
attenuation map. We propose a novel deep fully convolutional neural network
that generates synthetic CTs in a recursive manner by gradually reducing the
residuals of the previous network, increasing the overall accuracy and
generalisability, while keeping the number of trainable parameters within
reasonable limits. The model is trained on a database of 20 pre-acquired MRI/CT
pairs and a four-fold random bootstrapped validation with a 80:20 split is
performed. Quantitative results show that the proposed framework outperforms a
state-of-the-art atlas-based approach, decreasing the Mean Absolute Error (MAE)
from 131 HU to 68 HU for the synthetic CTs and reducing the PET reconstruction
error from 14.3% to 7.2%.
Comment: Accepted at SASHIMI201
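The recursive refinement idea in this abstract can be illustrated with a minimal sketch: each stage predicts the residual left by the previous stage's synthetic CT estimate, and the estimates are summed. The `boosted_synthesis` function and the toy stand-in stages are hypothetical, not the paper's actual network.

```python
import numpy as np

def boosted_synthesis(mri, stages):
    """Recursively refine a synthetic CT: each stage predicts the
    residual of the previous estimate (hypothetical illustration)."""
    sct = np.zeros_like(mri)           # initial synthetic CT estimate
    for stage in stages:
        residual = stage(mri, sct)     # stage models the remaining error
        sct = sct + residual           # gradually reduce the residual
    return sct

# Toy stand-in "stages": each moves the estimate halfway to a target image.
target = np.full((4, 4), 50.0)
stage = lambda mri, sct: 0.5 * (target - sct)
sct = boosted_synthesis(np.zeros((4, 4)), [stage, stage, stage])
# After three stages the estimate is 50 * (1 - 0.5**3) = 43.75 everywhere.
```

Each pass shrinks the residual without a single stage needing to be large, which is the stated motivation for keeping the trainable parameter count within reasonable limits.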
Deep learning for MRI-based CT synthesis: a comparison of MRI sequences and neural network architectures
Synthetic computed tomography (CT) images
derived from magnetic resonance images (MRI) are of interest
for radiotherapy planning and positron emission tomography
(PET) attenuation correction. In recent years, deep learning
implementations have demonstrated improvement over atlas-based and
segmentation-based methods. Nevertheless, several open questions remain to be
addressed, such as which MRI sequences and neural network architectures
perform best. In this
work, we compared the performance of different combinations
of two common MRI sequences (T1- and T2-weighted), and
three state-of-the-art neural networks designed for medical
image processing (Vnet, HighRes3dNet and ScaleNet). The
experiments were conducted on brain datasets from a public
database. Our results suggest that T1 images perform better
than T2, but the results further improve when combining both
sequences. The lowest mean absolute error over the entire head
(MAE = 101.76 ± 10.4 HU) was achieved by combining T1 and T2
scans with HighRes3dNet. All tested deep learning models
achieved significantly lower MAE (p < 0.01) than a well-known
atlas-based method.
This work was supported by the Spanish Government grants TEC2016-79884-C2 and RTC-2016-5186-1, and by the European Union through the European Regional Development Fund (ERDF).
Larroza, A.; Moliner, L.; Álvarez-Gómez, JM.; Oliver-Gil, S.; Espinós-Morató, H.; Vergara-Díaz, M.; Rodríguez-Álvarez, MJ. (2019). Deep learning for MRI-based CT synthesis: a comparison of MRI sequences and neural network architectures. IEEE. 1-4. https://doi.org/10.1109/NSS/MIC42101.2019.9060051
Registration of serial sections: An evaluation method based on distortions of the ground truths
Registration of histological serial sections is a challenging task. Serial
sections exhibit distortions and damage from sectioning. Missing information on
how the tissue looked before cutting makes a realistic validation of 2D
registrations extremely difficult.
This work proposes methods for ground-truth-based evaluation of
registrations. Firstly, we present a methodology to generate test data for
registrations. We distort an innately registered image stack in a manner
similar to the cutting distortion of serial sections. Test cases are generated
from existing 3D data sets, so the ground truth is known. Secondly, our test
case generation enables evaluation of the registrations against known ground
truths. Our methodology for such an evaluation distinguishes this
work from other approaches. Both under- and over-registration become evident in
our evaluations. We also survey existing validation efforts.
We present a full-series evaluation across six different registration methods
applied to our distorted 3D data sets of animal lungs. Our distorted and ground
truth data sets are made publicly available.
Comment: Supplemental data available under https://zenodo.org/record/428244
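The test-case generation idea described above can be sketched as follows: apply independent random in-plane distortions to each slice of an already-registered 3D stack, so the undistorted stack remains the known ground truth. Here each slice is simply translated; the paper's actual distortion model and function names are not reproduced, and `distort_stack` is a hypothetical illustration.

```python
import numpy as np
from scipy.ndimage import shift

def distort_stack(stack, max_shift=3.0, seed=0):
    """Apply an independent random in-plane translation to each slice of
    a registered 3D stack, mimicking per-section cutting distortion."""
    rng = np.random.default_rng(seed)
    out = np.empty_like(stack)
    for i, sl in enumerate(stack):
        dy, dx = rng.uniform(-max_shift, max_shift, size=2)
        out[i] = shift(sl, (dy, dx), order=1, mode="nearest")
    return out

# Toy registered stack: a small square in every slice.
stack = np.zeros((3, 8, 8))
stack[:, 3:5, 3:5] = 1.0
distorted = distort_stack(stack)
```

A 2D registration method can then be run on the distorted slices and scored against the original, undistorted stack, making both under- and over-registration measurable.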