PYRO-NN: Python Reconstruction Operators in Neural Networks
Purpose: Recently, several attempts have been made to transfer deep learning
to medical image reconstruction. An increasing number of publications follow
the concept of embedding CT reconstruction as a known operator into a
neural network. However, most of the approaches presented lack an efficient CT
reconstruction framework fully integrated into deep learning environments. As a
result, many approaches are forced to use workarounds for problems that are
mathematically solvable without ambiguity. Methods: PYRO-NN is a generalized framework to
embed known operators into the prevalent deep learning framework Tensorflow.
The current status includes state-of-the-art parallel-, fan- and cone-beam
projectors and back-projectors accelerated with CUDA provided as Tensorflow
layers. On top of that, the framework provides a high-level Python API to conduct filtered back-projection (FBP)
and iterative reconstruction experiments with data from real CT systems.
Results: The framework provides all necessary algorithms and tools to design
end-to-end neural network pipelines with integrated CT reconstruction
algorithms. The high-level Python API allows simple use of the layers, as
known from Tensorflow. To demonstrate the capabilities of the layers, the
framework comes with three baseline experiments showing a cone-beam short scan
FDK reconstruction, a CT reconstruction filter learning setup, and a TV
regularized iterative reconstruction. All algorithms and tools are referenced
to a scientific publication and are compared to existing non-deep-learning
reconstruction frameworks. The framework is available as open-source software
at \url{https://github.com/csyben/PYRO-NN}. Conclusions: PYRO-NN comes with the
prevalent deep learning framework Tensorflow and allows setting up end-to-end
trainable neural networks in the medical image reconstruction context. We
believe that the framework will be a step towards reproducible research.
Comment: V1: Submitted to Medical Physics, 11 pages, 7 figures
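To illustrate the filter-learning baseline mentioned above, the following is a minimal sketch in plain Tensorflow/Keras, not the actual PYRO-NN API: a trainable frequency-domain reconstruction filter applied along the detector axis of a sinogram, initialized with an ideal ramp filter. The layer name, the initialization, and the omission of the back-projection step are illustrative assumptions.

```python
# Minimal sketch of a learnable reconstruction filter (not the PYRO-NN API).
# Assumptions: sinograms of shape (batch, num_angles, num_detectors), a
# ramp-filter initialization, and a separate differentiable back-projector
# (such as the layers PYRO-NN provides) applied afterwards.
import numpy as np
import tensorflow as tf

class LearnableRampFilter(tf.keras.layers.Layer):
    """Applies a trainable 1-D frequency-domain filter to each detector row."""

    def __init__(self, num_detectors, **kwargs):
        super().__init__(**kwargs)
        self.num_detectors = num_detectors

    def build(self, input_shape):
        # Initialize with the ideal ramp filter |f|; training can refine it.
        ramp = np.abs(np.fft.rfftfreq(self.num_detectors)).astype(np.float32)
        self.filter_weights = self.add_weight(
            name="filter_weights",
            shape=ramp.shape,
            initializer=lambda shape, dtype=None: tf.constant(ramp, dtype=dtype),
            trainable=True,
        )

    def call(self, sinogram):
        # Filter along the detector axis in the frequency domain.
        spectrum = tf.signal.rfft(sinogram)
        filtered = spectrum * tf.cast(self.filter_weights, tf.complex64)
        return tf.signal.irfft(filtered, fft_length=[self.num_detectors])
```

In an end-to-end pipeline, the filtered sinogram would then be passed to a differentiable back-projector layer so that gradients flow from the reconstructed image back into the filter weights.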
Projection image-to-image translation in hybrid X-ray/MR imaging
The potential benefit of hybrid X-ray and MR imaging in the interventional
environment is large due to the combination of fast imaging with high contrast
variety. However, a vast number of existing image enhancement methods require
the image information of both modalities to be present in the same domain. To
unlock this potential, we present a solution to image-to-image translation from
MR projections to corresponding X-ray projection images. The approach is based
on a state-of-the-art image generator network that is modified to fit the
specific application. Furthermore, we propose the inclusion of a gradient map
in the loss function to allow the network to emphasize high-frequency details
in image generation. Our approach is capable of creating X-ray projection
images with natural appearance. Additionally, our extensions show clear
improvement compared to the baseline method.
Comment: In proceedings of SPIE Medical Imaging 201
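The gradient map mentioned above can be realized in several ways; the following is a hedged sketch, assuming a simple formulation in which a pixel-wise L1 term is augmented with an L1 term on the image gradients of target and prediction. The function name and the relative weight lambda_grad are illustrative assumptions, not the exact loss from the paper.

```python
# Sketch of an L1 loss augmented with a gradient-map term (illustrative only).
# Inputs are 4-D tensors of shape (batch, height, width, channels).
import tensorflow as tf

def gradient_weighted_l1_loss(y_true, y_pred, lambda_grad=1.0):
    # Plain pixel-wise L1 term.
    l1 = tf.reduce_mean(tf.abs(y_true - y_pred))
    # Image gradients (dy, dx) of target and prediction; their L1 difference
    # penalizes mismatches in edges and other high-frequency details.
    dy_t, dx_t = tf.image.image_gradients(y_true)
    dy_p, dx_p = tf.image.image_gradients(y_pred)
    grad_term = (tf.reduce_mean(tf.abs(dy_t - dy_p))
                 + tf.reduce_mean(tf.abs(dx_t - dx_p)))
    return l1 + lambda_grad * grad_term
```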
Projection-to-Projection Translation for Hybrid X-ray and Magnetic Resonance Imaging
Hybrid X-ray and magnetic resonance (MR) imaging holds large potential for interventional medical imaging applications due to the broad variety of contrasts offered by MRI combined with the fast imaging of X-ray-based modalities. To fully utilize the potential of the vast number of existing image enhancement techniques, the corresponding information from both modalities must be present in the same domain. For image-guided interventional procedures, X-ray fluoroscopy has proven to be the modality of choice. Synthesizing one modality from the other is in this case an ill-posed problem due to ambiguous signals and overlapping structures in projective geometry. To take on these challenges, we present a learning-based solution to MR to X-ray projection-to-projection translation. We propose an image generator network that focuses on high representation capacity in higher-resolution layers to allow for accurate synthesis of fine details in the projection images. Additionally, a weighting scheme in the loss computation that favors high-frequency structures is proposed to focus on the important details and contours in projection imaging. The proposed extensions prove valuable in generating X-ray projection images with natural appearance. Our approach achieves a deviation from the ground truth of only 6% and a structural similarity measure of 0.913 ± 0.005. In particular, the high-frequency weighting assists in generating projection images with sharp appearance and reduces erroneously synthesized fine details.
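The high-frequency weighting scheme can likewise be sketched in code; the version below is an assumption-laden illustration, not the paper's exact formulation: a per-pixel weight map is derived from the gradient magnitude of the target projection so that edges and fine details contribute more to the loss. The high-pass operator choice, the normalization, and the parameter alpha are assumptions.

```python
# Illustrative sketch of a high-frequency-weighted reconstruction loss.
# Inputs are 4-D tensors of shape (batch, height, width, channels).
import tensorflow as tf

def high_frequency_weighted_loss(y_true, y_pred, alpha=1.0):
    # High-pass response of the target projection via its image gradients.
    dy, dx = tf.image.image_gradients(y_true)
    hf_map = tf.sqrt(tf.square(dy) + tf.square(dx))
    # Normalize per image and turn the response into a per-pixel weight map.
    hf_map = hf_map / (tf.reduce_max(hf_map, axis=[1, 2, 3], keepdims=True) + 1e-8)
    weights = 1.0 + alpha * hf_map
    # Weighted L1 between target and synthesized projection.
    return tf.reduce_mean(weights * tf.abs(y_true - y_pred))
```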
Fully-automatic CT data preparation for interventional X-ray skin dose simulation
Recently, deep learning (DL) has found its way into interventional X-ray skin dose
estimation. While its performance was found to be acceptable, even more
accurate results could be achieved if more data sets were available for
training. One possibility is to turn to computed tomography (CT) data sets.
Typically, CT scans can be mapped to tissue labels and
mass densities to obtain training data. However, care has to be taken to make
sure that the different clinical settings are properly accounted for. First,
the interventional environment is characterized by a wide variety of table setups
that are significantly different from the typical patient tables used in
conventional CT. This cannot be ignored, since tables play a crucial role in
sound skin dose estimation in an interventional setup, e.g., when the X-ray
source is directly underneath a patient (posterior-anterior view). Second, due
to interpolation errors, most CT scans do not facilitate a clean segmentation
of the skin border. As a solution to these problems, we applied connected
component labeling (CCL) and Canny edge detection to (a) robustly separate the
patient from the table and (b) to identify the outermost skin layer. Our
results show that these extensions enable fully-automatic, generalized
pre-processing of CT scans for further simulation of both skin dose and
corresponding X-ray projections.
Comment: 6 pages, 4 figures, Bildverarbeitung für die Medizin 2020, code will be accessible soon (url
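A rough sketch of the described preprocessing steps, assuming a scikit-image implementation on a single 2-D CT slice; the threshold, the largest-component heuristic, and the function names are illustrative assumptions, not the authors' code.

```python
# Illustrative sketch: separate the patient from the table via connected
# component labeling (CCL) and trace the outermost skin border with Canny
# edge detection. Threshold and largest-component heuristic are assumptions.
from skimage import feature, measure

def patient_mask_and_skin_edge(ct_slice_hu, body_threshold_hu=-300.0):
    # Binary mask of everything denser than air (patient plus table).
    foreground = ct_slice_hu > body_threshold_hu
    # Connected component labeling; here the largest component is assumed
    # to be the patient, separating it from the table components.
    labels = measure.label(foreground)
    regions = measure.regionprops(labels)
    largest = max(regions, key=lambda r: r.area)
    patient_mask = labels == largest.label
    # Canny edge detection on the patient mask yields the outermost skin layer.
    skin_edge = feature.canny(patient_mask.astype(float), sigma=1.0)
    return patient_mask, skin_edge
```

For the simulation use case, the patient mask would then be mapped to tissue labels and mass densities, while the detected skin border identifies the voxels relevant for skin dose estimation.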