13 research outputs found
A Learning-based Method for Online Adjustment of C-arm Cone-Beam CT Source Trajectories for Artifact Avoidance
During spinal fusion surgery, screws are placed close to critical nerves,
suggesting the need for highly accurate screw placement. Verifying screw
placement on high-quality tomographic imaging is essential. C-arm Cone-beam CT
(CBCT) provides intraoperative 3D tomographic imaging which would allow for
immediate verification and, if needed, revision. However, the reconstruction
quality attainable with commercial CBCT devices is insufficient, predominantly
due to severe metal artifacts in the presence of pedicle screws. These
artifacts arise from a mismatch between the true physics of image formation and
an idealized model thereof assumed during reconstruction. Prospectively
acquiring views onto anatomy that are least affected by this mismatch can,
therefore, improve reconstruction quality. We propose to adjust the C-arm CBCT
source trajectory during the scan to optimize reconstruction quality with
respect to a certain task, i.e., verification of screw placement. Adjustments
are performed on-the-fly using a convolutional neural network that regresses a
quality index for possible next views given the current x-ray image. Adjusting
the CBCT trajectory to acquire the recommended views results in non-circular
source orbits that avoid poor images, and thus, data inconsistencies. We
demonstrate that convolutional neural networks trained on realistically
simulated data are capable of predicting quality metrics that enable
scene-specific adjustments of the CBCT source trajectory. Using both
realistically simulated data and real CBCT acquisitions of a
semi-anthropomorphic phantom, we show that tomographic reconstructions of the
resulting scene-specific CBCT acquisitions exhibit improved image quality
particularly in terms of metal artifacts. Since the optimization objective is
implicitly encoded in a neural network, the proposed approach overcomes the
need for 3D information at run-time.
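The on-the-fly view selection described above can be sketched as a greedy acquisition loop. The quality regressor below (`predict_quality`) is a hypothetical stub using simple image statistics, not the paper's trained CNN; only the control flow is intended to mirror the idea of scoring candidate next views after each x-ray image:

```python
import numpy as np

def predict_quality(current_image, candidate_offsets):
    """Stub for the learned quality regressor: one score per candidate view.
    Image statistics stand in for the CNN prediction (illustrative only)."""
    base = float(np.mean(current_image))
    # Mildly penalize large out-of-plane detours (purely illustrative).
    return np.array([base - 0.01 * abs(d) for d in candidate_offsets])

def adjust_trajectory(acquire, n_views, candidate_offsets=(-4.0, 0.0, 4.0)):
    """Greedy on-the-fly scan: after each acquisition, steer the source to the
    out-of-plane offset with the highest predicted quality index."""
    angle, tilt = 0.0, 0.0
    trajectory = []
    for _ in range(n_views):
        image = acquire(angle, tilt)
        scores = predict_quality(image, candidate_offsets)
        tilt += candidate_offsets[int(np.argmax(scores))]
        angle += 360.0 / n_views  # keep the nominal angular sampling
        trajectory.append((angle, tilt))
    return trajectory
```

Because each decision uses only the current 2D image, no 3D information is needed at run-time, matching the property claimed above.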
Trainable Joint Bilateral Filters for Enhanced Prediction Stability in Low-dose CT
Low-dose computed tomography (CT) denoising algorithms aim to enable reduced
patient dose in routine CT acquisitions while maintaining high image quality.
Recently, deep learning (DL)-based methods were introduced, outperforming
conventional denoising algorithms on this task due to their high model
capacity. However, for the transition of DL-based denoising to clinical
practice, these data-driven approaches must generalize robustly beyond the seen
training data. We, therefore, propose a hybrid denoising approach consisting of
a set of trainable joint bilateral filters (JBFs) combined with a convolutional
DL-based denoising network to predict the guidance image. Our proposed
denoising pipeline combines the high model capacity enabled by DL-based feature
extraction with the reliability of the conventional JBF. The pipeline's ability
to generalize is demonstrated by training on abdomen CT scans without metal
implants and testing on abdomen scans with metal implants as well as on head CT
data. When embedding two well-established DL-based denoisers (RED-CNN/QAE) in
our pipeline, the denoising performance is improved by / (RMSE)
and / (PSNR) in regions containing metal and by /
(RMSE) and / (PSNR) on head CT data, compared to the respective
vanilla model. Concluding, the proposed trainable JBFs limit the error bound of
deep neural networks to facilitate the applicability of DL-based denoisers in
low-dose CT pipelines
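The hybrid design can be illustrated with a 1-D toy version of the joint bilateral filter, where the range weights come from a guidance signal (in the paper, the prediction of the DL-based network; here simply passed in as an array). This is a sketch of the classic JBF, not the paper's trainable implementation:

```python
import numpy as np

def joint_bilateral_1d(noisy, guidance, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Joint bilateral filter: intensity (range) weights are computed on the
    guidance signal, while the averaged values are drawn from `noisy`."""
    offs = np.arange(-radius, radius + 1)
    w_s = np.exp(-offs ** 2 / (2 * sigma_s ** 2))  # fixed spatial weights
    pad_n = np.pad(np.asarray(noisy, float), radius, mode="edge")
    pad_g = np.pad(np.asarray(guidance, float), radius, mode="edge")
    out = np.empty(len(noisy))
    for i in range(len(noisy)):
        win_n = pad_n[i:i + 2 * radius + 1]
        win_g = pad_g[i:i + 2 * radius + 1]
        w_r = np.exp(-(win_g - guidance[i]) ** 2 / (2 * sigma_r ** 2))
        w = w_s * w_r
        out[i] = np.sum(w * win_n) / np.sum(w)
    return out
```

Because the output is always a bounded, normalized average of measured values, the guidance network can only steer where averaging happens, which is the sense in which the JBF limits the error bound of the overall pipeline.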
A gradient-based approach to fast and accurate head motion compensation in cone-beam CT
Cone-beam computed tomography (CBCT) systems, with their portability, present
a promising avenue for direct point-of-care medical imaging, particularly in
critical scenarios such as acute stroke assessment. However, the integration of
CBCT into clinical workflows faces challenges, primarily linked to the long
scan duration, which results in patient motion during scanning and degrades
image quality in the reconstructed volumes. This paper introduces a novel
approach to CBCT motion estimation using a gradient-based optimization
algorithm, which leverages generalized derivatives of the backprojection
operator for cone-beam CT geometries. Building on that, a fully differentiable
target function is formulated which grades the quality of the current motion
estimate in reconstruction space. We drastically accelerate motion estimation,
yielding a 19-fold speed-up compared to existing methods. Additionally, we
investigate the architecture of networks used for quality metric regression and
propose predicting voxel-wise quality maps, favoring autoencoder-like
architectures over contracting ones. This modification improves gradient flow,
leading to more accurate motion estimation. The presented method is evaluated
through realistic experiments on head anatomy. It achieves a reduction in
reprojection error from an initial average of 3 mm to 0.61 mm after motion
compensation and consistently demonstrates superior performance compared to
existing approaches. The analytic Jacobian for the backprojection operation,
which is at the core of the proposed method, is made publicly available. In
summary, this paper contributes to the advancement of CBCT integration into
clinical workflows by proposing a robust motion estimation approach that
enhances efficiency and accuracy, addressing critical challenges in
time-sensitive scenarios.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
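As a toy analogue of the gradient-based estimation, the sketch below descends a target function that grades a candidate motion estimate. It uses a finite-difference gradient on a single 1-D shift parameter; the paper instead derives analytic Jacobians of the cone-beam backprojection operator and works on full rigid motion:

```python
import numpy as np

def quality(shift, proj, ref):
    """Target function: MSE between the motion-corrected projection and a
    reference, standing in for a quality grade in reconstruction space."""
    x = np.arange(len(ref), dtype=float)
    corrected = np.interp(x - shift, x, proj)
    return float(np.mean((corrected - ref) ** 2))

def estimate_shift(proj, ref, lr=50.0, steps=200, eps=1e-3):
    """Gradient descent on the shift parameter via central differences."""
    s = 0.0
    for _ in range(steps):
        g = (quality(s + eps, proj, ref) - quality(s - eps, proj, ref)) / (2 * eps)
        s -= lr * g
    return s
```

Replacing the finite difference with an analytic gradient of the full reconstruction chain is precisely what yields the speed-up reported above.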
Calibration by differentiation – Self‐supervised calibration for X‐ray microscopy using a differentiable cone‐beam reconstruction operator
High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, which are holes in the bone a few micrometres in size. Imaging living animals at that resolution, however, is extremely challenging and requires very sophisticated data processing converting the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality as it avoids cupping artefacts and decreases the difference in grey values between outer and inner bone by 68-94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel computed tomography systems.
This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and, hence, is an important step towards the goal of reducing the resolution limit of in vivo bone imaging to the single micrometre domain.
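The self-supervised calibration idea (tune a pre-reconstruction correction using only a quality metric on the result, with no calibration scans) reduces to a few lines in a toy 1-D setting. The quadratic cupping model and the flatness metric below are illustrative assumptions, not the paper's actual modules:

```python
import numpy as np

r = np.linspace(-1.0, 1.0, 101)
measured = np.ones_like(r) - 0.3 * r ** 2  # homogeneous phantom + cupping

def flatness_loss(b):
    """Self-supervisory metric: a homogeneous object should come out flat,
    so minimize the variance of the corrected profile."""
    corrected = measured + b * r ** 2      # trainable correction module
    return float(np.var(corrected))

# Calibrate the single parameter b by finite-difference gradient descent;
# a deep learning framework would backpropagate through the pipeline instead.
b, lr, eps = 0.0, 5.0, 1e-4
for _ in range(200):
    grad = (flatness_loss(b + eps) - flatness_loss(b - eps)) / (2 * eps)
    b -= lr * grad
```

The supervision signal here is the homogeneity assumption itself, not any known grey value, which is why no further calibration measurements are needed.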
Ultralow‐parameter denoising: trainable bilateral filter layers in computed tomography
Background
Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms.
Purpose
Most data-driven denoising techniques are based on deep neural networks, and therefore, contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms achieving state-of-the-art performance helps to minimize radiation dose while maintaining data integrity.
Methods
This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design.
Results
Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on x-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets.
Conclusions
Due to the extremely low number of trainable parameters with a well-defined effect, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
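To make the parameter economy concrete, here is a 1-D bilateral filter with exactly two parameters, one spatial and one intensity range (the paper's layer has three spatial parameters because it operates in 3-D). Selecting sigma_r by brute-force search against a clean reference stands in for the gradient-based training described above:

```python
import numpy as np

def bilateral_1d(signal, sigma_s, sigma_r, radius=4):
    """Classic bilateral filter on a 1-D profile: neighbors are weighted by
    spatial and intensity closeness, then averaged with normalization."""
    offs = np.arange(-radius, radius + 1)
    w_s = np.exp(-offs ** 2 / (2 * sigma_s ** 2))
    pad = np.pad(signal, radius, mode="edge")
    out = np.empty_like(signal)
    for i in range(signal.size):
        win = pad[i:i + 2 * radius + 1]
        w = w_s * np.exp(-(win - signal[i]) ** 2 / (2 * sigma_r ** 2))
        out[i] = np.sum(w * win) / np.sum(w)
    return out

# "Training" stand-in: pick sigma_r by brute-force search; the paper instead
# backpropagates analytic gradients through the filter, but the optimized
# quantity is the same single range parameter.
rng = np.random.default_rng(1)
clean = np.repeat([0.0, 1.0, 0.0], 40)     # piecewise-constant phantom line
noisy = clean + 0.1 * rng.standard_normal(clean.size)
candidates = [0.05, 0.1, 0.2, 0.4, 0.8]
mses = [np.mean((bilateral_1d(noisy, 2.0, s) - clean) ** 2) for s in candidates]
best_sigma_r = candidates[int(np.argmin(mses))]
```

Whatever value training picks, the operation remains a bilateral filter by construction, which is the design constraint the abstract emphasizes.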
On the Benefit of Dual-domain Denoising in a Self-supervised Low-dose CT Setting
Computed tomography (CT) is routinely used for three-dimensional non-invasive
imaging. Numerous data-driven image denoising algorithms were proposed to
restore image quality in low-dose acquisitions. However, considerably less
research investigates methods already intervening in the raw detector data due
to limited access to suitable projection data or correct reconstruction
algorithms. In this work, we present an end-to-end trainable CT reconstruction
pipeline that contains denoising operators in both the projection and the image
domain, which are optimized simultaneously without requiring ground-truth
high-dose CT data. Our experiments demonstrate that including an additional
projection denoising operator improved the overall denoising performance by
82.4-94.1%/12.5-41.7% (PSNR/SSIM) on abdomen CT and 1.5-2.9%/0.4-0.5%
(PSNR/SSIM) on XRM data relative to the low-dose baseline. We make our entire
helical CT reconstruction framework publicly available; it contains a raw
projection rebinning step to render helical projection data suitable for
differentiable fan-beam reconstruction operators and end-to-end learning.
Comment: This work has been submitted to the IEEE for possible publication.
Copyright may be transferred without notice, after which this version may no
longer be accessible.
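The dual-domain layout can be caricatured in 1-D: a cumulative sum plays the role of the forward projection and a finite difference the role of the (here trivially invertible) reconstruction operator. Both denoisers are fixed moving averages rather than trained networks, so this only illustrates the pipeline structure, not the paper's helical CT implementation:

```python
import numpy as np

def smooth(x, k=5):
    """Moving-average stand-in for a trainable denoising operator."""
    return np.convolve(x, np.ones(k) / k, mode="same")

def dual_domain_pipeline(noisy_proj):
    proj_dn = smooth(noisy_proj)           # projection-domain denoiser
    image = np.diff(proj_dn, prepend=0.0)  # toy "reconstruction" (inverse of cumsum)
    return smooth(image)                   # image-domain denoiser
```

Intervening before the reconstruction step suppresses detector noise before the inverse operator amplifies it, which is the intuition behind the measured benefit of the additional projection-domain operator.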
A learning-based method for online adjustment of C-arm Cone-beam CT source trajectories for artifact avoidance
Abstract
Purpose
During spinal fusion surgery, screws are placed close to critical nerves, suggesting the need for highly accurate screw placement. Verifying screw placement on high-quality tomographic imaging is essential. C-arm cone-beam CT (CBCT) provides intraoperative 3D tomographic imaging which would allow for immediate verification and, if needed, revision. However, the reconstruction quality attainable with commercial CBCT devices is insufficient, predominantly due to severe metal artifacts in the presence of pedicle screws. These artifacts arise from a mismatch between the true physics of image formation and an idealized model thereof assumed during reconstruction. Prospectively acquiring views onto anatomy that are least affected by this mismatch can, therefore, improve reconstruction quality.
Methods
We propose to adjust the C-arm CBCT source trajectory during the scan to optimize reconstruction quality with respect to a certain task, i.e., verification of screw placement. Adjustments are performed on-the-fly using a convolutional neural network that regresses a quality index over all possible next views given the current X-ray image. Adjusting the CBCT trajectory to acquire the recommended views results in non-circular source orbits that avoid poor images, and thus, data inconsistencies.
Results
We demonstrate that convolutional neural networks trained on realistically simulated data are capable of predicting quality metrics that enable scene-specific adjustments of the CBCT source trajectory. Using both realistically simulated data and real CBCT acquisitions of a semi-anthropomorphic phantom, we show that tomographic reconstructions of the resulting scene-specific CBCT acquisitions exhibit improved image quality particularly in terms of metal artifacts.
Conclusion
The proposed method is a step toward online patient-specific C-arm CBCT source trajectories that enable high-quality tomographic imaging in the operating room. Since the optimization objective is implicitly encoded in a neural network trained on large amounts of well-annotated projection images, the proposed approach overcomes the need for 3D information at run-time.
Exploring Epipolar Consistency Conditions for Rigid Motion Compensation in In-vivo X-ray Microscopy
Intravital X-ray microscopy (XRM) in preclinical mouse models is of vital
importance for the identification of microscopic structural pathological
changes in the bone which are characteristic of osteoporosis. The complexity of
this method stems from the requirement for high-quality 3D reconstructions of
the murine bones. However, respiratory motion and muscle relaxation lead to
inconsistencies in the projection data which result in artifacts in
uncompensated reconstructions. Motion compensation using epipolar consistency
conditions (ECC) has previously shown good performance in clinical CT settings.
Here, we explore whether such algorithms are suitable for correcting
motion-corrupted XRM data. Different rigid motion patterns are simulated and
the quality of the motion-compensated reconstructions is assessed. The method
is able to restore microscopic features for out-of-plane motion, but artifacts
remain for more realistic motion patterns including all six degrees of freedom
of rigid motion. Therefore, ECC is valuable for the initial alignment of the
projection data followed by further fine-tuning of motion parameters using a
reconstruction-based method.
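A toy redundancy check conveys the spirit of such consistency conditions: in parallel-beam geometry, opposing views are mirror images of each other, p_theta(s) = p_{theta+180}(-s). Rigid motion breaks this redundancy, and minimizing the mismatch recovers the motion. The paper uses epipolar consistency in cone-beam geometry; this 1-D parallel-beam analogue is only illustrative:

```python
import numpy as np

s = np.linspace(-1.0, 1.0, 201)
proj = np.exp(-((s - 0.2) / 0.1) ** 2)        # view at angle theta
opposing = np.exp(-((-s - 0.2) / 0.1) ** 2)   # view at theta + 180 degrees

true_shift = 0.15                             # simulated detector motion
corrupted = np.interp(s - true_shift, s, proj)

def mismatch(shift):
    """Consistency cost between the motion-corrected view and the mirrored
    opposing view; zero for consistent (motion-free) data."""
    restored = np.interp(s + shift, s, corrupted)
    return float(np.sum((restored - opposing[::-1]) ** 2))

candidates = np.linspace(-0.3, 0.3, 61)
estimated = candidates[int(np.argmin([mismatch(t) for t in candidates]))]
```

As in the abstract, such pairwise consistency costs constrain some motion components well, while others leave the cost nearly unchanged and need a reconstruction-based refinement.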
Noise2Contrast: Multi-Contrast Fusion Enables Self-Supervised Tomographic Image Denoising
Self-supervised image denoising techniques emerged as convenient methods that
allow training denoising models without requiring ground-truth noise-free data.
Existing methods usually optimize loss metrics that are calculated from
multiple noisy realizations of similar images, e.g., from neighboring
tomographic slices. However, those approaches fail to utilize the multiple
contrasts that are routinely acquired in medical imaging modalities like MRI or
dual-energy CT. In this work, we propose the new self-supervised training
scheme Noise2Contrast that combines information from multiple measured image
contrasts to train a denoising model. We stack denoising with domain-transfer
operators to utilize the independent noise realizations of different image
contrasts to derive a self-supervised loss. The trained denoising operator
achieves convincing quantitative and qualitative results, outperforming
state-of-the-art self-supervised methods by 4.7-11.0%/4.8-7.3% (PSNR/SSIM) on
brain MRI data and by 43.6-50.5%/57.1-77.1% (PSNR/SSIM) on dual-energy CT X-ray
microscopy data with respect to the noisy baseline. Our experiments on
different real measured data sets indicate that Noise2Contrast training
generalizes to other multi-contrast imaging modalities.
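The training scheme can be sketched in a toy 1-D setting: denoise contrast A, map it through a domain-transfer operator (assumed known and linear here), and compare against the independently noisy contrast B. The moving-average "denoiser" with a single blend weight is an illustrative stand-in for the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
clean_a = np.sin(np.linspace(0.0, 4.0 * np.pi, 256))
noisy_a = clean_a + 0.3 * rng.standard_normal(256)        # contrast A
noisy_b = 2.0 * clean_a + 0.3 * rng.standard_normal(256)  # contrast B, independent noise

def denoise(x, w):
    """Toy denoiser: blend between the input and a moving average."""
    smoothed = np.convolve(x, np.ones(5) / 5.0, mode="same")
    return w * smoothed + (1.0 - w) * x

def n2c_loss(w):
    """Noise2Contrast-style self-supervised loss: transferred denoised A is
    compared against the *noisy* measurement of contrast B."""
    return float(np.mean((2.0 * denoise(noisy_a, w) - noisy_b) ** 2))
```

Because the noise in B is independent of A, it cannot be predicted and only adds a constant floor to the loss, so minimizing the loss still drives the denoiser toward the clean signal without any noise-free target.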
Deep learning for terahertz image denoising in nondestructive historical document analysis
Abstract
Historical documents contain essential information about the past, including places, people, and events. Many of these valuable cultural artifacts cannot be examined further due to aging or external influences, as they are too fragile to be opened or turned over, so their rich contents remain hidden. Terahertz (THz) imaging is a nondestructive 3D imaging technique that can be used to reveal the hidden contents without damaging the documents. As noise and imaging artifacts are predominantly present in reconstructed images processed by standard THz reconstruction algorithms, this work aims to improve THz image quality with deep learning. To overcome the data scarcity problem in training a supervised deep learning model, an unsupervised deep learning network (CycleGAN) is first applied to generate paired noisy THz images from clean images (the clean images are produced by a handwriting generator). With such synthetic noisy-to-clean image pairs, a supervised deep learning model based on Pix2pixGAN is trained, which is effective in enhancing real noisy THz images. After Pix2pixGAN denoising, 99% of the characters written on one side of Xuan paper can be clearly recognized, while 61% of the characters written on one side of standard paper are sufficiently recognized. The average perceptual index of Pix2pixGAN-processed images is 16.83, which is very close to the average perceptual index of 16.19 for clean handwriting images. Our work has important value for THz-imaging-based nondestructive historical document analysis.