
    Trainable Joint Bilateral Filters for Enhanced Prediction Stability in Low-dose CT

    Low-dose computed tomography (CT) denoising algorithms aim to enable reduced patient dose in routine CT acquisitions while maintaining high image quality. Recently, deep learning (DL)-based methods were introduced, outperforming conventional denoising algorithms on this task due to their high model capacity. However, for the transition of DL-based denoising to clinical practice, these data-driven approaches must generalize robustly beyond the seen training data. We therefore propose a hybrid denoising approach consisting of a set of trainable joint bilateral filters (JBFs) combined with a convolutional DL-based denoising network that predicts the guidance image. Our proposed denoising pipeline combines the high model capacity enabled by DL-based feature extraction with the reliability of the conventional JBF. The pipeline's ability to generalize is demonstrated by training on abdomen CT scans without metal implants and testing on abdomen scans with metal implants as well as on head CT data. When embedding two well-established DL-based denoisers (RED-CNN/QAE) in our pipeline, the denoising performance is improved by 10%/82% (RMSE) and 3%/81% (PSNR) in regions containing metal, and by 6%/78% (RMSE) and 2%/4% (PSNR) on head CT data, compared to the respective vanilla model. In conclusion, the proposed trainable JBFs limit the error bound of the deep neural networks and thereby facilitate the applicability of DL-based denoisers in low-dose CT pipelines.
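
    The following is a rough sketch of how such a hybrid pipeline could be wired up in PyTorch: a CNN predicts the guidance image, and a joint bilateral filter with trainable bandwidths produces the final output. The stand-in CNN, window size, and parameter initializations are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class JointBilateralFilter2d(nn.Module):
    """Joint bilateral filter with trainable spatial/range bandwidths."""
    def __init__(self, window: int = 5):
        super().__init__()
        self.window = window
        self.sigma_spatial = nn.Parameter(torch.tensor(2.0))  # trainable
        self.sigma_range = nn.Parameter(torch.tensor(0.1))    # trainable

    def forward(self, x: torch.Tensor, guide: torch.Tensor) -> torch.Tensor:
        # x, guide: (B, 1, H, W)
        pad = self.window // 2
        # Extract all window*window shifted neighborhoods per pixel.
        x_n = F.unfold(F.pad(x, [pad] * 4, mode="reflect"), self.window)
        g_n = F.unfold(F.pad(guide, [pad] * 4, mode="reflect"), self.window)
        # Spatial Gaussian over pixel offsets within the window.
        ax = torch.arange(self.window, device=x.device) - pad
        dy, dx = torch.meshgrid(ax, ax, indexing="ij")
        w_spatial = torch.exp(-(dx ** 2 + dy ** 2).float()
                              / (2 * self.sigma_spatial ** 2)).view(1, -1, 1)
        # Range Gaussian on guidance-image differences (joint filtering).
        c = self.window ** 2 // 2
        w_range = torch.exp(-(g_n - g_n[:, c:c + 1]) ** 2
                            / (2 * self.sigma_range ** 2))
        w = w_spatial * w_range
        out = (w * x_n).sum(1, keepdim=True) / w.sum(1, keepdim=True)
        return out.view_as(x)

# Guidance prediction by any DL denoiser (RED-CNN/QAE in the paper); here a
# tiny stand-in CNN keeps the sketch self-contained.
cnn = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(16, 1, 3, padding=1))
jbf = JointBilateralFilter2d()
noisy = torch.randn(1, 1, 64, 64)
denoised = jbf(noisy, guide=cnn(noisy))  # CNN output steers the filter
```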

    A gradient-based approach to fast and accurate head motion compensation in cone-beam CT

    Cone-beam computed tomography (CBCT) systems, with their portability, present a promising avenue for direct point-of-care medical imaging, particularly in critical scenarios such as acute stroke assessment. However, the integration of CBCT into clinical workflows faces challenges, primarily its long scan duration, which leads to patient motion during scanning and degrades image quality in the reconstructed volumes. This paper introduces a novel approach to CBCT motion estimation using a gradient-based optimization algorithm, which leverages generalized derivatives of the backprojection operator for cone-beam CT geometries. Building on that, a fully differentiable target function is formulated which grades the quality of the current motion estimate in reconstruction space. We drastically accelerate motion estimation, yielding a 19-fold speed-up compared to existing methods. Additionally, we investigate the architecture of networks used for quality metric regression and propose predicting voxel-wise quality maps, favoring autoencoder-like architectures over contracting ones. This modification improves gradient flow, leading to more accurate motion estimation. The presented method is evaluated through realistic experiments on head anatomy. It achieves a reduction in reprojection error from an initial average of 3 mm to 0.61 mm after motion compensation and consistently demonstrates superior performance compared to existing approaches. The analytic Jacobian for the backprojection operation, which is at the core of the proposed method, is made publicly available. In summary, this paper contributes to the advancement of CBCT integration into clinical workflows by proposing a robust motion estimation approach that enhances efficiency and accuracy, addressing critical challenges in time-sensitive scenarios.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
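
    A minimal sketch of the gradient-based estimation loop: per-view motion parameters are optimized by differentiating a reconstruction-space quality metric through a differentiable backprojection. The toy 1D "backprojection" and total-variation metric below are simplified stand-ins so the sketch runs end to end; they are not the paper's operators or learned quality maps.

```python
import torch

n_views, n_det, n_vox = 180, 64, 64

def toy_backproject(projections, shifts):
    # Toy differentiable "backprojection": each view is translated by its
    # motion parameter (a 1D detector shift) and accumulated. A real CBCT
    # operator maps 2D projections into a 3D volume instead.
    det = torch.arange(n_det, dtype=torch.float32)
    recon = torch.zeros(n_vox)
    for p, s in zip(projections, shifts):
        # Linear interpolation of the shifted view keeps gradients w.r.t. s.
        pos = det - s
        i0 = pos.floor().clamp(0, n_det - 2).long()
        frac = (pos - i0.float()).clamp(0, 1)
        recon = recon + (1 - frac) * p[i0] + frac * p[i0 + 1]
    return recon / n_views

def quality(recon):
    # Reconstruction-space target function; the paper regresses a learned
    # voxel-wise quality map, replaced here by total variation for brevity.
    return (recon[1:] - recon[:-1]).abs().sum()

projections = torch.rand(n_views, n_det)           # motion-corrupted data
motion = torch.zeros(n_views, requires_grad=True)  # per-view motion estimate
opt = torch.optim.Adam([motion], lr=0.1)

for step in range(100):
    opt.zero_grad()
    loss = quality(toy_backproject(projections, motion))
    loss.backward()  # gradients flow through the backprojection
    opt.step()
```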

    Calibration by differentiation – Self-supervised calibration for X-ray microscopy using a differentiable cone-beam reconstruction operator

    High-resolution X-ray microscopy (XRM) is gaining interest for biological investigations of extremely small-scale structures. XRM imaging of bones in living mice could provide new insights into the emergence and treatment of osteoporosis by observing osteocyte lacunae, which are holes in the bone a few micrometres in size. Imaging living animals at that resolution, however, is extremely challenging and requires very sophisticated data processing to convert the raw XRM detector output into reconstructed images. This paper presents an open-source, differentiable reconstruction pipeline for XRM data which analytically computes the final image from the raw measurements. In contrast to most proprietary reconstruction software, it offers the user full control over each processing step and, additionally, makes the entire pipeline deep learning compatible by ensuring differentiability. This allows fitting trainable modules both before and after the actual reconstruction step in a purely data-driven way using the gradient-based optimizers of common deep learning frameworks. The value of such differentiability is demonstrated by calibrating the parameters of a simple cupping correction module operating on the raw projection images using only a self-supervisory quality metric based on the reconstructed volume and no further calibration measurements. The retrospective calibration directly improves image quality as it avoids cupping artefacts and decreases the difference in grey values between outer and inner bone by 68–94%. Furthermore, it makes the reconstruction process entirely independent of the XRM manufacturer and paves the way to explore modern deep learning reconstruction methods for arbitrary XRM and, potentially, other flat-panel computed tomography systems. This exemplifies how differentiable reconstruction can be leveraged in the context of XRM and, hence, is an important step towards the goal of reducing the resolution limit of in vivo bone imaging to the single micrometre domain.
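
    A minimal sketch of the "calibration by differentiation" idea: a trainable correction applied to the raw projections is fitted purely through a self-supervised quality metric evaluated on the reconstruction. The polynomial cupping model, the toy reconstruction operator, and the flatness metric are illustrative assumptions, not the paper's pipeline.

```python
import torch

def cupping_correction(proj, coeffs):
    # Pixel-wise polynomial remapping of raw intensities; coeffs are the
    # trainable calibration parameters.
    return coeffs[0] * proj + coeffs[1] * proj ** 2 + coeffs[2] * proj ** 3

def toy_reconstruct(proj):
    # Stand-in for the differentiable cone-beam reconstruction operator:
    # an average over views, so gradients reach the projection domain.
    return proj.mean(dim=0)

def self_supervised_metric(vol):
    # Cupping appears as a radial bias: penalize the grey-value difference
    # between the outer and inner parts of the (1D toy) volume.
    n = vol.shape[0]
    inner = vol[n // 4: 3 * n // 4]
    outer = torch.cat([vol[: n // 4], vol[3 * n // 4:]])
    return (inner.mean() - outer.mean()) ** 2

proj = torch.rand(180, 128)  # raw XRM projections (toy data)
coeffs = torch.tensor([1.0, 0.0, 0.0], requires_grad=True)
opt = torch.optim.Adam([coeffs], lr=1e-2)

for step in range(200):
    opt.zero_grad()
    vol = toy_reconstruct(cupping_correction(proj, coeffs))
    loss = self_supervised_metric(vol)  # no calibration scans required
    loss.backward()
    opt.step()
```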

    Identification and analysis of the secretome of plant pathogenic fungi reveals lifestyle adaptation

    The secretory proteome plays an important role in the pathogenesis of phytopathogenic fungi. However, the relationship between the large-scale secretome of phytopathogenic fungi and their lifestyle is not fully understood. In the present study, the secretomes of 150 plant pathogenic fungi were predicted and the characteristics associated with different lifestyles were investigated. In total, 94,974 secreted proteins (SPs) were predicted from these fungi, with the number of SPs per species ranging from 64 to 1,662. Among these fungi, hemibiotrophic fungi had the highest number (average of 970) and proportion (7.1%) of SPs. Functional annotation showed that hemibiotrophic and necrotrophic fungi, unlike biotrophic and symbiotic fungi, contained many more carbohydrate-active enzymes, especially polysaccharide lyases and carbohydrate esterases. Furthermore, core and lifestyle-specific SP orthogroups were identified. Twenty-seven core orthogroups contained 16% of the total SPs, and their annotated motif functions included serine carboxypeptidase, carboxylesterase and asparaginase. In contrast, 97 lifestyle-specific orthogroups contained only 1% of the total SPs, with diverse functions such as PAN_AP in hemibiotroph-specific orthogroups and flavin monooxygenases in necrotroph-specific ones. Moreover, obligate biotrophic fungi had the largest number of effectors (average of 150), followed by hemibiotrophic fungi (average of 120). Among these effectors, 4,155 had known functional annotation, with pectin lyase being the most common function among the annotated effectors. In addition, 32 sets of RNA-Seq data on pathogen-host interactions were collected; the expression levels of SPs were higher than those of non-SPs, and effector genes were expressed more highly in biotrophic and hemibiotrophic fungi than in necrotrophic fungi, while secretase genes were highly expressed in necrotrophic fungi. Finally, the secretory activity of five predicted SPs from Setosphaeria turcica was experimentally verified. In conclusion, our results provide a foundation for the study of pathogen-host interactions and help us understand fungal lifestyle adaptation.
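
    The per-species counts and proportions quoted above are straightforward to tabulate once SP predictions exist. A minimal sketch of that summary step follows; the tab-separated file name and column layout are assumptions, since real pipelines derive such tables from signal-peptide and transmembrane-domain predictors.

```python
import csv
from collections import defaultdict

totals = defaultdict(int)    # proteins per species
secreted = defaultdict(int)  # predicted SPs per species

# Assumed columns: species, protein_id, is_secreted ("yes"/"no")
with open("secretome_predictions.tsv", newline="") as fh:
    for species, _protein, flag in csv.reader(fh, delimiter="\t"):
        totals[species] += 1
        secreted[species] += flag == "yes"

for species in sorted(totals):
    frac = 100 * secreted[species] / totals[species]
    print(f"{species}\t{secreted[species]} SPs\t{frac:.1f}% of proteome")
```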

    On the Benefit of Dual-domain Denoising in a Self-supervised Low-dose CT Setting

    Computed tomography (CT) is routinely used for three-dimensional non-invasive imaging. Numerous data-driven image denoising algorithms have been proposed to restore image quality in low-dose acquisitions. However, considerably less research investigates methods that intervene already in the raw detector data, due to limited access to suitable projection data or correct reconstruction algorithms. In this work, we present an end-to-end trainable CT reconstruction pipeline that contains denoising operators in both the projection and the image domain, which are optimized simultaneously without requiring ground-truth high-dose CT data. Our experiments demonstrate that including an additional projection denoising operator improved the overall denoising performance by 82.4-94.1%/12.5-41.7% (PSNR/SSIM) on abdomen CT and 1.5-2.9%/0.4-0.5% (PSNR/SSIM) on XRM data relative to the low-dose baseline. We make our entire helical CT reconstruction framework publicly available; it contains a raw projection rebinning step to render helical projection data suitable for differentiable fan-beam reconstruction operators and end-to-end learning.
    Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
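
    A minimal sketch of the dual-domain layout: one denoiser acts on raw projections, a differentiable reconstruction links the domains, and a second denoiser acts on the image, with both optimized jointly. The toy reconstruction operator and the Noise2Noise-style loss between two noisy realizations are illustrative assumptions, not the paper's exact self-supervised objective.

```python
import torch
import torch.nn as nn

proj_denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                              nn.Conv2d(16, 1, 3, padding=1))
img_denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                             nn.Conv2d(16, 1, 3, padding=1))

def toy_reconstruct(proj):
    # Stand-in for a differentiable fan-beam reconstruction operator that
    # lets gradients flow from image space back into projection space.
    return proj.mean(dim=2, keepdim=True).expand(-1, -1, 64, -1)

opt = torch.optim.Adam([*proj_denoiser.parameters(),
                        *img_denoiser.parameters()], lr=1e-4)

noisy_a = torch.rand(1, 1, 360, 64)  # two independent low-dose realizations
noisy_b = torch.rand(1, 1, 360, 64)

for step in range(100):
    opt.zero_grad()
    recon = img_denoiser(toy_reconstruct(proj_denoiser(noisy_a)))
    target = toy_reconstruct(noisy_b)  # second noise realization as target
    loss = nn.functional.mse_loss(recon, target)
    loss.backward()  # both denoising operators update simultaneously
    opt.step()
```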

    Ultralow-parameter denoising: trainable bilateral filter layers in computed tomography

    Background: Computed tomography (CT) is widely used as an imaging tool to visualize three-dimensional structures with expressive bone-soft tissue contrast. However, CT resolution can be severely degraded through low-dose acquisitions, highlighting the importance of effective denoising algorithms.
    Purpose: Most data-driven denoising techniques are based on deep neural networks and therefore contain hundreds of thousands of trainable parameters, making them incomprehensible and prone to prediction failures. Developing understandable and robust denoising algorithms that achieve state-of-the-art performance helps to minimize radiation dose while maintaining data integrity.
    Methods: This work presents an open-source CT denoising framework based on the idea of bilateral filtering. We propose a bilateral filter that can be incorporated into any deep learning pipeline and optimized in a purely data-driven way by calculating the gradient flow toward its hyperparameters and its input. Denoising in pure image-to-image pipelines and across different domains, such as raw detector data and reconstructed volume, using a differentiable backprojection layer, is demonstrated. In contrast to other models, our bilateral filter layer consists of only four trainable parameters and constrains the applied operation to follow the traditional bilateral filter algorithm by design.
    Results: Although only using three spatial parameters and one intensity range parameter per filter layer, the proposed denoising pipelines can compete with deep state-of-the-art denoising architectures with several hundred thousand parameters. Competitive denoising performance is achieved on X-ray microscope bone data and the 2016 Low Dose CT Grand Challenge data set. We report structural similarity index measures of 0.7094 and 0.9674 and peak signal-to-noise ratio values of 33.17 and 43.07 on the respective data sets.
    Conclusions: Due to the extremely low number of trainable parameters with well-defined effects, prediction reliability and data integrity are guaranteed at any time in the proposed pipelines, in contrast to most other deep learning-based denoising architectures.
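
    For reference, a standard formulation of the 3D bilateral filter operation that such a layer constrains itself to (conventional notation, not copied from the paper); the four trainable parameters are exactly the three spatial bandwidths and the one intensity range bandwidth named in the results:

```latex
\hat{I}(\mathbf{p}) = \frac{1}{w(\mathbf{p})}
  \sum_{\mathbf{q}\in\mathcal{N}(\mathbf{p})}
  g_{\sigma_x}(p_x - q_x)\, g_{\sigma_y}(p_y - q_y)\, g_{\sigma_z}(p_z - q_z)\,
  g_{\sigma_r}\!\big(I(\mathbf{p}) - I(\mathbf{q})\big)\, I(\mathbf{q}),
\qquad
g_{\sigma}(t) = \exp\!\left(-\frac{t^2}{2\sigma^2}\right),
```

    where w(p) is the sum of the same weights for normalization. The gradient flow toward hyperparameters and input mentioned above corresponds to the derivatives of Î with respect to σx, σy, σz, σr and with respect to I itself.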

    Exploring Epipolar Consistency Conditions for Rigid Motion Compensation in In-vivo X-ray Microscopy

    Intravital X-ray microscopy (XRM) in preclinical mouse models is of vital importance for the identification of microscopic structural pathological changes in the bone which are characteristic of osteoporosis. The complexity of this method stems from the requirement for high-quality 3D reconstructions of the murine bones. However, respiratory motion and muscle relaxation lead to inconsistencies in the projection data which result in artifacts in uncompensated reconstructions. Motion compensation using epipolar consistency conditions (ECC) has previously shown good performance in clinical CT settings. Here, we explore whether such algorithms are suitable for correcting motion-corrupted XRM data. Different rigid motion patterns are simulated and the quality of the motion-compensated reconstructions is assessed. The method is able to restore microscopic features for out-of-plane motion, but artifacts remain for more realistic motion patterns including all six degrees of freedom of rigid motion. Therefore, ECC is valuable for the initial alignment of the projection data, followed by further fine-tuning of motion parameters using a reconstruction-based method.
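
    A minimal sketch of how such ECC-based compensation is typically posed: per-projection rigid motion parameters are searched so that pairwise epipolar consistency improves. The metric below is a placeholder stub only; a real implementation evaluates Grangeat-based consistency between redundant line integrals of projection pairs under the calibrated geometry.

```python
import numpy as np
from scipy.optimize import minimize

n_views = 20
projections = np.random.rand(n_views, 64, 64)  # toy projection stack

def ecc_pair_inconsistency(proj_i, proj_j, params_i, params_j):
    # Placeholder: stands in for comparing redundant line integrals along
    # corresponding epipolar lines of views i and j (Grangeat consistency).
    return np.sum((params_i - params_j) ** 2)  # toy smoothness surrogate

def total_inconsistency(flat_params):
    params = flat_params.reshape(n_views, 6)  # 6 DoF rigid motion per view
    cost = 0.0
    for i in range(n_views - 1):              # neighboring view pairs
        cost += ecc_pair_inconsistency(projections[i], projections[i + 1],
                                       params[i], params[i + 1])
    return cost

x0 = np.zeros(n_views * 6)
result = minimize(total_inconsistency, x0, method="Powell",
                  options={"maxiter": 50})
motion_estimate = result.x.reshape(n_views, 6)
```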

    Noise2Contrast: Multi-Contrast Fusion Enables Self-Supervised Tomographic Image Denoising

    Self-supervised image denoising techniques emerged as convenient methods that allow training denoising models without requiring ground-truth noise-free data. Existing methods usually optimize loss metrics that are calculated from multiple noisy realizations of similar images, e.g., from neighboring tomographic slices. However, those approaches fail to utilize the multiple contrasts that are routinely acquired in medical imaging modalities like MRI or dual-energy CT. In this work, we propose the new self-supervised training scheme Noise2Contrast that combines information from multiple measured image contrasts to train a denoising model. We stack denoising with domain-transfer operators to utilize the independent noise realizations of different image contrasts to derive a self-supervised loss. The trained denoising operator achieves convincing quantitative and qualitative results, outperforming state-of-the-art self-supervised methods by 4.7-11.0%/4.8-7.3% (PSNR/SSIM) on brain MRI data and by 43.6-50.5%/57.1-77.1% (PSNR/SSIM) on dual-energy CT X-ray microscopy data with respect to the noisy baseline. Our experiments on different real measured data sets indicate that Noise2Contrast training generalizes to other multi-contrast imaging modalities.
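
    A minimal sketch of the Noise2Contrast training scheme: a denoiser is stacked with a domain-transfer operator so the denoised contrast A can be compared against the independently noisy contrast B. Both modules here are tiny stand-in networks, and the pairing of MRI-like contrasts is an illustrative assumption.

```python
import torch
import torch.nn as nn

denoiser = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 1, 3, padding=1))
# Maps image contrast A to image contrast B (e.g., between MRI contrasts).
domain_transfer = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                nn.Conv2d(16, 1, 3, padding=1))

opt = torch.optim.Adam([*denoiser.parameters(),
                        *domain_transfer.parameters()], lr=1e-4)

contrast_a = torch.rand(4, 1, 64, 64)  # noisy contrast A
contrast_b = torch.rand(4, 1, 64, 64)  # same anatomy, independent noise

for step in range(100):
    opt.zero_grad()
    pred_b = domain_transfer(denoiser(contrast_a))
    # Independent noise in contrast B cannot be predicted from contrast A,
    # so minimizing this loss drives the denoiser toward the clean signal.
    loss = nn.functional.mse_loss(pred_b, contrast_b)
    loss.backward()
    opt.step()

denoised = denoiser(contrast_a)  # inference uses the denoiser alone
```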

    A Hierarchical Framework for Design Space Exploration and Optimization of TTP-Based Distributed Embedded Systems

    Time-triggered protocol (TTP) is a time-division multiple access (TDMA)-based bus protocol designed for use in safety-critical avionics and automotive distributed embedded systems. Design space exploration (DSE) for TTP-based distributed embedded systems involves searching through a vast design space of possible task-to-CPU mappings, task/message schedules, and bus access configurations to achieve certain design objectives. In this paper, we present an efficient two-level hierarchical DSE framework for TTP-based distributed embedded systems, with the objective of minimizing the total bus utilization while meeting an end-to-end deadline constraint. Logic-based Benders decomposition (LBBD) is used to divide the problem into a master problem of mapping tasks to CPU nodes to minimize the total bus utilization, solved with a satisfiability modulo theories (SMT) solver, and a subproblem of finding a feasible bus access configuration and task/message schedule under an end-to-end deadline constraint for a given task-to-CPU mapping, solved with a constraint programming (CP) solver. Performance evaluation results show that our approach is scalable to problems of realistic size.
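
    A minimal skeleton of an LBBD loop of this kind, using the z3 SMT solver for the master task-to-CPU mapping and a placeholder for the CP schedulability subproblem. The message costs, the bus-utilization model, and cp_schedule_feasible are illustrative assumptions, not the paper's formulation; real LBBD also uses stronger cuts than the no-good cut shown here.

```python
from z3 import Int, Optimize, If, Sum, Or, sat

n_tasks, n_cpus = 6, 3
msg_cost = [[0, 2, 1, 0, 3, 0],  # bus load induced when tasks i and j
            [2, 0, 0, 1, 0, 2],  # communicate across different CPUs
            [1, 0, 0, 0, 2, 0],
            [0, 1, 0, 0, 0, 1],
            [3, 0, 2, 0, 0, 0],
            [0, 2, 0, 1, 0, 0]]

def cp_schedule_feasible(mapping):
    # Placeholder for the CP subproblem: build TDMA slots and task/message
    # schedules for this mapping and check the end-to-end deadline. Here we
    # (arbitrarily) reject mappings that overload a single CPU.
    return max(mapping.count(c) for c in range(n_cpus)) <= 3

cpu = [Int(f"cpu_{t}") for t in range(n_tasks)]
master = Optimize()
for v in cpu:
    master.add(0 <= v, v < n_cpus)
# Bus utilization: only inter-CPU communication occupies the TTP bus.
bus = Sum([If(cpu[i] != cpu[j], msg_cost[i][j], 0)
           for i in range(n_tasks) for j in range(i + 1, n_tasks)])
master.minimize(bus)

while master.check() == sat:
    model = master.model()
    mapping = [model[v].as_long() for v in cpu]
    if cp_schedule_feasible(mapping):
        print("feasible mapping:", mapping)
        break
    # Benders cut: forbid this exact mapping and re-solve the master.
    master.add(Or([v != m for v, m in zip(cpu, mapping)]))
```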