
    CTprintNet: An Accurate and Stable Deep Unfolding Approach for Few-View CT Reconstruction

    In this paper, we propose a new deep learning approach based on unfolded neural networks for the reconstruction of X-ray computed tomography images from few views. We start from a model-based approach in a compressed sensing framework, described by the minimization of a least-squares function plus an edge-preserving prior on the solution. In particular, the proposed network automatically estimates the internal parameters of a proximal interior point method for the solution of the optimization problem. Numerical tests performed on both a synthetic and a real dataset show the effectiveness of the framework in terms of accuracy and robustness to noise on the input sinogram when compared to other data-driven approaches.
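The core idea of deep unfolding, as described above, is to fix a small number of iterations of a model-based solver and let a network predict the per-iteration parameters. The sketch below is a minimal, hypothetical illustration of that structure (it is not CTprintNet's actual architecture, and it uses plain gradient steps with a smooth Huber edge-preserving prior rather than a proximal interior point method): the per-iteration step sizes and regularization weights are exactly the quantities such a network would learn to output.

```python
import numpy as np

def huber_grad(d, delta):
    # Gradient of the Huber penalty, a common smooth edge-preserving prior.
    return np.where(np.abs(d) <= delta, d, delta * np.sign(d))

def unrolled_recon(A, b, steps, lambdas, delta=0.1):
    """One 'unrolled' reconstruction for min_x 0.5||Ax - b||^2 + lambda_k * Huber(Dx).

    steps[k] and lambdas[k] are the internal parameters a trained unfolded
    network would predict for iteration k; here they are given by hand.
    """
    n = A.shape[1]
    x = np.zeros(n)
    # Circular finite-difference operator: (Dx)[i] = x[i] - x[(i+1) % n].
    D = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
    for step, lam in zip(steps, lambdas):
        grad = A.T @ (A @ x - b) + lam * (D.T @ huber_grad(D @ x, delta))
        x = x - step * grad
    return x
```

With a fixed iteration count, the whole pipeline is differentiable in `steps` and `lambdas`, which is what makes end-to-end training of these parameters possible.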

    PyHST2: an hybrid distributed code for high speed tomographic reconstruction with iterative reconstruction and a priori knowledge capabilities

    We present the PyHST2 code, which is in service at the ESRF for phase-contrast and absorption tomography. This code has been engineered to sustain the high data flow typical of third-generation synchrotron facilities (10 terabytes per experiment) by adopting a distributed and pipelined architecture. Besides a default filtered-backprojection reconstruction, the code implements iterative reconstruction techniques with a-priori knowledge. The latter are used to improve reconstruction quality or to reduce the required data volume while reaching a given quality goal. The implemented a-priori knowledge techniques are based on total variation penalisation and on a recently introduced convex functional based on overlapping patches. We give details of the different methods and their implementations; the code is distributed under a free license. We also provide methods for estimating, in the absence of ground-truth data, the optimal parameter values for the a-priori techniques.
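The total variation penalisation mentioned above can be illustrated with a deliberately small 1-D sketch (this is an assumption-laden toy, not PyHST2's distributed GPU implementation): gradient descent on the data-fidelity term plus a smoothed TV term, where the smoothing parameter `eps` avoids the non-differentiability of the absolute value at zero.

```python
import numpy as np

def tv_recon(A, b, mu=0.05, step=0.1, n_iter=200, eps=1e-3):
    """Toy sketch of TV-penalised iterative reconstruction:
    gradient descent on 0.5||Ax - b||^2 + mu * sum(sqrt((Dx)^2 + eps)).
    mu is the a-priori weight whose value the reconstruction quality
    depends on (the kind of parameter one would want to estimate
    without ground truth)."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iter):
        d = np.diff(x)                    # forward differences Dx
        w = d / np.sqrt(d * d + eps)      # derivative of the smoothed |.|
        tv_grad = np.zeros(n)
        tv_grad[:-1] -= w                 # d/dx[i]   of sqrt((x[i+1]-x[i])^2+eps)
        tv_grad[1:] += w                  # d/dx[i+1] of the same term
        x -= step * (A.T @ (A @ x - b) + mu * tv_grad)
    return x
```

The TV term rewards piecewise-constant solutions, which is why it helps when the data volume is reduced: missing projections are compensated by the prior rather than by more measurements.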

    Conditioning Generative Latent Optimization to solve Imaging Inverse Problems

    Computed Tomography (CT) is a prominent example of an Imaging Inverse Problem (IIP), highlighting the unrivalled performance of data-driven methods in degraded measurement setups such as sparse X-ray projections. Although a significant proportion of deep learning approaches benefit from large supervised datasets to directly map experimental measurements to medical scans, they cannot generalize to unknown acquisition setups. In contrast, fully unsupervised techniques, most notably those using score-based generative models, have recently demonstrated similar or better performance than supervised approaches in solving IIPs while remaining flexible at test time with regard to the imaging setup. However, their use cases are limited by two factors: (a) they need considerable amounts of training data to generalize well, and (b) they require a backward operator, such as Filtered Back-Projection in the case of CT, to condition the learned prior distribution of medical scans on the experimental measurements. To overcome these issues, we propose an unsupervised conditional approach to the Generative Latent Optimization framework (cGLO), in which the parameters of a decoder network are initialized on an unsupervised dataset. The decoder is then used for reconstruction by performing Generative Latent Optimization with a loss function that directly compares simulated measurements from proposed reconstructions to the experimental measurements. The resulting approach, tested on sparse-view CT with multiple training dataset sizes, demonstrates better reconstruction quality than state-of-the-art score-based strategies in most data regimes and shows an increasing performance advantage for smaller training datasets and fewer projection angles. Furthermore, cGLO does not require any backward operator and could extend these use cases even to non-linear IIPs.
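The key structural point of the abstract — that the loss compares simulated measurements to experimental ones, so no backward operator like FBP is needed — can be sketched in a few lines. This is a hypothetical simplification (a fixed linear decoder standing in for the trained decoder network, and plain gradient descent over the latent code only), not the cGLO method itself:

```python
import numpy as np

def glo_reconstruct(decode, A, y, z_dim, step=0.1, n_iter=1000, seed=0):
    """Reconstruction by latent optimisation: find z minimising
    ||A decode(z) - y||^2. Only the *forward* operator A is applied;
    the measurements y are never back-projected. `decode` is a matrix
    standing in for a decoder network."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(z_dim) * 0.01   # small random latent init
    for _ in range(n_iter):
        r = A @ (decode @ z) - y            # simulated minus measured
        z -= step * (decode.T @ (A.T @ r))  # chain rule through decoder
    return decode @ z
```

Because the loss lives entirely in measurement space, swapping in a different (even non-linear) forward model changes nothing structural — which is the flexibility the abstract claims for non-linear IIPs.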

    Improving Automated Hemorrhage Detection in Sparse-view Computed Tomography via Deep Convolutional Neural Network based Artifact Reduction

    Purpose: Sparse-view computed tomography (CT) is an effective way to reduce dose by lowering the total number of views acquired, albeit at the expense of image quality, which, in turn, can impact the ability to detect diseases. We explore deep learning-based artifact reduction in sparse-view cranial CT scans and its impact on automated hemorrhage detection. Methods: We trained a U-Net for artifact reduction on simulated sparse-view cranial CT scans from 3000 patients obtained from a public dataset and reconstructed with varying levels of sub-sampling. Additionally, we trained a convolutional neural network on fully sampled CT data from 17,545 patients for automated hemorrhage detection. We evaluated the classification performance using the area under the receiver operating characteristic curve (AUC-ROC) with corresponding 95% confidence intervals (CIs) and the DeLong test, along with confusion matrices. The performance of the U-Net was compared to an analytical approach based on total variation (TV). Results: The U-Net outperformed both unprocessed and TV-processed images with respect to image quality and automated hemorrhage diagnosis. With U-Net post-processing, the number of views can be reduced from 4096 views (AUC-ROC: 0.974; 95% CI: 0.972-0.976) to 512 views (0.973; 0.971-0.975) with minimal decrease in hemorrhage detection (P<.001), and to 256 views (0.967; 0.964-0.969) with only a slight performance decrease (P<.001). Conclusion: The results suggest that U-Net-based artifact reduction substantially enhances automated hemorrhage detection in sparse-view cranial CTs. Our findings highlight that appropriate post-processing is crucial for optimal image quality and diagnostic accuracy while minimizing radiation dose.
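Simulating sparse-view data of the kind described above (e.g. going from 4096 to 512 or 256 views) amounts to keeping only angularly equispaced projections from a fully sampled sinogram. A minimal helper — a hypothetical sketch, not the paper's pipeline — looks like this:

```python
import numpy as np

def subsample_views(sinogram, n_keep):
    """Simulate sparse-view acquisition: keep n_keep equispaced
    projections out of the full angular set. Rows of `sinogram` are
    views (projection angles), columns are detector bins."""
    n_full = sinogram.shape[0]
    idx = np.round(np.linspace(0, n_full, n_keep, endpoint=False)).astype(int)
    return sinogram[idx], idx
```

Reconstructing from the subsampled sinogram then produces the streak artifacts that the U-Net is trained to remove; the sub-sampling factor (8x for 512 views, 16x for 256) controls artifact severity.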

    Some proximal methods for Poisson intensity CBCT and PET

    Cone-Beam Computerized Tomography (CBCT) and Positron Emission Tomography (PET) are two complementary medical imaging modalities providing, respectively, anatomic and metabolic information on a patient. In the context of public health, one must address the problem of reducing the dose of the potentially harmful quantities involved in each exam protocol: X-rays for CBCT and radiotracer for PET. Two demonstrators based on a technological breakthrough (acquisition devices working in photon-counting mode) have been developed. It turns out that in this low-dose context, i.e. for low-intensity signals acquired by photon-counting devices, the noise can no longer be approximated by a Gaussian distribution but follows a Poisson distribution. We investigate in this paper the two related tomographic reconstruction problems. We formulate the CBCT and PET problems separately in two general frameworks that encompass the physics of the acquisition devices and the specific discretization of the object to reconstruct. We propose various fast numerical schemes based on proximal methods to compute the solution of each problem. In particular, we show that primal-dual approaches are well suited to the PET case when considering non-differentiable regularizations such as Total Variation. Experiments on numerical simulations and real data favor the proposed algorithms when compared with well-established methods.
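To make the Poisson modelling concrete: the classical baseline for the model y ~ Poisson(Ax) is the ML-EM multiplicative update, shown below as a sketch (this is the standard well-established method such proximal schemes are typically compared against, not the paper's own algorithm; `A` must be nonnegative):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """ML-EM iteration for the Poisson model y ~ Poisson(Ax):
    x <- x * A^T(y / Ax) / A^T 1. The multiplicative form keeps x
    positive, matching the physics of photon counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])    # column sums ("sensitivity" image)
    for _ in range(n_iter):
        proj = A @ x                    # current simulated counts
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / np.maximum(sens, 1e-12)
    return x
```

ML-EM maximizes the Poisson likelihood but has no regularization term; the appeal of the primal-dual proximal methods discussed in the abstract is precisely that they handle this likelihood together with non-differentiable priors such as Total Variation.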