
    Tolerance to geometrical inaccuracies in CBCT systems: a comprehensive study

    Purpose: Recent decades have seen the consolidation of cone-beam CT (CBCT) technology, which is nowadays widely used in applications such as micro-CT for small animals, mammography, dentistry, and surgical procedures. Some CBCT systems may suffer mechanical strain due to the heavy load of the x-ray tube. This, together with tolerances in the manufacturing process, leads to different types of undesirable effects in the reconstructed image unless they are properly accounted for during reconstruction. To obtain good-quality images, a complete characterization of the system geometry is necessary, including the angular position of the gantry, the source-object and detector-object distances, and the position and pose of the detector. These parameters can be obtained through a calibration process performed periodically, depending on the stability of the system geometry. To the best of our knowledge, there are no comprehensive works studying the effect of inaccuracies in the geometrical calibration of CBCT systems in a systematic and quantitative way. In this work, we describe the effects of detector misalignments (linear shifts, rotation, and inclinations) on the image and define their tolerance as the maximum error that keeps the image free from artifacts.

    Methods: We used simulations of four phantoms including systematic and random misalignments. Reconstructions of these data with and without errors were compared to identify the artifacts introduced in the reconstructed image and the miscalibration tolerance deemed to provide acceptable image quality.

    Results: Visual assessment provided an easy guideline for identifying the sources of error by inspection of the artifactual images. Systematic errors result in blurring, shape distortion, and/or reduction of the axial field of view, while random errors produce streaks and blurring in all cases, with a tolerance more than twice that of systematic errors. The tolerance for errors in the detector position along the tangential direction, that is, skew (<0.2°) and horizontal shift (<0.4 mm), is tighter than the tolerance for errors affecting the position along the longitudinal direction or the magnification, that is, vertical shift (<2 mm), roll (<1.5°), tilt (<2°), and SDD (<3 mm).

    Conclusion: We present a comprehensive study, based on realistic simulations, of the effects of errors in the geometrical characterization of a CBCT system on reconstructed image quality, and define their tolerance. These results could guide the design of new systems, establishing the mechanical precision that must be achieved, and help define an optimal geometrical calibration process. The thorough visual assessment may also be valuable for identifying the predominant sources of error from the effects shown in the reconstructed image.

    This work has been supported by Ministerio de Ciencia, Innovación y Universidades, Agencia Estatal de Investigación, project "DPI2016-79075-R - AEI/FEDER, UE", and Instituto de Salud Carlos III, project "DTS17/00122", co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". Also partially funded by project "DEEPCT-CM-UC3M", funded by the call "Programa de apoyo a la realización de proyectos interdisciplinares de I+D para jóvenes investigadores de la Universidad Carlos III de Madrid 2019-2020 en el marco del Convenio Plurianual Comunidad de Madrid - Universidad Carlos III de Madrid", and project "RADCOV19", funded by CRUE Universidades, CSIC and Banco Santander (Fondo Supera). The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505).
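    As a rough illustration of how the detector pose errors studied here enter the system geometry, the sketch below perturbs an ideal flat-panel pixel grid by the same parameters (shifts, skew, roll, tilt); the parameterization and all numbers are illustrative assumptions, not the simulation setup used in the paper.

```python
import numpy as np

def detector_pixel_positions(nu, nv, du, dv, sdd,
                             shift_u=0.0, shift_v=0.0,
                             skew=0.0, roll=0.0, tilt=0.0):
    """3D positions (mm) of flat-panel pixel centers at source-detector
    distance `sdd`, perturbed by in-plane shifts (mm) and rotations (deg)
    of the kinds studied in the paper (assumed parameterization)."""
    u = (np.arange(nu) - nu / 2 + 0.5) * du + shift_u
    v = (np.arange(nv) - nv / 2 + 0.5) * dv + shift_v
    uu, vv = np.meshgrid(u, v)
    pts = np.stack([uu, vv, np.full_like(uu, sdd)], axis=-1)

    def rot(axis, deg):
        a = np.deg2rad(deg)
        c, s = np.cos(a), np.sin(a)
        i, j = [(1, 2), (0, 2), (0, 1)][axis]
        r = np.eye(3)
        r[i, i] = r[j, j] = c
        r[i, j], r[j, i] = -s, s
        return r

    # skew: in-plane rotation about the beam axis; roll/tilt: inclinations
    center = np.array([0.0, 0.0, sdd])
    R = rot(2, skew) @ rot(0, tilt) @ rot(1, roll)
    return (pts - center) @ R.T + center

# Pixel displacement caused by a 0.2 deg skew (the reported tolerance) on a
# 512x512 panel with 0.2 mm pitch: about 0.18 mm at the panel corners.
ideal = detector_pixel_positions(512, 512, 0.2, 0.2, sdd=400.0)
skewed = detector_pixel_positions(512, 512, 0.2, 0.2, sdd=400.0, skew=0.2)
print(np.abs(skewed - ideal).max())
```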

    Exploiting parallelism in an X-ray tomography reconstruction algorithm on hybrid multi-GPU and multi-core platforms

    Proceedings of: 2012 10th IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA 2012), Leganés, Madrid, 10-13 July 2012.

    Most small-animal X-ray computed tomography (CT) scanners are based on cone-beam geometry with a flat-panel detector orbiting in a circular trajectory. Image reconstruction in these systems is usually performed by approximate methods based on the algorithm proposed by Feldkamp et al. Currently there is a strong need to speed up the reconstruction of X-ray CT data in order to extend its clinical applications. We present an efficient modular implementation of an FDK-based reconstruction algorithm that takes advantage of the parallel computing capabilities and the efficient bilinear interpolation provided by general-purpose graphics processing units (GPGPUs). The proposed implementation is evaluated on a high-resolution micro-CT and achieves a speed-up of 46×, while preserving the reconstructed image quality.

    This work has been partially funded by AMIT Project CDTI CENIT, TEC2007-64731, TEC2008-06715-C02-01, RD07/0014/2009, TRA2009-0175, RECAVA-RETIC RD09/0077/00087 (Ministerio de Ciencia e Innovación), ARTEMIS S2009/DPI-1802 (Comunidad de Madrid), and TIN2010-16497 (Ministerio de Ciencia e Innovación).
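    The core that such GPU implementations parallelize is the FDK weighted backprojection. The NumPy sketch below (a stand-in under assumed names and conventions, not the paper's implementation) shows the per-angle work for the central slice, with the bilinear fetch that GPU texture units provide for free emulated by linear interpolation along the detector row.

```python
import numpy as np

def fdk_backproject_slice(projs, angles, sod, sdd, du, nx, dx):
    """Naive FDK-style backprojection of the central slice (z = 0).

    projs : (n_angles, n_u) pre-weighted, ramp-filtered central detector rows
    angles: gantry angles (rad); sod/sdd: source-object and source-detector
    distances (mm); du: detector pitch (mm); nx/dx: image size / pixel (mm).
    """
    xs = (np.arange(nx) - nx / 2 + 0.5) * dx
    X, Y = np.meshgrid(xs, xs)
    u_axis = (np.arange(projs.shape[1]) - projs.shape[1] / 2 + 0.5) * du
    img = np.zeros((nx, nx))
    for proj, th in zip(projs, angles):
        # rotate the image grid into this angle's source frame
        s = X * np.cos(th) + Y * np.sin(th)   # along the source-detector axis
        t = -X * np.sin(th) + Y * np.cos(th)  # transaxial coordinate
        U = sdd * t / (sod + s)               # cone-beam magnification
        w = (sod / (sod + s)) ** 2            # FDK distance weighting
        img += w * np.interp(U, u_axis, proj, left=0.0, right=0.0)
    return img * np.pi / len(angles)
```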

    Simplified statistical image reconstruction for X-ray CT with beam-hardening artifact compensation

    CT images are often affected by beam-hardening artifacts due to the polychromatic nature of X-ray spectra. These artifacts appear in the image as cupping in homogeneous areas and as dark bands between dense regions, such as bones. This paper proposes a simplified statistical reconstruction method for X-ray CT, based on Poisson statistics, that accounts for the non-linearities caused by beam hardening. The main advantages of the proposed method over previous algorithms are that it avoids the preliminary segmentation step, which can be tricky, especially for low-dose scans, and that it does not require knowledge of the whole source spectrum, which is often unknown. Each voxel attenuation is modeled as a mixture of bone and soft tissue by defining density-dependent tissue fractions, maintaining one unknown per voxel. We approximate the energy-dependent attenuation corresponding to different combinations of bone and soft tissue, the so-called beam-hardening function, with the 1D function corresponding to water plus two parameters that can be tuned empirically. Results on both simulated data with Poisson sinogram noise and two rodent studies acquired with the ARGUS CT system showed a beam-hardening reduction (both cupping and dark bands) similar to that of analytical reconstruction followed by post-processing techniques, but with reduced noise and streaks in cases with a low number of projections, as expected for statistical image reconstruction.

    This work was partially funded by NIH grants R01-HL-098686 and U01 EB018753, by the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R and RTC-2014-3028-1) and the Spanish Ministerio de Economía, Industria y Competitividad (projects DPI2016-79075-R AEI/FEDER, UE - Agencia Estatal de Investigación and DTS17/00122 Instituto de Salud Carlos III - FIS), and co-financed by ERDF (FEDER) funds from the European Commission, "A way of making Europe". The CNIC is supported by the Spanish Ministerio de Economía, Industria y Competitividad and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505).
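    A minimal sketch of the modeling idea follows; the functional forms and all coefficients are assumptions for illustration (in the paper the two parameters are tuned empirically), not the authors' exact model.

```python
import numpy as np

def bone_fraction(mu, mu_soft=0.02, mu_bone=0.05):
    """Density-dependent tissue fraction: each voxel attenuation mu is
    treated as a mixture of soft tissue and bone, keeping one unknown per
    voxel (mu_* are illustrative linear attenuation coefficients, mm^-1)."""
    return np.clip((mu - mu_soft) / (mu_bone - mu_soft), 0.0, 1.0)

def poly_projection(line_soft, line_bone, f_water, a=1.0, b=0.01):
    """Polychromatic projection model: the 2D beam-hardening function of a
    (soft tissue, bone) path-length pair is approximated with the 1D water
    function f_water plus two empirical parameters a, b (assumed form)."""
    return f_water(line_soft + a * line_bone) + b * line_bone

# f_water would be tabulated from a water calibration; here a made-up
# saturating shape just to make the example run.
f_water = lambda L: 0.02 * L - 1e-4 * L**2
print(poly_projection(100.0, 10.0, f_water))
```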

    Surfing the optimization space of a multiple-GPU parallel implementation of an X-ray tomography reconstruction algorithm

    The increasing popularity of massively parallel architectures based on accelerators has opened up the possibility of significantly improving the performance of X-ray computed tomography (CT) applications towards achieving real-time imaging. However, achieving this goal is challenging, as most CT applications have not been designed to exploit the amount of parallelism available in these architectures. In this paper we present the massively parallel implementation and optimization of Mangoose(++), a CT application for reconstructing 3D volumes from 2D images collected by scanners based on cone-beam geometry. The main contributions of this paper are the following. First, we develop a modular application design that makes it possible to exploit the functional parallelism inside the application and facilitates the parallelization of individual application phases. Second, we identify a set of optimizations that can be applied individually and in combination for optimally deploying the application on a massively parallel multi-GPU system. Third, we present a study of surfing the optimization space of the modularized application and demonstrate that a significant benefit can be obtained by employing the adequate combination of application optimizations.

    This work was partially funded by the Spanish Ministry of Science and Technology under grant TIN2010-16497, the AMIT project (CEN-20101014) from the CDTI-CENIT program, the RECAVA-RETIC network (RD07/0014/2009), projects TEC2010-21619-C04-01, TEC2011-28972-C02-01, and PI11/00616 from the Spanish Ministerio de Ciencia e Innovación, and the ARTEMIS program (S2009/DPI-1802) from the Comunidad de Madrid.
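    "Surfing the optimization space" can be pictured as exhaustively timing on/off combinations of independent optimizations; the toggle names and the cost model below are hypothetical illustrations, not Mangoose(++)'s actual options.

```python
import itertools
import time

# Hypothetical optimization toggles of the kind explored in such a study.
FLAGS = ["pinned_memory", "async_copies", "fused_kernels", "multi_gpu_split"]

def run_reconstruction(config):
    """Stand-in for one reconstruction run under a given configuration; a
    real harness would launch the instrumented application here."""
    time.sleep(0.001 * (len(FLAGS) - sum(config.values())))  # fake cost model

best = None
for bits in itertools.product([False, True], repeat=len(FLAGS)):
    config = dict(zip(FLAGS, bits))
    t0 = time.perf_counter()
    run_reconstruction(config)
    elapsed = time.perf_counter() - t0
    if best is None or elapsed < best[0]:
        best = (elapsed, config)
print("fastest configuration:", best[1])
```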

    New method for correcting beam-hardening artifacts in CT images via deep learning

    Proceedings of the 16th Virtual International Meeting on Fully 3D Image Reconstruction in Radiology and Nuclear Medicine, 19-23 July 2021, Leuven, Belgium.

    Beam hardening is the increase of the mean energy of an X-ray beam as it traverses a material. This effect produces two artifacts in the reconstructed image: cupping in homogeneous regions and dark bands between dense areas in heterogeneous regions. The correction methods proposed in the literature can be divided into post-processing and iterative methods. The former usually need a bone segmentation, which can fail in low-dose acquisitions, while the latter need several projections and reconstructions, increasing the computation time. In this work, we propose a new method for correcting beam-hardening artifacts in CT based on deep learning. A U-Net was trained with rodent data for two scenarios: standard and low dose. Results on an independent rodent study showed optimal correction in both scenarios, similar to that of iterative approaches, but with a computation time reduced by two orders of magnitude.

    This work has been supported by project "DEEPCT-CM-UC3M", funded by the call "Programa de apoyo a la realización de proyectos interdisciplinares de I+D para jóvenes investigadores de la UC3M 2019-2020, Convenio Plurianual CAM - UC3M", and project "RADCOV19", funded by CRUE Universidades, CSIC and Banco Santander (Fondo Supera). The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505).
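    A minimal sketch of a U-Net of the kind described, assuming PyTorch; the depth, channel counts, and residual output are illustrative choices, not the trained architecture from the paper.

```python
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU(inplace=True))

class SmallUNet(nn.Module):
    """Two-scale U-Net mapping an artifacted CT slice to a corrected one."""
    def __init__(self):
        super().__init__()
        self.enc1 = block(1, 32)
        self.enc2 = block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = block(64, 32)
        self.out = nn.Conv2d(32, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return x + self.out(d1)  # learn the artifact residual (assumption)

# Training-step sketch: inputs are slices with beam-hardening artifacts,
# targets the corresponding corrected slices (random tensors stand in here).
net = SmallUNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
x, y = torch.randn(4, 1, 128, 128), torch.randn(4, 1, 128, 128)
loss = nn.functional.mse_loss(net(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```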

    Accelerated iterative image reconstruction for cone-beam computed tomography through Big Data frameworks

    One of the latest trends in computed tomography (CT) is the reduction of the radiation dose delivered to patients through the decrease of the amount of acquired data. This reduction results in artifacts in the final images if conventional reconstruction methods are used, making it advisable to employ iterative algorithms to enhance image quality. Most approaches are built around two main operators, backprojection and projection, which are computationally expensive. In this work, we present an implementation of those operators for iterative reconstruction methods exploiting the Big Data paradigm. We define an architecture based on Apache Spark that supports both Graphics Processing Unit (GPU) and CPU-based architectures. The operators are parallelized using a partitioning scheme based on the division of the volume and irregular data structures, in order to reduce the cost of communication and of computing the final images. Our solution accelerates the execution of the two most computationally expensive components with Apache Spark, improving the programming experience for new iterative reconstruction algorithms and the maintainability of the source code by raising the level of abstraction for programmers without high-performance computing experience. Through an experimental evaluation, we show that we can obtain results up to 10× faster for projection and 21× faster for backprojection when using a GPU-based cluster compared to a traditional multi-core version. Although a linear speed-up was not reached, the proposed approach can be a good alternative for porting previous medical image reconstruction applications already implemented in C/C++ or even with CUDA or OpenCL programming models. Our solution enables the automatic detection of GPU devices and the execution of CPU and GPU tasks at the same time under the same system, using all the available resources.

    This work was supported by the NIH, United States, under Grant R01-HL-098686 and Grant U01 EB018753, the Spanish Ministerio de Economía y Competitividad (projects TEC2013-47270-R, RTC-2014-3028 and TIN2016-79637-P), the Spanish Ministerio de Educación (grant FPU14/03875), and the Spanish Ministerio de Ciencia, Innovación y Universidades (Instituto de Salud Carlos III, project DTS17/00122; Agencia Estatal de Investigación, project DPI2016-79075-R-AEI/FEDER, UE), co-funded by the European Regional Development Fund (ERDF), "A way of making Europe". The CNIC is supported by the Ministerio de Ciencia, Innovación y Universidades and the Pro CNIC Foundation, and is a Severo Ochoa Center of Excellence (SEV-2015-0505). Finally, this research was partially supported by the Madrid regional government under the grant "Convergencia Big data-Hpc: de los sensores a las Aplicaciones (CABAHLA-CM)", Ref: S2018/TCS-4423.
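    A minimal PySpark sketch of the volume-partitioning idea: the volume is divided into z-slabs that become independent tasks, with the projection data broadcast once to every executor. Sizes, names, and the placeholder per-slab computation are assumptions, not the paper's implementation.

```python
import numpy as np
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("bp-sketch").getOrCreate()
sc = spark.sparkContext

N_SLABS = 8  # the volume is divided along z into independent slabs
sino = np.random.rand(360, 256, 256).astype(np.float32)  # toy projections
sino_bc = sc.broadcast(sino)  # shipped once to every executor

def backproject_slab(slab_id):
    """Reconstruct one z-slab of the volume. On a GPU-enabled cluster this
    body would dispatch to a CUDA kernel; a scalar accumulation stands in
    for the real geometric backprojection here."""
    proj = sino_bc.value
    slab = np.zeros((256 // N_SLABS, 256, 256), dtype=np.float32)
    for a in range(proj.shape[0]):
        slab += proj[a].mean()  # placeholder for per-angle backprojection
    return slab_id, slab

parts = sc.parallelize(range(N_SLABS), N_SLABS).map(backproject_slab).collect()
volume = np.concatenate([s for _, s in sorted(parts, key=lambda p: p[0])])
spark.stop()
```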

    Automatic segmentation of ¹³NH₃ cardiac PET studies based on iterative correlation

    Proceedings of: XXVIII Annual Congress of the Sociedad Española de Ingeniería Biomédica (CASEIB 2010). Madrid, 24-26 November 2010.

    In dynamic cardiac PET studies, the input function is usually obtained from the image either by prior selection of a region of interest (ROI) or by factor analysis procedures that find the time-activity curves best matching the input function. This work presents a novel method for automatic segmentation and input-function extraction that uses correlation maps computed on dynamic studies employing ¹³NH₃ as tracer. Starting from an initial analytical model, the most similar time curves in the real study are found using correlation. New models are then computed from these curves and used in successive iterations. The final result is both an automatic segmentation and the time-activity curve of each segmented region.

    Funded by Ministerio de Ciencia e Innovación (TEC2007-64731, TEC2008-06715-C02-1), the RETIC-RECAVA network of the Ministerio de Sanidad y Consumo, and the ARTEMIS programme (S2009/DPI-1802) of the Comunidad de Madrid.
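    A minimal NumPy sketch of the iterative correlation scheme described above; the threshold, iteration count, and array layout are illustrative assumptions.

```python
import numpy as np

def iterative_correlation_segmentation(tacs, model, thr=0.9, n_iter=5):
    """tacs: (n_voxels, n_frames) time-activity curves; model: (n_frames,)
    initial analytical model of the input function. At each iteration,
    voxels whose TAC correlates with the current model above `thr` are
    selected, and their mean TAC becomes the model for the next pass."""
    mask = np.zeros(tacs.shape[0], dtype=bool)
    for _ in range(n_iter):
        m = model - model.mean()
        t = tacs - tacs.mean(axis=1, keepdims=True)
        # Pearson correlation of every voxel's TAC with the model curve
        r = (t @ m) / (np.linalg.norm(t, axis=1) * np.linalg.norm(m) + 1e-12)
        mask = r > thr
        if not mask.any():
            break
        model = tacs[mask].mean(axis=0)  # refined time-activity curve
    return mask, model  # segmentation and estimated input function
```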

    Investigation of Different Sparsity Transforms for the PICCS Algorithm in Small-Animal Respiratory Gated CT

    Data Availability Statement: All relevant data are available from the Zenodo database, under the DOI: http://dx.doi.org/10.5281/zenodo.15685.

    Respiratory gating helps to overcome the problem of breathing motion in cardiothoracic small-animal imaging by acquiring multiple images for each projection angle and then assigning projections to different phases. When this approach is used with a dose similar to that of a static acquisition, only a low number of noisy projections is available for the reconstruction of each respiratory phase, leading to streak artifacts in the reconstructed images. This problem can be alleviated using the prior image constrained compressed sensing (PICCS) algorithm, which enables accurate reconstruction of highly undersampled data when a prior image is available. We compared variants of the PICCS algorithm with different transforms in the prior penalty function: gradient, unitary, and wavelet. In all cases the problem was solved with the Split Bregman approach, which is efficient for convex constrained optimization. The algorithms were evaluated using simulations generated from data previously acquired on a micro-CT scanner following a high-dose protocol (four times the dose of a standard static protocol). The resulting data were used to simulate scenarios with different dose levels and numbers of projections. All compressed sensing methods performed very similarly in terms of noise, spatiotemporal resolution, and streak reduction, and all greatly improved on filtered back-projection. Nevertheless, the wavelet domain was found to be less prone to patchy, cartoon-like artifacts than the commonly used gradient domain.

    This work was partially funded by the RIC-RETIC network (RD12/0042/0057) from the Ministerio de Economía y Competitividad (www.mineco.gob.es/) and projects TEC2010-21619-C04-01 and PI11/00616 from the Ministerio de Ciencia e Innovación (www.micinn.es/). The research leading to these results was supported by funding from the Innovative Medicines Initiative (www.imi.europa.eu) Joint Undertaking under grant agreement no. 115337, the resources of which comprise financial contributions from the European Union's Seventh Framework Programme (FP7/2007-2013) and EFPIA companies' in-kind contribution. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.
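    A sketch of the PICCS prior term with the three sparsifying transforms compared, assuming NumPy and PyWavelets; the data-consistency term and the Split Bregman solver used in the paper are omitted, and the wavelet choice is illustrative.

```python
import numpy as np
import pywt  # PyWavelets

def sparsity_l1(x, transform):
    """L1 norm of a 2D image under one of the compared sparsifying
    transforms: image gradient, unitary (identity), or wavelet."""
    if transform == "gradient":
        gx, gy = np.gradient(x)
        return np.abs(gx).sum() + np.abs(gy).sum()
    if transform == "unitary":
        return np.abs(x).sum()
    if transform == "wavelet":
        arr, _ = pywt.coeffs_to_array(pywt.wavedec2(x, "db4", level=3))
        return np.abs(arr).sum()
    raise ValueError(transform)

def piccs_penalty(x, x_prior, alpha=0.5, transform="wavelet"):
    """PICCS prior: alpha weights sparsity of the difference with the prior
    image (e.g., one reconstructed from all respiratory phases combined),
    and (1 - alpha) weights sparsity of the image itself."""
    return (alpha * sparsity_l1(x - x_prior, transform)
            + (1 - alpha) * sparsity_l1(x, transform))
```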

    New reconstruction methodology for chest tomosynthesis based on deep learning

    Proceedings of: 7th International Conference on Image Formation in X-Ray Computed Tomography (ICIFXCT 2022), Baltimore, Maryland, 12-16 June 2022.

    Tomosynthesis offers an alternative to planar radiography, providing pseudo-tomographic information at a much lower radiation dose than CT. Its inability to convey information about density poses a major limitation to the use of tomosynthesis in chest imaging, given the wide range of pathologies that present as an increase in the density of the pulmonary parenchyma. Previous works have attempted to improve image quality through enhanced analytical or iterative algorithms, or by including a deep-learning step in the reconstruction, but the results shown are still far from the quantitative information of a CT. In this work, we propose a reconstruction methodology consisting of a filtered back-projection step followed by deep-learning-based post-processing to obtain a tomographic image closer to CT. Preliminary results show the potential of the proposed methodology to obtain true tomographic information from tomosynthesis data, which could replace CT scans in applications where the radiation dose is critical.

    This work has been supported by Ministerio de Ciencia e Innovación, Agencia Estatal de Investigación: PID2019-110369RB-I00/AEI/10.13039/501100011033 (RADHOR) and PDC2021-121656-I00 (MULTIRAD), funded by MCIN/AEI/10.13039/501100011033 and by the European Union 'NextGenerationEU'/PRTR. Also funded by Comunidad de Madrid: Multiannual Agreement with UC3M in the line of 'Fostering Young Doctors Research' (DEEPCT-CM-UC3M), and in the context of the V PRICIT (Regional Programme of Research and Technological Innovation S2017/BMD-3867 RENIM-CM), co-funded by the European Structural and Investment Funds. Also partially funded by CRUE Universidades, CSIC and Banco Santander (Fondo Supera Covid19), project RADCOV19, and by Instituto de Salud Carlos III through project "PT20/00044", co-funded by the European Regional Development Fund, "A way to make Europe". The CNIC is supported by Instituto de Salud Carlos III, Ministerio de Ciencia e Innovación and the Pro CNIC Foundation. The imaging and associated clinical data downloaded from MIDRC (The Medical Imaging Data Resource Center) and used for research in this publication were made possible by the National Institute of Biomedical Imaging and Bioengineering (NIBIB) of the National Institutes of Health under contracts 75N92020C00008 and 75N92020C00021.
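    A minimal sketch of the proposed two-stage chain (analytical reconstruction, then learned post-processing). The reconstructor `fbp` and trained network `net` are assumed to be supplied; all names are illustrative, not the paper's code.

```python
import numpy as np
import torch

def reconstruct_tomosynthesis(projections, fbp, net):
    """Two-stage pipeline sketch: a filtered back-projection step produces a
    pseudo-tomographic volume with limited-angle artifacts, and a trained
    network then maps it slice by slice toward CT-like density values."""
    volume = fbp(projections)  # numpy array of shape (n_slices, H, W)
    with torch.no_grad():
        x = torch.from_numpy(volume[:, None].astype(np.float32))
        out = net(x)  # per-slice correction toward CT-like densities
    return out.squeeze(1).numpy()
```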

    First- and second-order empirical correction of the beam-hardening artifact in micro-CT images

    Proceedings of: XXIX Annual Congress of the Sociedad Española de Ingeniería Biomédica (CASEIB 2011). Cáceres, 16-18 November 2011.

    The most common artifacts produced by the physical phenomenon of beam hardening in X-ray CT images are cupping, in homogeneous volumes, and dark bands, in the presence of dense objects. This work presents a complete correction scheme for both artifacts: a first step implements a cupping correction by means of a first-order method, linearization of the projection data; in a second step, a second-order correction algorithm is applied to the reconstructed image to remove the dark bands. The whole process eliminates the need to know the spectrum of the X-ray source. Both methods were validated on homogeneous and heterogeneous phantoms composed of two different materials, as well as on small-animal studies (laboratory rats and mice) acquired with an X-ray scanner for small animals (micro-CT) designed in our laboratory. The results demonstrate the validity of the correction scheme.

    This work was funded by the Ministerio de Ciencia e Innovación (projects CENIT AMIT, TEC2008-06715-C02-1, RD07/0014/2009, TRA2009-0175, and the RECAVA network) and by the Comunidad de Madrid and FEDER funds (ARTEMIS programme S2009/DPI-1802).
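    A minimal sketch of the first-order linearization step: a polynomial fitted on calibration measurements of a homogeneous phantom maps polychromatic projection values back to linear monochromatic ones, with no knowledge of the spectrum. The calibration curve and constants below are synthetic illustrations; the second-order step, applied to the reconstructed image to remove dark bands, is not shown.

```python
import numpy as np

def fit_linearization(measured, thickness, mu_eff, deg=3):
    """Fit a polynomial mapping measured polychromatic projection values of
    a homogeneous phantom onto the linear values mu_eff * thickness."""
    return np.polyfit(measured, mu_eff * np.asarray(thickness), deg)

def linearize(sinogram, coeffs):
    """Apply the calibration polynomial to every projection value."""
    return np.polyval(coeffs, sinogram)

# Synthetic calibration: a saturating (beam-hardened) response relinearized
# to mu_eff * thickness; prints the maximum residual of the mapping.
thickness = np.linspace(0, 50, 20)                 # mm, water-equivalent
measured = 2.0 * (1 - np.exp(-0.02 * thickness))   # fake polychromatic curve
coeffs = fit_linearization(measured, thickness, mu_eff=0.02)
print(np.abs(linearize(measured, coeffs) - 0.02 * thickness).max())
```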