
    Fast GPU-Based Approach to Branchless Distance-Driven Projection and Back-Projection in Cone Beam CT

    Modern CT image reconstruction algorithms rely on projection and back-projection operations to refine an image estimate in iterative image reconstruction. A widely used, state-of-the-art technique is distance-driven projection and back-projection. While the distance-driven technique yields superior image quality in iterative algorithms, it is computationally demanding, which limits the clinical applicability of these algorithms. A few methods have been proposed to adapt the distance-driven technique to modern computer hardware. This study explores a two-dimensional extension of the branchless method, a variant that does not compromise image quality. The extension is named “pre-projection integration” because it gains its performance advantage by integrating the data before the projection and back-projection operations. It was implemented with Nvidia’s CUDA framework and carefully designed for massively parallel graphics processing units (GPUs). The performance and image quality of the pre-projection integration method were analyzed. Both projection and back-projection are significantly faster with pre-projection integration. Image quality was evaluated using cone beam CT reconstruction algorithms from Jeffrey Fessler’s Image Reconstruction Toolbox; images produced by regularized, iterative reconstruction algorithms using the pre-projection integration method show no significant artifacts.
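
    The branchless idea can be illustrated with a small one-dimensional sketch: pre-integrate an image row once, then read each detector-cell value off as a difference of two interpolated samples of that running integral, so the inner overlap loop and its branches disappear. The Python/NumPy sketch below is illustrative only (the function name and geometry are assumptions, not the paper's code); the actual work implements a two-dimensional extension of this scheme in CUDA.

        import numpy as np

        def branchless_dd_project_1d(row, pixel_edges, det_edges):
            """1D sketch of branchless distance-driven projection.

            row         -- pixel values of one image row (length N)
            pixel_edges -- the N+1 pixel boundaries mapped onto the common axis
            det_edges   -- the M+1 detector-cell boundaries on the same axis
            """
            widths = np.diff(pixel_edges)
            # Pre-projection integration: running integral of the row, with a
            # leading zero so integral[k] = sum of the first k pixels.
            integral = np.concatenate(([0.0], np.cumsum(row * widths)))
            # Sample the integral at every detector-cell boundary.
            F = np.interp(det_edges, pixel_edges, integral)
            # Each detector value is the average of the row over its footprint.
            return np.diff(F) / np.diff(det_edges)

        # Tiny usage example: a 4-pixel row projected onto 3 detector cells.
        row = np.array([1.0, 2.0, 3.0, 4.0])
        print(branchless_dd_project_1d(row, np.linspace(0, 4, 5), np.linspace(0, 4, 4)))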

    Automatic alignment for three-dimensional tomographic reconstruction

    In tomographic reconstruction, the goal is to reconstruct an unknown object from a collection of line integrals. Given a complete sampling of such line integrals for various angles and directions, explicit inverse formulas exist to reconstruct the object. Given noisy and incomplete measurements, the inverse problem is typically solved through a regularized least-squares approach. A challenge for both approaches is that in practice the exact directions and offsets of the x-rays are only known approximately, due to, e.g., calibration errors. Such errors lead to artifacts in the reconstructed image. In the case of sufficient sampling and geometrically simple misalignment, the measurements can be corrected by exploiting so-called consistency conditions. In other cases, such conditions may not apply and we have to solve an additional inverse problem to retrieve the angles and shifts. In this paper we propose a general algorithmic framework for retrieving these parameters in conjunction with an algebraic reconstruction technique. The proposed approach is illustrated by numerical examples for both simulated data and an electron tomography dataset.
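
    A minimal sketch of such an alternating scheme is given below, assuming a toy parallel-beam geometry and per-view detector shifts as the only unknown parameters; the projector, step size, and shift search are illustrative choices, not the algorithm from the paper.

        import numpy as np
        from scipy.ndimage import rotate

        def project(img, angle):
            # Toy parallel-beam projector: rotate the image, sum along columns.
            return rotate(img, angle, reshape=False, order=1).sum(axis=0)

        def backproject(residual, angle, shape):
            # Adjoint of the toy projector: smear the residual, rotate back.
            return rotate(np.tile(residual, (shape[0], 1)), -angle,
                          reshape=False, order=1)

        def reconstruct_with_alignment(sino, angles, shape, outer_iters=5,
                                       inner_iters=10, max_shift=3):
            x = np.zeros(shape)
            shifts = np.zeros(len(angles), dtype=int)
            step = 0.5 / (len(angles) * shape[0])  # conservative step size
            for _ in range(outer_iters):
                # (1) Algebraic reconstruction with the current shift estimates.
                for _ in range(inner_iters):
                    grad = np.zeros(shape)
                    for i, a in enumerate(angles):
                        meas = np.roll(sino[i], -shifts[i])  # undo estimated offset
                        grad += backproject(project(x, a) - meas, a, shape)
                    x -= step * grad
                # (2) Re-estimate each view's detector offset from the residual.
                for i, a in enumerate(angles):
                    est = project(x, a)
                    cands = np.arange(-max_shift, max_shift + 1)
                    errs = [np.sum((np.roll(sino[i], -s) - est) ** 2) for s in cands]
                    shifts[i] = cands[int(np.argmin(errs))]
            return x, shifts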

    A geometric partitioning method for distributed tomographic reconstruction

    Tomography is a powerful technique for 3D imaging of the interior of an object. With the growing sizes of typical tomographic data sets, the computational requirements for algorithms in tomography are rapidly increasing. Parallel and distributed-memory methods for tomographic reconstruction are therefore becoming increasingly common. An aspect that has received little attention is the effect of the data distribution on the performance of distributed-memory reconstruction algorithms. In this work, we introduce a geometric partitioning method which takes the acquisition geometry into account and aims to minimize the communication needed between nodes for distributed-memory forward projection and back projection operations. These operations are crucial subroutines for an important class of reconstruction methods. We show that the choice of data distribution has a significant impact on the runtime of these methods. With our partitioning method we reduce the communication volume drastically compared to straightforward distributions, by up to 90% in a number of cases, and furthermore we guarantee a specified load balance.
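
    As a rough illustration of the idea, the sketch below chooses a single axis-aligned split plane for a two-node distribution: it considers only planes that keep both nodes within a requested load tolerance and, among those, picks the plane crossed by the fewest rays, since every crossing ray produces partial sums on both nodes that must be communicated. The function and its inputs are assumptions for illustration; the actual method handles general geometries and more than two nodes.

        import numpy as np

        def best_axis_split(ray_z_intervals, z_min, z_max, voxels_per_slice,
                            load_tol=0.05):
            # ray_z_intervals: (R, 2) array with the z-entry and z-exit of every
            #                  ray (entry < exit).
            # voxels_per_slice: number of voxels in each z-slice of the volume.
            n_slices = len(voxels_per_slice)
            planes = np.linspace(z_min, z_max, n_slices + 1)[1:-1]
            total = voxels_per_slice.sum()
            cum = np.cumsum(voxels_per_slice)

            best_plane, best_cost = None, np.inf
            for k, z in enumerate(planes):
                if abs(cum[k] / total - 0.5) > load_tol:
                    continue  # violates the requested load balance
                # Rays straddling the plane force communication between nodes.
                crossing = np.sum((ray_z_intervals[:, 0] < z) &
                                  (ray_z_intervals[:, 1] > z))
                if crossing < best_cost:
                    best_plane, best_cost = z, crossing
            return best_plane, best_cost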

    High-Speed Reconstruction of Low-Dose CT Using Iterative Techniques for Image-Guided Interventions

    Minimally invasive image-guided interventions (IGIs) lead to improved treatment outcomes while significantly reducing patient trauma. Because of features such as fast scanning, high resolution, a three-dimensional view, and ease of operation, Computed Tomography (CT) is increasingly the modality of choice for IGIs. The risk of radiation exposure, however, limits its current and future use. We perform ultra-low-dose scanning to overcome this limitation. To address the image quality problem at low doses, we reconstruct images using the iterative Paraboloidal Surrogate (PS) algorithm. Using actual scanner data, we demonstrate improvement in the quality of images reconstructed with the iterative algorithm at low doses as compared to the standard Filtered Back Projection (FBP) technique. We also accelerate the PS algorithm on a cluster of 32 processors and on a GPU. We demonstrate approximately 20 times speedup for the cluster and two orders of magnitude improvement in speed for the GPU, while maintaining image quality comparable to the traditional uni-processor implementation.
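
    The sketch below illustrates the kind of update such statistically weighted iterative algorithms perform, using a separable quadratic surrogate for a penalized weighted least-squares objective (a close relative of the paraboloidal-surrogate approach, not the exact PS update from this work). The dense system matrix and the simple quadratic penalty are illustrative simplifications.

        import numpy as np

        def sqs_pwls(A, y, w, beta=0.1, n_iter=50):
            # A: (M, N) system matrix with nonnegative entries (dense here
            #    purely for illustration), y: measured line integrals,
            # w: statistical weights, larger where photon counts are higher,
            #    which is what makes the approach attractive at low dose.
            x = np.zeros(A.shape[1])
            # Surrogate curvatures: d = A^T W A 1 + penalty curvature.
            d = A.T @ (w * (A @ np.ones(A.shape[1]))) + beta
            for _ in range(n_iter):
                grad = A.T @ (w * (A @ x - y)) + beta * x  # quadratic penalty
                x = np.maximum(x - grad / d, 0.0)          # nonnegativity
            return x

    Each iteration is dominated by one forward projection (A x) and one back-projection (A^T applied to the weighted residual), which is precisely the work that the cluster and GPU implementations parallelize.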

    Ultrasound Tomography for control of Batch Crystallization


    Image reconstruction and processing for stationary digital tomosynthesis systems

    Digital tomosynthesis (DTS) is an emerging x-ray imaging technique for disease and cancer screening. DTS takes a small number of x-ray projections to generate pseudo-3D images; it has a lower radiation dose and a lower cost compared to Computed Tomography (CT), and improved diagnostic accuracy compared to 2D radiography. Our research group has developed a carbon nanotube (CNT) based x-ray source. This technology enables packing multiple x-ray sources into a single x-ray source array. Based on this technology, our group built several stationary digital tomosynthesis (s-DTS) systems, which have a faster scanning time and no source motion blur. One critical step in both tomosynthesis and CT is image reconstruction, which generates a 3D image from the 2D measurements. For tomosynthesis, the conventional reconstruction method runs fast but falls short in image quality; better iterative methods exist, but they are too time-consuming for clinical use. The goal of this work is to develop a fast iterative image reconstruction algorithm and other image processing techniques for the stationary digital tomosynthesis system, improving the image quality affected by the hardware limitations. A fast iterative reconstruction algorithm, named adapted fan volume reconstruction (AFVR), was developed for s-DTS. AFVR is shown to be an order of magnitude faster than current iterative reconstruction algorithms and produces better images than the classical filtered back projection (FBP) method. AFVR was implemented for the stationary digital breast tomosynthesis system (s-DBT), the stationary digital chest tomosynthesis system (s-DCT), and the stationary intraoral dental tomosynthesis system (s-IOT). Next, a scatter correction technique for stationary digital tomosynthesis was investigated: a new algorithm for estimating the scatter profile was developed, which has been shown to improve image quality substantially. Finally, quantitative imaging was investigated, where the s-DCT system was used to assess the coronary artery calcium score.
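
    To give a flavour of the class of iterative methods involved, the sketch below runs a generic SART-style update on a toy shift-and-add tomosynthesis forward model; it is not AFVR itself (which restructures the reconstruction for the stationary-source geometry), and all names, geometry, and parameters are illustrative assumptions.

        import numpy as np

        def project_views(volume, angles):
            # Toy shift-and-add forward model: each slice is shifted in
            # proportion to its depth and the view angle, then summed.
            n_slices, ny, nx = volume.shape
            views = []
            for a in angles:
                view = np.zeros((ny, nx))
                for z in range(n_slices):
                    view += np.roll(volume[z], int(round(np.tan(a) * z)), axis=1)
                views.append(view)
            return np.stack(views)

        def sart_tomosynthesis(projections, angles, n_slices, n_iter=20, relax=0.5):
            # Generic SART-style iterative reconstruction for a limited-angle
            # tomosynthesis sweep (illustrative, not the AFVR algorithm).
            n_views, ny, nx = projections.shape
            volume = np.zeros((n_slices, ny, nx))
            for _ in range(n_iter):
                for i, a in enumerate(angles):
                    est = project_views(volume, [a])[0]
                    residual = (projections[i] - est) / n_slices
                    for z in range(n_slices):
                        shift = int(round(np.tan(a) * z))
                        # Back-project the residual into each slice.
                        volume[z] += relax * np.roll(residual, -shift, axis=1)
            return volume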

    High-performance and hardware-aware computing: proceedings of the first International Workshop on New Frontiers in High-performance and Hardware-aware Computing (HipHaC'08)

    The HipHaC workshop aims at combining new aspects of parallel, heterogeneous, and reconfigurable microprocessor technologies with concepts of high-performance computing and, particularly, numerical solution methods. Compute- and memory-intensive applications can only benefit from the full hardware potential if all features on all levels are taken into account in a holistic approach.