158 research outputs found

    Processing optimization with parallel computing for the J-PET tomography scanner

    The Jagiellonian-PET (J-PET) collaboration is developing a prototype TOF-PET detector based on long polymer scintillators. This novel approach exploits the excellent timing properties of plastic scintillators, which permit very precise time measurements. The very fast, FPGA-based front-end electronics and data acquisition system, as well as the low- and high-level reconstruction algorithms, were developed specifically for the J-PET scanner. TOF-PET data processing and reconstruction are time- and resource-demanding operations, especially in the case of a large-acceptance detector working in triggerless data acquisition mode. In this article, we discuss the parallel computing methods applied to optimize data processing for the J-PET detector. We begin with general concepts of parallel computing and then discuss several applications of those techniques in J-PET data processing.
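
    As an illustration of the coarse-grained parallelism that triggerless processing admits, the sketch below splits a stream of time-stamped hits into independent time slices and builds events in each slice with a worker pool. It is a minimal Python sketch under assumed data layouts and window lengths, not the collaboration's actual framework.

```python
# Hypothetical sketch: chunk a stream of time-stamped detector hits into fixed
# time slices and run low-level processing (event building) in parallel.
# Field names and window lengths are illustrative assumptions, not the J-PET framework.
from multiprocessing import Pool
from typing import List, Tuple

Hit = Tuple[float, int]          # (time in ns, channel id) -- assumed layout

def build_events(hits: List[Hit], coincidence_ns: float = 3.0) -> List[List[Hit]]:
    """Group time-sorted hits whose time differences fall within the coincidence window."""
    events, current = [], []
    for hit in sorted(hits):
        if current and hit[0] - current[-1][0] > coincidence_ns:
            events.append(current)
            current = []
        current.append(hit)
    if current:
        events.append(current)
    return events

def process_slices(hits: List[Hit], slice_ns: float = 1e6, workers: int = 8):
    """Partition the hit stream into independent time slices and process them in parallel."""
    t0 = min(h[0] for h in hits)
    slices = {}
    for h in hits:
        slices.setdefault(int((h[0] - t0) // slice_ns), []).append(h)
    with Pool(workers) as pool:
        return pool.map(build_events, list(slices.values()))
```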

    Study of CT Images Processing with the Implementation of MLEM Algorithm using CUDA on NVIDIA’S GPU Framework

    In medicine, Computed Tomography (CT) images are obtained through a reconstruction algorithm. The classical method for image reconstruction is Filtered Back Projection (FBP). This method is fast and simple but does not use any statistical information about the measurements, and the appearance of artifacts and the low spatial resolution of the reconstructed images must be considered. Furthermore, FBP requires optimal projection conditions and complete data sets. In this paper, a methodology to accelerate the CT reconstruction process based on the Maximum Likelihood Expectation Maximization (MLEM) algorithm is presented. This statistical iterative reconstruction algorithm uses GPU programming paradigms and was compared with sequential implementations, reducing reconstruction time by up to three orders of magnitude while preserving image quality. It also performed well when compared with reconstruction methods provided by commercial software. The system, which would consist exclusively of a commercial laptop and GPU, could be used as a fast, portable, simple, and cheap image reconstruction platform in the future.
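
    For reference, the MLEM update that such a GPU implementation parallelizes can be written in a few lines of NumPy. The sketch below assumes a dense system matrix for clarity; a CUDA version would replace these array operations with GPU kernels (or a drop-in library such as CuPy). It is an illustrative sketch, not the paper's implementation.

```python
# Illustrative MLEM update in NumPy (a dense system matrix A is assumed for clarity).
import numpy as np

def mlem(A: np.ndarray, y: np.ndarray, n_iter: int = 50, eps: float = 1e-12) -> np.ndarray:
    """Maximum Likelihood Expectation Maximization for y ~ Poisson(A @ x).

    A : (n_bins, n_voxels) system matrix
    y : (n_bins,) measured projection data
    """
    x = np.ones(A.shape[1])                 # flat initial image
    sensitivity = A.sum(axis=0) + eps       # A^T 1, per-voxel normalisation
    for _ in range(n_iter):
        forward = A @ x + eps               # forward projection
        ratio = y / forward                 # measured / estimated counts
        x *= (A.T @ ratio) / sensitivity    # multiplicative EM update
    return x
```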

    Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA - SoC Devices

    A novel approach to tomographic data processing has been developed and evaluated using the Jagiellonian PET (J-PET) scanner as an example. We propose a system that removes the need for a powerful processing facility, local to the scanner, capable of reconstructing images on the fly. Instead, we introduce a Field-Programmable Gate Array (FPGA) System-on-Chip (SoC) platform connected directly to the data streams coming from the scanner, which can perform event building, filtering, coincidence search, and Region-of-Response (ROR) reconstruction in the programmable logic, and visualization on the integrated processors. The platform significantly reduces the data volume by converting raw data to a list-mode representation, while generating visualizations on the fly.
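
    The sketch below is a software reference model, in Python, of the kind of processing such programmable logic performs for a strip detector: estimating the hit position along each scintillator strip from the two-end time difference, pairing coincident hits, and emitting a compact list-mode record with the TOF-derived offset of the annihilation point along the line of response. The signal velocity, window length, and record layout are illustrative assumptions, not the published design.

```python
# Software reference model of an on-chip processing chain for a strip TOF-PET detector.
from dataclasses import dataclass

C_MM_PER_NS = 299.79          # speed of light in mm/ns
V_EFF_MM_PER_NS = 126.0       # assumed effective signal speed in the scintillator

@dataclass
class StripHit:
    strip_id: int
    t_up_ns: float            # photomultiplier at one strip end
    t_down_ns: float          # photomultiplier at the other end

    @property
    def time_ns(self) -> float:
        return 0.5 * (self.t_up_ns + self.t_down_ns)

    @property
    def z_mm(self) -> float:  # hit position along the strip
        return 0.5 * V_EFF_MM_PER_NS * (self.t_up_ns - self.t_down_ns)

def to_list_mode(a: StripHit, b: StripHit, window_ns: float = 3.0):
    """Return a compact list-mode record for a coincident pair, or None outside the window."""
    dt = a.time_ns - b.time_ns
    if abs(dt) > window_ns:
        return None
    tof_offset_mm = 0.5 * C_MM_PER_NS * dt   # shift of the annihilation point along the LOR
    return (a.strip_id, a.z_mm, b.strip_id, b.z_mm, tof_offset_mm)
```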

    Comparison of different image reconstruction algorithms for Digital Breast Tomosynthesis and assessment of their potential to reduce radiation dose

    Master's thesis, Engineering Physics (Engenharia Física), 2022, Universidade de Lisboa, Faculdade de Ciências. Digital Breast Tomosynthesis is a three-dimensional medical imaging technique that allows the viewing of sectional parts of the breast. Obtaining multiple slices of the breast is an advantage over conventional mammography examination, owing to the increased potential for breast cancer detectability. Conventional mammography, despite being a screening success, has undesirable specificity and sensitivity and high recall rates owing to the overlapping of tissues. Although this new technique promises better diagnostic results, the acquisition methods and image reconstruction algorithms are still under research. Several articles suggest the use of analytic algorithms; however, more recent articles highlight the potential of iterative algorithms to increase image quality compared with the former. The scope of this dissertation was to test the hypothesis that iterative algorithms can achieve higher-quality images from acquisitions at lower doses than analytic algorithms. In a first stage, the open-source Tomographic Iterative GPU-based Reconstruction (TIGRE) toolbox for fast and accurate 3D X-ray image reconstruction was used to reconstruct images acquired with an acrylic phantom. The algorithms used from the toolbox were the Feldkamp-Davis-Kress, the Simultaneous Algebraic Reconstruction Technique, and the Maximum Likelihood Expectation Maximization algorithms. In a second and final stage, the possibility of further reducing the radiation dose using image post-processing tools was evaluated. A Total Variation Minimization filter was applied to the images reconstructed with the TIGRE toolbox algorithm that provided the best image quality. These were then compared to the images from the commercial unit used for the acquisitions. Using image quality parameters, it was found that the Maximum Likelihood Expectation Maximization algorithm performed best of the three at lower radiation doses, especially with the filter. In sum, the results showed the potential of the algorithm for obtaining images of adequate quality at low doses.
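
    As an example of the post-processing step described above, the following NumPy sketch implements a standard 2-D total-variation (ROF) denoising filter via Chambolle's projection algorithm. It is an illustrative stand-in for a Total Variation Minimization filter, not the dissertation's exact implementation; the weight, iteration count, and periodic boundary handling are assumptions.

```python
# Minimal 2-D total-variation (ROF) denoising via Chambolle's projection algorithm.
import numpy as np

def _grad(u):
    gx = np.roll(u, -1, axis=0) - u
    gy = np.roll(u, -1, axis=1) - u
    return gx, gy

def _div(px, py):
    return (px - np.roll(px, 1, axis=0)) + (py - np.roll(py, 1, axis=1))

def tv_denoise(image: np.ndarray, weight: float = 0.1, n_iter: int = 100) -> np.ndarray:
    """Approximately minimise TV(u) + ||u - image||^2 / (2 * weight)."""
    px = np.zeros_like(image)
    py = np.zeros_like(image)
    tau = 0.125                                  # step size bound for the 2-D scheme
    for _ in range(n_iter):
        gx, gy = _grad(_div(px, py) - image / weight)
        mag = np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / (1.0 + tau * mag)  # Chambolle dual fixed-point update
        py = (py + tau * gy) / (1.0 + tau * mag)
    return image - weight * _div(px, py)
```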

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in a Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.
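
    For context, the Bayesian framing referred to above can be summarized by a generic maximum a posteriori objective for emission tomography; this is an illustrative formulation, not the thesis's specific 4-D or multi-modal models.

```latex
% Generic MAP objective for emission tomography: y_i are measured counts,
% A is the system matrix, R a prior (e.g. encoding anatomical information from MR).
\hat{\lambda} \;=\; \arg\max_{\lambda \ge 0}\;
  \sum_i \Big( y_i \log [A\lambda]_i \;-\; [A\lambda]_i \Big) \;-\; \beta\, R(\lambda)
```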

    Compressed Sensing for Few-View Multi-Pinhole SPECT with Applications to Preclinical Imaging

    Single Photon Emission Computed Tomography (SPECT) can be used to identify and quantify changes in molecular and cellular targets involved in disease. A radiopharmaceutical that targets a specific metabolic function is administered to a subject, and planar projections are formed by imaging emissions at different view angles around the subject. The reconstruction task is to determine the distribution of radioactivity within the subject from the projections. We present a reconstruction approach that uses only a few view angles, resulting in a highly underdetermined system, which could be used for dynamic imaging applications designed to quantify physiologic processes altered by disease. We developed an approach to solving the underdetermined problem that incorporates a fast matrix-based multi-pinhole projection model into a primal-dual algorithm (Chambolle-Pock), tailored to perform penalized data-fidelity minimization using the reconstruction's total variation as a sparse regularizer. The resulting algorithm was implemented on a Graphics Processing Unit (GPU) and validated by solving an idealized quadratic problem. Simulated noisy data from a digital rat thorax phantom were reconstructed using a range of regularization parameters and primal-dual scale factors to control the smoothness of the reconstruction and the step size in the iterative algorithm, respectively. The approach was characterized by evaluating data fidelity, convergence, and noise properties. The proposed approach was then applied to few-view experimental data obtained in a preclinical SPECT study. 99mTc-labeled macroaggregated albumin (MAA), which accumulates in the lung, was administered to a rat, and multi-pinhole image data were acquired and reconstructed. The results demonstrate that MAA uptake in the lungs is accurately quantified over a wide range of activity levels using as few as three view angles. In a pilot experiment, we also showed, using 15 and 60 view angles, that uptake of 99mTc-hexamethylpropyleneamineoxime in hyperoxia-exposed rats is higher than that in control rats, consistent with previous studies in our laboratory. Overall, these experiments demonstrate the potential utility of the proposed method for few-view three-dimensional reconstruction of SPECT data for dynamic preclinical studies.
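
    To make the reconstruction formulation concrete, the following NumPy sketch implements Chambolle-Pock primal-dual iterations for a TV-penalized data-fidelity problem, with a dense matrix standing in for the multi-pinhole projection model; a GPU implementation would map these array operations to CUDA kernels. Names and parameter values are illustrative assumptions, not the study's code.

```python
# Chambolle-Pock sketch for  min_x 0.5*||A x - y||^2 + alpha*TV(x),  subject to x >= 0.
import numpy as np

def grad2d(u):
    return np.stack([np.roll(u, -1, 0) - u, np.roll(u, -1, 1) - u])

def div2d(q):
    return (q[0] - np.roll(q[0], 1, 0)) + (q[1] - np.roll(q[1], 1, 1))

def chambolle_pock_tv(A, y, shape, alpha=0.05, n_iter=200):
    n = A.shape[1]
    L = np.sqrt(np.linalg.norm(A, 2) ** 2 + 8.0)   # bound on ||K|| for K = (A, grad)
    tau = sigma = 1.0 / L
    x = np.zeros(n)
    x_bar = x.copy()
    p = np.zeros(A.shape[0])                       # dual variable for the data term
    q = np.zeros((2, *shape))                      # dual variable for the TV term
    for _ in range(n_iter):
        # dual updates (proximal steps on the convex conjugates)
        p = (p + sigma * (A @ x_bar - y)) / (1.0 + sigma)
        q += sigma * grad2d(x_bar.reshape(shape))
        q /= np.maximum(1.0, np.sqrt((q ** 2).sum(axis=0)) / alpha)
        # primal update with nonnegativity projection
        x_new = np.maximum(x - tau * (A.T @ p - div2d(q).ravel()), 0.0)
        # over-relaxation
        x_bar = 2.0 * x_new - x
        x = x_new
    return x.reshape(shape)
```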