
    Monte-Carlo simulations and image reconstruction for novel imaging scenarios in emission tomography

    Emission imaging incorporates both the development of dedicated devices for data acquisition and algorithms for recovering images from those data. Emission tomography is an indirect approach to imaging. The effect of device modification on the final image can be understood both through the way in which data are gathered, using simulation, and through the way in which the image is formed from those data, i.e. image reconstruction. When developing novel devices, systems and imaging tasks, accurate simulation and image reconstruction allow performance to be estimated, and in some cases optimized, using computational methods before or during physical construction. However, there is a vast range of approaches, algorithms and pre-existing computational tools that can be exploited, and the choices made will affect the accuracy of the in silico results and the quality of the reconstructed images. On the one hand, should important physical effects be neglected in either the simulation or reconstruction steps, specific enhancements provided by novel devices may not be represented in the results. On the other hand, over-modeling of device characteristics in either step leads to large computational overheads that can confound timely results. Here, a range of simulation methodologies and toolkits are discussed, as well as reconstruction algorithms that may be employed in emission imaging. The relative advantages and disadvantages of a range of options are highlighted using specific examples from current research scenarios.

    4-D Tomographic Inference: Application to SPECT and MR-driven PET

    Emission tomographic imaging is framed in a Bayesian and information-theoretic framework. The first part of the thesis is inspired by the new possibilities offered by PET-MR systems, formulating models and algorithms for 4-D tomography and for the integration of information from multiple imaging modalities. The second part of the thesis extends the models described in the first part, focusing on the imaging hardware. Three key aspects of the design of new imaging systems are investigated: criteria and efficient algorithms for the optimisation and real-time adaptation of the parameters of the imaging hardware; learning the characteristics of the imaging hardware; and exploiting the rich information provided by depth-of-interaction (DOI) and energy-resolving devices. The document concludes with a description of the NiftyRec software toolkit, developed to enable 4-D multi-modal tomographic inference.

    A GPU-based finite-size pencil beam algorithm with 3D-density correction for radiotherapy dose calculation

    Targeting the development of an accurate and efficient dose calculation engine for online adaptive radiotherapy, we have implemented a finite-size pencil beam (FSPB) algorithm with a 3D-density correction method on GPU. This new GPU-based dose engine is built on our previously published ultrafast FSPB computational framework [Gu et al. Phys. Med. Biol. 54 6287-97, 2009]. Dosimetric evaluations against Monte Carlo dose calculations are conducted on 10 IMRT treatment plans (5 head-and-neck cases and 5 lung cases). For all cases, the 3D-density correction improves on the conventional FSPB algorithm, and for most cases the improvement is significant. Regarding efficiency, owing to the careful arrangement of memory access and the use of GPU intrinsic functions, the dose calculation for an IMRT plan is accomplished well within 1 second (except for one case) with this new GPU-based FSPB algorithm. Compared to the previous GPU-based FSPB algorithm without 3D-density correction, the new algorithm, though slightly less computationally efficient (~5-15% slower), significantly improves the dose calculation accuracy, making it more suitable for online IMRT replanning.
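    As a rough illustration of the pencil beam superposition idea and of why a density correction matters, the sketch below (Python, with a hypothetical depth-dose table and Gaussian lateral kernel) rescales geometric depth to radiological depth along the beam axis before the depth-dose look-up. It conveys the general spirit of a density correction, not the 3D correction or the GPU implementation described in the paper; all names and values are illustrative.

```python
import numpy as np

# Hypothetical depth-dose table for a single pencil beam (depth in cm, relative dose).
depth_grid = np.array([0.0, 5.0, 10.0, 15.0, 20.0])
ddose_table = np.array([1.00, 0.85, 0.65, 0.45, 0.30])

def radiological_depth(densities, dz):
    """Cumulative water-equivalent depth along the beam axis."""
    return np.cumsum(densities) * dz

def fspb_dose(densities, dz=0.5, sigma=0.6, x=np.linspace(-3.0, 3.0, 61)):
    """Dose on a (depth, lateral) grid from one finite-size pencil beam:
    central-axis depth-dose looked up at the radiological depth,
    spread laterally with a Gaussian kernel."""
    z_rad = radiological_depth(densities, dz)            # density correction (1D along the axis)
    central_axis = np.interp(z_rad, depth_grid, ddose_table)
    lateral = np.exp(-0.5 * (x / sigma) ** 2)
    return np.outer(central_axis, lateral)

# Toy slab: water with a lung-like low-density insert between 4 cm and 8 cm depth.
density = np.ones(40)
density[8:16] = 0.3
dose = fspb_dose(density)
print(dose.shape)  # (40, 61): depth bins x lateral positions
```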

    Evaluation of Single-Chip, Real-Time Tomographic Data Processing on FPGA - SoC Devices

    A novel approach to tomographic data processing has been developed and evaluated using the Jagiellonian PET (J-PET) scanner as an example. We propose a system in which there is no need for a powerful processing facility local to the scanner, capable of reconstructing images on the fly. Instead, we introduce a Field Programmable Gate Array (FPGA) System-on-Chip (SoC) platform connected directly to the data streams coming from the scanner, which performs event building, filtering, coincidence search and Region-Of-Response (ROR) reconstruction in the programmable logic, and visualization on the integrated processors. The platform significantly reduces the data volume by converting raw data to a list-mode representation, while generating visualization on the fly.
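    As an illustration of the coincidence search step that the platform performs in programmable logic, here is a minimal software sketch in Python (not FPGA/HDL), assuming each single event is a (timestamp, detector id) pair and a hypothetical 3 ns coincidence window; the function and variable names are illustrative only.

```python
from typing import List, Tuple

def find_coincidences(singles: List[Tuple[float, int]],
                      window_ns: float = 3.0) -> List[Tuple[int, int]]:
    """Pair time-sorted single events whose timestamps differ by less than
    the coincidence window and that hit different detectors."""
    singles = sorted(singles)            # sort by timestamp
    pairs = []
    i = 0
    while i < len(singles) - 1:
        t1, d1 = singles[i]
        t2, d2 = singles[i + 1]
        if (t2 - t1) <= window_ns and d1 != d2:
            pairs.append((d1, d2))       # record one coincidence (a ROR/LOR candidate)
            i += 2                       # consume both singles
        else:
            i += 1
    return pairs

# Toy list-mode stream: (timestamp in ns, detector id)
stream = [(0.0, 3), (1.2, 17), (50.0, 5), (200.0, 9), (201.5, 21)]
print(find_coincidences(stream))         # [(3, 17), (9, 21)]
```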

    Singular value decomposition analysis of back projection operator of maximum likelihood expectation maximization PET image reconstruction.

    Background: In emission tomography, the maximum likelihood expectation maximization (ML-EM) reconstruction technique has replaced analytical approaches in several applications. The most important drawback of this iterative method is its linear rate of convergence and the corresponding computational burden. Therefore, simplifications are usually required in the Monte Carlo simulation of the back projection step. To overcome these problems, a reconstruction code has been developed with a graphical processing unit based Monte Carlo engine, which enables full physical modelling in the back projection. Materials and methods: Code performance was evaluated with simulations on two geometries. One is a sophisticated scanner geometry consisting of a dodecagon with an inscribed-circle radius of 8.7 cm, packed on each side with an array of 39 × 81 LYSO detector pixels of 1.17 mm square cross-section, similar to a Mediso nanoScan PET/CT scanner. The other, simplified geometry contains a 38.4 mm long interval as the voxel space; detector pixels are assigned to two parallel sections, each containing 81 crystals of size 1.17 × 1.17 mm. Results: We have demonstrated that full Monte Carlo modelling in the back projection step leads to material-dependent inhomogeneities in the reconstructed image. The reasons behind this apparently anomalous behaviour were analysed in the simplified system by means of singular value decomposition and explained by different speeds of convergence. Conclusions: To still take advantage of the higher noise stability of full physical modelling, a new filtering technique is proposed for convergence acceleration. Some theoretical considerations for the practical implementation and for further development are also presented.
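    The multiplicative ML-EM update at the heart of the reconstruction discussed above can be summarized in a few lines. This is a minimal NumPy sketch assuming a dense system matrix A and a Poisson-distributed sinogram y; it illustrates the generic algorithm, not the GPU Monte Carlo back projector developed in the paper, and all names are hypothetical.

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum likelihood expectation maximization (ML-EM) reconstruction.

    A : (n_bins, n_voxels) system matrix (forward projector)
    y : (n_bins,) measured sinogram
    Returns the reconstructed activity image x.
    """
    x = np.ones(A.shape[1])                  # uniform initial image
    sensitivity = A.T @ np.ones(A.shape[0])  # back projection of ones
    for _ in range(n_iter):
        y_est = A @ x + eps                  # forward projection of the current estimate
        ratio = y / y_est                    # measured-to-estimated ratio
        x *= (A.T @ ratio) / (sensitivity + eps)  # back projection and multiplicative update
    return x

# Toy usage with a random system matrix (illustration only)
rng = np.random.default_rng(0)
A = rng.random((200, 100))
x_true = rng.random(100)
y = rng.poisson(A @ x_true)
x_rec = mlem(A, y)
```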

    Development of tools for quality control on therapeutic carbon beams with a fast-MC code (FRED)

    In the fight against tumors, different types of cancer require different treatments: surgery, radiotherapy, chemotherapy, hormone therapy and immunotherapy, often used in combination. About 50% of cancer patients undergo radiotherapy, which exploits the ability of ionizing radiation to damage the genetic material of cancer cells, causing apoptosis and preventing their reproduction. The non-invasive nature of radiation represents a viable alternative for tumors that are not surgically operable because they are localized in hardly reachable anatomical sites or on organs whose removal would be too disabling for the patient. A new frontier of radiotherapy is Particle Therapy (PT), which uses accelerated charged particle beams (in particular protons and carbon ions) to irradiate solid tumors. The main advantage of this technique over standard radiotherapy with x-ray/electron beams lies in the different longitudinal energy release profiles. While the longitudinal dose release of photons is characterized by a slow exponential decrease, charged particles exhibit a sharp peak at the end of their path, providing a more selective energy release. By conveniently controlling the peak position it is possible to concentrate the dose (the energy released per unit mass) in the tumor and, at the same time, preserve the surrounding healthy tissues. In particle therapy treatments, the steep dose gradients achieved demand highly accurate modelling of the interaction of beam particles with tissue. The high ballistic precision of hadrons may result in a superior delivered dose distribution compared to conventional radiotherapy only if accompanied by precise patient positioning and highly accurate treatment planning. The latter is performed by the Treatment Planning System (TPS), sophisticated software that provides the position, intensity and direction of the beams to the accelerator control system. One of the major issues of Monte Carlo (MC) based TPSs is the high computational time required to meet the demand for high accuracy. The code FRED (Fast paRticle thErapy Dose evaluator) has been developed to allow fast optimization of treatment plans in proton therapy while profiting from the dose release accuracy of an MC tool. Within FRED, proton interactions are described with the precision level of leading-edge MC tools used for medical physics applications, with the advantage of reducing the simulation time by up to a factor of 1000. In this way, an MC plan recalculation takes a few minutes on GPU (Graphics Processing Unit) cards instead of several hours on CPU (Central Processing Unit) hardware. The exceptional speed of the proton tracking algorithms implemented in FRED and the excellent results achieved have opened the door to several applications within the particle therapy field. In particular, the success of FRED with protons prompted the interest of the CNAO (Centro Nazionale di Adroterapia Oncologica) center in Pavia in developing FRED also for carbon therapy, to recalculate treatment plans with carbon ions. Among the several differences between proton and carbon beams, the nuclear fragmentation of the projectile in a 12C treatment, which does not occur with protons, is certainly the most important. The simulation of ion beam fragmentation gives an important contribution to the dose deposition.
    The total dose released is due not only to the primary beam but also to secondary and tertiary particles. Proton beams also produce secondary particles, mostly secondary protons from target fragmentation, which contribute at the level of a few percent to the dose deposition at higher proton beam energies. However, fragments of the projectile, produced only by carbon beams, have on average the same energy per nucleon as the primary beam and a lower mass, and can therefore release dose beyond the peak, causing the well-known fragmentation tail. This thesis is focused on the development of a fast MC simulating carbon treatments in particle therapy, with an entirely new nuclear interaction model of carbon on light target nuclei. The model has been developed to be implemented in the GPU-based MC code FRED. For this reason, the algorithms were designed to balance accuracy, calculation time and GPU execution guidelines. In particular, maximum attention has been given to the physical processes relevant for dose and RBE-weighted dose computation. Moreover, where possible, look-up tables have been implemented instead of explicit calculations, in view of the GPU implementation. Some aspects of the interaction of carbon ions with matter are analogous to those already used in FRED for proton beams: for ionization energy loss and multiple scattering, only a few adjustments were necessary. The nuclear model, on the contrary, was built from scratch. The approach has been to parameterize existing data and to apply physical scaling in the energy ranges where data are missing. The elastic cross-section has been obtained from ENDF/B-VII data, while the calculation of the non-elastic cross-section was based on results reported in the Tacheki, Zhang and Kox papers. The data used for sampling the combination of emitted fragments and their energy and angle distributions come from the Dudouet and Divay experiments. To fill the gaps in the experimental data, an intercomparison between FRED and the full-MC code FLUKA helped to check the adopted scaling. The model has been tested against the full-MC code FLUKA, commonly used in particle therapy, and then against two of the few experiments available in the literature. The agreement with FLUKA is excellent, especially at lower energies.
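    The abstract notes that, in view of GPU execution, look-up tables are used wherever possible instead of explicit calculations. The sketch below (Python, with hypothetical tabulated values) shows that generic pattern: interpolation in a tabulated cross-section and inverse-CDF sampling of a tabulated angular distribution. Neither the numbers nor the table layout correspond to FRED's actual data.

```python
import numpy as np

# Hypothetical tabulated non-elastic cross-section (energy in MeV/u, sigma in barn);
# real values would be parameterized from experimental data, as described above.
energy_grid = np.array([50.0, 100.0, 200.0, 400.0])
sigma_table = np.array([1.40, 1.25, 1.10, 1.00])

def lookup_sigma(E):
    """Linear interpolation in the table instead of an explicit model evaluation."""
    return np.interp(E, energy_grid, sigma_table)

# Hypothetical tabulated angular distribution of a fragment, sampled by inverse CDF.
theta_grid = np.linspace(0.0, np.pi / 6, 64)   # emission angle grid (rad)
pdf = np.exp(-theta_grid / 0.05)               # assumed shape, not FRED's
cdf = np.cumsum(pdf)
cdf /= cdf[-1]

def sample_theta(rng, n):
    """Draw emission angles by inverting the tabulated CDF."""
    return np.interp(rng.random(n), cdf, theta_grid)

rng = np.random.default_rng(1)
print(lookup_sigma(150.0), sample_theta(rng, 3))
```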

    Simulation of positron range and additional gamma emissions in PET (Simulación de rango del positrón y emisiones gamma adicionales en PET)

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Ciencias Físicas, Departamento de Física Atómica, Molecular y Nuclear, defended on 03-04-2014.