
    Algorithms for enhanced artifact reduction and material recognition in computed tomography

    Computed tomography (CT) imaging provides a non-destructive means to examine the interior of an object, which makes it a valuable tool in medical and security applications. The variety of materials encountered in security applications is greater than in medical applications. Factors such as clutter, the presence of dense objects, and closely packed items in a bag or parcel add to the difficulty of material recognition in security applications. Metal and dense objects create image artifacts which degrade image quality and reduce recognition accuracy. Conventional CT machines scan the object using single-source or dual-source spectra and reconstruct the effective linear attenuation coefficient of each voxel in the image, which may not provide sufficient information to identify the occupying materials. In this dissertation, we provide algorithmic solutions to enhance CT material recognition, with a set of algorithms that accommodate different classes of CT machines. First, we provide a metal artifact reduction algorithm for conventional CT machines, which perform measurements using a single X-ray source spectrum. Compared to previous methods, our algorithm is robust to severe metal artifacts and accurately reconstructs the regions in proximity to metal. Second, we propose a novel joint segmentation and classification algorithm for dual-energy CT machines which extends prior work to capture spatial correlation in material X-ray attenuation properties. We show that the classification performance of our method surpasses that of the prior work. Third, we propose a new framework for reconstruction and classification using a recently developed class of CT machines known as spectral CT. Spectral CT uses multiple energy windows to scan the object, thus capturing data across more energy dimensions per detector. Our reconstruction algorithm extracts essential features from the measured data using spectral decomposition. We explore the effect of different transforms on the measurement decomposition and develop a new basis transform which captures the essential information in the data and provides high classification accuracy. Furthermore, we extend our framework to the task of explosive detection and show that it achieves high detection accuracy and is robust to noise and variations. Lastly, we propose a combined algorithm for spectral CT which jointly reconstructs images and labels each region in the image. We offer a tractable optimization method to solve the proposed discrete tomography problem and show that our method outperforms prior work in terms of both reconstruction quality and classification accuracy.
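
As a rough, self-contained illustration of the kind of measurement decomposition mentioned above (not the dissertation's actual transform), the sketch below decomposes multi-window attenuation measurements onto two illustrative basis functions by least squares; all energies, basis shapes, and numerical values are assumptions made for the example.

```python
import numpy as np

energies = np.array([40.0, 60.0, 80.0, 100.0, 120.0])  # keV, one value per energy window

# Illustrative basis functions evaluated at the window energies
# (a photoelectric-like ~E^-3 term and a roughly energy-flat Compton-like term).
photoelectric = 1.0 / energies**3
compton = np.full_like(energies, 0.02)

A = np.column_stack([photoelectric, compton])  # design matrix: windows x basis functions

# Simulated effective attenuation of one voxel measured in each energy window.
rng = np.random.default_rng(0)
true_coeffs = np.array([5.0e4, 0.15])
measured = A @ true_coeffs + rng.normal(0.0, 1e-3, size=energies.size)

# Least-squares decomposition: the recovered basis coefficients act as
# material features for reconstruction or a downstream classifier.
coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
print("estimated basis coefficients:", coeffs)
```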

    A Review of Automated Image Understanding within 3D Baggage Computed Tomography Security Screening

    Baggage inspection is the principal safeguard against the transportation of prohibited and potentially dangerous materials at airport security checkpoints. Although traditionally performed by 2D X-ray based scanning, increasingly stringent security regulations have led to a growing demand for more advanced imaging technologies. The role of X-ray Computed Tomography is thus rapidly expanding beyond the traditional materials-based detection of explosives. The development of computer vision and image processing techniques for the automated understanding of 3D baggage-CT imagery is, however, complicated by poor image resolution, image clutter, and high levels of noise and artefacts. We discuss the most recent and pertinent advancements and identify topics for future research within the challenging domain of automated image understanding for baggage security screening CT.

    Inter-crystal scatter in positron emission tomography: Identification techniques and effects on reconstructed images for AX-PET demonstrator

    Positron Emission Tomography (PET) is a nuclear medicine imaging technique that allows in vivo, 3D visualization of functional processes in the body. A PET scanner measures the gamma rays produced during the annihilation of a positron, which is emitted by a radioisotope injected into the patient. System efficiency is a crucial feature of high-resolution PET scanners aimed at brain or small-animal imaging, as it allows a more faithful image to be obtained or the radiotracer activity, and hence the dose injected into the patient, to be reduced. The aim of this research work is to improve the efficiency and image quality of an Axial PET scanner prototype (AX-PET) without jeopardizing spatial resolution.
The AX-PET scanner is designed for human brain imaging and is based on several layers of long, thin, axially arranged scintillator crystals, each read out individually by a silicon photomultiplier. The detector's design allows the acquisition of events in which a gamma ray undergoes multiple interactions in different crystals: inter-crystal scatter (ICS) events. In contrast to the more standard single-interaction (Golden) events, ICS events are ambiguous because the interaction sequence is unknown. Therefore, in this investigation we develop strategies for the inclusion and identification of ICS events in image reconstruction and assess their impact on system efficiency and image quality. Different algorithms are used to select the first interaction of an ICS event based on Compton kinematics, the Klein-Nishina cross section, etc., each with a certain identification rate. Their performance is analysed on reconstructed images of a point source and three different phantoms through several figures of merit such as the recovery coefficient, contrast-to-noise ratio, and visibility. The data analysis shows a statistically significant contribution of ICS events to system efficiency: sensitivity improves by 25% to 80% over Golden events alone, depending on the ICS subtypes selected for reconstruction. Including ICS coincidences increases the signal-to-noise and contrast-to-noise ratios but slightly decreases spatial resolution, even for the best identification algorithm. In conclusion, the use of ICS events for image reconstruction is promising for low-activity (low-statistics) measurements, as it significantly increases system efficiency and improves image quality without a severe loss of spatial resolution.
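
As an illustration of the Compton-kinematics criterion mentioned above, the following sketch orders a two-hit ICS event by checking which candidate first interaction better matches the energy deposit predicted by the Compton formula for the geometric scattering angle. It is a simplified toy (the actual AX-PET identification algorithms also use the Klein-Nishina cross section and other information), and all positions, energies, and the incoming direction are made-up values.

```python
import numpy as np

ME_C2 = 511.0  # electron rest energy in keV; annihilation photons start at 511 keV

def compton_deposit(e_in_kev, cos_theta):
    """Energy deposited by a Compton interaction that scatters the photon by angle theta."""
    e_scattered = e_in_kev / (1.0 + (e_in_kev / ME_C2) * (1.0 - cos_theta))
    return e_in_kev - e_scattered

def order_two_hit_event(pos_a, e_a, pos_b, e_b, incoming_dir):
    """Return the two hits ordered so the first element is the more likely first interaction.

    For each candidate ordering the scattering angle is taken from geometry
    (assumed incoming unit vector vs. the line joining the two hits); the ordering
    whose measured deposit best matches the Compton-kinematics prediction wins.
    """
    def mismatch(first_pos, first_e, second_pos):
        scatter_dir = np.asarray(second_pos, float) - np.asarray(first_pos, float)
        scatter_dir /= np.linalg.norm(scatter_dir)
        cos_theta = float(np.dot(incoming_dir, scatter_dir))
        return abs(first_e - compton_deposit(511.0, cos_theta))

    if mismatch(pos_a, e_a, pos_b) <= mismatch(pos_b, e_b, pos_a):
        return (pos_a, e_a), (pos_b, e_b)
    return (pos_b, e_b), (pos_a, e_a)

# Toy event: two crystal hits (positions in mm, deposits in keV) and an assumed incoming direction.
incoming = np.array([1.0, 0.0, 0.0])
first, second = order_two_hit_event([0.0, 0.0, 0.0], 170.0,
                                    [5.0, 8.0, 0.0], 341.0, incoming)
print("selected first interaction:", first)
```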

    Relevance of accurate Monte Carlo modeling in nuclear medical imaging

    Monte Carlo techniques have become popular in different areas of medical physics with the advent of powerful computing systems. In particular, they have been extensively applied to simulate processes involving random behavior and to quantify physical parameters that are difficult or even impossible to calculate by experimental measurement. Recent nuclear medical imaging innovations such as single-photon emission computed tomography (SPECT), positron emission tomography (PET), and multiple emission tomography (MET) are ideal candidates for Monte Carlo modeling because of the stochastic nature of radiation emission, transport, and detection processes. Factors that have contributed to their wider use include improved models of radiation transport processes, the practicality of application with the development of acceleration schemes, and the improved speed of computers. This paper presents the derivation and methodological basis of this approach and critically reviews its areas of application in nuclear imaging. An overview of existing simulation programs is provided and illustrated with examples of some useful features of such sophisticated tools in connection with common computing facilities and more powerful multiple-processor parallel processing systems. Current and future trends in the field are also discussed.
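
To make the estimation principle concrete, here is a minimal, self-contained sketch: a toy Monte Carlo estimate of the probability that a photon emitted at the centre of a uniform sphere escapes without interacting, reported with its sample-based standard error. The attenuation coefficient and geometry are illustrative and not taken from any of the systems discussed above.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 0.1        # linear attenuation coefficient (1/cm), illustrative value
radius = 5.0    # sphere radius (cm)

n = 100_000
# For a source at the centre the path length to the surface is the radius;
# a photon escapes if its sampled free path exceeds that distance.
free_paths = rng.exponential(scale=1.0 / mu, size=n)
escaped = free_paths > radius

# Sample mean estimates the escape probability; the sample standard deviation
# gives a probabilistic error estimate, exactly as described in the abstract.
estimate = escaped.mean()
std_error = escaped.std(ddof=1) / np.sqrt(n)
print(f"escape probability ~ {estimate:.4f} +/- {std_error:.4f} "
      f"(analytic {np.exp(-mu * radius):.4f})")
```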

    Characterization of multiphase flows integrating X-ray imaging and virtual reality

    Multiphase flows are used in a wide variety of industries, from energy production to pharmaceutical manufacturing. However, because of the complexity of the flows and the difficulty of measuring them, it is challenging to characterize the phenomena inside a multiphase flow. To help overcome this challenge, researchers have used numerous noninvasive measurement techniques to record the phenomena that occur inside the flow. One technique that has shown much success is X-ray imaging. While capable of high spatial resolution, X-ray imaging generally has poor temporal resolution. This research improves the characterization of multiphase flows in three ways. First, an X-ray image intensifier is modified to use a high-speed camera, pushing the temporal limits of what is possible with current tube-source X-ray imaging technology. Using this system, sample flows were imaged at 1000 frames per second without a reduction in spatial resolution. Next, the sensitivity of X-ray computed tomography (CT) measurements to changes in acquisition parameters is analyzed. While in theory CT measurements should be stable over a range of acquisition parameters, previous research has indicated otherwise. The analysis shows that, while raw CT values are strongly affected by changes to acquisition parameters, the acquisition parameters do not significantly influence the results for multiphase flow imaging if proper calibration techniques are used. Finally, two algorithms are analyzed for their suitability to reconstruct an approximate tomographic slice from only two X-ray projections. These algorithms increase the spatial error in the measurement compared with traditional CT; however, they allow very high temporal resolution for 3D imaging. The only limit on the speed of this measurement technique is the image intensifier and camera setup, which was shown to be capable of imaging at a rate of at least 1000 frames per second. While advances in measurement techniques are one part of improving multiphase flow characterization, the challenge extends beyond measurement techniques. For improved measurement techniques to be useful, the data must be accessible to scientists in a way that maximizes comprehension of the phenomena. To this end, this work also presents a system that uses the Microsoft Kinect sensor to provide natural, non-contact interaction with multiphase flow data. Furthermore, this system is constructed so that it is trivial to add natural, non-contact interaction to immersive visualization applications. Therefore, multiple visualization applications can be built that are optimized for specific types of data but all leverage the same natural interaction. Finally, the research concludes by proposing a system that integrates the improved X-ray measurements, the Kinect interaction system, and a Cave Automatic Virtual Environment (CAVE) to present scientists with multiphase flow measurements in an intuitive and inherently three-dimensional manner.
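
As a sketch of the calibration idea referred to above (not necessarily the procedure used in the dissertation), the snippet below maps raw CT values onto a Hounsfield-like scale using two reference measurements, so that air and water land at fixed values regardless of acquisition settings; the raw numbers are invented for the example.

```python
import numpy as np

def calibrate(raw, raw_air, raw_water):
    """Map raw CT values to a Hounsfield-like scale using air/water reference regions."""
    return 1000.0 * (raw - raw_water) / (raw_water - raw_air)

# Raw mean CT values of the air and water reference regions at one tube setting.
raw_air, raw_water = 120.0, 980.0

raw_slice = np.array([120.0, 550.0, 980.0, 1400.0])  # raw voxel values from the same scan
print(calibrate(raw_slice, raw_air, raw_water))
# Air maps to -1000 and water to 0 whatever the acquisition parameters were,
# which is what makes results comparable across settings.
```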

    Acceleration of GATE Monte Carlo simulations

    Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are forms of medical imaging that produce functional images reflecting biological processes. They are based on the tracer principle: a biologically active substance, a pharmaceutical, is selected so that its spatial and temporal distribution in the body reflects a certain body function or metabolism. In order to form images of the distribution, the pharmaceutical is labeled with gamma-ray-emitting or positron-emitting radionuclides (radiopharmaceuticals or tracers). After administration of the tracer to a patient, an external position-sensitive gamma-ray camera can detect the emitted radiation and, after a reconstruction process, form a stack of images of the radionuclide distribution. Monte Carlo methods are numerical methods that use random numbers to compute quantities of interest. This is normally done by creating a random variable whose expected value is the desired quantity; one then simulates and tabulates the random variable and uses its sample mean and variance to construct probabilistic estimates. The approach represents an attempt to model nature through direct simulation of the essential dynamics of the system in question. Monte Carlo modeling is the method of choice for all applications where measurements are not feasible or where analytic models are not available due to the complex nature of the problem. In addition, such modeling is a practical approach in nuclear medical imaging in several important application fields: detector design, quantification, correction methods for image degradations, detection tasks, etc. Several powerful dedicated Monte Carlo simulators for PET and/or SPECT are available. However, they are often neither detailed nor flexible enough to enable realistic simulations of emission tomography detector geometries while also modeling time-dependent processes such as decay, tracer kinetics, patient and bed motion, dead time, or detector orbits. Our Monte Carlo simulator of choice, the GEANT4 Application for Tomographic Emission (GATE), was specifically designed to address all these issues. The flexibility of GATE comes at a price, however. The simulation of a simple prototype SPECT detector may be feasible within hours in GATE, but an acquisition with a realistic phantom may take years to complete on a single CPU. In this dissertation we therefore focus on the Achilles' heel of GATE: efficiency. Acceleration of GATE simulations can only be achieved through a combination of efficient data analysis, dedicated variance reduction techniques, fast navigation algorithms, and parallelization. In the first part of this dissertation we consider the improvement of the analysis capabilities of GATE. The static analysis module in GATE is both inflexible and incapable of storing more detail without introducing a large computational overhead. However, the design and validation of the acceleration techniques in this dissertation require a flexible, detailed, and computationally efficient analysis module. To this end, we develop a new analysis framework capable of analyzing any process, from the decay of isotopes to particle interactions and detections, in any detector element and for any type of phantom. The evaluation of our framework consists of the assessment of spurious activity in 124I-Bexxar PET and of contamination in 131I-Bexxar SPECT. In the case of PET we describe how our framework can detect spurious coincidences generated by non-pure isotopes, even with realistic phantoms.
We show that optimized energy thresholds, which can readily be applied in the clinic, can now be derived in order to minimize the contamination. We also show that the spurious activity itself is not spatially uniform, so standard reconstruction and correction techniques are not adequate. In the case of SPECT we describe how it is now possible to classify detections into geometric detections, phantom scatter, penetration through the collimator, collimator scatter, and backscatter in the end parts. We show that standard correction algorithms such as triple-energy-window correction cannot correct for septal penetration. We demonstrate that 124I PET with optimized energy thresholds offers better image quality than 131I SPECT when standard reconstruction techniques are used. In the second part of this dissertation we focus on improving the efficiency of GATE with a variance reduction technique called Geometrical Importance Sampling (GIS). We describe how only 0.02% of all emitted photons can reach the crystal surface of a SPECT detector head with a low-energy high-resolution collimator; a lot of computing power is therefore wasted tracking photons that will not contribute to the result. A twofold strategy is used to solve this problem: GIS employs Russian Roulette to discard photons that are unlikely to contribute to the result, while photons in more important regions are split into several photons with reduced weight to increase their survival chance (a minimal sketch of this roulette-and-splitting step is given after this abstract). We show that this technique introduces branches into the particle history and describe how this can be taken into account by a particle history tree that is used for the analysis of the results. The evaluation of GIS consists of energy spectrum validation and of spatial resolution and sensitivity assessments for low- and medium-energy isotopes. We show that GIS reaches acceleration factors between 5 and 13 over analog GATE simulations for the isotopes in the study. It is a general acceleration technique that can be used for any isotope, phantom, and detector combination. Although GIS is useful as a safe and accurate acceleration technique, it cannot deliver clinically acceptable simulation times; the main reason lies in its inability to force photons in a specific direction. In the third part of this dissertation we solve this problem for 99mTc SPECT simulations. Our approach is twofold. Firstly, we introduce two variance reduction techniques: forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling (MPS). FD and CFD force copies of photons, at decay and at every interaction point, to be transported through the phantom in a direction sampled within a solid angle toward the SPECT detector head at all SPECT angles simultaneously. We describe how a weight must be assigned to each photon in order to compensate for the forced direction and for non-absorption at emission and scatter. We show how the weights are calculated from the total and differential Compton and Rayleigh cross sections per electron, with incorporation of Hubbell's atomic form factor. In the case of FD all detector interactions are modeled by Monte Carlo, while in the case of CFD the detector is modeled analytically. Secondly, we describe the design of an FD- and CFD-specialized navigator to accelerate the slow tracking algorithms in GEANT4. The validation study shows that both FD and CFD closely match the analog GATE simulations and that we obtain an acceleration factor of between 3 (FD) and 6 (CFD) orders of magnitude over analog simulations.
This allows for the simulation of a realistic acquisition with a torso phantom within 130 seconds. In the fourth part of this dissertation we exploit the intrinsically parallel nature of Monte Carlo simulations. We show that Monte Carlo simulations should scale linearly with the number of processing nodes but that this is usually not achieved in practice due to job setup time, output handling, and cluster overhead. We describe how our approach is based on two steps: job distribution and output data handling. The job distribution is based on a time-domain partitioning scheme that retains all experimental parameters and guarantees the statistical independence of each subsimulation. We also reduce the job setup time by introducing a parameterized collimator model for SPECT simulations. We reduce the output data handling time with a chain-based output merger. The scalability study is based on a set of simulations on a 70-CPU cluster and shows an acceleration factor of approximately 66 on 70 CPUs for both PET and SPECT. We also show that our method of parallelization does not introduce any approximations and that it can be readily combined with any of the acceleration techniques described above.
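
A minimal sketch of the roulette-and-splitting step at the heart of geometrical importance sampling is given below. The importance values and weights are illustrative, and this is a generic textbook version of the technique rather than GATE's implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_boundary(weight, importance_old, importance_new):
    """Return the photon copies (as a list of weights) after an importance boundary.

    Moving into a more important region splits the photon into several copies with
    reduced weight; moving into a less important region plays Russian Roulette.
    In both cases the expected total weight is preserved, keeping the estimator unbiased.
    """
    ratio = importance_new / importance_old
    if ratio >= 1.0:
        n = int(np.floor(ratio))
        copies = [weight / ratio] * n
        if rng.random() < ratio - n:        # handle the fractional part of the ratio
            copies.append(weight / ratio)
        return copies
    # Russian Roulette: survive with probability `ratio`, carrying a boosted weight.
    if rng.random() < ratio:
        return [weight / ratio]
    return []

print("after splitting:", cross_boundary(1.0, importance_old=1.0, importance_new=4.0))
print("after roulette:", cross_boundary(1.0, importance_old=4.0, importance_new=1.0))
```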
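The time-domain partitioning and merging scheme can be illustrated with the following toy sketch, in which each slice of the acquisition runs as an independent process with its own random seed and the partial outputs are summed afterwards. The "simulation" here is a stand-in Poisson counting model, not a GATE run, and all rates and durations are invented for the example.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

def simulate_slice(args):
    """Toy stand-in for one subsimulation covering [t_start, t_stop) with its own seed."""
    t_start, t_stop, seed = args
    rng = np.random.default_rng(seed)
    rate = 1.0e4  # detected counts per second, illustrative
    return rng.poisson(rate * (t_stop - t_start))

def run_partitioned(total_time, n_jobs, base_seed=1234):
    # Cut the acquisition interval into contiguous slices that keep all other
    # experimental parameters; independent seeds keep the slices statistically independent.
    edges = np.linspace(0.0, total_time, n_jobs + 1)
    jobs = [(edges[i], edges[i + 1], base_seed + i) for i in range(n_jobs)]
    with ProcessPoolExecutor(max_workers=n_jobs) as pool:
        partial = list(pool.map(simulate_slice, jobs))
    return sum(partial)  # merge step: combine the independent partial outputs

if __name__ == "__main__":
    print("total detected counts:", run_partitioned(total_time=60.0, n_jobs=4))
```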