
    Acceleration of GATE Monte Carlo simulations

    Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are forms of medical imaging that produce functional images reflecting biological processes. They are based on the tracer principle: a biologically active substance, a pharmaceutical, is selected so that its spatial and temporal distribution in the body reflects a certain body function or metabolism. To form images of this distribution, the pharmaceutical is labeled with gamma-ray-emitting or positron-emitting radionuclides (radiopharmaceuticals or tracers). After administration of the tracer to a patient, an external position-sensitive gamma-ray camera detects the emitted radiation, and a reconstruction process forms a stack of images of the radionuclide distribution.

    Monte Carlo methods are numerical methods that use random numbers to compute quantities of interest. This is normally done by constructing a random variable whose expected value is the desired quantity; one then simulates and tabulates the random variable and uses its sample mean and variance to construct probabilistic estimates (a minimal sketch of such an estimator is given below). Monte Carlo modeling attempts to model nature through direct simulation of the essential dynamics of the system in question. It is the method of choice for all applications where measurements are not feasible or where analytic models are not available due to the complexity of the problem, and it is a practical approach in several important application fields of nuclear medical imaging: detector design, quantification, correction methods for image degradations, detection tasks, etc.

    Several powerful dedicated Monte Carlo simulators for PET and/or SPECT are available. However, they are often neither detailed nor flexible enough to enable realistic simulations of emission tomography detector geometries while also modeling time-dependent processes such as decay, tracer kinetics, patient and bed motion, dead time, or detector orbits. Our Monte Carlo simulator of choice, the GEANT4 Application for Tomographic Emission (GATE), was specifically designed to address all these issues. The flexibility of GATE comes at a price, however: the simulation of a simple prototype SPECT detector may be feasible within hours, but an acquisition with a realistic phantom may take years to complete on a single CPU. In this dissertation we therefore focus on the Achilles’ heel of GATE: efficiency. Acceleration of GATE simulations can only be achieved through a combination of efficient data analysis, dedicated variance reduction techniques, fast navigation algorithms, and parallelization.

    In the first part of this dissertation we consider the improvement of the analysis capabilities of GATE. The static analysis module in GATE is both inflexible and incapable of storing more detail without introducing a large computational overhead, whereas the design and validation of the acceleration techniques in this dissertation require a flexible, detailed, and computationally efficient analysis module. To this end, we develop a new analysis framework capable of analyzing any process, from the decay of isotopes to particle interactions and detections, in any detector element and for any type of phantom. The evaluation of our framework consists of the assessment of spurious activity in 124I-Bexxar PET and of contamination in 131I-Bexxar SPECT. In the case of PET we describe how our framework can detect spurious coincidences generated by non-pure isotopes, even with realistic phantoms.
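    For readers unfamiliar with the method, the following is a minimal, generic sketch of the estimator described above: tabulate samples of a random variable and report the sample mean with a standard-error-based uncertainty. It is illustrative only and not part of GATE; all names are hypothetical.

        import math
        import random

        def mc_estimate(sample_once, n=100_000):
            """Estimate E[X] from n draws of sample_once(), with standard error."""
            total = total_sq = 0.0
            for _ in range(n):
                x = sample_once()
                total += x
                total_sq += x * x
            mean = total / n
            variance = total_sq / n - mean * mean   # sample variance (biased form)
            std_error = math.sqrt(variance / n)     # probabilistic error estimate
            return mean, std_error

        # Example: estimate pi/4 as the hit probability of a quarter circle.
        mean, err = mc_estimate(
            lambda: 1.0 if random.random() ** 2 + random.random() ** 2 <= 1.0 else 0.0
        )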
    We show that optimized energy thresholds, which can readily be applied in the clinic, can now be derived in order to minimize the contamination. We also show that the spurious activity itself is not spatially uniform, so standard reconstruction and correction techniques are not adequate. In the case of SPECT we describe how it is now possible to classify detections into geometric detections, phantom scatter, penetration through the collimator, collimator scatter, and backscatter in the end parts. We show that standard correction algorithms such as triple-energy-window correction cannot correct for septal penetration. We demonstrate that 124I PET with optimized energy thresholds offers better image quality than 131I SPECT when standard reconstruction techniques are used.

    In the second part of this dissertation we focus on improving the efficiency of GATE with a variance reduction technique called Geometrical Importance Sampling (GIS). We describe how only 0.02% of all emitted photons reach the crystal surface of a SPECT detector head with a low energy high resolution collimator; a lot of computing power is therefore wasted on tracking photons that will never contribute to the result. GIS uses a twofold strategy to solve this problem: Russian Roulette discards photons that are unlikely to contribute to the result, while photons in more important regions are split into several copies with reduced weight to increase their survival chance (a schematic sketch of this weight bookkeeping is given below). We show that this technique introduces branches into the particle history and describe how this can be taken into account by a particle history tree that is used for the analysis of the results. The evaluation of GIS consists of energy spectra validation, spatial resolution, and sensitivity for low and medium energy isotopes. We show that GIS reaches acceleration factors between 5 and 13 over analog GATE simulations for the isotopes in the study. It is a general acceleration technique that can be used for any isotope, phantom, and detector combination. Although GIS is useful as a safe and accurate acceleration technique, it cannot deliver clinically acceptable simulation times; the main reason lies in its inability to force photons in a specific direction.

    In the third part of this dissertation we solve this problem for 99mTc SPECT simulations. Our approach is twofold. Firstly, we introduce two variance reduction techniques: forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling (MPS). FD and CFD force copies of photons, at decay and at every interaction point, to be transported through the phantom in a direction sampled within a solid angle toward the SPECT detector head at all SPECT angles simultaneously. We describe how a weight must be assigned to each photon in order to compensate for the forced direction and for non-absorption at emission and scatter, and we show how the weights are calculated from the total and differential Compton and Rayleigh cross sections per electron, with incorporation of Hubbell’s atomic form factor. In the case of FD all detector interactions are modeled by Monte Carlo, while in the case of CFD the detector is modeled analytically. Secondly, we describe the design of a navigator specialized for FD and CFD to accelerate the slow tracking algorithms in GEANT4. The validation study shows that both FD and CFD closely match the analog GATE simulations and that we obtain accelerations of three (FD) to six (CFD) orders of magnitude over analog simulations.
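    To make the Russian Roulette and splitting mechanics of the second part concrete, here is a minimal, generic sketch of unbiased weight bookkeeping at a boundary between two importance regions. It is illustrative only and uses hypothetical names; GATE attaches importances to geometry volumes and records the resulting branches in the particle history tree.

        import random

        def cross_importance_boundary(weight, imp_from, imp_to):
            """Return the list of photon weights surviving a boundary crossing."""
            ratio = imp_to / imp_from
            if ratio >= 1.0:
                # Splitting: n copies at weight/n each, so the expected total
                # weight (and hence the estimator) is unchanged.
                n = max(1, round(ratio))
                return [weight / n] * n
            # Russian Roulette: survive with probability `ratio` and boost the
            # survivor's weight by 1/ratio, again keeping the estimator unbiased.
            if random.random() < ratio:
                return [weight / ratio]
            return []  # photon terminated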
    These accelerations allow a realistic acquisition with a torso phantom to be simulated within 130 seconds.

    In the fourth part of this dissertation we exploit the intrinsically parallel nature of Monte Carlo simulations. We show that Monte Carlo simulations should scale linearly as a function of the number of processing nodes, but that this is usually not achieved due to job setup time, output handling, and cluster overhead. Our approach is based on two steps: job distribution and output data handling. The job distribution is based on a time-domain partitioning scheme that retains all experimental parameters and guarantees the statistical independence of each subsimulation (a minimal sketch is given below). We also reduce the job setup time by introducing a parameterized collimator model for SPECT simulations, and we reduce the output data handling time with a chain-based output merger. The scalability study, based on a set of simulations on a 70 CPU cluster, shows an acceleration factor of approximately 66 on 70 CPUs for both PET and SPECT. We also show that our method of parallelization does not introduce any approximations and that it can readily be combined with any of the acceleration techniques described above.
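    As a minimal illustration of the time-domain partitioning idea, the sketch below splits the acquisition time range into windows that keep all other experimental parameters fixed and gives each subsimulation its own random seed. The function and field names are hypothetical, not the actual GATE or cluster interface.

        def partition_jobs(t_start, t_stop, n_jobs, base_seed=1):
            """Split [t_start, t_stop) into n_jobs statistically independent windows."""
            dt = (t_stop - t_start) / n_jobs
            return [
                {
                    "t0": t_start + i * dt,        # window start
                    "t1": t_start + (i + 1) * dt,  # window end
                    "seed": base_seed + i,         # distinct seed per subsimulation
                }                                  # (in practice, independent RNG streams)
                for i in range(n_jobs)
            ]

        # Example: a 30-minute acquisition split across a 70-node cluster.
        jobs = partition_jobs(0.0, 1800.0, 70)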

    Integrated Circuits/Microchips

    With the world marching inexorably towards the fourth industrial revolution (IR 4.0), we now lead lives embracing artificial intelligence (AI), the Internet of Things (IoT), virtual reality (VR), and 5G technology. Wherever we are and whatever we are doing, there are electronic devices that we rely on indispensably. While some of these technologies, such as those fueled by smart, autonomous systems, seem futuristic, others have existed for quite a while. These devices range from simple home appliances and entertainment media to complex aeronautical instruments. Clearly, the daily lives of mankind today are interwoven seamlessly with electronics. Surprising as it may seem, the cornerstone that empowers these electronic devices is nothing more than a diminutive semiconductor cube. More colloquially referred to as the Very-Large-Scale Integration (VLSI) chip, the integrated circuit (IC) chip, or simply the microchip, this semiconductor block, approximately the size of a grain of rice, is composed of millions to billions of transistors. The transistors are interconnected in such a way that the electrical circuitry for a given application can be realized. Some of these chips serve specific permanent applications and are known as Application-Specific Integrated Circuits (ASICs), while others are computing processors that can be programmed for diverse applications. A computer processor, together with its supporting hardware and user interfaces, is known as an embedded system. In this book, a variety of topics related to microchips are extensively illustrated, encompassing the physics of the microchip device as well as its design methods and applications.

    Optically Induced Nanostructures

    Nanostructuring of materials is a task at the heart of many modern disciplines in mechanical engineering, as well as optics, electronics, and the life sciences. This book includes an introduction to the relevant nonlinear optical processes associated with very short laser pulses for the generation of structures far below the classical optical diffraction limit of about 200 nanometers, as well as coverage of state-of-the-art technical and biomedical applications. These applications include silicon and glass wafer processing, production of nanowires, laser transfection and cell reprogramming, optical cleaning, surface treatments of implants, 3D nanoprinting, STED lithography, friction modification, and integrated optics. The book also highlights the use of modern femtosecond laser microscopes and nanoscopes as novel nanoprocessing tools.

    GSI Scientific Report 2007 [GSI Report 2008-1]


    Annual report 2015

    Accessions considered in the study: an overview of the material considered in this study. For all materials, the GenBank identifier, the accession and species name as used in this study (Species), and the species synonyms used in the donor seed banks or in the NCBI GenBank (Material source/Reference) are provided, together with the genome symbol and the country of origin where the material was originally collected. The ploidy level measured in the scope of this study and an indication of whether a herbarium voucher could be deposited in the herbarium of IPK Gatersleben (GAT) are also given. Genomic formulas of tetraploids and hexaploids are given as “female x male parent”. The genomes of Aegilops taxa follow Kilian et al. [74] and Li et al. [84]; genome denominations for Hordeum follow Blattner [107], and Bernhardt [12] for the remaining taxa. (XLS 84 kb)

    Total internal reflection microscopy studies on colloidal particle endocytosis by living cells

    The purpose of this study was to develop novel optical microscopy techniques in order to investigate colloidal drug particle endocytosis by mammalian cells. A total internal reflection microscope (TIRM) was initially developed for high-resolution cellular imaging. TIRM is a non-fluorescent imaging technique based on the scattering of the evanescent field created when a light beam undergoes total internal reflection at an interface between two media with different refractive indices, such as glass and air (a standard expression for the evanescent penetration depth is given below). The key design considerations in the development of a TIRM instrument are discussed, and the technique is compared and contrasted with the more commonly known non-fluorescent RICM (Reflection Interference Contrast Microscopy) technique using computer simulations. Time-lapse video TIRM is applied to imaging the interaction between A549 and 3T3 cells and a polylysine-coated substrate. Real-time, label-free visualisation of 0.5 and 1 µm polystyrene particle endocytosis by living cells is then demonstrated. Modifications to the TIRM system to include a dual-colour fluorescent TIRF (Total Internal Reflection Fluorescence) microscope are described in detail. Results are shown which demonstrate the ability of a combined TIRM/TIRF instrument to selectively image the basal cell membrane both label-free and fluorescently. 3T3 fibroblast cells were genetically modified using standard molecular biology protocols to express the fluorescent fusion protein EGFP-Clathrin LCa (enhanced green fluorescent protein clathrin light chain a). Finally, colloidal particle endocytosis by the genetically modified cells was imaged using the TIRM/TIRF microscope. Direct visualisation of the internalisation of 500 nm particles via clathrin-coated pits in 3T3 cells was shown for the first time.
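    For orientation, the evanescent field mentioned above decays exponentially away from the interface; a standard textbook expression (not specific to this thesis) for its 1/e intensity penetration depth at vacuum wavelength \lambda is

        d = \frac{\lambda}{4\pi\sqrt{n_1^2 \sin^2\theta - n_2^2}},

    valid for angles of incidence \theta above the critical angle \theta_c = \arcsin(n_2/n_1), where n_1 and n_2 are the refractive indices of the denser medium (e.g. glass) and the rarer medium (e.g. air or water), respectively.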