1,196 research outputs found

    Direct Detection of Strongly Interacting Sub-GeV Dark Matter via Electron Recoils

    We consider direct-detection searches for sub-GeV dark matter via electron scatterings in the presence of large interactions between dark and ordinary matter. Scatterings on both electrons and nuclei in the Earth's crust, atmosphere, and shielding material attenuate the expected local dark matter flux at a terrestrial detector, so that such experiments lose sensitivity to dark matter above some critical cross section. We study various models, including dark matter interacting with a heavy or an ultralight dark photon, through an electric dipole moment, and exclusively with electrons. For a dark-photon mediator and an electric dipole interaction, the dark matter-electron scattering cross section is directly linked to the dark matter-nucleus cross section, and nuclear interactions typically dominate the attenuation process. We determine the exclusion bands for the different dark-matter models from several experiments - SENSEI, CDMS-HVeV, XENON10, XENON100, and DarkSide-50 - using a combination of Monte Carlo simulations and analytic estimates. We also derive projected sensitivities for a detector located at different depths and for a range of exposures, and calculate the projected sensitivity for SENSEI at SNOLAB and DAMIC-M at Modane. Finally, we discuss the reach to high cross sections and the modulation signature of a small balloon- or satellite-borne detector sensitive to electron recoils, such as a Skipper-CCD. Such a detector could potentially probe unconstrained parameter space at high cross sections for a sub-dominant component of dark matter interacting with a massive, but ultralight, dark photon. Comment: 40 pages, 12 figures. Code available at https://github.com/temken/DaMaSCUS-CRUST and https://doi.org/10.5281/zenodo.2846401 . v2: matches published version
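A rough feel for why sensitivity is lost above a critical cross section can be obtained from a single mean-free-path attenuation argument. The sketch below is illustrative only and assumes a homogeneous, silicon-like crust with placeholder numbers; the paper's actual treatment (DaMaSCUS-CRUST) tracks energy loss and deflection with full Monte Carlo simulations.

    # Toy estimate of dark-matter flux attenuation in an overburden.
    # Illustrative only: the real analysis tracks energy loss and deflection
    # with Monte Carlo; here a single mean-free-path argument is used.
    import math

    N_A = 6.022e23          # Avogadro's number [1/mol]
    A_NUCLEUS = 28.0        # assume a silicon-like crust nucleus (placeholder)
    DENSITY = 2.7           # assumed overburden density [g/cm^3]

    def surviving_fraction(sigma_cm2: float, depth_cm: float) -> float:
        """Fraction of the incoming flux reaching the given depth without scattering."""
        n_targets = DENSITY * N_A / A_NUCLEUS      # nuclei per cm^3
        mean_free_path = 1.0 / (n_targets * sigma_cm2)
        return math.exp(-depth_cm / mean_free_path)

    if __name__ == "__main__":
        for sigma in (1e-32, 1e-30, 1e-28):        # cross sections in cm^2
            print(sigma, surviving_fraction(sigma, depth_cm=1e5))  # ~1 km of rock

In this simplified picture the surviving flux drops exponentially once the mean free path becomes comparable to the overburden, which is the qualitative origin of the upper edge of the exclusion bands.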

    Clinical Dosimetry in Photon Radiotherapy – a Monte Carlo Based Investigation

    Clinical dosimetry is a fundamental step in radiotherapy and aims at quantifying the absorbed dose within an uncertainty of 1-2%. To reach this level of accuracy, corrections must be applied to measurements performed with air-filled, calibrated ionization chambers. These corrections are based on Spencer-Attix cavity theory and are defined in the current dosimetry protocols. Energy-dependent corrections account for the deviation from calibration conditions and the associated change in the response of ionization chambers in the therapy beam. The commonly used corrections are based on semi-analytical models or on comparison measurements and, being of the order of a few percent or less, are difficult to quantify. Furthermore, the corrections are defined for fixed geometrical reference conditions that do not necessarily coincide with the conditions encountered in modern radiotherapy applications. The stochastic Monte Carlo method for the simulation of radiation transport is becoming increasingly important in medical physics. It is a suitable tool for calculating these corrections with, in principle, high accuracy and allows ionization chambers to be studied under a wide range of conditions. The aim of the present work is a consistent investigation of common ionization-chamber dosimetry in photon radiotherapy using Monte Carlo simulations. Monte Carlo algorithms exist today that in principle allow a precise calculation of the response of ionization chambers. However, the result of a Monte Carlo simulation always carries a statistical uncertainty, so investigations of this kind are limited by the long computation times required to obtain a significant result within small statistical uncertainties. Besides the use of large computing capacities, so-called variance-reduction techniques can be applied to reduce the required simulation time. Within this work, such methods, which increase the computational efficiency by several orders of magnitude, were developed and implemented in a modern and well-established Monte Carlo simulation package. Using the developed methods, the data of current clinical dosimetry protocols for the determination of absorbed dose to water under reference conditions in photon beams were investigated. Correction factors were calculated and compared with existing data in the literature. The calculated data were shown to be in good agreement with current experimental data, but they deviate in part by about 1% from the data used in the dosimetry protocols. The reasons are partly outdated theories and decades-old measurements of individual perturbation factors. Sources of uncertainty in the Monte Carlo calculated data were investigated, including the uncertainties of the interaction cross sections underlying the simulations. As a conservative estimate, systematic (type B) uncertainties of about 1% were found. Ionization chambers under non-reference conditions were investigated with the help of a virtual linear-accelerator model. Besides the development of a commissioning methodology, i.e. the adjustment of the model to measurements with respect to the properties of the primary electron beam, the goal of these calculations was to investigate the behavior of ionization chambers under geometrical non-reference conditions. It was shown that the commonly used ionization-chamber types exhibit only small changes in their response as long as secondary-electron equilibrium can be assumed. In contrast, detectors show a strong change in their response in regions without secondary-electron equilibrium and hence with a high dose gradient, such as the field penumbra. The applicability of Spencer-Attix theory under these conditions was examined, and it was shown that the absorbed dose to water can be determined within about 1% using the correction factors. A further investigation of these conditions for profile measurements was used to identify the detector type with the smallest deviation of its response in regions with secondary-electron disequilibrium and high dose gradients; with respect to the broadening of the field edge, film dosimetry shows the smallest deviation from an ideal profile. In the long term, Monte Carlo simulations will replace, or at least extend, the data in clinical dosimetry protocols in order to reduce the uncertainties in the application of radiation to humans. For corrections under non-reference conditions, as they occur in modern radiotherapy applications, Monte Carlo simulations will play a decisive role. The methods developed in this work therefore represent an important step towards reducing the uncertainties in radiotherapy.
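For orientation, the reference-dosimetry formalism investigated here is usually written as follows; the symbols are the standard protocol notation (TRS-398-style), given as a reminder rather than taken verbatim from the thesis. The absorbed dose to water at beam quality Q follows from the chamber reading via a calibration coefficient and a beam-quality correction factor, which contains the stopping-power ratios, W/e, and the chamber perturbation factors that the Monte Carlo calculations address.

    % Absorbed dose to water from a calibrated ionization chamber (protocol notation):
    D_{w,Q} = M_Q \, N_{D,w,Q_0} \, k_{Q,Q_0},
    \qquad
    k_{Q,Q_0} = \frac{(\bar{s}_{w,\mathrm{air}})_{Q}\,(W_{\mathrm{air}}/e)_{Q}\,p_{Q}}
                     {(\bar{s}_{w,\mathrm{air}})_{Q_0}\,(W_{\mathrm{air}}/e)_{Q_0}\,p_{Q_0}}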

    Acceleration of GATE Monte Carlo simulations

    Positron Emission Tomography (PET) and Single Photon Emission Computed Tomography (SPECT) are forms of medical imaging that produce functional images reflecting biological processes. They are based on the tracer principle: a biologically active substance, a pharmaceutical, is selected so that its spatial and temporal distribution in the body reflects a certain body function or metabolism. In order to form images of the distribution, the pharmaceutical is labeled with gamma-ray-emitting or positron-emitting radionuclides (radiopharmaceuticals or tracers). After administration of the tracer to a patient, an external position-sensitive gamma-ray camera can detect the emitted radiation and, after a reconstruction process, form a stack of images of the radionuclide distribution. Monte Carlo methods are numerical methods that use random numbers to compute quantities of interest. This is normally done by creating a random variable whose expected value is the desired quantity; one then simulates and tabulates the random variable and uses its sample mean and variance to construct probabilistic estimates. It represents an attempt to model nature through direct simulation of the essential dynamics of the system in question. Monte Carlo modeling is the method of choice for all applications where measurements are not feasible or where analytic models are not available due to the complex nature of the problem. In addition, such modeling is a practical approach in nuclear medical imaging in several important application fields: detector design, quantification, correction methods for image degradations, detection tasks, etc. Several powerful dedicated Monte Carlo simulators for PET and/or SPECT are available. However, they are often neither detailed nor flexible enough to enable realistic simulations of emission tomography detector geometries while also modeling time-dependent processes such as decay, tracer kinetics, patient and bed motion, dead time or detector orbits. Our Monte Carlo simulator of choice, GEANT4 Application for Tomographic Emission (GATE), was specifically designed to address all these issues. The flexibility of GATE comes at a price, however. The simulation of a simple prototype SPECT detector may be feasible within hours in GATE, but an acquisition with a realistic phantom may take years to complete on a single CPU. In this dissertation we therefore focus on the Achilles' heel of GATE: efficiency. Acceleration of GATE simulations can only be achieved through a combination of efficient data analysis, dedicated variance reduction techniques, fast navigation algorithms and parallelization. In the first part of this dissertation we consider the improvement of the analysis capabilities of GATE. The static analysis module in GATE is both inflexible and incapable of storing more detail without introducing a large computational overhead. However, the design and validation of the acceleration techniques in this dissertation require a flexible, detailed and computationally efficient analysis module. To this end, we develop a new analysis framework capable of analyzing any process, from the decay of isotopes to particle interactions and detections, in any detector element and for any type of phantom. The evaluation of our framework consists of the assessment of spurious activity in 124I-Bexxar PET and of contamination in 131I-Bexxar SPECT. In the case of PET we describe how our framework can detect spurious coincidences generated by non-pure isotopes, even with realistic phantoms.
We show that optimized energy thresholds, which can readily be applied in the clinic, can now be derived in order to minimize the contamination. We also show that the spurious activity itself is not spatially uniform, so standard reconstruction and correction techniques are not adequate. In the case of SPECT we describe how it is now possible to classify detections into geometric detections, phantom scatter, penetration through the collimator, collimator scatter and backscatter in the end parts. We show that standard correction algorithms such as triple energy window correction cannot correct for septal penetration. We demonstrate that 124I PET with optimized energy thresholds offers better image quality than 131I SPECT when using standard reconstruction techniques. In the second part of this dissertation we focus on improving the efficiency of GATE with a variance reduction technique called Geometrical Importance Sampling (GIS). We describe how only 0.02% of all emitted photons can reach the crystal surface of a SPECT detector head with a low-energy high-resolution collimator. A lot of computing power is therefore wasted by tracking photons that will not contribute to the result. A twofold strategy is used to solve this problem: GIS employs Russian roulette to discard those photons that are unlikely to contribute to the result, while photons in more important regions are split into several photons with reduced weight to increase their survival chance (a minimal sketch of this weight bookkeeping is given after this abstract). We show that this technique introduces branches into the particle history and describe how this can be taken into account by a particle history tree that is used for the analysis of the results. The evaluation of GIS consists of energy spectra validation, spatial resolution and sensitivity for low and medium energy isotopes. We show that GIS reaches acceleration factors between 5 and 13 over analog GATE simulations for the isotopes in the study. It is a general acceleration technique that can be used for any isotope, phantom and detector combination. Although GIS is useful as a safe and accurate acceleration technique, it cannot deliver clinically acceptable simulation times; the main reason lies in its inability to force photons in a specific direction. In the third part of this dissertation we solve this problem for 99mTc SPECT simulations. Our approach is twofold. Firstly, we introduce two variance reduction techniques: forced detection (FD) and convolution-based forced detection (CFD) with multiple projection sampling (MPS). FD and CFD force copies of photons at decay and at every interaction point to be transported through the phantom in a direction sampled within a solid angle toward the SPECT detector head, at all SPECT angles simultaneously. We describe how a weight must be assigned to each photon in order to compensate for the forced direction and non-absorption at emission and scatter. We show how the weights are calculated from the total and differential Compton and Rayleigh cross sections per electron, with incorporation of Hubbell's atomic form factor. In the case of FD all detector interactions are modeled by Monte Carlo, while in the case of CFD the detector is modeled analytically. Secondly, we describe the design of an FD and CFD specialized navigator to accelerate the slow tracking algorithms in GEANT4. The validation study shows that both FD and CFD closely match the analog GATE simulations and that we can obtain an acceleration factor between 3 (FD) and 6 (CFD) orders of magnitude over analog simulations.
This allows for the simulation of a realistic acquisition with a torso phantom within 130 seconds. In the fourth part of this dissertation we exploit the intrinsically parallel nature of Monte Carlo simulations. We show how Monte Carlo simulations should scale linearly as a function of the number of processing nodes, but that this is usually not achieved due to job setup time, output handling and cluster overhead. We describe how our approach is based on two steps: job distribution and output data handling. The job distribution is based on a time-domain partitioning scheme that retains all experimental parameters and guarantees the statistical independence of each subsimulation. We also reduce the job setup time by introducing a parameterized collimator model for SPECT simulations, and we reduce the output data handling time with a chain-based output merger. The scalability study is based on a set of simulations on a 70-CPU cluster and shows an acceleration factor of approximately 66 on 70 CPUs for both PET and SPECT. We also show that our method of parallelization does not introduce any approximations and that it can readily be combined with any of the acceleration techniques described above.
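As noted above, here is a minimal sketch of the weight bookkeeping behind Geometrical Importance Sampling: Russian roulette in regions of low importance, splitting in regions of high importance, with statistical weights adjusted so the overall estimate stays unbiased. The two-field photon record and the integer splitting are illustrative simplifications, not GATE's internal API.

    # Minimal sketch of importance sampling weight bookkeeping
    # (Russian roulette + splitting). The photon record and the importance map
    # are illustrative; GATE/GEANT4 implement this inside the tracking loop.
    import random
    from dataclasses import dataclass, replace

    @dataclass
    class Photon:
        weight: float
        region_importance: float   # importance of the photon's current region

    def apply_importance(photon: Photon, new_importance: float) -> list[Photon]:
        """Return surviving photon copies when crossing into a region of new importance."""
        ratio = new_importance / photon.region_importance
        if ratio >= 1.0:
            # Splitting: n copies with reduced weight keep the estimate unbiased
            # (integer splitting used here for simplicity).
            n = int(ratio)
            return [replace(photon, weight=photon.weight / n,
                            region_importance=new_importance) for _ in range(n)]
        # Russian roulette: survive with probability `ratio`, boosting the weight if kept.
        if random.random() < ratio:
            return [replace(photon, weight=photon.weight / ratio,
                            region_importance=new_importance)]
        return []   # photon terminated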
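The time-domain partitioning step can likewise be summarized with a small sketch: the acquisition time range is split into disjoint slices, each sub-simulation keeps all experimental parameters but receives its own slice and an independent random seed, and the outputs are merged afterwards. The job record and the seeding scheme below are placeholders, not GATE's actual job scripts.

    # Minimal sketch of time-domain partitioning for parallel Monte Carlo jobs:
    # each sub-simulation covers a disjoint time slice of the acquisition and uses
    # an independent seed, so the partial results are statistically independent
    # and can simply be merged. Submission and merging details are placeholders.
    from dataclasses import dataclass

    @dataclass
    class SubSimulation:
        start_s: float   # acquisition start time of this slice [s]
        stop_s: float    # acquisition stop time of this slice [s]
        seed: int        # independent RNG seed for this job

    def partition_acquisition(total_start_s: float, total_stop_s: float,
                              n_jobs: int, base_seed: int = 12345) -> list[SubSimulation]:
        """Split [total_start_s, total_stop_s] into n_jobs contiguous time slices."""
        slice_len = (total_stop_s - total_start_s) / n_jobs
        return [SubSimulation(start_s=total_start_s + i * slice_len,
                              stop_s=total_start_s + (i + 1) * slice_len,
                              seed=base_seed + i)
                for i in range(n_jobs)]

    if __name__ == "__main__":
        for job in partition_acquisition(0.0, 600.0, n_jobs=70):
            # Each entry would be submitted as one job; outputs are merged afterwards.
            print(job)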

    Automation of the Monte Carlo simulation of medical linear accelerators

    The main result of this thesis is a software system, called PRIMO, which simulates clinical linear accelerators and the subsequent dose distributions using the Monte Carlo method. PRIMO has the following features: (i) it is self-contained, that is, it does not require additional software libraries or coding; (ii) it includes a geometry library with most Varian and Elekta linacs; (iii) it is based on the general-purpose Monte Carlo code PENELOPE; (iv) it provides a suite of variance-reduction techniques and distributed parallel computing to enhance the simulation efficiency; (v) it has a graphical user interface; and (vi) it is freely distributed through the website http://www.primoproject.net. In order to endow PRIMO with these features the following tasks were conducted: - PRIMO was conceived with a layered structure. The topmost layer, named the GLASS, was developed in this thesis. The GLASS implements the GUI, drives all the functions of the system and performs the analysis of results. Lower layers generate geometry files, provide input data and execute the Monte Carlo simulation. - The geometry of Elekta linacs from the SLi and MLCi series was coded into the PRIMO system. - A geometrical model of the Varian TrueBeam linear accelerator was developed and validated. This model was created to surmount the limitations of the Varian-distributed phase-space files and the absence of released information about the actual geometry of that machine. This geometry model was incorporated into PRIMO. - Two new variance-reduction techniques, named splitting roulette and selective splitting, were developed and validated. In a test made with an Elekta linac it was found that when both techniques are used in conjunction the simulation efficiency improves by a factor of up to 45. - A method to automatically distribute the simulation among the available CPU cores of a computer was implemented. The following investigations were done using PRIMO as a research tool: - The configuration of the condensed-history transport algorithm for charged particles in PENELOPE was optimized for linac simulation. Dose distributions in the patient were found to be particularly sensitive to the values of the transport parameters in the linac target. Use of inadequate values of these parameters may lead to an incorrect determination of the initial beam configuration or to biased dose distributions. - PRIMO was used to simulate phase-space files distributed by Varian for the TrueBeam linac. The results were compared with experimental data provided by five European radiotherapy centers. It was concluded that the latent variance and the accuracy of the phase-space files were adequate for routine clinical practice. However, for research purposes where low statistical uncertainties are required, the phase-space files are not large enough. To the best of our knowledge PRIMO is the only fully Monte Carlo-based linac and dose simulation system, aimed at research and dose verification, that does not require coding tasks from end users and is publicly available.
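The factor-of-45 efficiency gain quoted for splitting roulette combined with selective splitting is normally expressed with the standard Monte Carlo efficiency measure; the definition below is the textbook one, given here for orientation rather than taken from the thesis.

    % Monte Carlo simulation efficiency: s^2 is the variance of the scored quantity
    % and T the computation time. A technique that is 45 times more efficient reaches
    % the same statistical uncertainty in 1/45 of the CPU time.
    \epsilon = \frac{1}{s^2\,T},
    \qquad
    \mathrm{gain} = \frac{\epsilon_{\mathrm{VRT}}}{\epsilon_{\mathrm{analog}}}
                  = \frac{s^2_{\mathrm{analog}}\,T_{\mathrm{analog}}}{s^2_{\mathrm{VRT}}\,T_{\mathrm{VRT}}}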

    Development of radiation transport techniques for modelling a high-resolution multi-energy photon emission tomography system

    Nondestructive characterization techniques such as gamma tomography represent powerful tools for the analysis and quantification of physical defects and radionuclide concentrations within nuclear fuel forms. Gamma emission tomography, in particular, has the ability to utilize the inherent radiation within spent nuclear fuel to provide users with information about the migration and concentration of fission and activation products within the fuel form. Idaho National Laboratory is interested in using this technology to analyze new nuclear fuel forms for potential use in next-generation nuclear reactors. In this work, two aspects of the system are analyzed. The first is a semi-analytic radiation transport methodology, used in conjunction with a parallel-beam collimator, developed to facilitate the acquisition of data from Monte Carlo modeling of a small submersible gamma tomography system, with a focus on emission information. The second is a pinhole collimator designed to optimize count rate, diameter, and acceptance angle in order to increase the sampling of the fuel forms and decrease data acquisition time. Utilizing the semi-analytic technique, computational savings of 10^7-10^11 can be achieved with a degradation in accuracy of 18-45% compared to a standard isotropic uniform Monte Carlo N-Particle (MCNP) transport simulation. However, this loss in accuracy can be minimized by increasing the parallel-beam collimator's aspect ratio so that it tends towards a degenerate cylinder. The semi-analytic technique is also compared to built-in acceleration techniques. The pinhole collimator design yields count rates on the order of 100s-1000s, which represents a 10^1-10^2 increase in actual count rates over the entirety of the photon spectrum.
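The kind of trade-off the pinhole design explores can be illustrated with the textbook point-source sensitivity of a pinhole aperture; the relation and the numbers below are generic placeholders, not the thesis's detailed collimator model.

    # Minimal sketch of the geometric trade-off a pinhole collimator design explores:
    # textbook pinhole sensitivity g ~ d^2 cos^3(theta) / (16 h^2), where d is the
    # aperture diameter, h the source-to-aperture distance and theta the off-axis
    # angle. This is the standard relation, not the thesis's detailed model.
    import math

    def pinhole_geometric_efficiency(d_cm: float, h_cm: float,
                                     theta_rad: float = 0.0) -> float:
        """Fraction of emitted photons passing the pinhole (point source, ideal aperture)."""
        return (d_cm ** 2) * math.cos(theta_rad) ** 3 / (16.0 * h_cm ** 2)

    if __name__ == "__main__":
        # Placeholder geometry: 2 mm aperture, source 10 cm away, on axis.
        g = pinhole_geometric_efficiency(d_cm=0.2, h_cm=10.0)
        emitted_per_s = 1e6          # assumed source emission rate [photons/s]
        print(f"geometric efficiency ~ {g:.2e}, accepted rate ~ {g * emitted_per_s:.0f} /s")

Widening the aperture or the acceptance angle raises the count rate at the cost of spatial resolution, which is why the design is posed as an optimization.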

    Development of Monte Carlo methods for shielding applications

    Imperial Users only

    Hybrid Intelligent Optimization Methods for Engineering Problems

    The purpose of optimization is to obtain the best solution under given conditions. There are numerous optimization methods because different problems require different solution methodologies; it is therefore difficult to establish general patterns. Moreover, the mathematical modeling of natural phenomena is almost always based on differentials: differential equations are constructed from relative increments among the factors related to the objective (the yield), and the gradients of these increments are essential for searching the yield space. However, the yield landscape is rarely simple and is mostly multi-modal. Another issue is differentiability: engineering design problems are usually nonlinear and sometimes exhibit discontinuous derivatives of the objective and constraint functions. Due to these difficulties, non-gradient-based algorithms have become more popular in recent decades. Genetic algorithms (GA) and particle swarm optimization (PSO) are popular non-gradient-based algorithms. Both are population-based search algorithms initiated from multiple points. A significant difference from a gradient-based method is the nature of the search methodology: randomness is essential for the search in GA or PSO, which is why they are also called stochastic optimization methods. These algorithms are simple, robust, and of high fidelity. However, they suffer from similar defects, such as premature convergence, limited accuracy, or long computational times. Premature convergence is sometimes inevitable due to a lack of diversity: as the generations of particles or individuals in the population evolve, they may lose their diversity and become similar to each other. To overcome this issue, we studied the diversity concept in GA and PSO algorithms. Diversity is essential for a healthy search, and mutations are the basic operators that provide the necessary variety within a population. After a close scrutiny of the diversity concept, based on qualification and quantification studies, we developed new mutation strategies and operators that provide beneficial diversity within the population. We call this new approach multi-frequency vibrational GA and PSO. The new methods were applied to different aeronautical engineering problems in order to study their efficiency: selected benchmark test functions, inverse design of a two-dimensional (2D) airfoil in subsonic flow, optimization of a 2D airfoil in transonic flow, path planning of an autonomous unmanned aerial vehicle (UAV) over a 3D terrain environment, radar cross-section minimization for a 3D air vehicle, and active flow control over a 2D airfoil. As demonstrated by these test cases, the new algorithms outperform the current popular algorithms. The principal role of the multi-frequency approach is to determine which individuals or particles should be mutated, when they should be mutated, and which ones should be merged into the population. The new mutation operators, when combined with a mutation strategy and an artificial intelligence method such as a neural network or fuzzy logic, provide local and global diversity during the reproduction phases of the generations. Additionally, the new approach also introduces random and controlled diversity. Since they remain population-based techniques, these methods are as robust as the plain GA or PSO algorithms. Based on the results obtained, it was concluded that the variants of the present multi-frequency vibrational GA and PSO are efficient algorithms, since they successfully avoided all local optima within relatively short optimization cycles.
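A minimal sketch of the general idea, diversity-triggered mutation in a real-coded population, is given below. The diversity measure, the Gaussian operator and the thresholds are illustrative stand-ins; the thesis's multi-frequency vibrational operators and their AI-guided selection are more elaborate.

    # Minimal sketch of diversity-triggered mutation for a real-coded GA/PSO
    # population: when diversity drops below a threshold, mutate a fraction of
    # individuals to reintroduce variety. The Gaussian operator and thresholds are
    # illustrative stand-ins for the multi-frequency vibrational operators.
    import random
    import statistics

    def population_diversity(pop: list[list[float]]) -> float:
        """Mean per-dimension standard deviation across the population."""
        return statistics.mean(statistics.pstdev(dim) for dim in zip(*pop))

    def diversity_mutation(pop: list[list[float]], threshold: float = 0.05,
                           rate: float = 0.2, sigma: float = 0.1) -> list[list[float]]:
        """Mutate a fraction of individuals only when diversity is too low."""
        if population_diversity(pop) >= threshold:
            return pop
        mutated = []
        for ind in pop:
            if random.random() < rate:
                ind = [x + random.gauss(0.0, sigma) for x in ind]
            mutated.append(ind)
        return mutated

    if __name__ == "__main__":
        pop = [[0.5 + random.gauss(0, 0.01) for _ in range(3)] for _ in range(20)]
        print(population_diversity(pop), population_diversity(diversity_mutation(pop)))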

    Monte Carlo Treatment Planning for Advanced Radiotherapy
