103 research outputs found

    Title index to volume 29


    Design Space Exploration and Resource Management of Multi/Many-Core Systems

    The increasing demand for processing more applications and related data on computing platforms has resulted in a reliance on multi-/many-core chips, as they facilitate parallel processing. However, these platforms also need to be energy-efficient and reliable, and to perform computations securely, in the interest of the whole community. This book provides perspectives on these aspects from leading researchers, in terms of state-of-the-art contributions and upcoming trends.

    The Threat of Offensive AI to Organizations

    AI has provided us with the ability to automate tasks, extract information from vast amounts of data, and synthesize media that is nearly indistinguishable from the real thing. However, these same capabilities can also be used for malicious purposes: cyber adversaries can use AI to enhance their attacks and expand their campaigns. Although offensive AI has been discussed in the past, there is a need to analyze and understand the threat in the context of organizations. For example, how does an AI-capable adversary impact the cyber kill chain? Does AI benefit the attacker more than the defender? What are the most significant AI threats facing organizations today, and what will their impact be in the future? In this study, we explore the threat of offensive AI to organizations. First, we present the background and discuss how AI changes the adversary’s methods, strategies, goals, and overall attack model. Then, through a literature review, we identify 32 offensive AI capabilities which adversaries can use to enhance their attacks. Finally, through a panel survey spanning industry, government, and academia, we rank the AI threats and provide insights on the adversaries.

    A Differential Study of Nucleosynthesis in Open Star Clusters

    Ph.D. Thesis. University of Hawaiʻi at Mānoa 2018

    From cosmic voids to collapsed structures: HPC methods for Astrophysics and Cosmology

    Computational methods, software development and High Performance Computing awareness are of ever-growing importance in Astrophysics and Cosmology. An additional challenge in this context comes from the impossibility of reproducing experiments in the controlled environment of a laboratory, which makes simulations unavoidable for testing theoretical models. In this work I present a rather heterogeneous ensemble of projects carried out in the context of simulations of the large scale structure of the Universe, connected by the development and use of original computational tools for the analysis and post-processing of simulated data. In the first part of this manuscript I report on the efforts to develop a consistent theory for the size function of cosmic voids detected in biased tracers of the density field. Upcoming large scale surveys will map the distribution of galaxies with unprecedented detail and to depths never reached before. Thanks to these large datasets, the void size function is expected to become a powerful statistic for inferring the geometrical properties of space-time. In spite of this, the existing theoretical models are not capable of correctly describing the distribution of detected voids, in either unbiased or biased simulated tracers. We have improved the void selection procedure by developing an algorithm that redefines the void ridges and, consequently, their radii. By applying this algorithm, we validate the volume-conserving model of the void size function on a set of unbiased simulated density field tracers. We highlight the difference in internal structure between voids selected in this way and those identified by the popular VIDE void finder. We also extend the validation of the model to the case of biased tracers. We find that a relation exists between the tracer used to sample the underlying dark matter density field and its unbiased counterpart. Moreover, we demonstrate that, as long as this relation is accounted for, the size function is a viable approach for studying cosmology with voids. Finally, by parameterising the size function in terms of the linear effective bias of tracers, we take an additional step towards analysing cosmic voids in real surveys. The proposed size function model has been accurately calibrated on halo catalogues, and used to validate the possibility of providing forecasts on the cosmological constraints, namely on the matter density parameter, Ω_M, and on the normalisation of the linear matter power spectrum, σ_8.

    The second part of the manuscript presents the hybrid C++/Python implementation of ScamPy, our empirical framework for "painting" galaxies on top of the Dark Matter Halo/Sub-Halo hierarchy obtained from N-body simulations. Our confidence in the reliability of N-body Dark Matter-only simulations rests on the argument that the evolution of the non-collisional matter component depends only on the effect of gravity and on the initial conditions. The formation and evolution of the luminous component (i.e. galaxies and intergalactic baryonic matter) are far from being understood at the same level as the dark matter. Among the possible approaches for modelling the luminous component, empirical methods are designed to reproduce observable properties of a target (observed) population of objects at a given moment of their evolution. With respect to ab initio approaches (i.e. hydrodynamical N-body simulations and semi-analytical models), empirical methods are typically cheaper in terms of computational power and are by design more reliable in the high-redshift regime. Building an empirical model of galaxy occupation requires defining the hosted-object/hosting-halo connection that associates the underlying DM distribution with its baryonic counterpart. The method we use is based on the sub-halo clustering and abundance matching (SCAM) scheme, which requires observations of the 1- and 2-point statistics of the target population we want to reproduce. This method is particularly tailored for high-redshift studies and therefore relies on the observed high-redshift galaxy luminosity functions and correlation properties. The core functionalities of ScamPy are written in C++ and exploit object-oriented programming, with wide use of polymorphism, to achieve flexibility and high computational efficiency. To provide an easily accessible interface, all the libraries are wrapped in Python and come with extensive documentation. I present the theoretical background of the method and provide a detailed description of the implemented algorithms. We have validated the key components of the framework, demonstrating that it produces scientifically meaningful results with satisfactory performance. Finally, we have tested the framework in a proof-of-concept application at high redshift: we paint a mock galaxy population on top of a high-resolution dark-matter-only simulation, mimicking the luminosity and clustering properties of high-redshift Lyman Break Galaxies retrieved from recent literature. We use these mock galaxies to infer the spatial and statistical distribution of ionizing radiation during the period of Reionization.
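    As a rough, hedged illustration of the abundance-matching idea that the SCAM scheme builds on (this is not ScamPy's API; the array names, Schechter parameters, and box volume below are illustrative assumptions), the following Python sketch rank-orders sub-halo masses against a target cumulative luminosity function, so that the most massive sub-haloes host the brightest mock galaxies:

```python
# Hedged sketch of rank-ordered (sub-halo) abundance matching, the idea behind SCAM.
# Not ScamPy code; array names, units, and the Schechter parameters are assumptions.
import numpy as np

def schechter_cumulative(L, L_star=1.0, phi_star=1e-3, alpha=-1.7):
    """Approximate cumulative number density n(>L) of a Schechter luminosity
    function by direct numerical integration (illustrative only)."""
    grid = np.logspace(np.log10(L), 3.0, 512)        # integrate from L upwards
    phi = phi_star * (grid / L_star) ** alpha * np.exp(-grid / L_star) / L_star
    return float(np.sum(0.5 * (phi[1:] + phi[:-1]) * np.diff(grid)))  # trapezoid rule

def abundance_match(subhalo_masses, box_volume, L_grid):
    """Assign a luminosity to each sub-halo by matching cumulative abundances,
    n_halo(>M) = n_gal(>L): the most massive sub-halo gets the highest luminosity."""
    order = np.argsort(subhalo_masses)[::-1]                       # most massive first
    n_halo = np.arange(1, len(subhalo_masses) + 1) / box_volume    # cumulative density
    n_gal = np.array([schechter_cumulative(L) for L in L_grid])    # decreasing in L
    # invert n_gal(>L): for each halo density, find the luminosity of equal abundance
    L_matched = np.interp(n_halo, n_gal[::-1], L_grid[::-1])
    lum = np.empty(len(subhalo_masses))
    lum[order] = L_matched
    return lum

# toy usage with made-up sub-halo masses (arbitrary units)
rng = np.random.default_rng(0)
masses = 10 ** rng.uniform(10, 14, size=1000)
luminosities = abundance_match(masses, box_volume=100.0**3,
                               L_grid=np.logspace(-3, 2, 256))
```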

    Multi-criteria optimization algorithms for high dose rate brachytherapy

    The overall purpose of this thesis is to use knowledge of radiation physics, computer programming, and computing hardware to improve cancer treatments. In particular, designing a treatment plan in radiation therapy can be complex and user-dependent, and this thesis aims to simplify current treatment planning in high dose rate (HDR) prostate brachytherapy. This project started from a widely used inverse planning algorithm, Inverse Planning Simulated Annealing (IPSA). In order to eventually arrive at an ultra-fast and automatic inverse planning algorithm, three multi-criteria optimization (MCO) algorithms were implemented. With the MCO algorithms, a desirable plan was selected after computing a set of treatment plans with various trade-offs. In the first study, an MCO algorithm was introduced to explore the Pareto surfaces in HDR brachytherapy. The algorithm was inspired by the MCO feature integrated in the Raystation system (RaySearch Laboratories, Stockholm, Sweden). For each case, 300 treatment plans were serially generated to obtain a uniform approximation of the Pareto surface. Each Pareto-optimal plan was computed with IPSA, and each new plan was added to the portion of the Pareto surface where the distance between its upper and lower boundaries was the largest. In a companion (second) study, a knowledge-based MCO (kMCO) algorithm was implemented to shorten the computation time of the MCO algorithm. To achieve this, two strategies were used: a prediction of the clinically relevant solution space from prior knowledge, and parallel computation of treatment plans with two six-core CPUs. As a result, a small plan dataset (14 plans) was created, and one plan was selected as the kMCO plan. Planning efficiency and dosimetric performance were compared between the physician-approved plans and the kMCO plans for 236 cases. The third and final study of this thesis was conducted in cooperation with Cédric Bélanger. A graphics processing unit (GPU) based MCO (gMCO) algorithm was implemented to further speed up the computation. Furthermore, a quasi-Newton optimization engine was implemented to replace the simulated annealing used in the first and second studies. In this way, one thousand IPSA-equivalent treatment plans with various trade-offs were computed in parallel, and one plan was selected as the gMCO plan from the resulting plan dataset. Planning time and dosimetric results were compared between the physician-approved plans and the gMCO plans for 457 cases. A large-scale comparison against the physician-approved plans shows that our latest MCO algorithm (gMCO) improves treatment planning efficiency (from minutes to 9.4 s) as well as treatment plan dosimetric quality (Radiation Therapy Oncology Group (RTOG) acceptance rate from 92.6% to 99.8%). With three implemented MCO algorithms, this thesis represents a sustained effort to develop an ultra-fast, automatic, and robust inverse planning algorithm for HDR brachytherapy.
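    As a hedged illustration of the Pareto-sampling strategy described above (each new plan is inserted where the gap in the current front approximation is widest), here is a minimal bi-objective Python sketch; the toy objectives, the weighted-sum scalarization, and the L-BFGS-B quasi-Newton solver are stand-ins, not the thesis' IPSA/gMCO implementation:

```python
# Hedged sketch of "fill the largest gap" Pareto-front sampling for two objectives.
# The toy objectives and the scalarized solver are assumptions, not IPSA/gMCO.
import numpy as np
from scipy.optimize import minimize

def objectives(x):
    """Toy competing objectives standing in for, e.g., target coverage vs. OAR dose."""
    return np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2)

def solve_scalarized(w, x0):
    """One 'plan': minimize a weighted sum of the two objectives with a quasi-Newton method."""
    res = minimize(lambda x: w * objectives(x)[0] + (1 - w) * objectives(x)[1],
                   x0, method="L-BFGS-B")
    return objectives(res.x)

def build_front(n_plans=20, dim=3):
    x0 = np.zeros(dim)
    # anchor plans at the two extremes, then repeatedly split the widest gap
    front = [(0.0,) + solve_scalarized(0.0, x0), (1.0,) + solve_scalarized(1.0, x0)]
    for _ in range(n_plans - 2):
        front.sort(key=lambda p: p[0])               # sort plans by their weight
        gaps = [np.hypot(b[1] - a[1], b[2] - a[2])   # objective-space gap between neighbours
                for a, b in zip(front, front[1:])]
        i = int(np.argmax(gaps))                     # widest hole in the current front
        w_new = 0.5 * (front[i][0] + front[i + 1][0])
        front.append((w_new,) + solve_scalarized(w_new, x0))
    return [(f1, f2) for _, f1, f2 in front]

pareto_points = build_front()                        # approximate Pareto front samples
```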

    Fast Monte Carlo Simulations for Quality Assurance in Radiation Therapy

    Monte Carlo (MC) simulation is generally considered the most accurate method for dose calculation in radiation therapy. However, it suffers from low simulation efficiency (hours to days) and complex configuration, which impede its application in clinical studies. The recent rise of MRI-guided radiation platforms (e.g. ViewRay’s MRIdian system) brings an urgent need for fast MC algorithms, because the strong magnetic field they introduce may cause large errors in other algorithms. My dissertation focuses on resolving the conflict between accuracy and efficiency of MC simulations through four different approaches: (1) GPU parallel computation, (2) transport mechanism simplification, (3) variance reduction, and (4) DVH constraint. Accordingly, we took several steps to thoroughly study the performance and accuracy impact of these methods. As a result, three Monte Carlo simulation packages named gPENELOPE, gDPMvr and gDVH were developed to strike a subtle balance between performance and accuracy in different application scenarios. For example, the most accurate, gPENELOPE, is usually used as a gold standard for radiation meter modeling, while the fastest, gDVH, is usually used for quick in-patient dose calculation, reducing the calculation time from 5 hours to 1.2 minutes (250 times faster) with only 1% error introduced. In addition, a cross-platform GUI integrating the simulation kernels and 3D visualization was developed to make the toolkit more user-friendly. After the fast MC infrastructure was established, we successfully applied it to four radiotherapy scenarios: (1) validate the vendor-provided Co-60 radiation head model by comparing the dose calculated by gPENELOPE to experimental data; (2) quantitatively study the effect of the magnetic field on the dose distribution and propose a strategy to improve treatment planning efficiency; (3) evaluate the accuracy of the built-in MC algorithm of MRIdian’s treatment planning system; and (4) perform quick quality assurance (QA) for “online adaptive radiation therapy”, which does not permit enough time for experimental QA. Many other time-sensitive applications (e.g. motional dose accumulation) will also benefit greatly from our fast MC infrastructure.
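    As a minimal illustration of the DVH concept mentioned above, the sketch below computes a cumulative dose-volume histogram from a dose grid and a structure mask; it is a generic textbook computation, not code from the gDVH package, and the array shapes and dose values are made up:

```python
# Hedged sketch: cumulative dose-volume histogram (DVH) from a 3D dose grid and a
# binary structure mask.  Generic illustration only, not the gDVH implementation.
import numpy as np

def cumulative_dvh(dose, mask, n_bins=200):
    """Return (dose_bins, volume_fraction): volume_fraction[i] is the fraction of
    the masked structure receiving at least dose_bins[i]."""
    d = dose[mask]
    bins = np.linspace(0.0, d.max(), n_bins)
    frac = np.array([(d >= b).mean() for b in bins])
    return bins, frac

# toy usage: a spherical "target" inside a random dose grid (arbitrary Gy values)
rng = np.random.default_rng(1)
dose = rng.gamma(shape=2.0, scale=1.0, size=(64, 64, 64))
z, y, x = np.ogrid[:64, :64, :64]
mask = (x - 32) ** 2 + (y - 32) ** 2 + (z - 32) ** 2 < 15 ** 2
bins, frac = cumulative_dvh(dose, mask)
d95 = bins[frac >= 0.95][-1]   # approximate D95: highest dose still covering 95% of the volume
```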

    Quantum Computing for High-Energy Physics: State of the Art and Challenges. Summary of the QC4HEP Working Group

    Quantum computers offer an intriguing path for a paradigmatic change of computing in the natural sciences and beyond, with the potential to achieve a so-called quantum advantage: a significant (in some cases exponential) speed-up of numerical simulations. The rapid development of hardware devices with various realizations of qubits enables the execution of small-scale but representative applications on quantum computers. In particular, the high-energy physics community plays a pivotal role in accessing the power of quantum computing, since the field is a driving source of challenging computational problems. This concerns, on the theoretical side, the exploration of models which are very hard or even impossible to address with classical techniques and, on the experimental side, the enormous data challenge of newly emerging experiments, such as the upgrade of the Large Hadron Collider. In this roadmap paper, led by CERN, DESY, and IBM, we provide the status of high-energy physics quantum computations and give examples of theoretical and experimental target benchmark applications which can be addressed in the near future. With the IBM 100 x 100 challenge in mind, where possible we also provide resource estimates for the given examples using error-mitigated quantum computing.
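    Since the roadmap highlights error-mitigated quantum computing, here is a hedged, hardware-agnostic Python sketch of one common mitigation idea, zero-noise extrapolation: expectation values measured at artificially amplified noise levels are fitted and extrapolated back to the zero-noise limit. The exponential-decay noise model and all numbers are illustrative assumptions, not results from the paper:

```python
# Hedged sketch of zero-noise (Richardson-style) extrapolation, a common
# error-mitigation technique; the exponential-decay noise model is an assumption.
import numpy as np

def noisy_expectation(scale, exact=0.8, decay=0.15, shots=4096, rng=None):
    """Pretend to measure <O> on hardware with the circuit noise amplified by `scale`."""
    rng = rng if rng is not None else np.random.default_rng()
    mean = exact * np.exp(-decay * scale)            # assumed noise channel
    return mean + rng.normal(0.0, 1.0 / np.sqrt(shots))

rng = np.random.default_rng(0)
scales = np.array([1.0, 2.0, 3.0])                   # noise amplification factors
values = np.array([noisy_expectation(s, rng=rng) for s in scales])

# fit <O>(scale) and extrapolate back to the zero-noise limit (scale = 0)
coeffs = np.polyfit(scales, values, deg=1)
zne_estimate = np.polyval(coeffs, 0.0)
print(f"raw <O> at scale 1: {values[0]:.3f}   extrapolated <O>: {zne_estimate:.3f}")
```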

    Increasing productivity in High Energy Physics data mining with a Domain Specific Visual Query Language

    This thesis develops the first domain-specific visual query language for high-energy physics. With the current state of the art, the analysis of experimental results in high-energy physics is a very laborious process. Using general-purpose high-level programming languages and complex libraries to build and maintain the analysis software distracts scientists from the core questions of their field. Our approach introduces a new level of abstraction in the form of a visual programming language, in which physicists can formulate the desired results in a notation close to their application domain. The hypothesis was validated by developing a language and a software prototype. In addition to a formal syntax, the language is defined by a translational semantics, specified via a translation into an NF2 algebra extended with special grouping operators. The visual queries created by the user are translated by a compiler into code for a target platform. The usability of the language was validated in a user study, whose qualitative and quantitative results are presented.
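    To make the nested-relational (NF2) flavour of the translational semantics concrete, here is a hedged Python sketch of the kind of grouping-and-aggregation query over nested event data that the visual language is meant to abstract; the event structure and field names are invented for illustration and are not the thesis' notation:

```python
# Hedged sketch: an NF2-style grouping over nested event data, the kind of query the
# visual language is compiled down to.  The event structure and field names are invented.
from itertools import groupby

events = [  # each event carries a nested relation of reconstructed muons
    {"run": 1, "muons": [{"pt": 25.3, "charge": +1}, {"pt": 18.1, "charge": -1}]},
    {"run": 1, "muons": [{"pt": 40.2, "charge": -1}]},
    {"run": 2, "muons": [{"pt": 31.0, "charge": +1}, {"pt": 22.4, "charge": +1}]},
]

# "select events with at least two muons, group them by run, count events per group"
selected = [e for e in events if len(e["muons"]) >= 2]
per_run = {run: len(list(grp))
           for run, grp in groupby(sorted(selected, key=lambda e: e["run"]),
                                   key=lambda e: e["run"])}
print(per_run)   # {1: 1, 2: 1}
```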

    NASA/American Society for Engineering Education (ASEE) Summer Faculty Fellowship Program 1992

    Since 1964, the National Aeronautics and Space Administration (NASA) has supported a program of summer faculty fellowships for engineering and science educators. In a series of collaborations between NASA research and development centers and nearby universities, engineering faculty members spend 10 weeks working with professional peers on research. The Summer Faculty Program Committee of the American Society for Engineering Education supervises the programs. The objectives of the program are (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA center.