Massively parallel split-step Fourier techniques for simulating quantum systems on graphics processing units
The split-step Fourier method is a powerful technique for solving partial differential equations and simulating ultracold atomic systems of various forms. In this body of work, we focus on several variations of this method to allow for simulations of one-, two-, and three-dimensional quantum systems, along with several notable methods for controlling these systems. In particular, we use quantum optimal control and shortcuts to adiabaticity to study the non-adiabatic generation of superposition states in strongly correlated one-dimensional systems, analyze chaotic vortex trajectories in two dimensions by using rotation and phase imprinting methods, and create stable, three-dimensional vortex structures in Bose–Einstein condensates through artificial magnetic fields generated by the evanescent field of an optical nanofiber. We also discuss algorithmic optimizations for implementing the split-step Fourier method on graphics processing units. All computational methods present in this work are demonstrated on physical systems and have been incorporated into a state-of-the-art and open-source software suite known as GPUE, which is currently the fastest quantum simulator of its kind.
Okinawa Institute of Science and Technology Graduate University
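The core idea of the split-step Fourier method can be sketched compactly: the Hamiltonian is split into a potential (plus nonlinear) part applied in position space and a kinetic part applied exactly in momentum space via FFTs. Below is a minimal one-dimensional Strang-splitting integrator for a wavefunction in a harmonic trap, assuming ħ = m = 1; the grid, trap, and parameter names are illustrative and not taken from the GPUE implementation.

```python
import numpy as np

def split_step_1d(psi, x, dt, steps, g=0.0, hbar=1.0, m=1.0):
    """Evolve a 1D wavefunction with the split-step Fourier method.

    Alternates half-steps of the potential (plus optional nonlinear
    term g|psi|^2, as in the Gross-Pitaevskii equation) in position
    space with full kinetic steps applied in momentum space.
    """
    n = x.size
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=dx)             # momentum grid
    kinetic = np.exp(-1j * hbar * k**2 * dt / (2.0 * m))  # full kinetic step
    V = 0.5 * m * x**2                                    # harmonic trap (assumed)
    for _ in range(steps):
        # half step with potential + nonlinear term
        psi *= np.exp(-1j * (V + g * np.abs(psi)**2) * dt / (2.0 * hbar))
        # full kinetic step in Fourier space
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        # second half step completes the Strang splitting
        psi *= np.exp(-1j * (V + g * np.abs(psi)**2) * dt / (2.0 * hbar))
    return psi

# evolve the harmonic-oscillator ground state; its density should stay
# stationary (only a global phase accumulates)
x = np.linspace(-10, 10, 512)
psi0 = np.exp(-x**2 / 2.0) / np.pi**0.25
psi = split_step_1d(psi0.astype(complex), x, dt=1e-3, steps=1000)
```

Because every sub-step is unitary, the norm of the wavefunction is conserved to machine precision, which makes norm drift a useful sanity check on any split-step implementation.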
Understanding Quantum Technologies 2022
Understanding Quantum Technologies 2022 is a creative-commons ebook that
provides a unique 360-degree overview of quantum technologies, from science and
technology to geopolitical and societal issues. It covers quantum physics
history, quantum physics 101, gate-based quantum computing, quantum computing
engineering (including quantum error corrections and quantum computing
energetics), quantum computing hardware (all qubit types, including quantum
annealing and quantum simulation paradigms, history, science, research,
implementation and vendors), quantum enabling technologies (cryogenics, control
electronics, photonics, components fabs, raw materials), quantum computing
algorithms, software development tools and use cases, unconventional computing
(potential alternatives to quantum and classical computing), quantum
telecommunications and cryptography, quantum sensing, quantum technologies
around the world, quantum technologies societal impact and even quantum fake
sciences. The main audience comprises computer science engineers, developers
and IT specialists, as well as quantum scientists and students who want to
acquire a global view of how quantum technologies work, and particularly
quantum computing. This version is an extensive update to the 2021 edition
published in October 2021.
Comment: 1132 pages, 920 figures, Letter format
The readying of applications for heterogeneous computing
High performance computing is approaching a potentially significant change in architectural design. With pressure on cost and on the sheer amount of power consumed, additional architectural features are emerging that require a rethink of the programming models deployed over the last two decades.
Today's emerging high performance computing (HPC) systems maximise performance per unit of power consumed, with the result that the constituent parts of the system are made up of a range of different specialised building blocks, each with its own purpose. This heterogeneity is not limited to the hardware components but extends to the mechanisms that exploit them. These multiple levels of parallelism, instruction sets, and memory hierarchies result in truly heterogeneous computing in all aspects of the global system.
These emerging architectural solutions will require the software to exploit tremendous amounts of on-node parallelism and indeed programming models to address this are emerging. In theory, the application developer can design new software using these models to exploit emerging low power architectures. However, in practice, real industrial scale applications last the lifetimes of many architectural generations and therefore require a migration path to these next generation supercomputing platforms.
Identifying that migration path is non-trivial: with applications spanning many decades, consisting of many millions of lines of code and multiple scientific algorithms, any change to the programming model will be extensive and invasive, and may turn out to be the incorrect model for the application in question.
This makes exploration of these emerging architectures and programming models using the applications themselves problematic. Additionally, the source code of many industrial applications is not available either due to commercial or security sensitivity constraints.
This thesis highlights this problem by assessing current and emerging hardware with an industrial-strength code, demonstrating the issues described. In turn, it examines the methodology of using proxy applications in place of real industry applications to assess their suitability on the next generation of low-power HPC offerings. It shows there are significant benefits to be realised in using proxy applications, in that fundamental issues inhibiting exploration of a particular architecture are easier to identify and hence address.
Evaluations of the maturity and performance portability of a number of alternative programming methodologies are explored on a number of architectures, highlighting the broader adoption of these proxy applications, both within the author's own organisation and across the industry as a whole.
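To make the proxy-application idea concrete: a proxy app is typically a small, self-contained kernel that reproduces the computational and memory-access pattern of a much larger production code, so that porting experiments stay tractable. The sketch below, a Jacobi relaxation sweep in NumPy, is purely illustrative of this kind of kernel and is not one of the codes assessed in the thesis.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi relaxation sweep on a 2D grid: each interior point
    becomes the average of its four neighbours. Stencil kernels like
    this capture the memory-bandwidth-bound behaviour that proxy
    applications use to stand in for large production codes."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                            + u[1:-1, :-2] + u[1:-1, 2:])
    return v

# toy problem: fixed hot boundary on one edge, relax toward steady state
u = np.zeros((64, 64))
u[0, :] = 1.0
for _ in range(500):
    u = jacobi_step(u)
```

Because the kernel is a few lines rather than millions, it can be rewritten in each candidate programming model (directives, kernels, explicit offload) and benchmarked on each target architecture without the commercial or security constraints that surround the full application.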
The impact of the stellar evolution of single and binary stars on the global, dynamical evolution of dense star clusters across cosmic time
Star clusters in the Universe are dense, self-gravitating, and typically dynamically collisional
environments consisting of thousands to millions of stars. They populate galactic discs, halos, and
even galactic centres throughout the cosmos, and form a fundamental unit in a hierarchy of cosmic
structure formation. Moreover, they are generally much denser than their host galaxy, which makes
them incredibly fascinating astronomical objects. Unlike their surroundings, stars and compact
objects in star clusters experience frequent dynamical scatterings, form dynamical binaries, merge
while emitting gravitational waves, are ejected through three-body dynamics, and in rare cases even
collide directly. As a result, star clusters are factories of all kinds of exotic binaries, from,
e.g., Thorne-Zytkow objects and cataclysmic variables to compact binaries, for example binaries
consisting of black holes and neutron stars. Furthermore, with increasing particle number, unique
gravitational effects of collisional many-body systems begin to dominate the early evolution of the
cluster, leading to contracting and ever faster rotating cluster cores that preferentially contain
massive stars, compact objects, and binaries, and to an expanding halo of lower-mass stars and
compact objects. Star clusters are therefore a laboratory not only for gravitational many-body
physics but also for the stellar evolution of single and binary stars, as well as higher-order
hierarchical stellar systems. None of these physical processes can be considered in isolation - in
star clusters they reinforce one another, and many occur on similar timescales. In this thesis, I
study in detail the influence of stellar evolution on the global dynamics of star clusters using
direct gravitational N-body and Hénon-type Monte Carlo simulations of star clusters. I focus on the
evolution of metal-poor stellar populations (Population II), which are found in globular clusters,
and extremely metal-poor stellar populations (Population III), which form the oldest stellar
populations in the Universe.
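The direct N-body approach mentioned in the abstract reduces, at its core, to an O(N²) pairwise force summation advanced with a symplectic integrator. Below is a minimal kick-drift-kick leapfrog sketch in NumPy, with G = 1 units and a Plummer softening length eps; all names are illustrative and this is not any particular production cluster code.

```python
import numpy as np

def nbody_step(pos, vel, mass, dt, eps=0.05):
    """One leapfrog (kick-drift-kick) step of a direct-summation N-body
    integration with Plummer softening eps, in G = 1 units."""
    def accel(p):
        # pairwise separations d[i, j] = p[j] - p[i], shape (N, N, 3)
        d = p[None, :, :] - p[:, None, :]
        r2 = np.sum(d**2, axis=-1) + eps**2
        inv_r3 = r2**-1.5
        np.fill_diagonal(inv_r3, 0.0)  # exclude self-force
        # a_i = sum_j m_j * d_ij / (r_ij^2 + eps^2)^(3/2)
        return np.sum(d * (mass[None, :] * inv_r3)[:, :, None], axis=1)
    vel = vel + 0.5 * dt * accel(pos)   # kick
    pos = pos + dt * vel                # drift
    vel = vel + 0.5 * dt * accel(pos)   # kick
    return pos, vel

# toy cluster: 50 equal-mass particles, initially at rest
rng = np.random.default_rng(0)
pos = rng.standard_normal((50, 3))
vel = np.zeros((50, 3))
mass = np.full(50, 1.0 / 50)
for _ in range(100):
    pos, vel = nbody_step(pos, vel, mass, dt=1e-3)
```

Because the pairwise forces are exactly antisymmetric, total momentum is conserved to floating-point precision; production direct N-body and Hénon-type Monte Carlo codes replace this brute-force loop with regularisation, neighbour schemes, or statistical sampling to reach realistic particle numbers.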