Computational performance of Free Mesh Method applied to continuum mechanics problems
The free mesh method (FMM) is a meshless method intended for particle-like finite element analysis of problems that are difficult to handle with global mesh generation; it can also be viewed as a node-based finite element method that employs a local mesh generation technique and a node-by-node algorithm. The aim of the present paper is to review some unique numerical solutions in fluid and solid mechanics obtained with FMM and with the Enriched Free Mesh Method (EFMM), a newer version of FMM. The applications to fluid mechanics include compressible flow and the sounding mechanism of air-reed instruments; the applications to solid mechanics include automatic remeshing for slow crack growth, the dynamic behavior of solids, and large-scale eigenfrequency analysis of an engine block.
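The node-by-node idea can be illustrated with a toy sketch (hypothetical helper names and a simple distance criterion; an actual FMM implementation builds proper local triangulations around each node and assembles stiffness contributions element by element):

```python
import numpy as np

def satellite_nodes(nodes, center_idx, radius):
    """Collect 'satellite' nodes within a given radius of a central node.

    In the Free Mesh Method, each node owns a small local cluster from
    which temporary local elements are generated on the fly, so no global
    mesh is ever stored. The radius criterion here is only a toy stand-in
    for the method's actual local element-generation rules.
    """
    center = nodes[center_idx]
    dists = np.linalg.norm(nodes - center, axis=1)
    mask = (dists > 0) & (dists <= radius)
    return np.flatnonzero(mask)

# Toy point cloud: a 4x4 grid of nodes with unit spacing.
xs, ys = np.meshgrid(np.arange(4.0), np.arange(4.0))
nodes = np.column_stack([xs.ravel(), ys.ravel()])

# Node 5 sits at (1, 1), an interior point; its satellites within
# radius 1.5 are the 4 edge neighbours plus the 4 diagonal ones.
sats = satellite_nodes(nodes, 5, 1.5)
print(len(sats))  # 8
```

Because each node's cluster is independent of all the others, this step parallelizes naturally, which is what makes a node-by-node algorithm attractive for large problems.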
Alkali Rydberg States in Electromagnetic Fields
We study highly excited hydrogen and alkali atoms ("Rydberg states") under the influence of a strong microwave field. As the external frequency is comparable to the highly excited electron's classical Kepler frequency, the external field induces a strong coupling of many different quantum mechanical energy levels and finally leads to the ionization of the outer electron. While periodically driven atomic hydrogen can be seen as a paradigm of quantum chaotic motion in an open (decaying) quantum system, the presence of the non-hydrogenic atomic core, which unavoidably has to be treated quantum mechanically, entails some complications. Indeed, laboratory experiments show clear differences in the ionization dynamics of microwave-driven hydrogen and non-hydrogenic Rydberg states.
In the first part of this thesis, a machinery is developed that allows for numerical experiments on alkali and hydrogen atoms under precisely identical laboratory conditions. Due to the high density of states in the parameter regime typically explored in laboratory experiments, such simulations are only possible on the most advanced parallel computing facilities, in combination with an efficient parallel implementation of the numerical approach.
The second part of the thesis is devoted to the results of the numerical experiment. We identify and describe significant differences and surprising similarities in the ionization dynamics of atomic hydrogen as compared to alkali atoms, and give an account of the relevant frequency scales that distinguish hydrogenic from non-hydrogenic ionization behavior. Our results necessitate a reinterpretation of the experimental results available so far, and solve the puzzle of the distinct ionization behavior of periodically driven hydrogen and non-hydrogenic Rydberg atoms, a question that had remained unresolved for about a decade.
Finally, microwave-driven Rydberg states are considered as prototypes of open, complex quantum systems that exhibit a complicated temporal decay. However, we find considerable differences in the decay of such real and experimentally accessible atomic systems, as opposed to predictions based on the study of quantum
maps or other toy models with mixed regular-chaotic classical counterparts.Wir untersuchen hochangeregte Wasserstoff- und Alkaliatome (
”
Rydbergatome“)
unter dem Einfluß eines starken Mikrowellenfeldes. Das aeußere Feld, dessen Frequenz
von der Groeßenordnung der klassischen Keplerfrequenz des Valenzelektrons
ist, bewirkt eine starke Kopplung vieler verschiedener quantenmechanischer Energieniveaus
und fuehrt schließlich zur Ionisation des aeußeren Elektrons.Waehrend periodisch
getriebeneWasserstoffatome als ein Paradebeispiel quantenchaotischen Verhaltens
in einem offenen (zerfallenden) System angesehen werden koennen, bringt
ein nicht-wasserstoffartiger Atomrumpf, der als ein rein quantenmechanisches Objekt
zu betrachten ist, einige Komplikationen mit sich. Tatsaechlich zeigen Experimente
an verschiedenen Elementen deutliche Unterschiede im Ionisationsverhalten
von Wasserstoff- und Alkaliatomen im Mikrowellenfeld.
Im ersten Teil dieser Arbeit wird ein theoretisch-numerischer Apparat entwickelt,
der es ermoeglicht, numerische Experimente sowohl an Wasserstoff als auch
an Alkaliatomen unter exakt den gleichen Laborbedingungen durchzufuehren. Aufgrund
der hohen Niveaudichte der periodisch getriebenen, dreidimensionalen Atome
im Bereich typischer experimenteller Parameter sind solche Simulationen nur
mit Hilfe modernster Parallelrechner in Verbindung mit einer effizienten parallelen
Implementierung unseres numerischen Verfahrens moeglich.
Im zweiten Teil der Arbeit werden die Ergebnisse des numerischen Experiments
vorgestellt und diskutiert. Wir finden ebenso deutliche Unterschiede wie ueberraschende
Gemeinsamkeiten im Ionisationsverhalten von Wasserstoff- und Alkaliatomen
und koennen jene Frequenzbereiche identifizieren, in welchen Alkaliatome
wasserstoff- bzw. nicht-wasserstoffartiges Ionisationsverhalten zeigen. Unsere Resultate
erzwingen die Neuinterpretation eines großen Teils der vorhandenen experimentellen
Daten und erlauben es insbesondere, das seit ca. einem Jahrzehnt ungel
oeste Problem des deutlich unterschiedlichen Ionisationsverhaltens verschiedener
atomarer Spezies unter dem Einfluß eines elektromagnetischen Feldes zu loesen.
Schließlich betrachten wir periodisch getriebene Rydbergatome als ein typisches
offenes, komplexes Quantensystem, das einen komplizierten zeitlichen Zerfall zeigt.
Insbesondere finden wir im Zerfall dieses realen atomaren Systems qualitative wie
quantitative Unterschiede zu Vorhersagen, die auf Untersuchungen quantenmechanischer
Abbildungen mit gemischt regulaer-chaotischem klassischen Analogon beruhen
A Performance Comparison Using HPC Benchmarks: Windows HPC Server 2008 and Red Hat Enterprise Linux 5
This document was developed with support from the National Science Foundation (NSF) under Grant No. 0910812 to Indiana University for "FutureGrid: An Experimental, High-Performance Grid Test-bed." Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the NSF.
A collection of performance benchmarks has been run on an IBM System X iDataPlex cluster using two different operating systems. Windows HPC Server 2008 (WinHPC) and Red Hat Enterprise Linux v5.4 (RHEL5) are compared using the SPEC MPI2007 v1.1, High Performance Computing Challenge (HPCC), and National Science Foundation (NSF) acceptance test benchmark suites. Overall, we find the performance of WinHPC and RHEL5 to be equivalent, although significant performance differences exist for specific applications. We focus on presenting the results from the application benchmarks and include the results of the HPCC microbenchmarks for completeness.
Numerical Simulations of Earthquake Scenarios in the Lower Rhine Embayment Area
The choice of the Lower Rhine Embayment as study area for strong ground motion modeling may be puzzling at first glance. This region in the northwest of the European continent is characterized by active tectonics on a complex system of fault zones with relatively low deformation rates. Consequently, the area has shown low to moderate seismicity in the time frame covered by observational seismology. However, historical and geological evidence proves that the fault systems of the Lower Rhine Embayment have the potential to produce large earthquakes with magnitudes 6 and above, accompanied by surface rupture. The presence of large sediment deposits in this region leads to local amplification of ground motion with large lateral variations. Dense population and an agglomeration of industry result in an elevated seismic risk.
Assessment of seismic hazard in regions characterized by low recent seismicity is afflicted with large uncertainties, mainly due to the dearth of observational data on strong ground motions associated with large earthquakes. Numerical simulations of earthquake scenarios can provide estimates of peak ground motion and waveforms and therefore help to close this gap.
Naturally, the first step consists in accurately reproducing the few observed events. An additional crucial quantity is the range of variation of the simulation results within the uncertainty margins associated with the input parameters. Knowledge of this behavior increases the significance of numerical simulation results.
Four historical and recent earthquake scenarios are modeled using a finite difference approach. The results are analyzed with special emphasis on their intrinsic variability with model complexity and simulation settings. The choice of investigated parameters is adapted to the differing scope of observational data available for the individual events. In general, encouraging similarity between synthetic and observed ground motions is found, even when a simplified model is used. However, a detailed investigation carried out for the most recent earthquake scenario, the magnitude 4.9 Alsdorf event of July 22, 2002, strongly suggests that an appropriate source description and the modeling of anelastic behavior are significant for the simulation results. Finally, a web-based application for storage and visualization of synthetic ground motion data is presented.
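The core of any finite difference approach to wave propagation is an explicit update stencil. The following is a minimal sketch for the 1D scalar wave equation, only to illustrate the kind of scheme involved; the scenario simulations themselves solve the 3D (an)elastic equations with realistic source descriptions and velocity models:

```python
import numpy as np

def fd_wave_1d(nx=200, nt=400, c=1.0, dx=1.0, dt=0.5):
    """Second-order explicit finite-difference solver for the 1D wave
    equation u_tt = c^2 u_xx, with a Gaussian initial displacement,
    zero initial velocity, and fixed (zero-displacement) boundaries.
    The CFL number c*dt/dx must not exceed 1 for stability.
    """
    assert c * dt / dx <= 1.0, "CFL stability condition violated"
    r2 = (c * dt / dx) ** 2
    x = np.arange(nx) * dx
    u_prev = np.exp(-0.01 * (x - nx * dx / 2) ** 2)  # initial pulse
    u = u_prev.copy()                                # zero initial velocity
    for _ in range(nt):
        u_next = np.empty_like(u)
        # central differences in time and space
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_next[0] = u_next[-1] = 0.0                 # fixed boundaries
        u_prev, u = u, u_next
    return u

wave = fd_wave_1d()
print(wave.shape)  # (200,)
```

The same stencil logic, extended to three dimensions with heterogeneous material parameters and attenuation, underlies the scenario computations; the parameter values above are purely illustrative.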
Analytical Modeling of High Performance Reconfigurable Computers: Prediction and Analysis of System Performance.
The use of a network of shared, heterogeneous workstations, each harboring a Reconfigurable Computing (RC) system, offers high-performance users an inexpensive platform for a wide range of computationally demanding problems. However, effectively using the full potential of these systems can be challenging without knowledge of the system's performance characteristics. While some performance models exist for shared, heterogeneous workstations, none thus far account for the addition of Reconfigurable Computing systems. This dissertation develops and validates an analytic performance modeling methodology for a class of fork-join algorithms executing on a High Performance Reconfigurable Computing (HPRC) platform. The model includes the effects of the reconfigurable device, application load imbalance, background user load, basic message-passing communication, and processor heterogeneity. Three fork-join applications, a Boolean satisfiability solver, a matrix-vector multiplication algorithm, and an Advanced Encryption Standard algorithm, are used to validate the model with homogeneous and simulated heterogeneous workstations. A synthetic load is used to validate the model under various loading conditions, including simulated heterogeneity in which background loading makes some workstations appear slower than others. The performance modeling methodology proves to be accurate in characterizing the effects of reconfigurable devices, application load imbalance, background user load, and heterogeneity for applications running on shared, homogeneous and heterogeneous HPRC resources. The model error was found to be less than five percent in all cases for application runtimes greater than thirty seconds, and less than fifteen percent for runtimes below thirty seconds. The performance modeling methodology enables us to characterize applications running on shared HPRC resources.
Cost functions are used to impose system usage policies, and the results of the modeling methodology are utilized to find the optimal (or near-optimal) set of workstations to use for a given application. The usage policies investigated include determining the computational costs for the workstations and balancing the priority of the background user load against that of the parallel application. The applications studied fall within the Master-Worker paradigm and are well suited to a grid computing approach. A method for using NetSolve, a grid middleware, with the model and cost functions is introduced, whereby users can produce optimal workstation sets and schedules for Master-Worker applications running on shared HPRC resources.
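The flavor of combining a fork-join runtime model with cost functions can be sketched as follows. All numbers and function names are hypothetical; the dissertation's actual model additionally accounts for the reconfigurable device, communication, and a more detailed load characterization:

```python
from itertools import combinations

def predicted_runtime(work, speeds, loads):
    """Toy fork-join estimate: work is split evenly across workers, each
    worker's effective speed is degraded by its background load, and the
    join waits for the slowest worker (load imbalance appears as the max)."""
    per_worker = work / len(speeds)
    return max(per_worker / (s / (1.0 + l)) for s, l in zip(speeds, loads))

def best_subset(work, speeds, loads, cost_per_node, budget):
    """Exhaustively pick the workstation subset minimizing predicted
    runtime subject to a simple usage-cost budget (a stand-in for the
    cost functions that impose system usage policies)."""
    n = len(speeds)
    best = None
    for k in range(1, n + 1):
        for subset in combinations(range(n), k):
            if sum(cost_per_node[i] for i in subset) > budget:
                continue
            t = predicted_runtime(work,
                                  [speeds[i] for i in subset],
                                  [loads[i] for i in subset])
            if best is None or t < best[0]:
                best = (t, subset)
    return best

# Four heterogeneous workstations: relative speeds, background loads, costs.
speeds = [1.0, 1.0, 2.0, 0.5]
loads = [0.0, 1.0, 0.0, 0.0]
costs = [1, 1, 3, 1]
runtime, chosen = best_subset(1200.0, speeds, loads, costs, budget=5)
print(runtime, chosen)  # → 600.0 (2,)
```

Note that with an even work split, adding a slower or more heavily loaded node can lengthen the predicted runtime rather than shorten it, which is exactly why load imbalance and background load must appear in the model before an optimal workstation set can be chosen.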