
    On high performance computing in geodesy : applications in global gravity field determination

    Autonomously operating sensor platforms deliver an increasing amount of precise data sets that are usable in geodetic applications. Owing to their volume and quality, the models determined from these data can be parameterized in a more complex and detailed way. Deriving model parameters from such observations often requires the solution of a high-dimensional inverse data-fitting problem. To solve such high-dimensional adjustment problems, this thesis proposes a systematic, end-to-end massively parallel implementation of the geodetic data analysis, using standard concepts of massively parallel high-performance computing. It is shown how these concepts can be integrated into a typical geodetic problem that requires the solution of a high-dimensional adjustment problem. Through the proposed parallel use of the computing and memory resources of a compute cluster, general Gauss-Markoff models become solvable that could previously be solved only by means of computationally motivated simplifications and approximations. A basic, easy-to-use framework is developed that is able to perform all relevant operations needed to solve a typical geodetic least-squares adjustment problem; it provides the interface to the standard concepts and libraries used. Examples covering different characteristics of the adjustment problem show how the framework is used and can be adapted for specific applications. Computationally rigorous solutions become possible for hundreds of thousands to millions of unknown parameters, which have to be estimated from a huge number of observations. Three problems with different characteristics, as they arise in global gravity field recovery, are chosen, and massively parallel implementations of the solution processes are derived.
The first application covers global gravity field determination from real data collected by the GOCE satellite mission (comprising 440 million highly correlated observations and 80,000 parameters). In the second application, high-dimensional global gravity field models are estimated from the combination of complementary data sets via the assembly and solution of full normal equations (scenarios with 520,000 parameters and 2 TB of normal equations). The third application solves a comparable problem but uses an iterative least-squares solver, allowing for a parameter space of even higher dimension (scenarios with two million parameters). This thesis forms the basis for a flexible massively parallel software package that is extendable to further current and future research topics studied in the department. Within this thesis, the main focus lies on the computational aspects.
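The second and third applications contrast two standard strategies for large least-squares adjustment: assembling and solving the full normal equations versus using an iterative solver that never forms them. The following Python sketch illustrates that contrast on a small toy problem; the matrix sizes and random data are hypothetical stand-ins, not the thesis software.

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 2000, 50                    # toy stand-ins for millions of observations / parameters
A = rng.standard_normal((m, n))    # design matrix of the adjustment problem
x_true = rng.standard_normal(n)
y = A @ x_true + 0.01 * rng.standard_normal(m)  # observations with noise

# Strategy 1: assemble and solve the full normal equations N x = b.
N = A.T @ A
b = A.T @ y
x_direct = np.linalg.solve(N, b)

# Strategy 2: iterative least-squares solver (LSQR), which only needs
# products with A and A^T and never stores N explicitly.
x_iter = lsqr(A, y)[0]

# Memory motivation for strategy 2: a dense normal-equation matrix for
# 520,000 parameters in double precision needs 520_000**2 * 8 bytes,
# i.e. about 2.16e12 bytes (roughly 2 TB), hence the distribution
# across a compute cluster in the thesis.
print(float(np.max(np.abs(x_direct - x_iter))))
```

For a well-conditioned toy system like this, both strategies agree to high accuracy; the difference in practice is memory footprint and scalability, not the solution itself.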

    Is the air change efficiency sufficient to assess the removal of airborne contamination in mixing ventilation?

    This investigation analyzes the correlation between two common methods of assessing ventilation effectiveness: an averaged contaminant removal effectiveness (CRE) value based on the residual lifetime, and the air change efficiency (ACE). The aim is to better understand their relationship and then give a recommendation for the IAQ assessment of ventilation designs. The present numerical investigation focuses on a simple mixing ventilation scenario under varying conditions: air change rate, specific heat flux, supply air diffuser, and exhaust position. Statistically, the results show a significant correlation. A more detailed consideration, especially of the partial-load range, will be necessary for a valid assessment of the removal of airborne contamination.

    Quasi-soliton scattering in quantum spin chains

    The quantum scattering of magnon bound states in the anisotropic Heisenberg spin chain is shown to display features similar to the scattering of solitons in classical exactly solvable models. Localized colliding Gaussian wave packets of bound magnons are constructed from string solutions of the Bethe equations and subsequently evolved in time, relying on an algebraic Bethe ansatz based framework for the computation of local expectation values in real space-time. The local magnetization profile shows the trajectories of colliding wave packets of bound magnons, which obtain a spatial displacement upon scattering. Analytic predictions on the displacements for various values of anisotropy and string lengths are derived from scattering theory and Bethe ansatz phase shifts, matching time evolution fits on the displacements. The time evolved block decimation (TEBD) algorithm allows for the study of scattering displacements from spin-block states, showing similar scattering displacement features. Comment: 15 pages, 7 figures (v2: citations added).

    The Effects of Relative Humidity and Compound Interaction on the Adsorption of Mineral Oil Vapor by Activated Carbon

    A bench-scale carbon adsorption system was constructed to examine the effects of relative humidity and compound interaction on the adsorption of specific components of mineral oil vapor by activated carbon. Small beds of activated carbon were challenged with single- and binary-component vapor at low and high relative humidities (less than 13 percent and greater than 87 percent). The components consisted of undecane and dodecane at nominal concentrations of 4 ppm each. High relative humidity and compound interaction substantially reduced the carbon's adsorption capacity for both compounds; however, the effect of the latter was more pronounced on the adsorption of undecane than of dodecane. The equilibrium adsorption capacity of the carbon for the single- and binary-component vapor at high relative humidity was approximately 0.20 grams of vapor per gram of carbon, which is 24 to 28 percent of the capacities quoted by the carbon vendor for similar compounds.
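As a back-of-envelope check of the figures quoted above (the vendor capacities themselves are not stated in the abstract, so the implied range is an inference): if the measured 0.20 g/g is 24 to 28 percent of the vendor capacity, the vendor figures must lie roughly between 0.71 and 0.83 g/g.

```python
# Implied vendor capacity range from the abstract's numbers:
# measured capacity = fraction * vendor capacity, so vendor = measured / fraction.
measured = 0.20                      # g vapor per g carbon, high relative humidity
low = measured / 0.28                # upper quoted fraction gives the lower bound
high = measured / 0.24               # lower quoted fraction gives the upper bound
print(round(low, 2), round(high, 2))
```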

    Ultrastructure of the Membrana Limitans Interna after Dye-Assisted Membrane Peeling

    The purpose of this study was to investigate the ultrastructure of the membrana limitans interna (internal limiting membrane, ILM) and to evaluate alterations to the retinal cell layers after membrane peeling with vital dyes. Twenty-five patients (25 eyes) who underwent macular hole surgery were included, whereby 12 indocyanine green (ICG)- and 13 brilliant blue G (BBG)-stained ILMs were analyzed using light, transmission electron, and scanning electron microscopy. Retinal cell fragments on the ILM were identified in both groups using immunohistochemistry. Comparing ICG- and BBG-stained membranes, larger cellular fragments were observed at a higher frequency in the BBG group. The findings thereby indicate that ICG permits an enhanced separation of the ILM from the underlying retina with less mechanical destruction. A possible explanation might be the known photosensitivity of ICG, which induces a stiffening and shrinkage of the ILM but also generates retinal toxic metabolites.

    Collective Excitations and Ground State Correlations

    A generalized RPA formalism is presented which treats pp and ph correlations on an equal footing. The effect of these correlations on the single-particle Green function is discussed, and it is demonstrated that a self-consistent treatment of the single-particle Green function is required to obtain stable solutions. A simple approximation scheme is presented which accounts for this self-consistency requirement and conserves the number of particles. Results of numerical calculations are given for ^{16}O using a G-matrix interaction derived from a realistic one-boson-exchange potential. Comment: 16 pages + 2 figures (included at the end as uuencoded PostScript files), TU-18089.

    Photoproduction of the Lambda(1405) on the proton and nuclei

    We study the gamma p ---> K^+ Lambda(1405) reaction at energies close to threshold using a chiral unitary model in which the resonance is generated dynamically from the K^- p interaction with other channels constructed from the octets of baryons and mesons. Predictions are made for cross sections into several channels, and it is shown that the detection of the K^+ is sufficient to determine the shape and strength of the Lambda(1405) resonance. The determination of the resonance properties in nuclei requires instead the detection of the resonance decay channels. Pauli blocking effects on the resonance, which have been shown to be very important for the resonance at rest in the nucleus, are irrelevant here, where the resonance is produced with a large momentum. The nuclear modifications would thus offer information on the resonance and on K^- nucleus dynamics complementary to that offered so far by K^- atoms. Comment: 9 pages, 4 PostScript figures.

    The complete conformal spectrum of an sl(2|1) invariant network model and logarithmic corrections

    We investigate the low temperature asymptotics and the finite size spectrum of a class of Temperley-Lieb models. As reference system we use the spin-1/2 Heisenberg chain with anisotropy parameter Δ and twisted boundary conditions. Special emphasis is placed on the study of logarithmic corrections appearing in the case of Δ = 1/2 in the bulk susceptibility data and in the low-energy spectrum yielding the conformal dimensions. For the sl(2|1) invariant 3-state representation of the Temperley-Lieb algebra with Δ = 1/2 we give the complete set of scaling dimensions, which show huge degeneracies. Comment: 18 pages, 5 figures.