Lattice-Boltzmann simulations of cerebral blood flow
Computational haemodynamics plays a central role in understanding blood behaviour in the cerebral vasculature, advancing our knowledge of the onset and progression of vascular diseases, improving diagnosis and ultimately providing better patient prognosis. Computer simulations hold the potential to accurately characterise the motion of blood and its interaction with the vessel wall, providing the capability to assess surgical treatments with no danger to the patient. These aspects contribute considerably to a better understanding of blood circulation processes and augment pre-treatment planning. Existing software environments for treatment planning consist of several stages, each requiring significant user interaction and processing time, which severely limits their use in clinical scenarios.
The aim of this PhD is to provide clinicians and researchers with a tool to aid in the understanding of human cerebral haemodynamics. This tool employs a high-performance fluid solver based on the lattice-Boltzmann method (named HemeLB), high-performance distributed and grid computing, and various advanced software applications to set up and run patient-specific simulations efficiently. A graphical tool is used to segment the vasculature from patient-specific CT or MR data and configure boundary conditions with ease, creating models of the vasculature in real time. Blood flow is visualised in real time using in situ rendering techniques implemented within the parallel fluid solver and aided by steering capabilities; these strategies allow the clinician to interactively display the simulation results on a local workstation. A separate software application is used to numerically compare simulation results carried out at different spatial resolutions, providing a strategy to approach numerical validation. The developed software and
supporting computational infrastructure were used to study various patient-specific intracranial aneurysms with collaborating interventionalists at the National Hospital for Neurology and Neurosurgery (London), using three-dimensional rotational angiography data to define the patient-specific vasculature. Blood flow motion was
depicted in detail by the visualisation capabilities, clearly showing vortex flow features and the stress distribution on the inner surface of the aneurysms and their surrounding vasculature. These investigations permitted the clinicians to rapidly assess the risk associated with the growth and rupture of each aneurysm. The ultimate goal of this work is to aid clinical practice with an efficient, easy-to-use toolkit for real-time decision support.
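The lattice-Boltzmann method at the core of such a solver evolves particle distribution functions on a regular lattice through alternating collision and streaming steps. As a minimal illustration (a D2Q9 single-relaxation-time BGK sketch with an assumed relaxation time, not HemeLB's actual implementation):

```python
import numpy as np

# D2Q9 lattice: 9 discrete velocities and their quadrature weights
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)
tau = 0.8  # BGK relaxation time (assumed value)

def equilibrium(rho, ux, uy):
    """Maxwell-Boltzmann equilibrium truncated to second order in velocity."""
    cu = c[:, 0, None, None] * ux + c[:, 1, None, None] * uy
    usq = ux**2 + uy**2
    return w[:, None, None] * rho * (1 + 3*cu + 4.5*cu**2 - 1.5*usq)

def lbm_step(f):
    """One collision + streaming update of the distributions f[9, ny, nx]."""
    rho = f.sum(axis=0)                            # density
    ux = (c[:, 0, None, None] * f).sum(0) / rho    # velocity = momentum / density
    uy = (c[:, 1, None, None] * f).sum(0) / rho
    feq = equilibrium(rho, ux, uy)
    f = f - (f - feq) / tau                        # BGK collision
    for i in range(9):                             # streaming: shift along c_i
        f[i] = np.roll(np.roll(f[i], c[i, 0], axis=1), c[i, 1], axis=0)
    return f
```

Both the collision and the periodic streaming step conserve mass exactly, which makes total density a convenient sanity check when developing such a solver.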
Parallel algorithms and efficient implementation techniques for finite element approximations
In this thesis we study the efficient implementation of the finite element method for the numerical solution of partial differential equations (PDE) on modern parallel computer architectures, such as Cray and IBM supercomputers. The domain-decomposition (DD) method represents the basis of parallel finite element software and is generally implemented such that the number of subdomains is equal to the number of MPI processes. We are interested in breaking this paradigm by introducing a second level of parallelism. Each subdomain is assigned to more than one processor and either MPI processes or multiple threads are used to implement the parallelism on the second level. The thesis is devoted to the study of this second level of parallelism and includes the stages described below. The algebraic additive Schwarz (AAS) domain-decomposition preconditioner is an integral part of the solution process. We seek to understand its performance on the parallel computers which we target and we introduce an improved construction approach for the parallel preconditioner. We examine a novel strategy for solving the AAS subdomain problems, using multiple MPI processes. At the subdomain level, this is represented by the ShyLU preconditioner. We bring improvements to its algorithm in the form of a novel inexact solver based on an incomplete QR (IQR) factorization. The performance of the new preconditioner framework is studied for Laplacian and advection-diffusion-reaction (ADR) problems and for Navier-Stokes problems, as a component within a larger framework of specialized preconditioners. The partitioning of the computational mesh comes with considerable memory limitations, when done at runtime on parallel computers, due to the low amount of available memory per processor. We describe and implement a solution to this problem, based on offloading the partitioning process to a preliminary offline stage of the simulation process.
We also present the efficient implementation, based on parallel MPI collective instructions, of the routines which load the mesh parts during the simulation. We discuss an alternative parallel implementation of the finite element system assembly based on multi-threading. This new approach is used to supplement the existing one based on MPI parallelism, in situations where MPI alone cannot make use of all the available parallel hardware resources. The work presented in the thesis has been done in the framework of two software projects: the Trilinos project and the LifeV parallel finite element modeling library. All the new developments have been contributed back to the respective projects, to be used freely in subsequent public releases of the software.
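To make the structure of such a preconditioner concrete, here is a toy one-level algebraic additive Schwarz preconditioner with non-overlapping blocks and dense subdomain solves, driving preconditioned conjugate gradients on a 1D Laplacian. This is an illustrative sketch only; the thesis targets Trilinos/ShyLU, overlapping subdomains and inexact IQR subdomain solves.

```python
import numpy as np

def aas_apply(A, r, blocks):
    """One-level additive Schwarz: z = sum_k R_k^T (A_k)^{-1} R_k r."""
    z = np.zeros_like(r)
    for idx in blocks:                 # each index block = one subdomain
        Ak = A[np.ix_(idx, idx)]       # restriction of A to the subdomain
        z[idx] += np.linalg.solve(Ak, r[idx])
    return z

def pcg(A, b, precond, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = precond(r)
        beta = (r @ z) / rz
        rz = r @ z
        p = z + beta * p
    return x

# 1D Laplacian model problem, four subdomains of 16 points each
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
blocks = [np.arange(i, i + 16) for i in range(0, n, 16)]
x = pcg(A, np.ones(n), lambda r: aas_apply(A, r, blocks))
```

Assigning each block solve to a different process (or thread group, as in the second level of parallelism studied here) is what turns this serial loop into a parallel preconditioner.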
Towards brain-scale modelling of the human cerebral blood flow: hybrid approach and high performance computing
The brain microcirculation plays a key role in cerebral physiology and neuronal activation. In the case of degenerative diseases such as Alzheimer's, severe deterioration of the microvascular networks (e.g. vascular occlusions) limits blood flow, and thus the supply of oxygen and nutrients to the cortex, eventually resulting in neuronal death. In addition to functional neuroimaging, modelling is a valuable tool to investigate the impact of structural variations of the microvasculature on blood flow and mass transfer. In the brain microcirculation, the capillary bed contains the smallest vessels (1-10 μm in diameter) and presents a mesh-like structure embedded in the cerebral tissue. It is the main site of molecular exchange between blood and neurons. The capillary bed is fed and drained by larger arteriolar and venular tree-like vessels (10-100 μm in diameter). Over the last decades, standard network approaches have significantly advanced our understanding of blood flow, mass transport and regulation mechanisms in the human brain microcirculation. By averaging flow equations over the vascular cross-sections, such approaches yield a one-dimensional model that involves far fewer variables than a full three-dimensional resolution of the flow. However, because of the high density of capillaries, such approaches are still computationally limited to relatively small volumes (<100 mm³). This constraint prevents applications at clinically relevant scales, since standard imaging techniques only yield much larger volumes (~100 cm³), with a resolution of 1-10 mm³. To get around this computational cost, we present a hybrid approach for blood flow modelling where the capillaries are replaced by a continuous medium. This substitution makes sense since the capillary bed is dense and space-filling over a cut-off length of ~50 μm. In this continuum, blood flow is characterized by effective properties (e.g. permeability) at the scale of a much larger representative volume.
Furthermore, the domain is discretized on a coarse grid using the finite volume method, yielding an important computational gain. The arteriolar and venular trees cannot be homogenized because of their quasi-fractal structure, so the network approach is used to model blood flow in the larger vessels. The main difficulty of the hybrid approach is to develop a proper coupling model at the points where arteriolar or venular vessels are connected to the continuum. Indeed, high pressure gradients build up at capillary scale in the vicinity of the coupling points and must be properly described at the continuum scale. Such multiscale coupling has never been discussed in the context of brain microcirculation. Taking inspiration from the Peaceman "well model" developed for petroleum engineering, our coupling model relies on using analytical solutions of the pressure field in the neighbourhood of the coupling points. The resulting equations yield a single linear system to solve for both the network part and the continuum (strong coupling). The accuracy of the hybrid model is evaluated by comparison with a classical network approach, both for very simple synthetic architectures involving no more than two couplings and for more complex ones, with anatomical arteriolar and venular trees displaying a large number of couplings. We show that the present approach is very accurate, since relative pressure errors are lower than 6%. This lays the groundwork for introducing additional levels of complexity in the future (e.g. non-uniform hematocrit). With a view to large-scale simulations and extension to mass transport, the hybrid approach has been implemented in a C++ code designed for high performance computing. It has been fully parallelized using the Message Passing Interface standard and specialized libraries (e.g. PETSc). Since the present work is part of a larger project involving several collaborators, special care has been taken in developing efficient coding strategies.
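For reference, the classical Peaceman well model from petroleum engineering (shown here in its standard 2D form for an isotropic permeability and a square grid cell of side Δx) relates the numerically computed cell pressure to the pressure at a source of much smaller radius through the analytical radial solution; the coupling idea above adapts this to connect vessel outlets to the capillary continuum:

```latex
% Peaceman well model: flow rate q exchanged between the well (pressure p_w,
% radius r_w) and the grid cell whose computed pressure is p_0
q \;=\; \frac{2\pi k h}{\mu\,\ln(r_0/r_w)}\,\bigl(p_0 - p_w\bigr),
\qquad r_0 \approx 0.2\,\Delta x
```

where k is the permeability, h the cell thickness, μ the fluid viscosity, and r_0 the "equivalent radius" at which the analytical logarithmic pressure profile matches the numerically computed cell pressure.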
Changes in water and carbon in Australian vegetation in response to climate change
Australia has experienced pronounced climate change since 1950, especially in forested areas, where a declining trend in annual precipitation has occurred. However, the interaction between forests and water at multiple scales, in different geographical locations, under different management regimes and in different forest types with diverse species is not fully understood. Therefore, some interactions between forests and hydrological variables, and in particular whether the changes are mediated by management or climate, remain controversial. This thesis investigates the responses of Australia's terrestrial ecosystems to both historical and projected climate change using remote sensing data and ecohydrological models. The thesis is structured in seven chapters, of which five are research chapters.
Vegetation dynamics and sensitivity to precipitation change on the Australian continent during the long drought period (2002-2010) are explored in Chapter 2 using multi-source vegetation indices (VIs; the normalized difference vegetation index (NDVI) and leaf area index (LAI)) and gridded climate data. During the drought, precipitation and VIs declined across 90% and 80% of the continent, respectively, compared to the baseline period of 2000-2001. The most dramatic declines in VIs occurred in open shrublands near the centre of Australia and in southwestern Australia, coinciding with significant reductions in precipitation and soil moisture. Overall, a strong relationship between water (precipitation and soil moisture) and VIs was detected in places where the decline in precipitation was severe. Of the five major vegetation types, cropland showed the highest sensitivity to water change, followed by grassland and woody savanna. Open shrublands showed moderate sensitivity to water change, while evergreen broadleaf forests showed only a slight sensitivity to soil moisture change. Although there was no consistent significant relationship between precipitation and VIs of evergreen broadleaf forests, forests in southeastern Australia, where precipitation had declined since 1997, appear to have become more sensitive to precipitation change than those in southwestern Australia.
The attribution of the impacts of climate change and vegetation on streamflow change at the catchment scale in southwestern Australia is described in Chapter 3. This region has been characterized by intensive warming and drying since 1970. Along with these significant climate changes, dramatic declines in streamflow have occurred across the region. Here, 79 catchments were analyzed using the Mann-Kendall trend test, Pettitt's change point test, and the theoretical framework of the Budyko curve to study changes in the rainfall-runoff relationship and the effects of climate and vegetation change on streamflow. A declining trend and a relatively consistent change point (2000) in streamflow were found in most catchments, with over 40 catchments showing significant declines (p < 0.05, -20% to -80%) between the two periods 1982-2000 and 2001-2011. Most of the catchments have been shifting towards a more water-limited climate condition since 2000. Although streamflow is strongly related to precipitation for the period 1982 to 2011, changes in vegetation (land cover/use change and vegetation growth) dominated the decrease in streamflow in about two-thirds of the catchments. The contributions of precipitation, temperature and vegetation to streamflow change varied with catchment characteristics and climate conditions.
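The Mann-Kendall test used here is non-parametric: its S statistic counts, over all pairs of observations, whether the later value exceeds the earlier one. A minimal sketch of S and its normal approximation (without the tie correction that production implementations include):

```python
import math

def mann_kendall(x):
    """Return the Mann-Kendall S statistic and Z score (no tie correction)."""
    n = len(x)
    s = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            s += (x[j] > x[i]) - (x[j] < x[i])   # sign of x_j - x_i
    var_s = n * (n - 1) * (2 * n + 5) / 18       # variance of S under H0
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)           # continuity-corrected Z score
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z
```

A monotonically increasing series gives the maximum S = n(n-1)/2 and a large positive Z; |Z| > 1.96 corresponds to a significant trend at the 5% level.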
In Chapter 4, the magnitude and trend of water use efficiency (WUE) of forest ecosystems in Australia, and their response to drought from 1982 to 2014, were analyzed using a modified version of the Community Atmosphere Biosphere Land Exchange (CABLE) land surface model in the BIOS2 modelling environment. Instead of relying solely on the ratio of gross primary productivity (GPP) to evapotranspiration (ET) as WUE (GPP/ET), the ratio of net primary productivity (NPP) to transpiration (ETr) (NPP/ETr) was also adopted to understand the response of vegetation to drought more comprehensively. For the study period, national average annual forest WUE was 1.39 ± 0.80 g C kg⁻¹ H2O for GPP/ET and 1.48 ± 0.28 g C kg⁻¹ H2O for NPP/ETr. WUE increased across the entire study area during this period (at a rate of 0.003 g C kg⁻¹ H2O yr⁻¹ for GPP/ET, p < 0.005, and 0.0035 g C kg⁻¹ H2O yr⁻¹ for NPP/ETr, p < 0.01), whereas different trends were detected in different biomes. A significantly increasing trend in annual WUE was only found in woodland areas, due to greater increases in GPP and NPP than in ET and ETr. The exception was the eucalyptus open forest area, where ET and ETr decreased more than GPP and NPP. The response of WUE to drought was further analyzed using the standardised precipitation-evapotranspiration index (SPEI) at 1-48 month scales. Since 1982, more severe (SPEI < -1) and more frequent droughts (over ca. 8 years) have occurred in the north than in the southwest and southeast of Australia. The response of WUE to drought varied significantly regionally and across forest types, due to the different responses of carbon sequestration and water consumption to drought.
The cumulative lagged effect of drought on monthly WUE derived from NPP/ETr was consistent and relatively short and stable between biomes (< 4 months), but notably varied for WUE based on GPP/ET, with a long time lag (mean of 16 months).
As Chapters 2-4 confirmed that climate change has been playing an important role in water yield and vegetation dynamics in Australia, the responses of water yield and carbon sequestration to projected future climate change scenarios were investigated using the Water Supply Stress Index and Carbon (WaSSI-C) ecohydrology model in Chapter 5. The model was calibrated with the latest water and carbon observations from the OzFlux network. The performance of the WaSSI-C model was assessed against streamflow (Q) measurements from 222 Hydrologic Reference Stations (HRSs) in Australia. Across the 222 HRSs, the WaSSI-C model generally captured the spatial variability of mean annual and monthly Q, as evaluated by the coefficient of determination (R² = 0.1-1.0), Nash-Sutcliffe efficiency (NSE = -0.4-0.97), and root mean squared error normalized by Q (RMSE/Q = 0.01-2.2). Then 19 Global Climate Models (GCMs) from the Coupled Model Intercomparison Project phase 5 (CMIP5), across all Representative Concentration Pathways (RCP2.6, RCP4.5, RCP6.0 and RCP8.5), were used to investigate the potential impacts of climate change on water and carbon fluxes. Compared with the baseline period of 1995-2015 across the 222 HRSs, temperature was projected to rise by an average of 0.56 to 2.49 °C by 2080, while annual precipitation was projected to vary significantly. All RCPs showed a similar spatial pattern of change in projected Q and GPP by 2080; however, the magnitude varied widely among the 19 GCMs. Overall, future climate change may result in a significant reduction in Q but may be accompanied by an increase in ecosystem productivity. Mean annual Q was projected to decrease by 5-211 mm yr⁻¹ (34%-99%) by 2080, with over 90% of the watersheds declining. In contrast, GPP was projected to increase by 17-255 g C m⁻² yr⁻¹ (2%-17%) by 2080 relative to 1995-2015 in southeastern Australia. A significant limitation of the WaSSI-C model is that it only runs serially.
High resolution simulations at the continental scale are therefore not only computationally expensive but also present a run-time memory burden.
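The Nash-Sutcliffe efficiency used in the streamflow evaluation above compares the model's squared errors with the variance of the observations; NSE = 1 is a perfect fit, and NSE = 0 means the model is no better than predicting the observed mean:

```python
def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 - sum((obs-sim)^2) / sum((obs-mean(obs))^2)."""
    mean_obs = sum(obs) / len(obs)
    err = sum((o - s) ** 2 for o, s in zip(obs, sim))   # model error energy
    var = sum((o - mean_obs) ** 2 for o in obs)         # observation variance
    return 1.0 - err / var
```

Negative values, as seen at some of the 222 stations, mean the simulation performs worse than the mean-flow baseline.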
In Chapter 6, the model was parallelized (and renamed dWaSSI-C) using distributed-memory (Message Passing Interface, MPI) and shared-memory (Open Multi-Processing, OpenMP) parallelism techniques, an approach that proved very effective in reducing run-time and memory use. Using the parallelized model, several experiments were carried out to simulate water and carbon fluxes over the Australian continent, testing the sensitivity of the model to input data-sets of different resolutions as well as to its WUE parameter for different vegetation types. These simulations completed within minutes using dWaSSI-C, which would not have been possible with the serial version. Results show that the model is able to simulate the seasonal cycle of GPP reasonably well when compared to observations at four eddy flux sites in Australia. The sensitivity analysis showed that simulated GPP was more sensitive to WUE during the Australian summer than in winter, and that woody savannas and grasslands showed higher sensitivity than evergreen broadleaf forests and shrublands. With the parallelized dWaSSI-C model, it will now be much easier and faster to conduct continental-scale analyses of the impacts of climate change and land cover change on water and carbon.
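Because each grid cell's water and carbon balance is computed independently of its neighbours in such a model, the parallelization is naturally a map over cells. A schematic in Python, using threads as a stand-in for the MPI/OpenMP implementation; `simulate_cell` is a hypothetical toy water balance, not the dWaSSI-C kernel:

```python
from concurrent.futures import ThreadPoolExecutor

def simulate_cell(cell):
    """Hypothetical per-cell model: annual runoff from monthly precip and PET."""
    precip, pet = cell
    return sum(max(p - e, 0.0) for p, e in zip(precip, pet))  # toy balance

def run_serial(cells):
    return [simulate_cell(c) for c in cells]

def run_parallel(cells, nworkers=4):
    # Cells are independent, so they can be mapped to workers in any order;
    # ex.map still returns results in input order.
    with ThreadPoolExecutor(max_workers=nworkers) as ex:
        return list(ex.map(simulate_cell, cells))
```

A useful correctness check after parallelizing a real model is that the parallel run reproduces the serial results exactly.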
Overall, vegetation and water in Australian ecosystems have become very sensitive to climate change after a considerable decline in streamflow. Australian ecosystems, especially in temperate Australia, are projected to experience warmer and drier climate conditions with increasing drought risk. However, predictions varied significantly between climate models owing to their individual uncertainties. The impacts of different forest management scenarios should be studied to find the best land use pattern under the changing climate. Forest management methods, such as thinning and reforestation, may be employed to mitigate the impacts of drought on water yield and carbon sequestration in the future.
Proceedings, MSVSCC 2015
The Virginia Modeling, Analysis and Simulation Center (VMASC) of Old Dominion University hosted the 2015 Modeling, Simulation & Visualization Student Capstone Conference on April 16th. The Capstone Conference features students from undergraduate and graduate Modeling and Simulation degree programs and related fields at many colleges and universities. Students present their research to an audience of fellow students, faculty, judges, and other distinguished guests. For the students, these presentations afford the opportunity to impart their innovative research to members of the M&S community from academic, industry, and government backgrounds. Also participating in the conference are faculty and judges who have volunteered their time to directly support their students' research, facilitate the various conference tracks, serve as judges for each of the tracks, and provide overall assistance to the conference. 2015 marks the ninth year of the VMASC Capstone Conference for Modeling, Simulation and Visualization. This year our conference attracted a number of fine student-written papers and presentations, resulting in a total of 51 research works being presented. This year's conference had record attendance thanks to the support from the various departments at Old Dominion University, other local universities, and the United States Military Academy at West Point. We greatly appreciate all of the work and energy that went into this year's conference; it truly was a highly collaborative effort that resulted in a very successful symposium for the M&S community and all those involved. Below you will find a brief summary of the best papers and best presentations, with some simple statistics of the overall conference contributions, followed by a table of contents broken down by conference track category, with a copy of each included body of work.
Thank you again for your time and your contributions; this conference is designed to continuously evolve and adapt to better serve its authors and M&S supporters.
Dr. Yuzhong Shen, Graduate Program Director, MSVE; Capstone Conference Chair
John Shull, Graduate Student, MSVE; Capstone Conference Student Chair
Generating and auto-tuning parallel stencil codes
In this thesis, we present a software framework, Patus, which generates high performance stencil codes for different types of hardware platforms, including current multicore CPU and graphics processing unit architectures. The ultimate goals of the framework are productivity, portability (of both the code and performance), and achieving a high performance on the target platform.
A stencil computation updates every grid point in a structured grid based on the values of its neighboring points. This class of computations occurs frequently in scientific and general purpose computing (e.g., in partial differential equation solvers or in image processing), justifying the focus on this kind of computation.
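A canonical instance is the five-point Jacobi stencil, which replaces every interior grid point by the average of its four neighbours (a plain NumPy sketch, not Patus-generated code):

```python
import numpy as np

def jacobi_step(u):
    """One sweep of the five-point Jacobi stencil over the interior of u."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])
    return v
```

Boundary values stay fixed, and repeated sweeps converge towards the discrete harmonic solution of the Laplace equation on the grid.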
The proposed key ingredients to achieve the goals of productivity, portability, and performance are domain specific languages (DSLs) and the auto-tuning methodology.
The Patus stencil specification DSL allows the programmer to express a stencil computation concisely, independently of hardware architecture-specific details. Thus, it increases programmer productivity by relieving him or her of low-level programming model issues and of manually applying hardware platform-specific code optimization techniques. The use of domain specific languages also implies code reusability: once implemented, the same stencil specification can be reused on different hardware platforms, i.e., the specification code is portable across hardware architectures. Constructing the language to be geared towards a special purpose makes it amenable to more aggressive optimizations and therefore to potentially higher performance.
Auto-tuning provides performance and performance portability through automated adaptation of implementation-specific parameters to the characteristics of the hardware on which the code will run. By automating the process of parameter tuning (which essentially amounts to solving an integer programming problem whose objective function is the code's performance as a function of the parameter configuration), the system can also be used more productively than if the programmer had to fine-tune the code manually.
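Stripped to its essence, such an auto-tuner is a search over the parameter space that times each candidate configuration and keeps the fastest. A toy exhaustive version, tuning the block size of an illustrative blocked summation kernel (Patus's search space and heuristics are more sophisticated):

```python
import time

def blocked_sum(data, block):
    """Sum `data` in chunks of `block` elements (the tunable kernel)."""
    total = 0.0
    for start in range(0, len(data), block):
        total += sum(data[start:start + block])
    return total

def autotune(kernel, data, candidates, repeats=3):
    """Time each candidate parameter value and return the fastest one."""
    best_param, best_time = None, float("inf")
    for param in candidates:
        t0 = time.perf_counter()
        for _ in range(repeats):
            kernel(data, param)
        elapsed = time.perf_counter() - t0
        if elapsed < best_time:
            best_param, best_time = param, elapsed
    return best_param
```

In practice the objective is noisy, so real auto-tuners repeat measurements and often replace exhaustive enumeration with search heuristics.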
We show performance results for a variety of stencils, for which Patus was used to generate the corresponding implementations. The selection includes stencils taken from two real-world applications: a simulation of the temperature within the human body during hyperthermia cancer treatment and a seismic application. These examples demonstrate the framework's flexibility and ability to produce high performance code.
Acceleration and Verification of Virtual High-throughput Multiconformer Docking
The work in this dissertation explores the use of the massive computational power available through modern supercomputers as a virtual laboratory to aid drug discovery. As of November 2013, Tianhe-2, the fastest supercomputer in the world, has a theoretical peak performance of 54,902 TFlop/s, or nearly 55 thousand trillion calculations per second. The Titan supercomputer located at Oak Ridge National Laboratory has 560,640 computing cores that can work in parallel to solve scientific problems. In order to harness this computational power to assist in drug discovery, tools are developed to aid in the preparation and analysis of high-throughput virtual docking screens, which predict how, and how well, small molecules bind to disease-associated proteins and might serve as novel drug candidates. Methods and software for performing large screens are developed that run on high-performance computer systems. The future potential and benefits of using these tools to study polypharmacology and revolutionize the pharmaceutical industry are also discussed.
Modeling Cardiovascular Hemodynamics Using the Lattice Boltzmann Method on Massively Parallel Supercomputers
Accurate and reliable modeling of cardiovascular hemodynamics has the potential to improve understanding of the localization and progression of heart diseases, which are currently the most common cause of death in Western countries. However, building a detailed, realistic model of human blood flow is a formidable mathematical and computational challenge. The simulation must combine the motion of the fluid, the intricate geometry of the blood vessels, continual changes in flow and pressure driven by the heartbeat, and the behavior of suspended bodies such as red blood cells. Such simulations can provide insight into factors like endothelial shear stress that act as triggers for the complex biomechanical events that can lead to atherosclerotic pathologies. Currently, it is not possible to measure endothelial shear stress in vivo, making these simulations a crucial component to understanding and potentially predicting the progression of cardiovascular disease. In this thesis, an approach for efficiently modeling the fluid movement coupled to the cell dynamics in real-patient geometries while accounting for the additional force from the expansion and contraction of the heart will be presented and examined. First, a novel method to couple a mesoscopic lattice Boltzmann fluid model to the microscopic molecular dynamics model of cell movement is elucidated. A treatment of red blood cells as extended structures, a method to handle highly irregular geometries through topology driven graph partitioning, and an efficient molecular dynamics load balancing scheme are introduced. These result in a large-scale simulation of the cardiovascular system, with a realistic description of the complex human arterial geometry, from centimeters down to the spatial resolution of red-blood cells. The computational methods developed to enable scaling of the application to 294,912 processors are discussed, thus empowering the simulation of a full heartbeat. 
Second, further extensions to enable the modeling of fluids in vessels with smaller diameters and a method for introducing the deformational forces exerted on the arterial flows from the movement of the heart by borrowing concepts from cosmodynamics are presented. These additional forces have a great impact on the endothelial shear stress. Third, the fluid model is extended to not only recover Navier-Stokes hydrodynamics, but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of several optimization methods, such as the use of deep-halo ghost cells, which, alongside hybrid programming models, reduce the cost of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics, are discussed. Fourth, the extension of these models to other research questions, such as clogging in microfluidic devices and determining the severity of coarctation of the aorta, is presented. Through this work, a validation of these methods is shown by taking real patient data and the measured pressure value before the narrowing of the aorta and predicting the pressure drop across the coarctation. Comparison with the measured pressure drop in vivo highlights the accuracy and potential impact of such patient-specific simulations. Finally, a method to enable the simulation of longer trajectories in time by discretizing both spatially and temporally is presented. In this method, a serial coarse iterator is used to initialize data at discrete time steps for a fine model that runs in parallel. This coarse solver is based on a larger time step and typically a coarser discretization in space. Iterative refinement enables the compute-intensive fine iterator to be modeled with temporal parallelization. The algorithm consists of a series of prediction-corrector iterations completing when the results have converged within a certain tolerance.
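The coarse/fine prediction-corrector scheme described here follows the parareal pattern; a serial sketch for the scalar decay equation u' = -λu shows its structure (illustrative only; in the thesis the fine solves on each time slice are the expensive lattice Boltzmann runs executed in parallel):

```python
lam, T, N = 2.0, 1.0, 10           # decay rate, time horizon, number of slices
dT = T / N

def coarse(u, dt):                  # cheap propagator: one backward-Euler step
    return u / (1 + lam * dt)

def fine(u, dt, substeps=100):      # expensive propagator: many small steps
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1 - lam * h)
    return u

def parareal(u0, iterations=5):
    U = [u0] * (N + 1)
    for n in range(N):                          # initial serial coarse sweep
        U[n + 1] = coarse(U[n], dT)
    for _ in range(iterations):                 # prediction-corrector iterations
        F = [fine(U[n], dT) for n in range(N)]  # independent: parallelizable
        V = [u0] * (N + 1)
        for n in range(N):
            # new coarse prediction, corrected by the coarse/fine discrepancy
            V[n + 1] = coarse(V[n], dT) + F[n] - coarse(U[n], dT)
        U = V
    return U
```

After k iterations the first k slices match the serial fine solution exactly, so the iteration converges in at most N steps; the speedup comes from stopping far earlier, once the correction falls below tolerance.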
Combined, these developments allow large fluid models to be simulated for longer time durations than previously possible.
A distributed and agent-based approach for coupled problems in computational engineering (Ein verteilter und agentenbasierter Ansatz für gekoppelte Probleme der rechnergestützten Ingenieurwissenschaften)
Challenging questions in science and engineering often require decoupling a complex problem and focusing on isolated sub-problems first. The individual solutions can later be combined to obtain the result for the full question. A similar technique is applied in numerical modeling, where software solvers for subsets of the coupled problem might already exist and can be used directly.
This thesis describes a software environment capable of combining multiple software solvers, the result being a new, combined model.
Two design decisions were crucial from the beginning: first, every sub-model keeps full control of its execution; second, the source code of the sub-model requires only minimal adaptation. The sub-models themselves choose when to issue communication calls, with no outer synchronisation mechanism required.
The coupling of heterogeneous hardware is supported as well as the use of homogeneous compute clusters. Furthermore, the coupling framework allows sub-solvers to be written in different programming languages. Also, each of the sub-models may operate on its own spatial and temporal scales.
The next challenge was to allow the coupling of potentially thousands of software agents, so as to utilise today's petascale hardware. For this purpose, a specific coupling framework was designed and implemented, combining the experience from the previous work with the additions required to cope with the targeted number of coupled sub-models.
The large number of interacting models required a much more dynamic approach, in which the agents automatically detect their communication partners at runtime. This eliminates the need to explicitly specify the coupling graph a priori. Agents are allowed to enter (and leave) the simulation at any time, with the coupling graph changing accordingly.

Since many problems in engineering are highly complex, it is often useful to break them down into individual sub-problems. These sub-problems can then be tackled separately and later combined into the overall solution. A similar approach is followed in numerical modelling: complex software is built step by step, with software solvers for the individual parts developed separately first.
This thesis describes a software system that can combine a multitude of independent software solvers.
Each sub-model continues to behave like an independent program; to this end, it is wrapped in a software agent. Coupling requires only minimal additions to the sub-model's source code. This is made possible by the structure of the communication between the sub-models: it leaves the models in control of the communication calls and requires no intervention by a superordinate instance for synchronisation.
Some sub-models are optimised for specific hardware; the interplay of heterogeneous hardware therefore had to be considered alongside homogeneous compute clusters. Furthermore, the coupling framework allows different programming languages to be combined. Like the program flow, the model parameters, such as the spatial and temporal scales, may also differ from sub-model to sub-model.
This thesis further presents an approach for coupling thousands of software agents into one large model, which is required when the resources of today's petascale compute clusters are to be used. To this end, the previous framework was redesigned, since the large number of coupled models demands a considerably more dynamic communication structure. The agents of the sub-models can join (or leave) a running simulation, and the global coupling relations adapt accordingly.