Modeling Cardiovascular Hemodynamics Using the Lattice Boltzmann Method on Massively Parallel Supercomputers
Accurate and reliable modeling of cardiovascular hemodynamics has the potential to improve understanding of the localization and progression of heart disease, currently the most common cause of death in Western countries. However, building a detailed, realistic model of human blood flow is a formidable mathematical and computational challenge. The simulation must combine the motion of the fluid, the intricate geometry of the blood vessels, continual changes in flow and pressure driven by the heartbeat, and the behavior of suspended bodies such as red blood cells. Such simulations can provide insight into factors like endothelial shear stress that act as triggers for the complex biomechanical events that can lead to atherosclerotic pathologies. Because endothelial shear stress currently cannot be measured in vivo, these simulations are a crucial component of understanding, and potentially predicting, the progression of cardiovascular disease. This thesis presents and examines an approach for efficiently modeling the fluid movement coupled to the cell dynamics in real-patient geometries while accounting for the additional force from the expansion and contraction of the heart.

First, a novel method to couple a mesoscopic lattice Boltzmann fluid model to a microscopic molecular dynamics model of cell movement is elucidated. A treatment of red blood cells as extended structures, a method to handle highly irregular geometries through topology-driven graph partitioning, and an efficient molecular dynamics load-balancing scheme are introduced. Together these enable a large-scale simulation of the cardiovascular system with a realistic description of the complex human arterial geometry, from centimeters down to the spatial resolution of red blood cells. The computational methods developed to scale the application to 294,912 processors, enabling the simulation of a full heartbeat, are discussed.

Second, further extensions to enable the modeling of fluids in vessels with smaller diameters, and a method for introducing the deformational forces exerted on arterial flows by the movement of the heart, borrowing concepts from cosmodynamics, are presented. These additional forces have a great impact on the endothelial shear stress.

Third, the fluid model is extended to recover not only Navier-Stokes hydrodynamics but also a wider range of Knudsen numbers, which is especially important in micro- and nano-scale flows. The tradeoffs of optimization methods, such as deep-halo ghost cells, that alongside hybrid programming models reduce the cost of such higher-order models and enable efficient modeling of extreme regimes of computational fluid dynamics are discussed.

Fourth, the extension of these models to other research questions, such as clogging in microfluidic devices and determining the severity of coarctation of the aorta, is presented. These methods are validated by taking real patient data and the measured pressure value before the narrowing of the aorta and predicting the pressure drop across the coarctation. Comparison with the pressure drop measured in vivo highlights the accuracy and potential impact of such patient-specific simulations.

Finally, a method to enable the simulation of longer trajectories in time by discretizing both spatially and temporally is presented. In this method, a serial coarse iterator initializes data at discrete time steps for a fine model that runs in parallel. The coarse solver uses a larger time step and typically a coarser spatial discretization. Iterative refinement enables the compute-intensive fine iterator to be run with temporal parallelization. The algorithm consists of a series of predictor-corrector iterations that complete when the results have converged within a given tolerance. Combined, these developments allow large fluid models to be simulated for longer time durations than previously possible.
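This coarse/fine predictor-corrector scheme mirrors the structure of the parareal algorithm. As a rough illustration only, the sketch below applies that iteration to the toy ODE y' = -y; the propagators, the test problem, and all names are illustrative assumptions, not the thesis's solver.

```python
# A minimal parareal-style sketch: a cheap serial coarse propagator
# initializes the time grid, expensive fine sweeps (parallel in practice)
# refine it, and a predictor-corrector loop runs until convergence.
import numpy as np

def coarse(y, t0, t1):
    """Cheap propagator: one large explicit Euler step for y' = -y."""
    return y + (t1 - t0) * (-y)

def fine(y, t0, t1, steps=100):
    """Expensive propagator: many small Euler steps."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        y = y + dt * (-y)
    return y

def parareal(y0, t_grid, iterations=5, tol=1e-8):
    n = len(t_grid) - 1
    y = np.empty(n + 1)
    y[0] = y0
    for i in range(n):                      # serial coarse initialization
        y[i + 1] = coarse(y[i], t_grid[i], t_grid[i + 1])
    for _ in range(iterations):
        # Each fine sweep depends only on the previous iterate, so every
        # interval can be assigned to its own processor.
        f = [fine(y[i], t_grid[i], t_grid[i + 1]) for i in range(n)]
        y_new = np.empty_like(y)
        y_new[0] = y0
        for i in range(n):                  # serial correction sweep
            y_new[i + 1] = (coarse(y_new[i], t_grid[i], t_grid[i + 1])
                            + f[i]
                            - coarse(y[i], t_grid[i], t_grid[i + 1]))
        if np.max(np.abs(y_new - y)) < tol: # converged within tolerance
            return y_new
        y = y_new
    return y

t = np.linspace(0.0, 2.0, 9)
print(parareal(1.0, t))                     # approximates exp(-t)
```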
Compiler-driven data layout transformations for network applications
This work approaches the little-studied topic of compiler optimisations directed at network applications.

It starts by investigating whether there exist any fundamental differences between application domains that justify the development and tuning of domain-specific compiler optimisations. It presents an automated approach capable of identifying domain-specific workload characterisations and presenting them in a readily interpretable format based on decision trees. The generated workload profiles summarise key resource utilisation issues and enable compiler engineers to address the highlighted bottlenecks. By applying this methodology to data-intensive network infrastructure applications, it shows that data organisation is the key obstacle to overcome in order to achieve high performance.
It therefore proposes and evaluates three specialised data transformations (structure splitting, array regrouping, and software caching) against the industrial EEMBC networking benchmarks and real-world data sets. On the one hand, it demonstrates that speedups of up to 2.62 can be achieved; on the other, it shows that no single solution performs equally well across different network traffic scenarios.
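As a rough illustration of the first of these transformations, the sketch below uses NumPy to contrast an array-of-structures layout with a split hot/cold layout; the field names and record sizes are assumptions chosen for illustration, not taken from the thesis or the EEMBC benchmarks.

```python
# A minimal sketch of structure splitting, using NumPy to make the
# memory-layout effect visible in Python.
import numpy as np

N = 1_000_000

# Array-of-structures: each 64-byte record interleaves a frequently
# accessed "hot" field (next_hop) with rarely touched "cold" payload.
aos = np.zeros(N, dtype=[("next_hop", np.int32), ("cold", np.uint8, 60)])

# After structure splitting, hot and cold fields live in separate arrays,
# so a scan over next_hop touches densely packed data instead of striding
# across whole records.
hot = np.zeros(N, dtype=np.int32)          # contiguous hot field
cold = np.zeros((N, 60), dtype=np.uint8)   # cold payload kept elsewhere

# The hot-path loop reads only the hot data after splitting.
total_aos = aos["next_hop"].sum()          # strided: one int per 64 B
total_split = hot.sum()                    # unit-stride access
assert total_aos == total_split
```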
Hence, to address this issue, an adaptive software caching scheme for high-frequency route lookup operations is introduced, and its effectiveness is again evaluated against the EEMBC networking benchmarks and real-world data sets, achieving speedups of up to 3.30 and 2.27. The results clearly demonstrate that adaptive data organisation schemes are necessary to ensure optimal performance under varying network loads.
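Below is a minimal sketch of what such an adaptive software cache might look like, assuming an LRU eviction policy and a hit-rate-driven resize rule; none of these choices are claimed to match the scheme evaluated in the thesis.

```python
# An illustrative adaptive route-lookup cache: a small LRU front-end to a
# slow full-table lookup, periodically resized according to the observed
# hit rate.
from collections import OrderedDict

class AdaptiveRouteCache:
    def __init__(self, lookup_fn, size=256, min_size=64, max_size=4096):
        self.lookup_fn = lookup_fn          # slow full-table lookup
        self.size = size
        self.min_size, self.max_size = min_size, max_size
        self.cache = OrderedDict()          # maintained in LRU order
        self.hits = self.misses = 0

    def lookup(self, dst_ip):
        if dst_ip in self.cache:
            self.cache.move_to_end(dst_ip)  # refresh LRU position
            self.hits += 1
            return self.cache[dst_ip]
        self.misses += 1
        route = self.lookup_fn(dst_ip)      # fall back to the full table
        self.cache[dst_ip] = route
        while len(self.cache) > self.size:
            self.cache.popitem(last=False)  # evict least recently used
        return route

    def adapt(self):
        """Called periodically (e.g. every few thousand packets): grow
        the cache when locality is poor, shrink it when traffic is
        highly concentrated."""
        total = self.hits + self.misses
        if total == 0:
            return
        hit_rate = self.hits / total
        if hit_rate < 0.5 and self.size < self.max_size:
            self.size *= 2                  # capture more flows
        elif hit_rate > 0.9 and self.size > self.min_size:
            self.size //= 2                 # working set fits; save space
        self.hits = self.misses = 0
```

The resize thresholds here are arbitrary; the point is that the cache's configuration is revisited at run time rather than fixed at compile time.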
Finally, this research addresses another issue introduced by data transformations such as array regrouping and software caching: the need for static analysis to allow efficient resource allocation. This thesis proposes a static code analyser that allows the automatic resource analysis of source code containing list and tree structures. The tool applies a combination of amortised analysis and separation logic to real code and is able to evaluate the type and resource usage of existing data structures, which can be used to compute global resource consumption values for full data-intensive network applications.
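To give a flavour of static resource analysis over list-manipulating code, the toy sketch below counts syntactic list allocations in a function's AST; it merely stands in for, and is far cruder than, the amortised-analysis and separation-logic machinery described above.

```python
# A toy static "resource" analyser: walk a function's AST and report the
# number of list-allocating expressions as a naive heap-cell bound
# (loops, aliasing, and deallocation are deliberately ignored).
import ast
import inspect

def count_list_allocations(fn):
    """Return the number of list-allocating expressions in fn's source."""
    tree = ast.parse(inspect.getsource(fn))
    allocs = 0
    for node in ast.walk(tree):
        if isinstance(node, (ast.List, ast.ListComp)):
            allocs += 1                     # literal or comprehension
        elif (isinstance(node, ast.Call)
              and isinstance(node.func, ast.Name)
              and node.func.id == "list"):
            allocs += 1                     # explicit list(...) call
    return allocs

def build_table():
    rows = [r for r in range(10)]           # one comprehension
    return [rows, list(rows)]               # one literal + one list() call

print(count_list_allocations(build_table))  # -> 3
```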
An Investigation of Search Behaviour in Search-Based Unit Test Generation
As software testing is a laborious and error-prone task, automation is desirable. Search-based unit test generation applies evolutionary search algorithms to generate software tests and, in the context of unit testing object-oriented software, Genetic Algorithms (GAs) are frequently employed to generate unit tests that maximise code coverage.
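As a rough illustration of the kind of fitness such searches optimise, the sketch below uses a normalised branch distance to guide a toy GA toward a hard-to-hit branch; the unit under test, the distance normalisation, and the search operators are all illustrative assumptions, far simpler than the object-oriented test generation studied in this thesis.

```python
# A minimal branch-distance fitness sketch: candidates are scored by how
# close they come to taking an uncovered branch, and a simple GA drives
# that distance to zero.
import random

def unit_under_test(x):
    if x == 4242:             # target branch: hard to hit by chance
        return "covered"
    return "missed"

def branch_distance(x, target=4242):
    """0 when the target branch is taken; grows with |x - target|."""
    d = abs(x - target)
    return d / (d + 1.0)      # normalise into [0, 1)

def search(pop_size=50, generations=200, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(-10_000, 10_000) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=branch_distance)       # lower fitness is better
        if branch_distance(pop[0]) == 0.0:
            return pop[0]                   # branch covered
        parents = pop[: pop_size // 2]
        # Offspring: averaging crossover plus a small integer mutation.
        pop = parents + [
            (rng.choice(parents) + rng.choice(parents)) // 2
            + rng.randint(-10, 10)
            for _ in range(pop_size - len(parents))
        ]
    return pop[0]

print(unit_under_test(search()))            # usually prints "covered"
```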
Although GAs are effective at generating tests that achieve high code coverage, they are still far from being able to satisfy all test goals (e.g., covering all branches). While some general limitations are known, there is still a lack of understanding of the search behaviour during the optimisation, making it hard to identify the factors that render a search problem difficult.
Therefore, this thesis aims to investigate the search behaviour when GAs are applied to generate object-oriented unit tests and, more specifically, identify the reasons why the search fails to achieve the desired test goals. This is achieved by investigating (1) the fitness landscape structure and the impact of its features on the generation of unit tests and (2) the influence of population diversity on generating potential unit tests. Based on the outcome of this investigation, the impact of test case reduction on the landscape features and population diversity is also investigated.
Our results reveal that classical indicators of rugged fitness landscapes suggest well-searchable problems in the case of unit test generation, yet the fitness landscape for most problem instances is dominated by detrimental plateaus. Furthermore, increasing diversity does not have a beneficial effect on coverage in general, but it may improve coverage when diversity is promoted adaptively. In fact, increasing diversity has a negative impact on individual length, which can also be mitigated with adaptive diversity. Applying test case reduction seems promising for improving the landscape structure and reducing the negative side effects of diversity on length, but it has no considerable impact on search performance.