164 research outputs found

    Fluid-structure interaction modelling of a patient-specific arteriovenous access fistula

    This research forms part of an interdisciplinary project that aims to improve the detailed understanding of the haemodynamics and vascular mechanics in arteriovenous shunts that are required for haemodialysis treatments. A combination of new PC-MRA imaging and computational modelling of in vivo blood flow aims to determine the haemodynamic conditions that may lead to the high failure rate of vascular access in these circumstances. This thesis focuses on developing a patient-specific fluid-structure interaction (FSI) model of a PC-MRA imaged arteriovenous fistula. The numerical FSI model is developed and simulated within the commercial multiphysics simulation package ANSYS® Academic Research, Release 16. The blood flow is modelled as a Newtonian fluid with the finite-volume method solver ANSYS® Fluent®. A pulsatile mass-flow boundary condition is applied at the artery inlet and a three-element Windkessel model at the artery and vein outlets. ANSYS® Mechanical™, a finite element method solver, is used to model the nonlinear behaviour of the vessel walls. The artery and vein walls are assumed to follow a third-order Yeoh model, and are differentiated by thickness and by material strength characteristics. The staggered FSI model is configured and executed in ANSYS® Workbench™, forming a semi-implicit coupling of the blood flow and vessel wall models. This work shows the effectiveness of combining a number of stabilisation techniques to simultaneously overcome the added-mass effect and optimise the efficiency of the overall model. The PC-MRA data, fluid model, and FSI model show almost identical flow features in the fistula; this applies in particular to a flow recirculation region in the vein that could potentially lead to fistula failure.
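The three-element Windkessel outlet condition mentioned above can be sketched in a few lines; this is a minimal illustrative forward-Euler discretisation, not the thesis's ANSYS implementation, and the parameter values (Rp, Rd, C) and inflow waveform are made up.

```python
import math

def windkessel3(q, dt, Rp, Rd, C, p_c0=0.0):
    """Three-element Windkessel: proximal resistance Rp, distal
    resistance Rd, compliance C. Integrates the distal (capacitor)
    pressure p_c with forward Euler and returns the outlet pressure
    trace p(t) = p_c(t) + Rp * q(t)."""
    p_c = p_c0
    pressures = []
    for qi in q:
        # Governing ODE: C * dp_c/dt = q - p_c / Rd
        p_c += dt * (qi - p_c / Rd) / C
        pressures.append(p_c + Rp * qi)
    return pressures

# Illustrative half-rectified pulsatile inflow (arbitrary units)
dt = 0.01
flow = [max(math.sin(2 * math.pi * t * dt), 0.0) for t in range(500)]
p = windkessel3(flow, dt, Rp=0.05, Rd=1.0, C=1.5)
```

With a constant inflow the outlet pressure settles at (Rp + Rd) * q, which is a quick sanity check on any Windkessel implementation.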

    Techniques for the reverse engineering of banking malware

    Malware attacks are a significant and frequently reported problem, adversely affecting the productivity of organisations and governments worldwide. The well-documented consequences of malware attacks include financial loss, data loss, reputation damage, infrastructure damage, theft of intellectual property, compromise of commercial negotiations, and national security risks. Mitigation activities involve a significant amount of manual analysis. Therefore, there is a need for automated techniques for malware analysis to identify malicious behaviours. Research into automated techniques for malware analysis covers a wide range of activities. This thesis consists of a series of studies: an analysis of banking malware families and their common behaviours, an emulated command and control environment for dynamic malware analysis, a technique to identify similar malware functions, and a technique for the detection of ransomware. An analysis of the nature of banking malware, its major malware families, behaviours, variants, and inter-relationships is provided in this thesis. In doing this, this research takes a broad view of malware analysis, starting with the implementation of the malicious behaviours through to detailed analysis using machine learning. The broad approach taken in this thesis differs from some other studies that approach malware research in a more abstract sense. A disadvantage of approaching malware research without domain knowledge is that important methodology questions may not be considered. Large datasets of historical malware samples are available for countermeasures research. However, due to the age of these samples, the original malware infrastructure is no longer available, often restricting malware operations to initialisation functions only. To address this absence, an emulated command and control environment is provided.
This emulated environment provides full control of the malware, enabling the capabilities of the original in-the-wild operation, while enabling feature extraction for research purposes. A major focus of this thesis has been the development of a machine learning function similarity method with a novel feature encoding that increases feature strength. This research develops techniques to demonstrate that a machine learning model trained on similarity features from one program can find similar functions in another, unrelated program. This finding can lead to the development of generic similar-function classifiers that can be packaged and distributed in reverse engineering tools such as IDA Pro and Ghidra. Further, this research examines the use of API call features for the identification of ransomware and shows that a failure to consider malware analysis domain knowledge can lead to weaknesses in experimental design. In this case, we show that existing research has difficulty in discriminating between ransomware and benign cryptographic software. This thesis by publication has developed techniques to advance the discipline of malware reverse engineering, in order to minimize harm due to cyber-attacks on critical infrastructure, government institutions, and industry.
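The idea of measuring similarity between functions can be illustrated with a simple stand-in; the bigram encoding over API-call sequences and the cosine measure below are illustrative assumptions, not the thesis's novel feature encoding, and the API names are just examples.

```python
# Illustrative function-similarity sketch: encode each function as
# counts of adjacent API-call pairs (bigrams), then compare encodings
# with cosine similarity.
from collections import Counter
from math import sqrt

def bigram_features(api_calls):
    """Count adjacent API-call pairs as simple order-aware features."""
    return Counter(zip(api_calls, api_calls[1:]))

def cosine_similarity(f1, f2):
    """Cosine similarity between two sparse feature Counters."""
    dot = sum(f1[k] * f2[k] for k in f1)
    n1 = sqrt(sum(v * v for v in f1.values()))
    n2 = sqrt(sum(v * v for v in f2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0

# Two hypothetical functions that share most of their call sequence
a = ["CreateFileW", "WriteFile", "CloseHandle", "CryptEncrypt"]
b = ["CreateFileW", "WriteFile", "CloseHandle", "ExitProcess"]
score = cosine_similarity(bigram_features(a), bigram_features(b))
```

A real similarity model would learn which features matter rather than weighting all bigrams equally, but the pipeline shape (encode, then compare) is the same.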

    An Efficient Execution Model for Reactive Stream Programs

    Stream programming is a paradigm where a program is structured as a set of computational nodes connected by streams. Focusing on data moving between computational nodes via streams, this programming model fits well for applications that process long sequences of data. We call such applications reactive stream programs (RSPs) to distinguish them from stream programs with rather small and finite input data. In stream programming, concurrency is expressed implicitly via communication streams. This helps to reduce the complexity of parallel programming. For this reason, stream programming has gained popularity as a programming model for parallel platforms. However, it is also challenging to analyse and improve performance without an understanding of the program's internal behaviour. This thesis targets an efficient execution model for deploying RSPs on parallel platforms. This execution model includes a monitoring framework to understand the internal behaviour of RSPs, scheduling strategies for RSPs on uniform shared-memory platforms, and mapping techniques for deploying RSPs on heterogeneous distributed platforms. The foundation of the execution model is a study of the performance of RSPs in terms of throughput and latency. This study includes quantitative formulae for throughput and latency, and the identification of factors that influence these performance metrics. Based on the study of RSP performance, this thesis exploits characteristics of RSPs to derive effective scheduling strategies on uniform shared-memory platforms. Aiming to optimise both throughput and latency, these scheduling strategies are implemented in two heuristic-based schedulers. Both are designed to be centralised so as to provide load balancing for RSPs with dynamic behaviour as well as dynamic structures. The first uses the notion of positive and negative data demands on each stream to determine the scheduling priorities.
This scheduler is independent of the runtime system. The second requires the runtime system to provide the position information for each computational node in the RSP, and uses that to decide the scheduling priorities. Our experiments show that both schedulers provide similar performance while being significantly better than a reference implementation without dynamic load balancing. Also based on the study of RSP performance, we present in this thesis two new heuristic partitioning algorithms which are used to map RSPs onto heterogeneous distributed platforms. These are Kernighan-Lin Adaptation (KLA) and Congestion Avoidance (CA), where the main objective is to optimise throughput. This is a multi-parameter optimisation problem to which existing graph partitioning algorithms are not applicable. Compared to the generic meta-heuristic simulated annealing algorithm, both proposed algorithms achieve equally good or better results. KLA is faster for small benchmarks but slower for large ones. In contrast, CA is always orders of magnitude faster, even for very large benchmarks.
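The demand-driven priority idea can be sketched as follows; the exact priority rule, node names, and data structures here are assumptions for illustration, not the thesis's actual heuristic. The sketch rewards nodes whose input stream has pending data (positive demand) and penalises nodes whose output stream is already backed up (negative demand).

```python
# Illustrative demand-based scheduling sketch for a stream program:
# pick the runnable node with the highest (input backlog - output
# backlog) so that nodes relieving congestion run first.
import heapq

def pick_next(nodes, queue_len):
    """nodes: {name: (input_stream, output_stream)};
    queue_len: {stream_name: pending item count}.
    Returns the highest-priority runnable node, or None."""
    heap = []
    for name, (inp, out) in nodes.items():
        if queue_len.get(inp, 0) == 0:
            continue  # nothing to consume, so the node is not runnable
        priority = queue_len.get(inp, 0) - queue_len.get(out, 0)
        heapq.heappush(heap, (-priority, name))  # max-heap via negation
    return heapq.heappop(heap)[1] if heap else None

# Hypothetical two-node pipeline: raw -> filter -> clean -> encode -> out
nodes = {"filter": ("raw", "clean"), "encode": ("clean", "out")}
chosen = pick_next(nodes, {"raw": 5, "clean": 1, "out": 0})
```

A centralised scheduler would call something like `pick_next` in a loop, re-reading queue lengths each iteration, which is what lets it balance load as the program's behaviour and structure change.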

    Service Replication in Wireless Mobile Ad Hoc Networks

    This thesis addresses the management of services in mobile ad hoc networks (MANETs). MANETs are wireless networks of mobile units that manage themselves in a decentralised fashion, without any superordinate organisation. The network topology of a MANET changes dynamically with the movement of its autonomous participants. Sensor networks, personal area networks and satellite networks are typical examples of such MANETs. With the growing importance of wireless networking of mobile devices, MANETs have become an important field of research in recent years. In disaster management, civil rescue operations or military scenarios, their infrastructure-less self-organisation can make MANETs the only feasible means of communication. The mobile nodes of a MANET cooperate to jointly provide essential network services such as routing and data transport. Resources such as the bandwidth between nodes, the computing power of the mobile devices and their battery capacity are typically severely limited and variable. Sharing the available resources is therefore a necessity for the efficient functioning of a MANET. Service-oriented architectures (SOAs) are a suitable paradigm for managing shared resources. If available resources are regarded as services, their use can be handled as a service request. In this context, SOAs provide abstraction, encapsulation, loose coupling, discoverability of resources, and the autonomy that is essential for MANETs. The application of SOAs to MANETs is therefore receiving increasing attention in research.

    Structural failure and fracture of immature bone

    Radiological features alone do not allow discrimination between accidental paediatric long bone fractures and those caused by child abuse. Therefore, for those cases where the child is unable to communicate coherently, there is a clinical need to elucidate the mechanisms behind each fracture to provide a forensic biomechanical tool for clinical implementation. Five-month-old ovine femurs and tibiae were used as surrogates for paediatric specimens and were subjected to micro-CT scans to obtain their geometrical and material properties. A novel methodology to align long bones so that they would be loaded in a state of pure bending and torsion was developed and compared against the use of a standard anatomical coordinate system. The second moment of area and its coefficient of variation (COV) for each alignment method were calculated to ascertain the reference axes that minimised the effect of eccentric loading. A Wilcoxon signed-rank test showed a significant reduction in the COV of the second moment of area using this new method, indicating that the bone has a more regular cross-section when this methodology is implemented. The algorithm generated the locations of subject-specific landmarks that can be used as a reference to align the bones in experimental testing. A low-cost platform that synchronised the data acquisition from the tensile testing machine and the strain gauges was built and used with a high-speed camera to capture the fracture pattern in four-point bending at three strain rates and in torsion at two different strain rates, following commonly reported case histories. Finite element (FE) models of ovine tibiae in their optimised alignment were generated to replicate the fracture patterns that were obtained. Fracture initiation and propagation were simulated through the use of element deletion with a maximum principal strain criterion.
The experiments consistently produced transverse, oblique, and spiral fractures, which correlated with the finite element analysis, demonstrating that this pipeline can now be adapted for use in forensic analysis.
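The alignment comparison above rests on the second moment of area of each cross-section and its COV along the bone. A minimal sketch of that calculation (with made-up pixel coordinates standing in for real micro-CT slices) might look like this:

```python
# Illustrative sketch: second moment of area I_xx of a cross-section
# given as pixel-centre coordinates, and the coefficient of variation
# (COV) of I_xx across slices, used to compare alignment methods.
from statistics import mean, pstdev

def second_moment_area(points, pixel_area=1.0):
    """I_xx about the section centroid for (x, y) pixel centres:
    sum of (y - centroid_y)^2 times the pixel area."""
    cy = mean(y for _, y in points)
    return sum((y - cy) ** 2 for _, y in points) * pixel_area

def cov(values):
    """Coefficient of variation: population std dev over the mean."""
    return pstdev(values) / mean(values)

# Two hypothetical 3-pixel slices along the diaphysis; a better-aligned
# bone yields more similar I_xx values, hence a lower COV.
slices = [[(0, 0), (0, 1), (0, 2)], [(0, 0), (0, 2), (0, 4)]]
moments = [second_moment_area(s) for s in slices]
alignment_cov = cov(moments)
```

In the thesis the same statistic is computed per alignment method and the methods are compared with a Wilcoxon signed-rank test; the sketch only shows the per-bone quantity being compared.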

    Haemodynamics analysis of carotid artery stenosis and carotid artery stenting

    Carotid stenosis is a local narrowing of the carotid artery, usually found in the internal carotid artery. The presence of a high-degree stenosis in a carotid artery may provoke transition from laminar to turbulent flow during part of the cardiac cycle. Turbulence in blood flow can influence haemodynamic parameters such as velocity profiles, shear stress and pressure, which are important in wall remodelling. Patients with severe stenosis can be treated with a minimally invasive clinical procedure, carotid artery stenting (CAS). Although CAS has been widely adopted in clinical practice, the complication of in-stent restenosis (ISR) has been reported after CAS. The incidence of ISR is influenced by stent characteristics and vessel geometry, and correlates strongly with regions of neointimal hyperplasia (NH). Therefore, the main purpose of this study is to provide more insight into the haemodynamics in stenosed carotid arteries and in post-CAS geometries via computational simulation. The first part of the thesis presents a computational study of flow features in a stenotic carotid artery bifurcation using two computational approaches: large eddy simulation (LES) and Reynolds-averaged Navier-Stokes (RANS) incorporating the Shear Stress Transport model with γ-Reθ transition (SST-Tran). The computed flow patterns are compared with those measured with particle image velocimetry (PIV). The results show that both SST-Tran and LES can predict the PIV results reasonably well, but LES is more accurate, especially at locations distal to the stenosis where flow is highly disturbed. The second part of the thesis determines how stent strut design may influence the development of ISR at the carotid artery bifurcation following CAS. Key parameters that can be indicative of ISR are obtained for different stent designs and compared; these include low and oscillating wall shear stress (WSS), high residence time, and wall stress.
A computationally efficient methodology is employed to reproduce stent strut geometry. This method facilitates the accurate reconstruction of the actual stent geometry and details of the strut configuration, and its inclusion in the fluid domain. Computational simulations of flow patterns and low-density lipoprotein (LDL) transport are carried out in order to investigate spatial and temporal variations of WSS and LDL accumulation in the stented carotid geometries. Furthermore, finite element (FE) analysis is performed to evaluate the wall stress distribution with different stent designs. The results reveal that the closed-cell stent design is more likely to create atheroprone and procoagulant flow conditions, causing a larger area to be exposed to low WSS and an elevated oscillatory shear index, as well as inducing higher wall stress, compared to the open-cell stent design. This study also demonstrates the suitability of the SST-Tran and LES models in capturing complex flow patterns in the post-stenotic region.
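The oscillatory shear index (OSI) used above has a standard definition, OSI = 0.5 * (1 - |mean WSS| / mean |WSS|), which is 0 for unidirectional flow and approaches 0.5 for fully oscillatory flow. A minimal sketch of computing it from a sampled WSS trace (illustrative values, not simulation output):

```python
# Illustrative OSI computation from a signed wall-shear-stress trace
# sampled over one cardiac cycle (1-D component along the flow axis).
def osi(wss_samples):
    """OSI = 0.5 * (1 - |time-averaged WSS| / time-averaged |WSS|)."""
    n = len(wss_samples)
    mean_signed = abs(sum(wss_samples)) / n
    mean_mag = sum(abs(w) for w in wss_samples) / n
    return 0.5 * (1.0 - mean_signed / mean_mag) if mean_mag else 0.0

# Unidirectional flow gives OSI = 0; a perfectly reversing trace gives 0.5
osi_uni = osi([1.0, 2.0, 3.0])
osi_rev = osi([1.0, -1.0])
```

In a CFD post-processing pipeline the same formula is applied per wall element, with the WSS treated as a vector and the sums taken as time integrals over the cycle.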

    Bridging spatiotemporal scales in biomechanical models for living tissues: from the contracting esophagus to cardiac growth

    Appropriate functioning of our body is determined by the mechanical behavior of our organs. An improved understanding of the biomechanical functioning of the soft tissues making up these organs is therefore crucial for the choice for, and development of, efficient clinical treatment strategies focused on patient-specific pathophysiology. This doctoral dissertation describes the passive and active biomechanical behavior of gastrointestinal and cardiovascular tissue, both in the short and long term, through computer models that bridge the cell, tissue and organ scale. Using histological characterization, mechanical testing and medical imaging techniques, virtual esophagus and heart models are developed that simulate the patient-specific biomechanical organ behavior as accurately as possible. In addition to the diagnostic value of these models, the developed modeling technology also allows us to predict the acute and chronic effect of various treatment techniques, through e.g. drugs, surgery and/or medical equipment. Consequently, this dissertation offers insights that will have an unmistakable impact on the personalized medicine of the future.

    Security-Driven Software Evolution Using A Model Driven Approach

    A high security level must be guaranteed in applications in order to mitigate risks during the deployment of information systems in open network environments. However, a significant number of legacy systems remain in use, which poses security risks to enterprise assets due to the outdated technologies used and the lack of security concerns when they were designed. Software reengineering is a way to improve their security levels in a systematic manner. Model-driven engineering is an approach in which models, as defined by their types, direct the execution of the process. The aim of this research is to explore how a model-driven approach can facilitate software reengineering driven by security demands. The research in this thesis involves the following three phases. Firstly, legacy system understanding is performed using reverse engineering techniques. The task of this phase is to reverse engineer the legacy system into UML models, partition it into subsystems with the help of a model slicing technique, and detect existing security mechanisms to determine whether the security provided in the legacy system satisfies the user's security objectives. Secondly, security requirements are elicited using a risk analysis method: the process of analysing key aspects of the legacy system in terms of security. A new risk assessment method, taking asset, threat and vulnerability into consideration, is proposed and used to elicit the security requirements, which are generated in a specific format to direct the subsequent security enhancement. Finally, security enhancement of the system is performed using the proposed ontology-based security pattern approach. At this stage, security patterns derived from security expertise and fulfilling the elicited security requirements are selected and integrated into the legacy system models with the help of the proposed security ontology. The proposed approach is evaluated through a selected case study.
Based on the analysis, conclusions are drawn and future research is discussed at the end of this thesis. The results show that this thesis contributes an effective, reusable and suitable evolution approach for software security.
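A risk assessment over asset, threat and vulnerability can be illustrated with a common multiplicative scoring scheme; this is an assumption for illustration only, the thesis's actual method, scales, and the risk names below are not from the source.

```python
# Illustrative risk assessment sketch: score each risk as the product
# of asset value, threat likelihood and vulnerability severity (each
# on a hypothetical 1-5 scale), then rank risks by score.
def risk_score(asset_value, threat_likelihood, vulnerability_severity):
    """Higher product = higher remediation priority."""
    return asset_value * threat_likelihood * vulnerability_severity

def rank_risks(risks):
    """risks: {name: (asset, threat, vulnerability)} ->
    names sorted from highest to lowest risk score."""
    return sorted(risks, key=lambda n: risk_score(*risks[n]), reverse=True)

# Hypothetical findings from a legacy-system review
legacy_risks = {
    "plaintext_passwords": (5, 4, 5),
    "verbose_error_pages": (2, 3, 2),
    "unpatched_tls": (4, 3, 4),
}
ordered = rank_risks(legacy_risks)
```

The ranked output is the kind of structured security-requirement input that the subsequent pattern-selection phase would consume.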
    • 

    corecore