
    Increasing the scalability and the speedup of a fish school distributed simulator

    This work presents improvements in the speedup and scalability of a distributed fish school simulator. These results were achieved using a new communication strategy for the logical processes (LPs) and a change to the neighbor selection algorithm applied to each fish in each simulation step. In the proposed approach, each sender process anticipates the future data needs of its neighbors, a strategy that reduces communication time by limiting the number of messages exchanged among LPs. The new neighbor selection algorithm was developed to avoid unnecessary work: reducing the instructions executed for each simulated fish in each simulation step significantly shortened the simulation time.
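
    The abstract does not give the neighbor selection algorithm itself. As a hedged illustration of how per-fish neighbor queries can be made cheaper, the sketch below uses a uniform spatial grid so that each fish scans only nearby cells instead of the whole school; the cell size, radius, and function names are assumptions, not the simulator's code.

        # Hypothetical grid-based neighbor selection for one simulation step;
        # illustrative only, not the simulator's actual algorithm.
        from collections import defaultdict
        from math import hypot

        def build_grid(positions, cell):
            """Hash each fish index into a coarse grid cell of side `cell`."""
            grid = defaultdict(list)
            for i, (x, y) in enumerate(positions):
                grid[(int(x // cell), int(y // cell))].append(i)
            return grid

        def neighbors(i, positions, grid, cell, radius):
            """Return fish within `radius` of fish i, scanning only the 3x3 nearby cells."""
            x, y = positions[i]
            cx, cy = int(x // cell), int(y // cell)
            found = []
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    for j in grid.get((cx + dx, cy + dy), []):
                        if j != i and hypot(positions[j][0] - x, positions[j][1] - y) <= radius:
                            found.append(j)
            return found

        positions = [(0.5, 0.5), (1.2, 0.4), (8.0, 8.0)]
        grid = build_grid(positions, cell=2.0)
        print(neighbors(0, positions, grid, cell=2.0, radius=1.5))  # -> [1]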

    Swarming Reconnaissance Using Unmanned Aerial Vehicles in a Parallel Discrete Event Simulation

    Current military affairs indicate that future warfare requires safer, more accurate, and more fault-tolerant weapons systems. Unmanned Aerial Vehicles (UAVs) are one answer to this requirement. Technology in the UAV arena is moving toward smaller and more capable systems that are becoming available at a fraction of the cost. Exploiting the advances in these miniaturized flying vehicles is the aim of this research. How are UAVs to be employed by the future military? The concept of operations for a micro-UAV system is adopted from nature: the flocking of birds, the movement of a school of fish, and the swarming of bees, among others. All of these natural phenomena have a common thread: a global action resulting from many small individual actions. This emergent behavior is the aggregate result of many simple interactions occurring within the flock, school, or swarm. In a similar manner, a more robust weapon system based on emergent behavior has no weakest link, because the system itself is made up of simple interactions among hundreds or thousands of homogeneous UAVs. The global system in this research is referred to as a swarm. Losing one or a few individual unmanned vehicles would not dramatically impact the swarm's ability to complete the mission or cause harm to any human operator. Swarming reconnaissance is the emergent behavior of swarms performing a reconnaissance operation. The design of a swarming reconnaissance mission is studied in depth, and a taxonomy of passive reconnaissance applications is developed to address feasibility. Evaluation of algorithms for swarm movement, communication, sensor input/analysis, targeting, and network topology results in priorities for each model's desired features. After a thorough selection process of the available implementations, a subset of those models is integrated and built upon, resulting in a simulation that explores the innovations of swarming UAVs.
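
    The swarm behavior is described only as emerging from simple local interactions. As a hedged illustration of that idea, the sketch below applies Reynolds-style cohesion, separation, and alignment updates per agent; the rule weights and data layout are assumptions, not the thesis's UAV model.

        # Minimal boids-style update: a global flocking action emerges from
        # simple local rules.  Illustrative only, not the thesis's swarm model.
        def step(agents, r=5.0, w_coh=0.01, w_sep=0.05, w_ali=0.05):
            """agents: list of dicts with 'pos' and 'vel' as (x, y) tuples."""
            new = []
            for a in agents:
                nbrs = [b for b in agents
                        if b is not a
                        and (b['pos'][0] - a['pos'][0]) ** 2 + (b['pos'][1] - a['pos'][1]) ** 2 <= r * r]
                vx, vy = a['vel']
                if nbrs:
                    # Cohesion: steer toward the local centre of mass.
                    cx = sum(b['pos'][0] for b in nbrs) / len(nbrs) - a['pos'][0]
                    cy = sum(b['pos'][1] for b in nbrs) / len(nbrs) - a['pos'][1]
                    # Separation: steer away from nearby neighbours.
                    sx = sum(a['pos'][0] - b['pos'][0] for b in nbrs)
                    sy = sum(a['pos'][1] - b['pos'][1] for b in nbrs)
                    # Alignment: match the average neighbour velocity.
                    ax = sum(b['vel'][0] for b in nbrs) / len(nbrs) - vx
                    ay = sum(b['vel'][1] for b in nbrs) / len(nbrs) - vy
                    vx += w_coh * cx + w_sep * sx + w_ali * ax
                    vy += w_coh * cy + w_sep * sy + w_ali * ay
                new.append({'pos': (a['pos'][0] + vx, a['pos'][1] + vy), 'vel': (vx, vy)})
            return new

        flock = [{'pos': (0.0, 0.0), 'vel': (1.0, 0.0)},
                 {'pos': (1.0, 1.0), 'vel': (0.0, 1.0)},
                 {'pos': (2.0, 0.0), 'vel': (1.0, 1.0)}]
        flock = step(flock)   # one simulation step of the whole swarm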

    Architecture aware parallel programming in Glasgow parallel Haskell (GPH)

    General purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e. a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communications hierarchy. This thesis investigates a programming model that aims to share the responsibility of task placement, load balance, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or distribute large units of work. We define a semantics for the constructs that specifies the sets of PEs that each construct identifies, and we check four properties of the semantics using QuickCheck. We report a preliminary investigation of architecture-aware programming models that abstract over the new constructs. In particular, we propose architecture-aware evaluation strategies and skeletons. We investigate three common paradigms, namely data parallelism, divide-and-conquer, and nested parallelism, on hierarchical architectures with up to 224 cores. The results show that the architecture-aware programming model consistently delivers better speedup and scalability than existing constructs, together with a dramatic reduction in execution-time variability. We present a comparison of functional multicore technologies that reports some of the first ever multicore results for the Feedback Directed Implicit Parallelism (FDIP) and semi-explicit parallelism (GpH and Eden) languages. The comparison reflects the growing maturity of the field by systematically evaluating four parallel Haskell implementations on a common multicore architecture, contrasting the programming effort each language requires with the parallel performance delivered. We investigate the minimum thread granularity required to achieve satisfactory performance for three parallel functional language implementations on a multicore platform. The results show that GHC-GUM requires a larger thread granularity than Eden and GHC-SMP, and that the required granularity rises as the number of cores rises.
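
    The architecture-aware constructs themselves are Haskell-level. As a language-neutral illustration of the underlying granularity idea only (run small tasks locally to avoid communication, distribute only large units of work), here is a hedged Python sketch of a divide-and-conquer skeleton with a sequential cut-off; it is not the GpH API, and all names and the threshold value are illustrative.

        # Divide-and-conquer with a sequential threshold: tasks below the
        # cut-off stay local; larger tasks are farmed out to another worker.
        # Illustrates the granularity idea only, not the GpH constructs.
        from concurrent.futures import ProcessPoolExecutor
        from operator import add

        def split(xs):
            mid = len(xs) // 2
            return xs[:mid], xs[mid:]

        def dc(problem, threshold, divide, conquer, combine, pool=None):
            if len(problem) <= threshold:
                return conquer(problem)          # small task: run locally, no spawning
            left, right = divide(problem)
            if pool is not None:
                # Large task: ship one half to another worker, keep the other here.
                fut = pool.submit(dc, left, threshold, divide, conquer, combine)
                right_result = dc(right, threshold, divide, conquer, combine)
                return combine(fut.result(), right_result)
            return combine(dc(left, threshold, divide, conquer, combine),
                           dc(right, threshold, divide, conquer, combine))

        if __name__ == "__main__":
            data = list(range(100_000))
            with ProcessPoolExecutor() as pool:
                print(dc(data, 10_000, split, sum, add, pool))   # parallel sum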

    Dynamic load balancing of parallel road traffic simulation

    The objective of this research was to investigate, develop and evaluate dynamic load-balancing strategies for the parallel execution of microscopic road traffic simulations. Urban road traffic simulation presents an irregular and dynamically varying computational load for a parallel processor system. The dynamic nature of road traffic simulation leads to uneven load distribution during simulation, even for a system that starts off with an even load distribution. Load balancing is a potential way of achieving improved performance by reallocating work from highly loaded processors to lightly loaded processors, leading to a reduction in the overall computational time. In dynamic load balancing, workloads are adjusted continually or periodically throughout the computation. In this thesis load-balancing strategies were evaluated and some load-balancing policies developed. A load index and a profitability determination algorithm were developed and used to enhance two load-balancing algorithms. One of the algorithms exhibits local communications and distributed load evaluation between neighbouring partitions (the diffusion algorithm), and the other exhibits both local and global communications while the decision making is centralized (the MaS algorithm). The enhanced algorithms were implemented and synthesized with a research parallel traffic simulator. The performance of the research parallel traffic simulator, optimized with the two modified dynamic load-balancing strategies, was studied.
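
    The abstract names a diffusion algorithm in which neighbouring partitions exchange load locally. The sketch below shows one first-order diffusion step of that general scheme; the load index values, partition graph, and diffusion coefficient are illustrative assumptions and do not reproduce the thesis's enhanced algorithms.

        # One first-order diffusion load-balancing step: each partition
        # exchanges load only with its graph neighbours.  Illustrative only.
        def diffusion_step(load, neighbours, alpha=0.25):
            """load: dict partition -> load index; neighbours: dict partition -> list."""
            transfer = {p: 0.0 for p in load}
            for p in load:
                for q in neighbours[p]:
                    # Positive difference means p is more loaded: p sheds work to q.
                    transfer[p] -= alpha * (load[p] - load[q])
            return {p: load[p] + transfer[p] for p in load}

        load = {'A': 10.0, 'B': 2.0, 'C': 6.0}
        neighbours = {'A': ['B'], 'B': ['A', 'C'], 'C': ['B']}
        print(diffusion_step(load, neighbours))   # -> {'A': 8.0, 'B': 5.0, 'C': 5.0}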

    A Genetic Algorithm for UAV Routing Integrated with a Parallel Swarm Simulation

    This research investigation addresses the problem of routing and simulating swarms of UAVs. Sorties are modeled as instantiations of the NP-complete Vehicle Routing Problem, and this work uses genetic algorithms (GAs) to provide a fast and robust algorithm for a priori and dynamic routing applications. Swarms of UAVs are modeled based on extensions of Reynolds' swarm research and are simulated on a Beowulf cluster as a parallel computing application using the Synchronous Environment for Emulation and Discrete Event Simulation (SPEEDES). In a test suite, standard measures such as benchmark problems, best published results, and parallel metrics are used as performance measures. The GA consistently provides efficient and effective results for a variety of VRP benchmarks. Analysis of the solution quality over time verifies that the GA exponentially improves solution quality and is robust to changing search landscapes, making it an ideal tool for employment in UAV routing applications. Parallel computing metrics calculated from the results of a PDES show that consistent speedup (almost linear in many cases) can be obtained using SPEEDES as the communication library for this UAV routing application. Results from the routing application and the parallel simulation are synthesized to produce a more advanced model for routing UAVs.
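
    The GA itself is not given in the abstract. As a hedged sketch of the general approach, the following minimal genetic algorithm evolves route permutations (a single-vehicle simplification of the VRP) with order crossover, swap mutation, and elitist selection; the population size, rates, and operators are illustrative assumptions, not the thesis's GA.

        # Minimal GA over route permutations; a sketch of the approach only.
        import random

        def route_length(route, dist):
            return sum(dist[route[i]][route[(i + 1) % len(route)]] for i in range(len(route)))

        def order_crossover(p1, p2):
            a, b = sorted(random.sample(range(len(p1)), 2))
            child = [None] * len(p1)
            child[a:b] = p1[a:b]                      # keep a slice of parent 1
            rest = [c for c in p2 if c not in child]  # fill the rest in parent 2's order
            for i in range(len(child)):
                if child[i] is None:
                    child[i] = rest.pop(0)
            return child

        def evolve(dist, pop_size=50, generations=200, mut_rate=0.2):
            n = len(dist)
            pop = [random.sample(range(n), n) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=lambda r: route_length(r, dist))
                survivors = pop[:pop_size // 2]                 # elitist selection
                children = []
                while len(survivors) + len(children) < pop_size:
                    c = order_crossover(*random.sample(survivors, 2))
                    if random.random() < mut_rate:              # swap mutation
                        i, j = random.sample(range(n), 2)
                        c[i], c[j] = c[j], c[i]
                    children.append(c)
                pop = survivors + children
            return min(pop, key=lambda r: route_length(r, dist))

        dist = [[0, 2, 9, 10],
                [2, 0, 6, 4],
                [9, 6, 0, 3],
                [10, 4, 3, 0]]
        best = evolve(dist)
        print(best, route_length(best, dist))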

    Scalable and Accurate Memory System Simulation

    Memory systems today possess more complexity than ever. On one hand, main memory technology has a much more diverse portfolio: beyond the mainstream DDR DRAMs, a variety of DRAM protocols have been proliferating in certain domains, and Non-Volatile Memory (NVM) finally has commodity main memory products, introducing more heterogeneity to the main memory media. On the other hand, the scale of computer systems, from personal computers and servers to high performance computing systems, has been growing in response to increasing computing demand, and memory systems have to keep scaling to avoid bottlenecking the whole system. However, current memory simulators cannot accurately or efficiently model these developments, making it hard for researchers and developers to evaluate or optimize memory system designs. In this study, we attack these issues from multiple angles. First, we develop a fast and validated cycle-accurate main memory simulator that can accurately model almost all existing DRAM protocols and some NVM protocols, and that can be easily extended to support upcoming protocols; we showcase this simulator by conducting a thorough characterization of existing DRAM protocols and provide insights on memory system design. Secondly, to efficiently simulate increasingly parallel memory systems, we propose a lax synchronization model that allows efficient parallel DRAM simulation. We build the first practical parallel DRAM simulator, which can speed up simulation by up to a factor of three with a single-digit percentage loss in accuracy compared to cycle-accurate simulation, and we develop mitigation schemes that further improve accuracy at no additional performance cost. Moreover, we discuss the limitations of cycle-accurate models and explore alternative ways of modeling DRAM. We propose a novel approach that converts DRAM timing simulation into a classification problem: predictions of DRAM latency are made for each memory request upon first sight, which makes the approach compatible with scalable architecture simulation frameworks. Prototypes based on various machine learning models demonstrate performance and accuracy results that make them a promising alternative to cycle-accurate models. Finally, for large-scale memory systems where data movement is often the performance-limiting factor, we propose a set of interconnect topologies and implement them in a parallel discrete event simulation framework. Simulation shows that their scalability and performance exceed those of existing topologies as system size and workloads increase.
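
    The classification idea can be pictured with a small hedged sketch: features of a memory request are mapped to a coarse latency class by an off-the-shelf classifier instead of replaying cycle-accurate DRAM state. The feature names, latency classes, and training data below are assumptions for illustration, not the dissertation's model.

        # Treating DRAM timing as classification: predict a latency class from
        # request features on first sight.  Illustrative sketch only.
        from sklearn.tree import DecisionTreeClassifier

        # Hypothetical features per request: [row_buffer_hit, bank_busy, queue_depth]
        X_train = [
            [1, 0, 2],   # row hit, idle bank, short queue
            [0, 0, 3],   # row miss
            [0, 1, 10],  # row conflict behind a long queue
            [1, 0, 1],
            [0, 1, 8],
        ]
        # Hypothetical latency classes: 0 = fast (hit), 1 = medium (miss), 2 = slow (conflict)
        y_train = [0, 1, 2, 0, 2]

        clf = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)
        print(clf.predict([[1, 0, 4], [0, 1, 12]]))   # e.g. -> [0 2]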

    Modelling of the downstream processing of a recombinant intracellular enzyme

    This thesis examines the generation and verification of robust and predictive models for key unit operations in the downstream processing of a recombinant intracellular enzyme. Computer models are increasingly used in the biotechnology industry to assess process feasibility, ensure appropriate specification and achieve efficient operation of bioprocesses. With increasing pressure to bring bioproducts to market in the shortest possible time and at low cost, it is essential that these models can be generated early in the development cycle with a minimum of time and expense. Regulatory concerns over the use of recombinant systems also require a highly consistent product and a large degree of confidence in any final design. Models must therefore be verified against real process data in order to demonstrate and improve their validity. The specific operations under study were cell disruption by high pressure homogenisation, enzyme fractionation and concentration by ammonium sulphate precipitation, and cell harvesting, debris removal and precipitate separation by disc stack centrifugation. Existing models based on the recovery of alcohol dehydrogenase (ADH) from natural bakers' yeast were applied to the processing of a protein engineered form of the enzyme (pe-ADH) and their validity assessed by comparison with experimental data obtained from small amounts of fermented recombinant material. Differences were observed in the processing of the natural and protein engineered enzymes and, although parameter re-evaluation was generally required to maintain model accuracy, the models were found to be generic in nature. Individual unit models were then arranged and linked in a modular fashion to produce a whole process model. Verification of the process model, based on recombinant system parameters, against pilot-scale data showed sufficient accuracy for use in feasibility studies (±30%). The inability to describe precipitate breakage in the disc stack centrifuge feed zone was the major source of model inaccuracy. The work highlighted the utility of scale-down systems as an aid to rapid and cost-effective model development. The importance of experimental verification in order to provide confidence in model accuracy and scalability was stressed, as was the need for modelling to be an integrated activity during bioprocess development.
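
    The modular linking of unit operation models can be pictured with a small hedged sketch: each unit operation is a function on a process stream, and the whole process model is their composition. The model forms, parameters and recoveries below are placeholders, not the thesis's validated models.

        # Illustrative modular process model: unit operations composed into a
        # whole process.  Parameter values and model forms are placeholders.
        import math

        def homogenise(stream, passes=3, k=0.8):
            """First-order release of intracellular enzyme over homogeniser passes."""
            released = stream['enzyme_total'] * (1 - math.exp(-k * passes))
            return {**stream, 'enzyme_soluble': released}

        def precipitate(stream, cut_recovery=0.9):
            """Ammonium sulphate cut recovering a fraction of the soluble enzyme."""
            return {**stream, 'enzyme_precipitated': stream['enzyme_soluble'] * cut_recovery}

        def centrifuge(stream, clarification=0.95):
            """Disc stack centrifugation recovering a fraction of the precipitate."""
            return {**stream, 'enzyme_recovered': stream['enzyme_precipitated'] * clarification}

        def whole_process(stream, units):
            for unit in units:
                stream = unit(stream)
            return stream

        feed = {'enzyme_total': 100.0}   # arbitrary units
        print(whole_process(feed, [homogenise, precipitate, centrifuge]))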

    Reliable massively parallel symbolic computing : fault tolerance for a distributed Haskell

    As the number of cores in manycore systems grows exponentially, the number of failures is also predicted to grow exponentially. Hence massively parallel computations must be able to tolerate faults. Moreover, new approaches to language design and system architecture are needed to address the resilience of massively parallel heterogeneous architectures. Symbolic computation has underpinned key advances in Mathematics and Computer Science, for example in number theory, cryptography, and coding theory. Computer algebra software systems facilitate symbolic mathematics, and developing these at scale has its own distinctive set of challenges, as symbolic algorithms tend to employ complex irregular data and control structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel High Performance Computing platforms. A key element of SymGridParII is a domain specific language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for scalable distributed-memory parallelism, and employs work stealing to load balance dynamically generated irregular task sizes. To investigate providing scalable fault tolerant symbolic computation we design, implement and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles faults, using task replication as a key recovery strategy, and supports load balancing with a fault tolerant work stealing protocol. The reliable scheduler is invoked with two fault tolerance primitives for implicit and explicit work placement, and with 10 fault tolerant parallel skeletons that encapsulate common parallel programming patterns. The user is oblivious to many failures; they are instead handled by the scheduler. An operational semantics describes small-step reductions on states, and a simple abstract machine for scheduling transitions and task evaluation is presented. It defines the semantics of supervised futures and the transition rules for recovering tasks in the presence of failure. The transition rules are demonstrated with a fault-free execution and three executions that recover from faults. The fault tolerant work stealing protocol has been abstracted into a Promela model, and the SPIN model checker is used to exhaustively search the state space of this model to validate a key resiliency property of the protocol: an initially empty supervised future on the supervisor node will eventually be full in the presence of all possible combinations of failures. The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision overheads remain consistently low when scaling up to 1400 cores, and low recovery overheads are observed in the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey mechanism has been developed for stress testing resiliency with random failure combinations; all unit tests pass in the presence of random failure, terminating with the expected results.
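
    The idea of a supervised future with replication-based recovery can be conveyed with a hedged, conceptual Python sketch: a supervisor re-submits a task when the replica running it is presumed failed, so the future it supervises eventually becomes full. This is an illustration of the concept only and does not correspond to HdpH-RS or its Haskell API; the timeout and replica count are assumptions.

        # Conceptual sketch of a supervised future with task replication; an
        # illustration of the idea only, not HdpH-RS or its API.
        from concurrent.futures import ProcessPoolExecutor

        def supervised_submit(task, arg, pool, replicas=3, timeout=5.0):
            """Supervise a future: if a replica times out or fails, submit another."""
            last_error = None
            for _ in range(replicas):
                future = pool.submit(task, arg)              # place one replica of the task
                try:
                    return future.result(timeout=timeout)    # the supervised future fills
                except Exception as exc:                     # presume the replica failed
                    last_error = exc
            raise RuntimeError("all replicas failed") from last_error

        def square(x):
            return x * x

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                print(supervised_submit(square, 7, pool))    # -> 49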

    Study of Fine-Grained, Irregular Parallel Applications on a Many-Core Processor

    This dissertation demonstrates the possibility of obtaining strong speedups for a variety of parallel applications versus the best serial and parallel implementations on commodity platforms. These results were obtained using the PRAM-inspired Explicit Multi-Threading (XMT) many-core computing platform, which is designed to efficiently support execution of both serial and parallel code and switching between the two. Biconnectivity: For finding the biconnected components of a graph, we demonstrate speedups of 9x to 33x on XMT relative to the best serial algorithm using a relatively modest silicon budget, and further evidence suggests that speedups of 21x to 48x are possible. For graph connectivity, we demonstrate that XMT outperforms two contemporary NVIDIA GPUs of similar or greater silicon area. Prior studies of parallel biconnectivity algorithms achieved at most a 4x speedup, but we could not find GPU biconnectivity code against which to compare. Triconnectivity: We present a parallel solution to the problem of determining the triconnected components of an undirected graph, obtaining significant speedups on XMT over the only published optimal (linear-time) serial implementation of a triconnected components algorithm running on a modern CPU. To our knowledge, no other parallel implementation of a triconnected components algorithm has been published for any platform. Burrows-Wheeler compression: We present novel work-optimal parallel algorithms for Burrows-Wheeler compression and decompression of strings over a constant alphabet, together with their empirical evaluation. To validate these theoretical algorithms, we implement them on XMT and show speedups of up to 25x for compression, and 13x for decompression, versus bzip2, the de facto standard implementation of Burrows-Wheeler compression. Fast Fourier transform (FFT): Using FFT as an example, we examine the impact that the adoption of some enabling technologies, including silicon photonics, would have on the performance of a many-core architecture. The results show that a single-chip many-core processor could potentially outperform a large high-performance computing cluster. Boosted decision trees: This chapter focuses on the hybrid memory architecture of the XMT computer platform, a key part of which is a flexible all-to-all interconnection network that connects processors to shared memory modules. First, to understand some recent advances in GPU memory architecture and how they relate to this hybrid memory architecture, we use microbenchmarks including list ranking. Then, we contrast the scalability of applications with that of routines: regardless of the scalability needs of full applications, some routines may involve smaller problem sizes, and in particular smaller levels of parallelism, perhaps even serial execution. To see how a hybrid memory architecture can benefit such applications, we simulate a computer with such an architecture and demonstrate the potential for a speedup of 3.3x over NVIDIA's most powerful GPU to date for XGBoost, an implementation of boosted decision trees, a timely machine learning approach. Boolean satisfiability (SAT): SAT is an important performance-hungry problem with applications in many domains. However, most work on parallelizing SAT solvers has focused on coarse-grained, mostly embarrassing parallelism. Here, we study fine-grained parallelism that can speed up existing sequential SAT solvers, showing the potential for speedups of up to 382x across a variety of problem instances. We hope that these results will stimulate future research.
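
    As a reference point for the Burrows-Wheeler chapter, here is a hedged, deliberately naive serial sketch of the Burrows-Wheeler transform and its inverse; the dissertation's algorithms are work-optimal and parallel, which this sketch is not.

        # Naive serial Burrows-Wheeler transform and inverse, for reference only.
        def bwt(s, end='\0'):
            s += end                                    # unique end-of-string sentinel
            rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
            return ''.join(rot[-1] for rot in rotations)

        def inverse_bwt(last, end='\0'):
            table = [''] * len(last)
            for _ in range(len(last)):
                table = sorted(last[i] + table[i] for i in range(len(last)))
            return next(row for row in table if row.endswith(end))[:-1]

        text = "banana"
        transformed = bwt(text)
        assert inverse_bwt(transformed) == text
        print(repr(transformed))   # -> 'annb\x00aa'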