
    Parallel Homologous Search With Hirschberg Algorithm: A Hybrid MPI-Pthreads Solution.

    In this paper, we apply two different parallel programming models, the message passing model using the Message Passing Interface (MPI) and the multithreaded model using Pthreads, to protein sequence homologous search. The protein sequence homologous search uses the Hirschberg algorithm for pair-wise sequence alignment.
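    The Hirschberg algorithm is the linear-space divide-and-conquer formulation of Needleman-Wunsch alignment. As a minimal sketch of its core building block, the Haskell function below computes the last row of the alignment score matrix in linear space; the scoring values are illustrative assumptions, and this is a sequential sketch, not the paper's MPI-Pthreads implementation.

```haskell
-- Core of Hirschberg's algorithm: the last row of the Needleman-Wunsch
-- score matrix for xs vs ys, computed in O(|ys|) space.
import Data.List (foldl')

-- Illustrative scoring scheme (not taken from the paper).
match, mismatch, gap :: Int
match    =  2
mismatch = -1
gap      = -2

nwScoreRow :: String -> String -> [Int]
nwScoreRow xs ys = foldl' step row0 xs
  where
    -- Row 0: aligning the empty prefix of xs against prefixes of ys.
    row0 = scanl (\acc _ -> acc + gap) 0 ys
    -- Each new row is computed from the previous row alone.
    step prev x = scanl combine (head prev + gap) (zip3 prev (tail prev) ys)
      where
        combine left (diag, up, y) =
          maximum [ diag + (if x == y then match else mismatch)
                  , up   + gap
                  , left + gap ]

main :: IO ()
main = print (nwScoreRow "AGTACGCA" "TATGC")
```

    The full algorithm runs this forward over one half of the first sequence and backward over the other, splits the second sequence where the two resulting rows sum to the maximum, and recurses on the two sub-problems.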

    Reliable massively parallel symbolic computing: fault tolerance for a distributed Haskell

    As the number of cores in manycore systems grows exponentially, the number of failures is also predicted to grow exponentially. Hence massively parallel computations must be able to tolerate faults. Moreover, new approaches to language design and system architecture are needed to address the resilience of massively parallel heterogeneous architectures. Symbolic computation has underpinned key advances in Mathematics and Computer Science, for example in number theory, cryptography, and coding theory. Computer algebra software systems facilitate symbolic mathematics. Developing these at scale has its own distinctive set of challenges, as symbolic algorithms tend to employ complex irregular data and control structures. SymGridParII is a middleware for parallel symbolic computing on massively parallel High Performance Computing platforms. A key element of SymGridParII is a domain specific language (DSL) called Haskell Distributed Parallel Haskell (HdpH). It is explicitly designed for scalable distributed-memory parallelism, and employs work stealing to load balance dynamically generated irregular task sizes. To investigate providing scalable fault tolerant symbolic computation, we design, implement and evaluate a reliable version of HdpH, HdpH-RS. Its reliable scheduler detects and handles faults, using task replication as a key recovery strategy. The scheduler supports load balancing with a fault tolerant work stealing protocol. The reliable scheduler is invoked with two fault tolerance primitives for implicit and explicit work placement, and 10 fault tolerant parallel skeletons that encapsulate common parallel programming patterns. The user is oblivious to many failures; they are instead handled by the scheduler. An operational semantics describes small-step reductions on states. A simple abstract machine for scheduling transitions and task evaluation is presented. It defines the semantics of supervised futures, and the transition rules for recovering tasks in the presence of failure. The transition rules are demonstrated with a fault-free execution, and three executions that recover from faults. The fault tolerant work stealing protocol has been abstracted into a Promela model. The SPIN model checker is used to exhaustively search the state space of this automaton to validate a key resiliency property of the protocol: an initially empty supervised future on the supervisor node will eventually be full in the presence of all possible combinations of failures. The performance of HdpH-RS is measured using five benchmarks. Supervised scheduling achieves a speedup of 757 with explicit task placement and 340 with lazy work stealing when executing Summatory Liouville on up to 1400 cores of an HPC architecture. Moreover, supervision overheads are consistently low, scaling up to 1400 cores. Low recovery overheads are observed in the presence of frequent failure when lazy on-demand work stealing is used. A Chaos Monkey mechanism has been developed for stress testing resiliency with random failure combinations. All unit tests pass in the presence of random failure, terminating with the expected results.
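    The supervised-future idea above can be illustrated with a toy model in plain concurrent Haskell: a supervisor holds an initially empty future and replicates the task whenever the simulated worker fails. This is a deliberately simplified sketch of the recovery strategy, not the HdpH-RS API.

```haskell
{-# LANGUAGE ScopedTypeVariables #-}
-- Toy model of a supervised future: replicate the task on failure until
-- the future is eventually full. Not the HdpH-RS API.
import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar (MVar, newEmptyMVar, putMVar, readMVar)
import Control.Exception (SomeException, try)

supervisedSpawn :: IO a -> IO (MVar a)
supervisedSpawn task = do
  future <- newEmptyMVar                  -- initially empty supervised future
  let supervise = do
        r <- try task                     -- run the task, catching "node failure"
        case r of
          Right v                   -> putMVar future v  -- future becomes full
          Left (_ :: SomeException) -> supervise         -- replicate the task
  _ <- forkIO supervise
  return future

main :: IO ()
main = do
  f <- supervisedSpawn (threadDelay 1000 >> return (42 :: Int))
  readMVar f >>= print                    -- blocks until the future is full
```

    The property model-checked with SPIN in the thesis corresponds to the guarantee this loop aims at: an initially empty supervised future is eventually filled despite failures.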

    Portable high-performance programs

    Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999. Includes bibliographical references (p. 159-169). By Matteo Frigo.

    The Use Of Cultural Algorithms To Learn The Impact Of Climate On Local Fishing Behavior In Cerro Azul, Peru

    Recently it has been found that the earth's oceans are warming at a pace that is 40% faster than predicted by a United Nations panel a few years ago. As a result, 2019 has become the warmest year on record for the earth's oceans. That is because the oceans have acted as a buffer by absorbing 93% of the heat produced by greenhouse gases [40]. The impact of oceanic warming has already been felt in the periodic warming of the Pacific Ocean as an effect of the ENSO process. The ENSO process is a cycle of warming and subsequent cooling of the Pacific Ocean that can last over a period of years. This cycle was first documented by Peruvian fishermen in the early 1600s, so it has been part of the environmental challenges presented to economic agents throughout the world since then. It has even been suggested that the cycle has increased in frequency over the years, perhaps in response to the overall issues related to global warming. Although the onset of the ENSO cycle might be viewed as a disruption of the fishing economy in a given area, there is some possibility that over time agents have been able to develop strategic responses to these changes so as to reduce the economic risk associated with them. At the time of our study, Cerro Azul, Peru was in the process of emerging from one of the largest ENSOs on record. This was perceived to be a great opportunity to see how the collective bodies of fishermen were able to alter their fishing strategies to deal with these more uncertain times. Our results suggest that the collective economic response of the fishermen indeed demonstrates an ability to respond to the unpredictability of climate change, but at a cost. It is clear that the fishermen have gained the collective knowledge over the years to produce a coordinated response that can be observed at a higher level. Of course, this knowledge can be used to coordinate activities only if it is communicated socially within the society. Although our data do not provide any explicit information about such communication, there is some indirect evidence that the adjustments in strategy are brought about by the increased exchange of experiences among the fishermen.

    Graph models of habitat mosaics

    Graph theory is a body of mathematics dealing with problems of connectivity, flow, and routing in networks ranging from social groups to computer networks. Recently, network applications have erupted in many fields, and graph models are now being applied in landscape ecology and conservation biology, particularly for applications couched in metapopulation theory. In these applications, graph nodes represent habitat patches or local populations and links indicate functional connections among populations (i.e. via dispersal). Graphs are models of more complicated real systems, and so it is appropriate to review these applications from the perspective of modelling in general. Here we review recent applications of network theory to habitat patches in landscape mosaics. We consider (1) the conceptual model underlying these applications; (2) formalization and implementation of the graph model; (3) model parameterization; (4) model testing, insights, and predictions available through graph analyses; and (5) potential implications for conservation biology and related applications. In general, and for a variety of ecological systems, we find the graph model a remarkably robust framework for applications concerned with habitat connectivity. We close with suggestions for further work on the parameterization and validation of graph models, and point to some promising analytic insights. © 2009 Blackwell Publishing Ltd/CNRS
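    For concreteness, such a patch graph can be represented directly as a weighted adjacency map, with nodes as habitat patches and edge weights as dispersal probabilities. The patch names, weights, and cutoff below are invented for illustration.

```haskell
-- A toy habitat mosaic as a weighted adjacency map.
import qualified Data.Map as M

type Patch  = String
type Mosaic = M.Map Patch [(Patch, Double)]  -- (neighbour, dispersal probability)

mosaic :: Mosaic
mosaic = M.fromList
  [ ("wetlandA", [("wetlandB", 0.30), ("forest1", 0.05)])
  , ("wetlandB", [("wetlandA", 0.30)])
  , ("forest1",  [("wetlandA", 0.05)])
  ]

-- Patches functionally connected to p: dispersal probability above a cutoff.
connected :: Double -> Patch -> Mosaic -> [Patch]
connected cutoff p g = [ q | (q, w) <- M.findWithDefault [] p g, w >= cutoff ]

main :: IO ()
main = print (connected 0.1 "wetlandA" mosaic)  -- ["wetlandB"]
```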

    Triaxial Accelerometer Error Coefficients Identification with a Novel Artificial Fish Swarm Algorithm

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques, and is widely utilized for optimization purposes. Triaxial accelerometer error coefficients are relatively unstable under environmental disturbances and aging of the instrument. Therefore, identifying triaxial accelerometer error coefficients accurately and at low cost is of great importance for improving the overall performance of a triaxial accelerometer-based strapdown inertial navigation system (SINS). In this study, a novel artificial fish swarm algorithm (NAFSA) is first introduced that eliminates the demerits of AFSA: a failure to use the artificial fishes' previous experiences, a lack of balance between exploration and exploitation, and high computational cost. In NAFSA, the functional behaviours and overall procedure of AFSA have been improved with some parameter variations. Second, a hybrid accelerometer error coefficient identification algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination leads to maximum utilization of the involved approaches for triaxial accelerometer error coefficient identification. Furthermore, the NAFSA-identified coefficients are verified with a 24-position verification experiment and a triaxial accelerometer-based SINS navigation experiment, and the merits of MCS-NAFSA are compared with those of the conventional calibration method and optimal AFSA. Finally, the results of both experiments demonstrate the high efficiency of MCS-NAFSA for triaxial accelerometer error coefficient identification.
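    For readers unfamiliar with AFSA, the sketch below illustrates the basic "prey" behaviour that NAFSA refines: sample a candidate within the fish's visual range and move toward it only if the objective improves, falling back to a small random move after repeated failures. The one-dimensional objective and all parameter values are assumptions for illustration, not those of the paper.

```haskell
-- Sketch of the AFSA "prey" behaviour for one artificial fish in 1-D.
-- Requires the `random` package. Parameters are illustrative assumptions.
import System.Random (randomRIO)

visual, stepSize :: Double
visual   = 0.5   -- how far the fish can "see"
stepSize = 0.1   -- maximum move per iteration

tryNumber :: Int
tryNumber = 5    -- attempts before giving up and moving randomly

-- "Food concentration" to maximise; peak at x = 1.7.
food :: Double -> Double
food x = negate ((x - 1.7) ^ (2 :: Int))

prey :: Double -> Int -> IO Double
prey x 0 = do                                -- all tries failed: random move
  dx <- randomRIO (-stepSize, stepSize)
  return (x + dx)
prey x n = do
  xj <- randomRIO (x - visual, x + visual)   -- candidate within visual range
  if food xj > food x
    then do r <- randomRIO (0, 1)            -- step a fraction toward xj
            return (x + r * stepSize * signum (xj - x))
    else prey x (n - 1)

main :: IO ()
main = go (100 :: Int) 0.0
  where
    go 0 x = print x                         -- drifts toward the peak at 1.7
    go k x = prey x tryNumber >>= go (k - 1)
```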

    Architecture aware parallel programming in Glasgow parallel Haskell (GPH)

    General purpose computing architectures are evolving quickly to become manycore and hierarchical: i.e. a core can communicate more quickly locally than globally. To be effective on such architectures, programming models must be aware of the communications hierarchy. This thesis investigates a programming model that aims to share the responsibility of task placement, load balance, thread creation, and synchronisation between the application developer and the runtime system. The main contribution of this thesis is the development of four new architecture-aware constructs for Glasgow parallel Haskell that exploit information about task size and aim to reduce communication for small tasks, preserve data locality, or distribute large units of work. We define a semantics for the constructs that specifies the sets of PEs that each construct identifies, and we check four properties of the semantics using QuickCheck. We report a preliminary investigation of architecture-aware programming models that abstract over the new constructs. In particular, we propose architecture-aware evaluation strategies and skeletons. We investigate three common paradigms, namely data parallelism, divide-and-conquer, and nested parallelism, on hierarchical architectures with up to 224 cores. The results show that the architecture-aware programming model consistently delivers better speedup and scalability than existing constructs, together with a dramatic reduction in execution time variability. We present a comparison of functional multicore technologies that reports some of the first multicore results for the Feedback Directed Implicit Parallelism (FDIP) and semi-explicit parallelism (GpH and Eden) languages. The comparison reflects the growing maturity of the field by systematically evaluating four parallel Haskell implementations on a common multicore architecture, contrasting the programming effort each language requires with the parallel performance delivered. We investigate the minimum thread granularity required to achieve satisfactory performance for three parallel functional language implementations on a multicore platform. The results show that GHC-GUM requires a larger thread granularity than Eden and GHC-SMP, and that the required granularity rises as the number of cores rises.
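    The evaluation-strategy style that these constructs extend can be sketched with the standard Control.Parallel.Strategies API (from the parallel package). The example below chunks a data-parallel map to control thread granularity, the knob the granularity experiments above vary; the chunk size is an illustrative assumption, and the thesis's architecture-aware constructs additionally control where tasks run, which plain strategies do not.

```haskell
-- Data parallelism with explicit granularity control in GpH style.
import Control.Parallel.Strategies (parListChunk, rdeepseq, using)

-- A deliberately non-trivial per-element task.
fib :: Int -> Integer
fib n = if n < 2 then fromIntegral n else fib (n - 1) + fib (n - 2)

main :: IO ()
main = do
  let xs = [20 .. 31]
      -- One spark per chunk of 3, so tasks stay large enough to
      -- outweigh spark creation and scheduling overheads.
      ys = map fib xs `using` parListChunk 3 rdeepseq
  print (sum ys)
```

    Compiled with ghc -threaded and run with +RTS -N, this creates one spark per chunk rather than per element, trading load-balancing flexibility for lower per-task overhead.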

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016)

    Proceedings of the First PhD Symposium on Sustainable Ultrascale Computing Systems (NESUS PhD 2016), Timisoara, Romania, February 8-11, 2016. The PhD Symposium was a very good opportunity for young researchers to share information and knowledge, to present their current research, and to discuss topics with other students in order to look for synergies and common research topics. The idea was very successful and the assessment made by the PhD students was very good. It also helped to achieve one of the major goals of the NESUS Action: to establish an open European research network targeting sustainable solutions for ultrascale computing, aiming at cross-fertilization among HPC, large scale distributed systems, and big data management; contributing to glue together disparate researchers working across different areas; and providing a meeting ground for researchers in these separate areas to exchange ideas, to identify synergies, and to pursue common activities in research topics such as sustainable software solutions (applications and system software stack), data management, energy efficiency, and resilience. European Cooperation in Science and Technology (COST).