    Execution replay and debugging

    As most parallel and distributed programs are internally non-deterministic -- consecutive runs with the same input may result in a different program flow -- vanilla cyclic debugging techniques are useless as such. To use cyclic debugging tools, we need a mechanism that records information about an execution so that the execution can be replayed for debugging. Because recording information interferes with the execution, we must limit the amount of information recorded and keep its processing fast. This paper contains a survey of existing execution replay techniques and tools. Comment: In M. Ducasse (ed.), Proceedings of the Fourth International Workshop on Automated Debugging (AADebug 2000), August 2000, Munich. cs.SE/001003
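
    A minimal sketch of the record/replay idea the survey covers, in Python for illustration; the ReplayLog class, its file format, and the use of random.choice to stand in for real nondeterminism are assumptions for this example, not any surveyed tool's design. In record mode the nondeterministic choice (here, which worker's message arrives next) is logged; in replay mode the logged choice is forced, making the run repeatable for cyclic debugging.

    import json
    import random

    class ReplayLog:
        def __init__(self, mode, path="replay.log"):
            self.mode, self.path = mode, path
            self.events = [] if mode == "record" else json.load(open(path))

        def observe(self, choices):
            # Free choice while recording; the previously logged choice on replay.
            if self.mode == "record":
                event = random.choice(choices)  # stands in for real nondeterminism
                self.events.append(event)
                return event
            return self.events.pop(0)           # deterministic replay

        def save(self):
            if self.mode == "record":
                json.dump(self.events, open(self.path, "w"))

    log = ReplayLog("record")
    order = [log.observe(["worker-1", "worker-2", "worker-3"]) for _ in range(5)]
    log.save()

    replay = ReplayLog("replay")
    assert [replay.observe(["worker-1", "worker-2", "worker-3"]) for _ in range(5)] == order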

    Millipede: A graphical tool for debugging distributed systems with a multilevel approach

    Much research and development effort has been applied to the problem of debugging computer programs. Unfortunately, most of this effort has targeted traditional sequential programs, with little attention paid to the parallel and distributed domains. Tracking down and fixing bugs in a parallel or distributed environment presents unique challenges for which traditional sequential tools are simply not adequate. This thesis describes the development and usage of the Millipede debugging system, a graphical tool that applies the novel technique of multilevel debugging to the distributed debugging problem. By providing a user interface that offers the abstractions, flexibility, and granularity needed to handle the unique challenges that arise in this field, Millipede presents the user with an effective and compelling environment for debugging parallel and distributed programs, while avoiding many of the pitfalls encountered by its predecessors.

    A debugging engine for parallel and distributed programs

    Dissertation presented to obtain the degree of Doutor em Informática at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia. In the last decade a considerable amount of research work has focused on distributed debugging, one of the crucial fields in the parallel software development cycle. The productivity of the software development process strongly depends on an adequate definition of what debugging tools should be provided, and what debugging methodologies and functionalities those tools should support. The work described in this dissertation was initiated in 1995, in the context of two research projects, SEPP (Software Engineering for Parallel Processing) and HPCTI (High-Performance Computing Tools for Industry), both sponsored by the European Union under the Copernicus programme, which aimed at the design and implementation of an integrated parallel software development environment. In the context of these projects, two independent toolsets were developed, the GRADE and EDPEPPS parallel software development environments. Our contribution to these projects was the debugging support. We designed a debugging engine and developed a prototype, which was integrated into both toolsets (it was the only tool developed in the context of the SEPP and HPCTI projects that achieved such a result). Even after those research projects closed, further research work on distributed debugging was carried on, which led to the redesign and reimplementation of the debugging engine. This dissertation describes the debugging engine in its most up-to-date design and implementation stages. It also reports some of the experimental work done with both the initial and the current implementations, and how that work contributed to validating the design and implementation of the debugging engine.

    Checkpointing of parallel applications in a Grid environment

    The Grid environment is generic, heterogeneous, and dynamic, with many unreliable resources, which makes it very exposed to failures. The environment is unreliable because it is geographically dispersed, involves multiple autonomous administrative domains, and is composed of a large number of components. Examples of failures in the Grid environment include application crashes, Grid node crashes, network failures, and Grid system component failures. These types of failures can affect the execution of parallel/distributed applications in the Grid environment, so protection against these faults is crucial. It is therefore essential to develop efficient fault tolerance mechanisms that allow users to execute Grid applications successfully. One of the research challenges in Grid computing is to develop a fault tolerance solution that ensures Grid applications are executed reliably with minimum overhead. While checkpointing is the most common method to achieve fault tolerance, there is still much work to be done to improve the efficiency of the mechanism. This thesis provides an in-depth description of a novel solution for checkpointing parallel applications executed on a Grid. The checkpointing mechanism implemented allows an application to be checkpointed at regions where no interprocess communication is involved, thereby reducing both the checkpointing overhead and the checkpoint size.
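
    The core idea is sketched below in Python under stated assumptions: checkpoints are taken only at safe points between communication phases, so no in-flight messages have to be captured and only local process state is saved. The phase structure, file name, and snapshot format are illustrative, not the thesis's actual implementation.

    import os
    import pickle

    CKPT = "app.ckpt"  # hypothetical checkpoint file name

    def safe_point(step, state):
        # Called between communication-free regions: only local state is saved.
        with open(CKPT + ".tmp", "wb") as f:
            pickle.dump({"step": step, "state": state}, f)
        os.replace(CKPT + ".tmp", CKPT)  # atomic rename: a crash never leaves a torn file

    def restore():
        if os.path.exists(CKPT):
            with open(CKPT, "rb") as f:
                snap = pickle.load(f)
            return snap["step"], snap["state"]
        return 0, {"partial_sums": []}

    step, state = restore()
    phases = [list(range(10)), list(range(10, 20)), list(range(20, 30))]
    for i in range(step, len(phases)):
        state["partial_sums"].append(sum(phases[i]))  # communication-free compute phase
        safe_point(i + 1, state)                      # safe point: nothing is in flight
    print(state["partial_sums"])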

    MPIWiz: subgroup reproducible replay of MPI applications

    Message Passing Interface (MPI) is a widely used standard for managing coarse-grained concurrency on distributed computers. Debugging parallel MPI applications, however, has always been a particularly challenging task due to their high degree of concurrent execution and non-deterministic behavior. Deterministic replay is a potentially powerful technique for addressing these challenges, with existing MPI replay tools adopting either data-replay or order-replay approaches. Unfortunately, each approach has its tradeoffs. Data-replay generates substantial log sizes by recording every communication message. Order-replay generates small logs, but requires all processes to be replayed together. We believe that these drawbacks are the primary reasons that inhibit the wide adoption of deterministic replay as the critical enabler of cyclic debugging of MPI applications. This paper describes subgroup reproducible replay (SRR), a hybrid deterministic replay method that provides the benefits of both data-replay and order-replay while balancing their tradeoffs. SRR divides all processes into disjoint groups. It records the contents of messages crossing group boundaries as in data-replay, but records just message orderings for communication within a group as in order-replay. In this way, SRR can exploit the communication locality of traffic patterns in MPI applications. During replay, developers can then replay each group individually. SRR reduces recording overhead by not recording intra-group communication, and at the same time reduces replay overhead by limiting the size of each replay group. Exposing these tradeoffs gives the user the necessary control for making deterministic replay practical for MPI applications. We have implemented a prototype, MPIWiz, to demonstrate and evaluate SRR. MPIWiz employs a replay framework that allows transparent binary instrumentation of both library and system calls. As a result, MPIWiz replays MPI applications without source code modification or relinking, and handles non-determinism in both MPI and OS system calls. Our preliminary results show that MPIWiz can reduce recording overhead by over a factor of four relative to data-replay, yet without requiring the entire application to be replayed as in order-replay. Recording increases execution time by 27%, while the application can be replayed in just 53% of its base execution time.
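
    A minimal sketch of the SRR recording rule described above, in Python for illustration; the group layout and log format are assumptions for this example, not MPIWiz's actual data structures. Messages crossing a group boundary are logged with their payload (data-replay), while messages within a group are logged as orderings only (order-replay), which is what lets each group later be replayed in isolation.

    GROUPS = {0: "A", 1: "A", 2: "B", 3: "B"}  # rank -> replay group (hypothetical layout)

    data_log, order_log = [], []

    def record(src, dst, payload):
        if GROUPS[src] != GROUPS[dst]:
            # Inter-group: the content must be saved so each group can replay alone.
            data_log.append((src, dst, payload))
        else:
            # Intra-group: ordering suffices; both endpoints are replayed together.
            order_log.append((src, dst))

    record(0, 1, b"halo row")       # within group A -> ordering only
    record(1, 2, b"boundary data")  # crosses A -> B -> full payload recorded
    print(len(data_log), len(order_log))  # 1 1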

    Estudo sobre processamento maciçamente paralelo na internet (A study of massively parallel processing on the Internet)

    Advisor: Marco Aurélio Amaral Henriques. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Elétrica e de Computação. This thesis explores the possibility of using the aggregated processing power of computers connected by the Internet to solve large problems. The issue is studied from both the theoretical and the practical points of view. From the theoretical perspective, this work studies the characteristics that parallel applications should have in order to exploit an environment with a large, weakly coupled set of heterogeneous computers. From the practical perspective, the thesis identifies the fundamental problems to be solved in order to construct a large parallel virtual computer, and proposes solutions to some of the most important of them, such as load balancing and fault tolerance. The results obtained so far indicate that it is possible to construct a robust, scalable, and fault-tolerant parallel virtual computer and use it to execute applications with a high computation/communication ratio.
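
    As an illustration only (not the thesis's system), the sketch below shows the pull-based master-worker pattern commonly used in such Internet-scale settings: idle workers pull tasks from a shared queue, so faster machines naturally take more work (load balancing), and a task whose worker fails is simply re-queued (fault tolerance). The worker speeds, failure model, and all names are hypothetical.

    import random
    from queue import Queue

    SPEEDS = [1.0, 2.0, 4.0]  # hypothetical heterogeneous worker speeds

    def run(tasks, fail_prob=0.2):
        pending = Queue()
        for t in tasks:
            pending.put(t)
        done, pulls = {}, [0] * len(SPEEDS)
        while not pending.empty():
            task = pending.get()
            # Pull-based self-scheduling: a worker becomes idle (and pulls the
            # next task) with probability proportional to its speed.
            w = random.choices(range(len(SPEEDS)), weights=SPEEDS)[0]
            if random.random() < fail_prob:
                pending.put(task)        # node failure: the task is re-queued, not lost
                continue
            pulls[w] += 1
            done[task] = task * task     # stand-in for the real computation
        return done, pulls

    results, pulls = run(list(range(20)))
    print(pulls)  # faster workers end up completing more tasks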

    RLQ: Workload Allocation With Reinforcement Learning in Distributed Queues

    Distributed workload queues are now widely used due to their significant advantages in terms of decoupling, resilience, and scaling. Task allocation to worker nodes in distributed queue systems is typically simplistic (e.g., Least Recently Used) or uses hand-crafted heuristics that require task-specific information (e.g., task resource demands or expected execution time). When such task information is not available and worker node capabilities are not homogeneous, the existing placement strategies may lead to unnecessarily long execution times and high usage costs. In this work, we formulate the task allocation problem in the Markov Decision Process framework, in which an agent assigns tasks to an available resource and receives a numerical reward signal upon task completion. Our adaptive, learning-based task allocation solution, Reinforcement Learning based Queues (RLQ), is implemented and integrated with the popular Celery task queuing system for Python. We compare RLQ against traditional solutions using both synthetic and real workload traces. On average, using synthetic workloads, RLQ reduces the execution cost by approximately 70%, the execution time by a factor of at least 3×, and the waiting time by almost 7×. Using real traces, we observe an improvement of about 20% in execution cost, around 70% in execution time, and a reduction of approximately 20× in waiting time. We also compare RLQ with a strategy inspired by E-PVM, a state-of-the-art solution used in Google's Borg cluster manager, and show that RLQ outperforms it in five out of six scenarios.
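
    A hedged sketch of the reward-on-completion formulation described above, not RLQ's actual implementation: a tabular agent treats a simplified, single-state version of the MDP as a choice among workers, using the negative completion time as the reward signal. The worker speeds and all parameters are invented for illustration.

    import random

    SPEEDS = [1.0, 2.0, 4.0]   # hypothetical heterogeneous workers (tasks/second)
    Q = [0.0] * len(SPEEDS)    # estimated reward for assigning a task to each worker
    alpha, eps = 0.1, 0.1      # learning rate and exploration rate

    def allocate():
        if random.random() < eps:                          # explore occasionally
            return random.randrange(len(SPEEDS))
        return max(range(len(SPEEDS)), key=Q.__getitem__)  # otherwise exploit

    for _ in range(2000):
        w = allocate()
        completion_time = random.expovariate(SPEEDS[w])    # observed on task completion
        Q[w] += alpha * (-completion_time - Q[w])          # reward = negative completion time

    print([round(q, 2) for q in Q])  # the fastest worker should have the highest value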
