9 research outputs found

    Contention-aware metrics: analysis of distributed algorithms

    Get PDF
    Resource contention is widely recognized as having a major impact on the performance of distributed algorithms. Nevertheless, the metrics that are commonly used to predict their performance take little or no account of contention. In this paper, we define two performance metrics for distributed algorithms that account for network contention as well as CPU contention. We then illustrate the use of these metrics by comparing four Atomic Broadcast algorithms, and show that our metrics allow for a deeper understanding of performance issues than conventional metrics.
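As a purely illustrative aside (this is not the metric defined in the paper; every name and cost value below is an assumption), one way to fold contention into a cost model is to treat each host's CPU and the shared network as exclusive resources and charge every message against all three:

def broadcast_latency(n_hosts, cpu_cost=1.0, net_cost=1.0):
    """Toy contention-aware latency of host 0 sending one message to every other host."""
    cpu_free = [0.0] * n_hosts   # earliest time each host's CPU is idle again
    net_free = 0.0               # earliest time the shared network is idle again
    latest_delivery = 0.0
    for dest in range(1, n_hosts):
        send_done = cpu_free[0] + cpu_cost                     # sends serialized on sender CPU
        cpu_free[0] = send_done
        net_done = max(net_free, send_done) + net_cost         # one message at a time on the wire
        net_free = net_done
        recv_done = max(cpu_free[dest], net_done) + cpu_cost   # receive processed on destination CPU
        cpu_free[dest] = recv_done
        latest_delivery = max(latest_delivery, recv_done)
    return latest_delivery

if __name__ == "__main__":
    # Under contention the latency grows with the number of hosts, which a
    # contention-oblivious "one communication step" metric would not capture.
    for n in (2, 4, 8):
        print(n, broadcast_latency(n))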

    FATMAS: a methodology to design fault-tolerant multi-agent systems

    Get PDF
    A multi-agent system (MAS) is a system in which several agents operate and interact. Each agent is responsible for executing tasks. However, for various reasons, an agent may encounter problems while executing its tasks, which can cause the MAS to malfunction. The MAS must therefore be able to detect the sources of these problems (errors) in order to control them and continue executing correctly; such a MAS is called a fault-tolerant MAS. There are two kinds of error sources for an agent: errors caused by its environment and errors due to its programming. Several techniques in the literature deal with programming errors at the agent level; however, they do not deal with errors caused by the agent's environment. We first distinguish between an agent's environment and the environment of the MAS. An agent's environment comprises all the hardware and software components that the agent does not control but with which it interacts, whereas the environment of the MAS comprises all the components that the system does not control but with which it interacts. The MAS may therefore control some of the components with which an agent interacts, so a component may belong to an agent's environment without belonging to the system's environment. In this work, we present a design methodology for fault-tolerant MAS, named FATMAS, which allows the MAS designer to detect and, where possible, correct the errors caused by the agents' environments. The methodology thereby delimits the boundary between the MAS and the environment with which it interacts. The boundary of the MAS is determined by the different components (hardware or software) that the system controls: within its boundary, the MAS can correct errors originating from its components, whereas it has no control over the components operating in its environment. The proposed methodology covers the first three phases of software development, namely analysis, design, and implementation, and integrates into its development process a technique that lets the system designer delimit the boundary of the MAS, detect the sources of errors, and control them so that the multi-agent system is fault tolerant (FTMAS). The MAS design methodologies reported in the literature do not integrate such a technique. FATMAS offers the FTMAS designer four models for describing and developing the MAS, together with a reorganization technique that allows the designer to detect and control the sources of errors and thereby define the boundary of the MAS. Each model is associated with a micro process that guides the designer while developing the model; FATMAS also offers a macro process that defines the development cycle of the methodology. FATMAS relies on an iterative development to identify and determine the tasks to add to the system in order to control sources of errors: at each iteration, the designer evaluates, with a cost/benefit function, whether it is worthwhile to add new control tasks to the system.
The first model is the task-environment model, developed during the analysis phase. It identifies the different tasks the agents must execute, their preconditions, and their resources, and it reveals different sources of problems that can cause the system to malfunction. The second model is the agent model, developed during the design phase. It describes the agents and their relations and specifies, for each agent, the resources it is allowed to access; each agent executes a set of tasks identified in the task-environment model. The third model is the agent interaction model, also developed during the design phase; it describes the message exchanges between agents. The fourth model is the implementation model, developed during the implementation phase; it describes the hardware infrastructure on which the MAS will operate and the development environment of the MAS. The methodology also includes a reorganization technique, which delimits the boundary of the MAS and controls, where possible, its sources of errors. This technique integrates three techniques necessary for designing a fault-tolerant system: an error-prevention technique, an error-recovery technique, and a fault-tolerance technique. The error-prevention technique delimits the boundary of the MAS. The error-recovery technique proposes a MAS architecture for detecting errors. The fault-tolerance technique defines a procedure for replicating agents and tasks in the MAS so that the MAS is fault tolerant. Unlike existing fault-tolerance techniques, this last technique replicates tasks as well as agents, not only agents, and thereby reduces the complexity of the system by decreasing the number of agents to replicate. Likewise, an agent may not itself be in error while the hardware component on which it executes is no longer functional, which constitutes a source of errors for the MAS; the MAS should then keep executing correctly despite the failure of a component. FATMAS therefore supports the system designer in handling this type of error, either by controlling the hardware components or by proposing a distribution of the agents over the available hardware components such that the failure of one hardware component does not affect the operation of the MAS. FATMAS identifies sources of errors during the design phase of the system, but it does not deal with sources of programming errors. The reorganization technique proposed in this work is therefore validated with respect to the sources of errors identified during the design phase and originating from the boundary of the MAS. We formally show that if an error originates from a component that the MAS controls, the MAS should remain operational. However, FATMAS does not guarantee that the future system will always be operational, since it does not deal with programming errors or with errors caused by the system's environment.
A multi-agent system (MAS) consists of several agents interacting together. In a MAS, each agent performs several tasks. However, each agent is prone to individual failures, so that it may no longer be able to perform its tasks; this can lead the MAS to a failure.
    Ideally, the MAS should be able to identify the possible sources of failures and try to overcome them in order to continue operating correctly; we say that it should be fault-tolerant. There are two kinds of sources of failures for an agent: errors originating from the environment with which the agent interacts, and programming exceptions. Several works on fault-tolerant systems deal with programming exceptions. However, these techniques do not allow the MAS to identify errors originating from an agent's environment. In this thesis, we propose a design methodology, called FATMAS, which allows a MAS designer to identify errors originating from agents' environments. In doing so, the designer can determine the sources of failures the system is able to control and those it is not, and hence the errors it can prevent and those it cannot. Consequently, this allows the designer to determine the system's boundary with respect to its environment. The system boundary is the area within which the decision-making process of the MAS has the power to make things happen, or prevent them from happening. We distinguish between the system's environment and an agent's environment. An agent's environment is characterized by the components (hardware or software) that the agent does not control. However, the system may control some of the agent's environment components. Consequently, some of the agent's environment components may not be part of the system's environment. The development of a fault-tolerant MAS (FTMAS) requires a design methodology and a reorganization technique that allow the MAS designer to identify and control, if possible, the different sources of system failure. However, current MAS design methodologies do not integrate such a technique. FATMAS provides four models used to design and implement the target system, and a reorganization technique to assist the designer in identifying and controlling the different sources of system failures. FATMAS also provides a macro process that covers the entire life cycle of the system development, as well as several micro processes that guide the designer when developing each model. The macro process follows an iterative approach, with a cost/benefit evaluation that helps the designer decide whether to move from one iteration to the next. The methodology has three phases: analysis, design, and implementation. The analysis phase develops the task-environment model. This model identifies the different tasks the agents will perform, their resources, and their preconditions; it identifies several possible sources of system failures. The design phase develops the agent model and the agent interaction model. The agent model describes the agents and their resources. Each agent performs several tasks identified in the task-environment model. The agent interaction model describes the message exchanges between agents. The implementation phase develops the implementation model and allows automatic code generation of Java agents. The implementation model describes the infrastructure upon which the MAS will operate and the development environment to be used when developing the MAS. The reorganization technique includes three techniques required to design a fault-tolerant system: a fault-prevention technique, a fault-recovery technique, and a fault-tolerance technique. The fault-prevention technique assists the designer in delimiting the system's boundary.
The fault-recovery technique proposes a MAS architecture that allows the MAS to detect failures. The fault-tolerance technique is based on agent and task redundancy. Unlike existing fault-tolerance techniques, it replicates tasks as well as agents, not only agents; it thus reduces system complexity by minimizing the number of agents operating in the system. Furthermore, FATMAS helps the designer deal with possible failures of the physical components on which the MAS will operate. It proposes a way either to control these components or to distribute the agents over them in such a way that, if a component fails, the MAS can continue operating properly. The FATMAS methodology presented in this dissertation assists a designer, throughout the development process, in building fault-tolerant systems. Its main contributions are the following: (1) it identifies different sources of system failure; (2) it introduces new tasks into the MAS to control the identified sources of failures; (3) it proposes a mechanism that automatically determines which tasks (agents) should be replicated and into which other agents; (4) it reduces system complexity by minimizing the replication of agents; and (5) it proposes a MAS reorganization technique that is embedded within the designed MAS and assists the designer in determining the system's boundary. It proposes a MAS architecture to detect and recover from failures originating from the system boundary. Moreover, it proposes a way to distribute agents on the physical components so that the MAS can continue operating properly in case of a component failure. This can make the MAS more robust in fault-prone environments. FATMAS makes it possible to determine the different sources of failures of a MAS. The MAS controls the sources of failures situated within its boundary; it does not control the sources of failures situated in its environment. Consequently, the reorganization technique proposed in this dissertation is proven valid only in the case where the sources of failures are controlled by the MAS. However, it cannot be proven that the future system is fault-tolerant, since faults originating from the environment or from coding are not dealt with.
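As a loose illustration of the agent-distribution idea mentioned above (this is not FATMAS's actual procedure; agent names, host names, and the replication factor are invented for the example), one can place the replicas of each agent on distinct hardware components so that no single component failure stops every copy of an agent:

from itertools import cycle

def place_replicas(agents, hosts, replication_factor=2):
    """Round-robin placement that keeps the replicas of each agent on distinct hosts."""
    if replication_factor > len(hosts):
        raise ValueError("not enough hosts to separate the replicas")
    placement = {}
    host_ring = cycle(range(len(hosts)))
    for agent in agents:
        start = next(host_ring)
        placement[agent] = [hosts[(start + i) % len(hosts)]
                            for i in range(replication_factor)]
    return placement

if __name__ == "__main__":
    plan = place_replicas(["scheduler", "monitor", "worker"],
                          ["hostA", "hostB", "hostC"])
    for agent, where in plan.items():
        print(agent, "->", where)
    # Any single host failure still leaves one replica of each agent running.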

    Totally Ordered Broadcast and Multicast Algorithms: A Comprehensive Survey

    Get PDF
    Total order multicast algorithms constitute an important class of problems in distributed systems, especially in the context of fault-tolerance. In short, the problem of total order multicast consists in sending messages to a set of processes, in such a way that all messages are delivered by all correct destinations in the same order. However, the huge amount of literature on the subject and the plethora of solutions proposed so far make it difficult for practitioners to select a solution adapted to their specific problem. As a result, naive solutions are often used while better solutions are ignored. This paper proposes a classification of total order multicast algorithms based on the ordering mechanism of the algorithms, and describes a set of common characteristics (e.g., assumptions, properties) with which to evaluate them. In this classification, more than fifty total order broadcast and multicast algorithms are surveyed. The presentation includes asynchronous algorithms as well as algorithms based on the more restrictive synchronous model. Fault-tolerance issues are also considered as the paper studies the properties and behavior of the different algorithms with respect to failures.
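For readers new to the problem, here is a minimal sketch of the fixed-sequencer ordering mechanism, one of the simplest classes covered by such surveys; it is illustrative only, deliberately ignores failures (a crashed sequencer blocks everything), and all class and method names are invented:

class Sequencer:
    """Single process that assigns a global sequence number to every message."""
    def __init__(self):
        self._next = 0
    def assign(self, msg):
        seq = self._next
        self._next += 1
        return seq, msg

class Destination:
    """Delivers messages strictly in sequence-number order."""
    def __init__(self):
        self._expected = 0
        self._pending = {}      # seq -> msg, received out of order
        self.delivered = []
    def receive(self, seq, msg):
        self._pending[seq] = msg
        while self._expected in self._pending:
            self.delivered.append(self._pending.pop(self._expected))
            self._expected += 1

if __name__ == "__main__":
    seqr = Sequencer()
    dests = [Destination(), Destination()]
    stamped = [seqr.assign(m) for m in ["a", "b", "c"]]
    # Even if the network reorders messages, all destinations deliver a, b, c.
    for d, order in zip(dests, [stamped, list(reversed(stamped))]):
        for seq, msg in order:
            d.receive(seq, msg)
    print([d.delivered for d in dests])   # [['a', 'b', 'c'], ['a', 'b', 'c']]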

    Agreement-related problems: from semi-passive replication to totally ordered broadcast

    Get PDF
    Agreement problems constitute a fundamental class of problems in the context of distributed systems. All agreement problems follow a common pattern: all processes must agree on some common decision, the nature of which depends on the specific problem. This dissertation mainly focuses on three important agreement problems: Replication, Total Order Broadcast, and Consensus. Replication is a common means to introduce redundancy in a system, in order to improve its availability. A replicated server is a server that is composed of multiple copies so that, if one copy fails, the other copies can still provide the service. Each copy of the server is called a replica. The replicas must all evolve in a manner that is consistent with one another. Hence, updating the replicated server requires that every replica agree on the set of modifications to carry over. There are two principal replication schemes to ensure this consistency: active replication and passive replication. In Total Order Broadcast, processes broadcast messages to all processes. However, all messages must be delivered in the same order. Also, if one process delivers a message m, then all correct processes must eventually deliver m. The problem of Consensus provides an abstraction for most other agreement problems. All processes initiate a Consensus by proposing a value. Then, all processes must eventually decide on the same value v, which must be one of the proposed values. These agreement problems are closely related to each other. For instance, Chandra and Toueg [CT96] show that Total Order Broadcast and Consensus are equivalent problems. In addition, Lamport [Lam78] and Schneider [Sch90] show that active replication needs Total Order Broadcast. As a result, active replication is also closely related to the Consensus problem. The first contribution of this dissertation is the definition of the semi-passive replication technique. Semi-passive replication is a passive replication scheme based on a variant of Consensus (called Lazy Consensus and also defined here). From a conceptual point of view, the result is important as it helps to clarify the relation between passive replication and the Consensus problem. In practice, this makes it possible to design systems that react more quickly to failures. The problem of Total Order Broadcast is well known in the field of distributed systems and algorithms. In fact, more than fifty algorithms have already been published on the problem. Although quite similar, these algorithms are difficult to compare as they often differ with respect to their actual properties, assumptions, and objectives. The second main contribution of this dissertation is to define five classes of total order broadcast algorithms, and to relate existing algorithms to those classes. The third contribution of this dissertation is to compare the expected performance of the various classes of total order broadcast algorithms. To achieve this goal, we define a set of metrics to predict the performance of distributed algorithms.
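Since the equivalence between Total Order Broadcast and Consensus [CT96] is central here, the following hedged sketch shows one direction of that reduction: processes repeatedly run consensus on batches of pending messages and deliver the decided batch in a deterministic order. The Consensus class below is a trivial "first proposal wins" stand-in, not a real fault-tolerant algorithm, and all names are invented for the example:

class Consensus:
    """One consensus instance shared by all processes (stand-in implementation)."""
    def __init__(self):
        self._decision = None
    def propose(self, value):
        if self._decision is None:
            self._decision = value      # first proposal wins (illustrative only)
        return self._decision

class Process:
    def __init__(self, consensus_instances):
        self._instances = consensus_instances   # shared list, one instance per round
        self._round = 0
        self._undelivered = []
        self.delivered = []
    def to_broadcast(self, msg):
        self._undelivered.append(msg)
    def step(self):
        """Run the next consensus instance and deliver the decided batch."""
        decided = self._instances[self._round].propose(tuple(self._undelivered))
        self._round += 1
        for msg in sorted(decided):                 # deterministic delivery order
            if msg not in self.delivered:
                self.delivered.append(msg)
                if msg in self._undelivered:
                    self._undelivered.remove(msg)

if __name__ == "__main__":
    shared = [Consensus() for _ in range(2)]
    p1, p2 = Process(shared), Process(shared)
    p1.to_broadcast("m2"); p2.to_broadcast("m1")
    p1.step(); p2.step()        # both deliver the batch decided in round 0
    print(p1.delivered, p2.delivered)   # identical delivery order at both processes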

    Evaluating the performance of distributed agreement algorithms: tools, methodology and case studies

    Get PDF
    Nowadays, networked computers are present in most aspects of everyday life. Moreover, essential parts of society have come to depend on distributed systems formed of networked computers, so making such systems secure and fault-tolerant is a top priority. If the particular fault tolerance requirement is high availability, replication of components is a natural choice. Replication is a difficult problem as the state of the replicas must be kept consistent even if some replicas fail, and because in distributed systems, relying on centralized control or a certain timing behavior is often not feasible. Replication in distributed systems is often implemented using group communication. Group communication is concerned with providing high-level multipoint communication primitives and the associated tools. Most often, an emphasis is put on tolerating crash failures of processes. At the heart of most communication primitives lies an agreement problem: the members of a group must agree on things like the set of messages to be delivered to the application, the delivery order of messages, or the set of processes that crashed. Many algorithms to solve agreement problems have been proposed and their correctness proven. However, performance aspects of agreement algorithms have been somewhat neglected, for a variety of reasons: the lack of theoretical and practical tools to help performance evaluation, and the lack of well-defined benchmarks for agreement algorithms. Also, most performance studies focus on analyzing failure-free runs only. In our view, the limited understanding of performance aspects, in both failure-free scenarios and scenarios with failure handling, is an obstacle to adopting agreement protocols in practice, and is part of the explanation why such protocols are not in widespread use in the industry today. The main goal of this thesis is to advance the state of the art in this field. The thesis has major contributions in three domains: new tools, methodology and performance studies. As for new tools, a simulation and prototyping framework offers a practical tool, and some new complexity metrics a theoretical tool for the performance evaluation of agreement algorithms. As for methodology, the thesis proposes a set of well-defined benchmarks for atomic broadcast algorithms (such algorithms are important as they provide the basis for a number of replication techniques). Finally, three studies are presented that investigate important performance issues with agreement algorithms. The prototyping and simulation framework simplifies the tedious task of developing algorithms based on message passing, the communication model that most agreement algorithms are written for. In this framework, the same implementation can be reused for simulations and performance measurements on a real network. This characteristic greatly eases the task of validating simulation results with measurements (or vice versa). As for theoretical tools, we introduce two complexity metrics that predict performance with more accuracy than the traditional time and message complexity metrics. The key point is that our metrics take account of resource contention, both on the network and on the hosts; resource contention is widely recognized as having a major impact on the performance of distributed algorithms. Extensive validation studies have been conducted.
Currently, no widely accepted benchmarks exist for agreement algorithms or group communication toolkits, which makes comparing performance results from different sources difficult. In an attempt to consolidate the situation, we define a number of benchmarks for atomic broadcast. Our benchmarks include well-defined metrics, workloads and failure scenarios (faultloads). The use of the benchmarks is illustrated in two detailed case studies. Two widespread mechanisms for handling failures are unreliable failure detectors, which provide inconsistent information about failures, and a group membership service, which provides consistent information about failures. We analyze the performance tradeoffs of these two techniques by comparing the performance of two atomic broadcast algorithms designed for an asynchronous system. Based on our results, we advocate a combined use of the two approaches to failure handling. In another case study, we compare two consensus algorithms designed for an asynchronous system. The two algorithms differ in how they coordinate the decision process: one uses a centralized and the other a decentralized communication scheme. Our results show that the performance tradeoffs are highly affected by a number of characteristics of the environment, such as the availability of multicast and the amount of contention on the hosts versus the amount of contention on the network. Well-known theoretical results state that many important agreement problems are not solvable in the asynchronous system model. In our third case study, we investigate how relevant these results are to implementations of a replicated service, by conducting an experiment in a local area network. We exposed a replicated server to extremely high loads and required that the underlying failure detection service detect crashes very fast; the latter is important as the theoretical results are based on the impossibility of reliable failure detection. We found that our replicated server continued working even with the most extreme settings. We discuss the reasons for the robustness of our replicated server.
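As background for the failure-detector comparison described above, an unreliable heartbeat-based failure detector can be sketched as follows (illustrative only; the class name, API, and timeout value are assumptions, not the implementation used in the thesis):

import time

class HeartbeatFailureDetector:
    def __init__(self, processes, timeout=2.0):
        self._timeout = timeout
        self._last_seen = {p: time.monotonic() for p in processes}
    def heartbeat(self, process):
        """Record a heartbeat received from `process`."""
        self._last_seen[process] = time.monotonic()
    def suspected(self):
        """Processes currently suspected of having crashed (possibly wrongly)."""
        now = time.monotonic()
        return {p for p, t in self._last_seen.items()
                if now - t > self._timeout}

if __name__ == "__main__":
    fd = HeartbeatFailureDetector(["p1", "p2", "p3"], timeout=0.1)
    time.sleep(0.2)             # nobody sends heartbeats for a while
    fd.heartbeat("p1")
    fd.heartbeat("p3")
    print(fd.suspected())       # {'p2'}: a suspicion, not proof of a crash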

    Fault tolerance using group communication

    No full text