
    A survey on parallel and distributed Multi-Agent Systems

    Simulation has become an indispensable tool for researchers who need to explore systems without resorting to real experiments. Depending on the characteristics of the modeled system, the methods used to represent it may vary, and multi-agent systems are thus often used to model and simulate complex systems. Whatever the modeling approach, increasing the size and precision of the model increases the amount of computation and requires parallel systems once the model becomes too large. In this paper, we focus on parallel platforms that support multi-agent simulations. Our contribution is a survey of existing platforms and their evaluation in the context of high-performance computing. We present a qualitative analysis, mainly based on platform properties, followed by a performance comparison using the same agent model implemented on each platform.
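
    A minimal sketch of the kind of benchmark described above, assuming a toy random-walk agent (not the actual agent model used in the survey): the same model is stepped for a fixed number of iterations so that wall-clock time can be compared across platforms or hardware configurations.

```python
# Sketch of a benchmark agent model, assuming a toy random-walk agent
# (not the model used in the survey) stepped for a fixed number of
# iterations so wall-clock time can be compared across platforms.
import random
import time

class WalkerAgent:
    """Toy agent that performs a 2D random walk each step."""
    def __init__(self, seed):
        self.rng = random.Random(seed)
        self.x, self.y = 0.0, 0.0

    def step(self):
        self.x += self.rng.uniform(-1.0, 1.0)
        self.y += self.rng.uniform(-1.0, 1.0)

def run_benchmark(n_agents=10_000, n_steps=100):
    agents = [WalkerAgent(seed=i) for i in range(n_agents)]
    start = time.perf_counter()
    for _ in range(n_steps):
        for agent in agents:
            agent.step()
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"{run_benchmark():.2f} s for 10,000 agents x 100 steps")
```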

    Comparing a Traditional and a Multi-Agent Load-Balancing System

    This article presents a comparison between agent-based and non-agent-based approaches to building network load-balancing systems. In particular, two large software systems are compared, one traditional and the other agent-based, both performing the same load-balancing functions. Because of the two different architectures, several differences emerge. The differences are analyzed theoretically and practically in terms of design, scalability and fault tolerance. The advantages and disadvantages of both approaches are presented by combining an analysis of the systems with the experience of designers, developers and users. Traditionally, designers specify a rigid software structure, while for multi-agent systems the emphasis is on specifying the different tasks and roles, as well as the interconnections between the agents, which cooperate autonomously and simultaneously. The major advantages of the multi-agent approach are the additional abstract design layers and, as a consequence, a more comprehensible top-level design, increased redundancy, and improved fault tolerance. The major improvement in performance due to the agent architecture is observed when one or more computers fail. Although agent-oriented design might not be a silver bullet for building large distributed systems, our analysis and application confirm that it has a number of advantages over non-agent approaches.
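
    To illustrate the agent-style approach, the sketch below routes requests to the least-loaded live node and keeps working when a node fails; the Node class and dispatch function are hypothetical and are not part of either system compared in the article.

```python
# Minimal sketch of agent-style load balancing with failure handling,
# assuming a toy model (not the systems compared in the article): each
# dispatcher routes a request to the least-loaded node it still
# believes to be alive, and re-routes when a node fails.
import random

class Node:
    def __init__(self, name):
        self.name = name
        self.load = 0
        self.alive = True

def dispatch(request_id, nodes):
    """Route a request to the least-loaded live node, if any."""
    live = [n for n in nodes if n.alive]
    if not live:
        raise RuntimeError("no live nodes available")
    target = min(live, key=lambda n: n.load)
    target.load += 1
    return target.name

if __name__ == "__main__":
    cluster = [Node(f"node-{i}") for i in range(4)]
    for req in range(10):
        if req == 5:                 # simulate a failed computer mid-run
            cluster[0].alive = False
        print(req, "->", dispatch(req, cluster))
```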

    Web-based Experiment on Human Performance in Dual-Robot Teleoperation

    In most cases, upgrading from a single-robot system to a multi-robot system increases payload capacity and task performance. On the other hand, many multi-robot systems in open environments still rely on teleoperation, so human performance can become the bottleneck of a teleoperated multi-robot system. Shared autonomy and control methods for multi-robot systems are therefore emerging research areas for robot operations in open environments. However, the question remains: how much does the human bottleneck affect system performance in a multi-robot system? This research explores the question through a performance comparison between teleoperating a single-robot system and a dual-robot system in a box-pushing task. The teleoperation experiment with human participants uses a web-based environment that simulates the robots' two-dimensional movement. The results provide evidence of the difficulty a single human faces when teleoperating more than one robot, which indicates the necessity of shared autonomy in multi-robot systems.
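
    The sketch below gives a rough idea of such a two-dimensional box-pushing comparison under assumed, simplified dynamics (it is not the web environment used in the experiment); limited operator attention is modeled by commanding only one robot per time step.

```python
# Rough sketch of a 2D box-pushing comparison under hypothetical,
# simplified dynamics (not the web environment used in the experiment).
# Operator attention is modeled by commanding only one robot per time
# step, so splitting attention across robots slows each box down.
import math

def push_box(box, goal, step=0.1):
    """Move the box one increment toward the goal; return True when done."""
    dx, dy = goal[0] - box[0], goal[1] - box[1]
    dist = math.hypot(dx, dy)
    if dist < step:
        box[0], box[1] = goal
        return True
    box[0] += step * dx / dist
    box[1] += step * dy / dist
    return False

def simulate(n_robots):
    boxes = [[0.0, 0.0] for _ in range(n_robots)]  # one box per robot
    goal, done, steps = (5.0, 5.0), [False] * n_robots, 0
    while not all(done):
        active = steps % n_robots          # operator attends to one robot
        if not done[active]:
            done[active] = push_box(boxes[active], goal)
        steps += 1
    return steps

if __name__ == "__main__":
    print("single robot:", simulate(1), "steps")
    print("dual robot:  ", simulate(2), "steps")
```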

    Fundamental Performance Limitations for Average Consensus in Open Multi-Agent Systems

    We derive fundamental performance limitations for intrinsic average consensus problems in open multi-agent systems, which are systems subject to frequent arrivals and departures of agents. Each agent holds a value, and the objective of the agents is to collaboratively estimate the average of the values of the agents presently in the system. Algorithms solving such problems in open systems can never converge because of the permanent variations in the composition, size and objective pursued by the agents of the system. We provide lower bounds on the expected mean square error of averaging algorithms in open systems of fixed size. Our derivation is based on the analysis of an algorithm that achieves optimal performance for a given model of replacements. We obtain a general bound that depends on the properties of the model defining the interactions between the agents, and we instantiate that result for all-to-one and one-to-one interaction models. A comparison between those bounds and algorithms implementable under those models is then provided to highlight their validity.
    Comment: 14 pages, 9 figures, submitted to IEEE Transactions on Automatic Control
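
    The toy simulation below illustrates the setting under an assumed replacement model: agents perform one-to-one (gossip) averaging while being randomly replaced, and the empirical mean square error is measured against the average of the agents currently in the system. It is only an illustration, not the optimal algorithm analyzed in the paper.

```python
# Toy simulation of one-to-one (gossip) averaging in an open system of
# fixed size with random agent replacements. Illustrative model only,
# not the optimal algorithm analyzed in the paper. The empirical mean
# square error is measured against the average of the agents currently
# in the system, which changes whenever an agent is replaced.
import random

def simulate(n=20, iterations=5000, replace_prob=0.05, seed=0):
    rng = random.Random(seed)
    true_values = [rng.gauss(0, 1) for _ in range(n)]   # intrinsic values
    estimates = list(true_values)                       # each agent's estimate
    sq_errors = []
    for _ in range(iterations):
        if rng.random() < replace_prob:                 # departure + arrival
            i = rng.randrange(n)
            true_values[i] = rng.gauss(0, 1)
            estimates[i] = true_values[i]
        i, j = rng.sample(range(n), 2)                  # one-to-one gossip step
        estimates[i] = estimates[j] = (estimates[i] + estimates[j]) / 2
        target = sum(true_values) / n
        sq_errors.append(sum((e - target) ** 2 for e in estimates) / n)
    return sum(sq_errors) / len(sq_errors)

if __name__ == "__main__":
    print(f"empirical MSE: {simulate():.4f}")
```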

    Improvement in the distribution of services in multi-agent systems with SCODA

    Distributing services in multi-agent systems reduces the computational load on the agents. The functionality of the system does not reside in the agents themselves; it is instead distributed ubiquitously, which allows tasks to be performed in parallel without imposing additional computational cost on the elements of the system. The service distribution offered by SCODA (Distributed and Specialized Agent Communities) enables intelligent management of the services provided by the system's agents and the parallel execution of threads that respond to requests asynchronously, improving system performance both computationally and in the quality of service with which these services are controlled. The comparison carried out in the case study presented in this paper demonstrates the improvement obtained when distributing services on SCODA-based systems.
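
    A generic sketch of asynchronous service handling with a thread pool is shown below; the service registry and handler are hypothetical and this is not the SCODA implementation.

```python
# Generic sketch of asynchronous service dispatch with a thread pool
# (illustrative only; this is not the SCODA implementation). Requests
# are handled in parallel and answered asynchronously via futures.
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

SERVICES = {
    "echo":   lambda payload: payload,
    "length": lambda payload: len(payload),
}

def handle_request(service, payload):
    time.sleep(0.1)                    # simulate service work
    return service, SERVICES[service](payload)

if __name__ == "__main__":
    requests = [("echo", "hello"), ("length", "multi-agent"), ("echo", "scoda")]
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(handle_request, s, p) for s, p in requests]
        for future in as_completed(futures):   # responses arrive asynchronously
            print(future.result())
```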

    Single- versus Multi-Channel Distribution Strategies in the German Life Insurance Market: A Cost and Profit Efficiency Analysis

    Until its liberalisation in 1994, exclusive agents dominated the distribution of products in the German life insurance industry. Since then, their importance has been declining in favour of both the direct distribution channel and independent agents. However, the market shares of specialised direct and independent-agent insurers have remained small, while multi-channel insurers increasingly incorporate direct and independent distribution channels and represent the dominant distribution strategy. The aim of this paper is twofold. First, it analyses the performance of single- and multi-channel distribution firms in the German life insurance industry, which allows us to explain the development and coexistence of the industry's distribution systems. Our study contributes to research on the coexistence of different distribution systems in the insurance industry, which had so far been limited to comparisons of exclusive versus independent-agent insurers. Second, our paper gives insight into the cost and profit efficiency levels of German life insurance firms for the period 1997-2005 and delivers information about scale economies in the German life insurance industry. Applying an empirical framework developed by Berger et al. (1997), we estimate cost and profit efficiency for three groups of life insurance firms differing in their distribution systems: multi-channel insurers, direct insurers, and independent-agent insurers. Non-parametric DEA is used to estimate efficiencies for a sample of German life insurers for the years 1997-2005. Testing a set of hypotheses, we find economic evidence for the coexistence of the different distribution systems, namely the absence of comparative performance advantages for specialised insurers. Further, we find evidence for scale economies in the German life insurance industry.
    Keywords: insurance markets, distribution systems, efficiency analysis
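
    For readers unfamiliar with the method, the sketch below computes input-oriented DEA (CCR) efficiency scores on made-up data; it is not the Berger et al. (1997) framework and does not use the insurer sample studied in the paper.

```python
# Sketch of non-parametric DEA efficiency scores (input-oriented CCR
# model) on made-up data; this is not the Berger et al. (1997) framework
# nor the actual insurer sample, only an illustration of the method.
import numpy as np
from scipy.optimize import linprog

# rows = firms; columns = inputs (e.g. expenses, capital) and outputs (e.g. premiums)
X = np.array([[4.0, 3.0], [6.0, 2.0], [8.0, 5.0], [5.0, 6.0]])   # inputs
Y = np.array([[10.0], [12.0], [11.0], [14.0]])                   # outputs

def dea_score(o, X, Y):
    """Efficiency of firm o: min theta s.t. a nonnegative peer mix uses
    at most theta times firm o's inputs and produces at least its outputs."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                        # minimize theta
    A_in = np.c_[-X[o], X.T]                           # sum(lam*x) <= theta*x_o
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]          # sum(lam*y) >= y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

if __name__ == "__main__":
    for o in range(X.shape[0]):
        print(f"firm {o}: efficiency = {dea_score(o, X, Y):.3f}")
```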

    A Hybrid Optimization Algorithm for Efficient Virtual Machine Migration and Task Scheduling Using a Cloud-Based Adaptive Multi-Agent Deep Deterministic Policy Gradient Technique

    To achieve optimal system performance in the rapidly developing field of cloud computing, efficient resource management, which includes accurate job scheduling and optimized Virtual Machine (VM) migration, is essential. This study proposes a hybrid optimization algorithm for effective virtual machine migration and task scheduling based on the Adaptive Multi-Agent System with Deep Deterministic Policy Gradient (AMS-DDPG) algorithm. At its core is the Iterative Concept of War and Rat Swarm (ICWRS) algorithm, a combination of the War Strategy Optimization (WSO) and Rat Swarm Optimizer (RSO) algorithms. Notably, ICWRS optimizes the system with 93% accuracy, particularly for load balancing, job scheduling, and VM migration. The AMS-DDPG technique, which combines a deterministic policy gradient with deep reinforcement learning, greatly improves the flexibility and efficiency of VM migration and task scheduling, and the adaptive multi-agent approach further enhances decision-making by ensuring the best possible resource allocation. Performance in cloud-based virtualized systems is significantly enhanced by our hybrid method, which combines deep learning and multi-agent coordination. Extensive tests, including a detailed comparison with conventional techniques, verify the effectiveness of the proposed strategy. The findings show significant improvements in system efficiency, shorter job completion times, and optimal resource utilization. The integration of ICWRS within the AMS-DDPG framework reveals untapped potential for synergistic optimization in cloud-based systems, enabling a high-performing and sustainable cloud computing infrastructure that can adapt, through this strategic resource allocation, to the changing needs of modern computing paradigms.
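
    To make the underlying assignment problem concrete, the sketch below minimizes makespan for a hypothetical set of tasks and VMs with a generic population-based search; it is not the ICWRS (WSO + RSO) or AMS-DDPG algorithms proposed in the paper.

```python
# Generic population-based sketch of the task-scheduling problem being
# optimized: assign tasks to VMs to minimize makespan. This illustrates
# the search space only; it is NOT the ICWRS (WSO + RSO) or AMS-DDPG
# algorithms proposed in the paper.
import random

TASK_LENGTHS = [4, 7, 2, 9, 5, 3, 8, 6]   # hypothetical task costs
VM_SPEEDS = [1.0, 2.0, 1.5]               # hypothetical VM speeds

def makespan(assignment):
    """Completion time of the slowest VM under a task->VM assignment."""
    finish = [0.0] * len(VM_SPEEDS)
    for task, vm in zip(TASK_LENGTHS, assignment):
        finish[vm] += task / VM_SPEEDS[vm]
    return max(finish)

def optimize(pop_size=20, generations=200, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(len(VM_SPEEDS)) for _ in TASK_LENGTHS]
           for _ in range(pop_size)]
    best = list(min(pop, key=makespan))
    for _ in range(generations):
        for cand in pop:
            # pull each candidate toward the best by copying one gene,
            # then mutate another gene at random
            cand[rng.randrange(len(cand))] = best[rng.randrange(len(best))]
            cand[rng.randrange(len(cand))] = rng.randrange(len(VM_SPEEDS))
        best = list(min(pop + [best], key=makespan))
    return best, makespan(best)

if __name__ == "__main__":
    assignment, cost = optimize()
    print("best assignment:", assignment, "makespan:", round(cost, 2))
```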