
    Incremental DCOP Search Algorithms for Solving Dynamic DCOP Problems

    Distributed constraint optimization problems (DCOPs) are well-suited for modeling multi-agent coordination problems. However, most research has focused on developing algorithms for solving static DCOPs. In this paper, we model dynamic DCOPs as sequences of (static) DCOPs with changes from one DCOP to the next one in the sequence. We introduce the ReuseBounds procedure, which can be used by any-space ADOPT and any-space BnB-ADOPT to find cost-minimal solutions for all DCOPs in the sequence faster than by solving each DCOP individually. This procedure allows those agents that are guaranteed to remain unaffected by a change to reuse their lower and upper bounds from the previous DCOP when solving the next one in the sequence. Our experimental results show that the speedup gained from this procedure increases with the amount of memory the agents have available.
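
    The bound-reuse idea above can be illustrated with a short, hypothetical Python sketch (it is not the authors' ReuseBounds procedure, which identifies unaffected agents more carefully): a dynamic DCOP is held as a sequence of static DCOPs, and only agents that touch a changed constraint have their lower and upper bounds reset between consecutive DCOPs. The names Dcop, AgentState, changed_agents and carry_over_bounds are all illustrative.

    # Hedged sketch only: a dynamic DCOP as a sequence of static DCOPs, with
    # bounds carried over for agents that no change can have affected.
    from dataclasses import dataclass

    @dataclass
    class Dcop:
        # maps each constraint scope (a frozenset of agent ids) to its cost table
        constraints: dict

    @dataclass
    class AgentState:
        lower_bound: float = 0.0
        upper_bound: float = float("inf")

    def changed_agents(prev: Dcop, curr: Dcop) -> set:
        """Agents involved in any constraint that was added, removed, or altered."""
        affected = set()
        for scope in set(prev.constraints) | set(curr.constraints):
            if prev.constraints.get(scope) != curr.constraints.get(scope):
                affected |= set(scope)
        return affected

    def carry_over_bounds(prev_states: dict, prev: Dcop, curr: Dcop) -> dict:
        """Reset bounds for affected agents; reuse the bounds of everyone else."""
        affected = changed_agents(prev, curr)
        return {agent: AgentState() if agent in affected else state
                for agent, state in prev_states.items()}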

    Combining search strategies for distributed constraint satisfaction.

    Many real-life problems such as distributed meeting scheduling, mobile frequency allocation and resource allocation can be solved using multi-agent paradigms. Distributed constraint satisfaction problems (DisCSPs) are a framework for describing such problems in terms of related subproblems, called complex local problems (CLPs), which are dispersed over a number of locations, each with its own constraints on the values its variables can take. An agent knows the variables in its CLP plus the variables (and their current values) which are directly related to one of its own variables and the constraints relating them. It knows little about the rest of the problem. Thus, each CLP is solved by an agent which cooperates with other agents to solve the overall problem. Algorithms for solving DisCSPs can be classified as either systematic or local search, with the former being complete and the latter incomplete. The algorithms generally assume that each agent has only one variable, since DisCSPs with CLPs can be handled using virtual agents. However, in large DisCSPs where it is appropriate to trade completeness off against timeliness, systematic search algorithms can be expensive compared to local search algorithms, which generally converge to a solution more quickly (when a solution is found). A major drawback of local search algorithms is getting stuck at local optima, and significant research has focused on heuristics which can be used in an attempt to either escape or avoid local optima. This thesis makes significant contributions to local search algorithms for DisCSPs. Firstly, we present a novel combination of heuristics in DynAPP (Dynamic Agent Prioritisation with Penalties), a distributed synchronous local search algorithm for solving DisCSPs with one variable per agent. DynAPP combines penalties on values and dynamic agent prioritisation heuristics to escape local optima. Secondly, we develop a divide-and-conquer approach that handles DisCSPs with CLPs by exploiting the structure of the problem. The approach prioritises finding variable instantiations which satisfy the constraints between agents, which are often more expensive to satisfy than constraints within an agent. It also exploits concurrency and combines the following search strategies: (i) both systematic and local search; (ii) both centralised and distributed search; and (iii) a modified compilation strategy. We also present Multi-DCA (Divide and Conquer Algorithm for Agents with CLPs), an algorithm that implements the divide-and-conquer approach. DynAPP and Multi-DCA were evaluated on several benchmark problems and compared to the leading algorithms for DisCSPs and DisCSPs with CLPs respectively. The results show that, in the region of difficult problems, combining search heuristics and exploiting problem structure in distributed constraint satisfaction achieve significant benefits (i.e. generally lower computational time and communication costs) over existing competing methods.
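
    As a rough, centralised illustration of the two heuristics DynAPP combines (penalties on values and dynamic agent prioritisation), the following Python sketch runs a penalty-based local search in a single process. It is not the distributed DynAPP protocol; the conflicts function, like every other identifier, is an assumption supplied by the caller and counts the constraints a value would violate.

    import random

    def local_search(variables, domains, conflicts, max_steps=1000):
        """Toy single-process sketch of penalty-based local search with a
        conflict-driven (dynamic) variable ordering; not the distributed DynAPP."""
        assign = {v: random.choice(domains[v]) for v in variables}
        penalty = {(v, d): 0 for v in variables for d in domains[v]}
        for _ in range(max_steps):
            if all(conflicts(v, assign[v], assign) == 0 for v in variables):
                return assign  # every constraint is satisfied
            # dynamic prioritisation: the most conflicted variables move first
            order = sorted(variables, key=lambda v: -conflicts(v, assign[v], assign))
            improved = False
            for v in order:
                best = min(domains[v],
                           key=lambda d: conflicts(v, d, assign) + penalty[(v, d)])
                current = conflicts(v, assign[v], assign) + penalty[(v, assign[v])]
                if conflicts(v, best, assign) + penalty[(v, best)] < current:
                    assign[v] = best
                    improved = True
            if not improved:
                # local optimum: penalise the current values so the search can escape
                for v in variables:
                    penalty[(v, assign[v])] += 1
        return None  # no solution found within the step budget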

    Embedding Preference Elicitation Within the Search for DCOP Solutions

    The Distributed Constraint Optimization Problem (DCOP) formulation is a powerful tool to model cooperative multi-agent problems, especially when the agents are sparsely constrained with one another. A key assumption in this model is that all constraints are fully specified or known a priori, which may not hold in applications where constraints encode preferences of human users. In this thesis, we extend the model to Incomplete DCOPs (I-DCOPs), where some constraints can be partially specified. User preferences for these partially-specified constraints can be elicited during the execution of I-DCOP algorithms, but doing so incurs elicitation costs. Additionally, we propose two parameterized heuristics that can be used in conjunction with Synchronous Branch-and-Bound to solve I-DCOPs. These heuristics allow users to trade off solution quality for faster runtimes and a smaller number of elicitations. They also provide theoretical quality guarantees for problems where elicitations are free. Our model and heuristics thus extend the state of the art in distributed constraint reasoning to better model and solve distributed agent-based applications with user preferences.
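
    A minimal sketch of the I-DCOP idea described above, under the assumption that unknown constraint entries are stored as None and elicited on demand at a fixed cost; it is not the thesis' Synchronous Branch-and-Bound heuristics, and all identifiers (UNKNOWN, assignment_cost, elicit) are hypothetical.

    UNKNOWN = None  # placeholder for a preference the user has not yet specified

    def assignment_cost(assignment, constraints, elicit, elicitation_cost=1.0):
        """constraints maps a scope (tuple of variables) to a table
        {value tuple: cost or UNKNOWN}; elicit(scope, values) queries the user."""
        total, elicitations = 0.0, 0
        for scope, table in constraints.items():
            values = tuple(assignment[v] for v in scope)
            cost = table.get(values, UNKNOWN)
            if cost is UNKNOWN:
                cost = elicit(scope, values)  # ask for the missing preference
                table[values] = cost          # cache it so it is asked only once
                elicitations += 1
            total += cost
        # return the solution cost together with the elicitation effort incurred
        return total, elicitations * elicitation_cost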

    Application of Techniques for MAP Estimation to Distributed Constraint Optimization Problem

    The problem of efficiently finding near-optimal decisions in multi-agent systems has become increasingly important because of the growing number of multi-agent applications with large numbers of agents operating in real-world environments. In these systems, agents are often subject to tight resource constraints and have only local views. When agents have non-global constraints, each of which is independent, the problem can be formalized as a distributed constraint optimization problem (DCOP). The DCOP is closely associated with the problem of inference on graphical models, and many approaches from the inference literature have been adopted to solve DCOPs. We focus on the Max-Sum algorithm and the Action-GDL algorithm, which are DCOP variants of the popular inference algorithms Max-Product and Belief Propagation, respectively. The Max-Sum and Action-GDL algorithms are well-suited for multi-agent systems because they are distributed by nature and require less communication than most DCOP algorithms. However, the resource requirements of these algorithms are still high for some multi-agent domains, and various aspects of the algorithms have not been well studied for use in general multi-agent settings. This thesis is concerned with a variety of issues in applying the Max-Sum algorithm and the Action-GDL algorithm to general multi-agent settings. Firstly, we develop a hybrid algorithm of ADOPT and Action-GDL in order to overcome the communication complexity of DCOPs. Secondly, we extend the Max-Sum algorithm to operate more efficiently in more general multi-agent settings in which computational complexity is high, providing an algorithm that has a lower expected computational complexity for DCOPs even with n-ary constraints. Finally, in most DCOP literature, a one-to-one mapping between variables and agents is assumed. However, in real applications, many-to-one mappings are prevalent and can also be beneficial in terms of communication and hardware cost in situations where agents are acting as independent computing units. We consider how to exploit such mappings in order to increase efficiency.
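
    To give a flavour of the message passing these algorithms rely on, here is a minimal min-sum update for a single binary cost function (costs are minimised, so the max-sum of utilities becomes a min-sum of costs). It is an illustration only, not the hybrid or extended algorithms of the thesis, and the example numbers are invented.

    def factor_to_variable(f, dom_x, dom_y, msg_from_y):
        """Message from the cost function f(x, y) to variable x:
        m(x) = min over y of [ f(x, y) + msg_from_y(y) ]."""
        return {x: min(f(x, y) + msg_from_y[y] for y in dom_y) for x in dom_x}

    def variable_to_factor(incoming, dom_x, exclude):
        """Message from variable x to one factor: the sum of all other factors'
        messages, evaluated pointwise over x's domain."""
        return {x: sum(m[x] for name, m in incoming.items() if name != exclude)
                for x in dom_x}

    # Tiny example: one constraint that prefers x != y.
    dom = [0, 1]
    f = lambda x, y: 0 if x != y else 5
    msg_y = {0: 0.0, 1: 2.0}                       # an arbitrary message held for y
    print(factor_to_variable(f, dom, dom, msg_y))  # {0: 2.0, 1: 0.0}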

    Constructing a unifying theory of dynamic programming DCOP algorithms via the generalized distributive law

    In this paper we propose a novel message-passing algorithm, the so-called Action-GDL, as an extension to the generalized distributive law (GDL) to efficiently solve DCOPs. Action-GDL provides a unifying perspective of several dynamic programming DCOP algorithms that are based on GDL, such as the DPOP and DCPOP algorithms. We empirically show how Action-GDL, using a novel distributed post-processing heuristic, can outperform DCPOP, and by extension DPOP, even when the latter uses the best arrangement provided by multiple state-of-the-art heuristics. Work funded by IEA (TIN2006-15662-C02-01), AT (CONSOLIDER CSD2007-0022, INGENIO 2010) and EVE (TIN2009-14702-C02-01 and 02). Vinyals is supported by the Spanish Ministry of Education (FPU grant AP2006-04636). Peer Reviewed

    Multi-objective Decentralised Coordination for Teams of Robotic Agents

    This thesis introduces two novel coordination mechanisms for a team of multiple autonomous decision makers, represented as autonomous robotic agents. Such techniques aim to improve the capabilities of robotic agents, such as unmanned aerial or ground vehicles (UAVs and UGVs), when deployed in real-world operations. In particular, the work reported in this thesis focuses on improving the decision making of teams of such robotic agents when deployed in an unknown, and dynamically changing, environment to perform search and rescue operations for lost targets. This problem is well known and studied within both academia and industry, and coordination mechanisms for controlling such teams have been studied in both the robotics and the multi-agent systems communities. Within this setting, our first contribution solves a canonical target search problem, in which a team of UAVs is deployed in an environment to search for a lost target. Specifically, we present a novel decentralised coordination approach for teams of UAVs based on the max-sum algorithm. In more detail, we represent each agent as a UAV and study the applicability of the max-sum algorithm, a decentralised approximate message-passing algorithm, to coordinate a team of multiple UAVs for target search. We benchmark our approach against three state-of-the-art approaches within a simulation environment. The results show that coordination with the max-sum algorithm outperforms a best response algorithm, which represents the state of the art in the coordination of UAVs for search, by up to 26%; an implicitly coordinated approach, where the coordination arises from the agents making decisions based on a common belief, by up to 34%; and finally a non-coordinated approach by up to 68%. These results indicate that the max-sum algorithm has the potential to be applied in complex systems operating in dynamic environments. We then move on to tackle coordination in which the team has more than one objective to achieve (e.g. maximise the covered space of the search area whilst minimising the amount of energy consumed by each UAV). To address this, we present, as our second contribution, an extension of the max-sum algorithm to compute bounded solutions for problems involving multiple objectives. More precisely, we develop the bounded multi-objective max-sum algorithm (B-MOMS), a novel decentralised coordination algorithm able to solve problems involving multiple objectives while providing guarantees on the solutions it recovers. B-MOMS extends the standard max-sum algorithm to compute bounded approximate solutions to multi-objective decentralised constraint optimisation problems (MO-DCOPs). Moreover, we prove the optimality of B-MOMS in acyclic constraint graphs, and derive problem-dependent bounds on its approximation ratio when these graphs contain cycles. Finally, we empirically evaluate its performance on a multi-objective extension of the canonical graph colouring problem. In so doing, we demonstrate that, for the settings we consider, the approximation ratio never exceeds 2, and is typically less than 1.5 for less-constrained graphs. Moreover, the runtime required by B-MOMS on the problem instances we considered never exceeds 30 minutes, even for maximally constrained graphs with one hundred agents.
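
    One building block behind multi-objective coordination of this kind is Pareto-dominance pruning of cost vectors, sketched below in Python purely as an illustration; it is not B-MOMS itself, and the example cost vectors are invented.

    def dominates(a, b):
        """a dominates b if it is no worse on every objective and strictly
        better on at least one (these are costs, so lower is better)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(vectors):
        """Keep only the cost vectors that no other vector dominates."""
        return [v for v in vectors
                if not any(dominates(w, v) for w in vectors if w != v)]

    # e.g. (distance travelled, energy used) for three candidate joint plans
    print(pareto_front([(3, 7), (4, 5), (6, 6)]))  # [(3, 7), (4, 5)]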