Solving DCOPs with Distributed Large Neighborhood Search
The field of Distributed Constraint Optimization has gained momentum in
recent years, thanks to its ability to address various applications related to
multi-agent cooperation. Nevertheless, solving Distributed Constraint
Optimization Problems (DCOPs) optimally is NP-hard. Therefore, in large-scale,
complex applications, incomplete DCOP algorithms are necessary. Current
incomplete DCOP algorithms suffer from one or more of the following limitations:
they (a) find local minima without providing quality guarantees; (b) provide
loose quality assessment; or (c) are unable to benefit from the structure of
the problem, such as domain-dependent knowledge and hard constraints.
Therefore, capitalizing on strategies from the centralized constraint solving
community, we propose a Distributed Large Neighborhood Search (D-LNS) framework
to solve DCOPs. The proposed framework (with its novel repair phase) provides
guarantees on solution quality by iteratively refining upper and lower bounds,
and can exploit domain-dependent structures. Our
experimental results show that D-LNS outperforms other incomplete DCOP
algorithms on both structured and unstructured problem instances.
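To make the destroy/repair loop concrete, here is a minimal, centralized LNS sketch (the name `lns_minimize`, the greedy repair rule, and all parameters are illustrative assumptions, not the paper's D-LNS, which runs distributedly across agents and additionally refines lower bounds):

```python
import random

def lns_minimize(variables, domains, cost_fn, iters=100, destroy_k=2, seed=0):
    """Minimal Large Neighborhood Search: repeatedly 'destroy' (unfix) a few
    variables and 'repair' (greedily reassign) them, keeping the best
    assignment found; its cost acts as a refined upper bound."""
    rng = random.Random(seed)
    current = {v: domains[v][0] for v in variables}   # naive initial assignment
    best, best_cost = dict(current), cost_fn(current)
    for _ in range(iters):
        # Destroy: choose a small neighborhood of variables to re-optimize.
        for v in rng.sample(variables, min(destroy_k, len(variables))):
            # Repair: greedily pick the value minimizing the full cost,
            # given the current values of all other variables.
            current[v] = min(domains[v],
                             key=lambda d: cost_fn({**current, v: d}))
        if cost_fn(current) < best_cost:
            best, best_cost = dict(current), cost_fn(current)
    return best, best_cost
```

On a toy quadratic cost over two variables this converges within a couple of destroy/repair rounds; the incumbent cost is the upper-bound side of the bounds D-LNS maintains (the lower-bound side is omitted in this sketch).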
AED: An Anytime Evolutionary DCOP Algorithm
Evolutionary optimization is a generic population-based metaheuristic that
can be adapted to solve a wide variety of optimization problems and has proven
very effective for combinatorial optimization problems. However, the potential
of this metaheuristic has not been utilized in Distributed Constraint
Optimization Problems (DCOPs), a well-known class of combinatorial optimization
problems prevalent in Multi-Agent Systems. In this paper, we present a novel
population-based algorithm, Anytime Evolutionary DCOP (AED), that uses
evolutionary optimization to solve DCOPs. In AED, the agents cooperatively
construct an initial set of random solutions and gradually improve them through
a new mechanism that considers an optimistic approximation of local benefits.
Moreover, we present a new anytime update mechanism for AED that identifies the
best among a distributed set of candidate solutions and notifies all the agents
when a new best is found. In our theoretical analysis, we prove that AED is
anytime. Finally, we present empirical results indicating AED outperforms the
state-of-the-art DCOP algorithms in terms of solution quality.
Comment: 9 pages, 6 figures, 2 tables. Appeared in the proceedings of the 19th
International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS
2020).
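The population-plus-anytime-update idea can be illustrated with a small, centralized sketch (everything here, including `anytime_evolve` and the mutation and replacement rules, is a simplified assumption, not AED's distributed mechanism or its optimistic local-benefit approximation):

```python
import random

def anytime_evolve(domains, cost_fn, pop_size=8, iters=200, seed=1):
    """Toy anytime evolutionary search: keep a population of assignments,
    mutate them, and record the best-ever solution so reported quality
    never degrades."""
    rng = random.Random(seed)
    vars_ = list(domains)
    pop = [{v: rng.choice(domains[v]) for v in vars_} for _ in range(pop_size)]
    best = min(pop, key=cost_fn)
    history = [cost_fn(best)]          # anytime trace: must be non-increasing
    for _ in range(iters):
        child = dict(rng.choice(pop))
        v = rng.choice(vars_)
        child[v] = rng.choice(domains[v])              # point mutation
        # Replace the worst population member if the child is at least as good.
        worst = max(range(pop_size), key=lambda i: cost_fn(pop[i]))
        if cost_fn(child) <= cost_fn(pop[worst]):
            pop[worst] = child
        if cost_fn(child) < cost_fn(best):             # anytime update
            best = child
        history.append(cost_fn(best))
    return best, history
```

The returned `history` is the anytime trace: because the incumbent is only ever replaced by a strictly better solution, reported quality never worsens, which is the property the paper proves for AED (there, via a distributed notification mechanism rather than a shared variable).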
A generic domain pruning technique for GDL-based DCOP algorithms in cooperative multi-agent systems
Generalized Distributive Law (GDL) based message passing algorithms, such as Max-Sum and Bounded Max-Sum, are often used to solve distributed constraint optimization problems in cooperative multi-agent systems (MAS). However, scalability becomes a challenge when these algorithms have to deal with constraint functions of high arity or variables with a large domain size. In either case, the ensuing exponential growth of the search space can make such algorithms computationally infeasible in practice. To address this issue, we develop a generic domain pruning technique that enables these algorithms to be effectively applied to larger and more complex problems. We theoretically prove that the pruned search space obtained by our approach does not affect the outcome of the algorithms. Moreover, our empirical evaluation illustrates a significant reduction of the search space, ranging from 33% to 81%, without affecting the solution quality of the algorithms, compared to the state of the art.
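The flavour of sound, bound-based domain pruning can be sketched as follows (a hypothetical, simplified rule, not the paper's exact technique): a value can be dropped from a variable's domain if its best-case utility is strictly below the worst-case utility guaranteed by some other value, since such a value can never be part of a maximizing assignment.

```python
def prune_dominated(domain, others, util):
    """Bound-based pruning for a maximization message: keep only values whose
    optimistic (best-case) utility reaches the pessimistic (worst-case)
    utility guaranteed by the best 'safe' value. Dropping the rest cannot
    change the maximum the message-passing algorithm computes."""
    best_case = {d: max(util(d, o) for o in others) for d in domain}
    worst_case = {d: min(util(d, o) for o in others) for d in domain}
    guarantee = max(worst_case.values())
    return [d for d in domain if best_case[d] >= guarantee]
```

The argmax value always survives this filter (its best case upper-bounds the true maximum, which in turn is at least the guarantee), so the pruned computation returns the same result over a smaller search space, mirroring the correctness property the paper proves.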
Reactive approach for automating exploration and exploitation in ant colony optimization
Ant colony optimization (ACO) algorithms can be used to solve NP-hard problems. Exploration and exploitation are the main mechanisms for controlling search within ACO, and reactive search is an alternative technique for keeping these mechanisms dynamic. However, the ACO-based reactive search technique has three problems. First, the memory model that records previous search regions does not completely transfer the neighborhood structures to the next iteration, which leads to arbitrary restarts and premature local search. Second, the exploration indicator is not robust, owing to the differing magnitudes of the distance matrices for the current population. Third, the parameter control techniques that use exploration indicators in their feedback process do not account for indicator robustness. A reactive ant colony optimization (RACO) algorithm is proposed to overcome these limitations of reactive search. RACO consists of three main components. The first is a reactive max-min ant system algorithm for recording the neighborhood structures. The second is a statistical machine learning mechanism, named ACOustic, that produces a robust exploration indicator. The third is an ACO-based adaptive parameter selection algorithm that solves the parameterization problem by relying on quality, exploration, and unified criteria when assigning rewards to promising parameters. The performance of RACO is evaluated on traveling salesman and quadratic assignment problems and compared with eight metaheuristic techniques in terms of success rate, the Wilcoxon signed-rank test, the Chi-square test, and relative percentage deviation. Experimental results show that RACO is superior to the eight metaheuristic techniques, confirming that RACO offers a new direction for solving optimization problems.
RACO can thus provide a dynamic exploration and exploitation mechanism, set parameter values that enable an efficient search, quantify the amount of exploration an ACO algorithm performs, and detect stagnation.
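For reference, a bare-bones ACO skeleton shows where exploration and exploitation enter the search (a generic textbook-style sketch, not RACO itself; `aco_tsp` and its parameter names are illustrative assumptions):

```python
import math, random

def aco_tsp(dist, ants=10, iters=50, alpha=1.0, beta=2.0, rho=0.5, seed=0):
    """Basic ant colony optimization for the TSP. alpha weights pheromone
    (exploitation of past good tours), beta weights heuristic desirability
    (greedy exploration of short edges), and rho is the evaporation rate."""
    rng = random.Random(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]            # pheromone trails
    best_tour, best_len = None, math.inf
    for _ in range(iters):
        for _ in range(ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                # Probabilistic next-city choice: pheromone ** alpha times
                # inverse distance ** beta.
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then reinforce only the best tour (max-min flavour).
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len
```

Raising `alpha` biases ants toward previously reinforced edges (exploitation), raising `beta` biases them toward greedily short edges, and `rho` controls how quickly old pheromone, and hence old search regions, are forgotten: precisely the exploration/exploitation balance that RACO adapts reactively instead of fixing by hand.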