Bounded Decentralised Coordination over Multiple Objectives
We propose the bounded multi-objective max-sum algorithm (B-MOMS), the first decentralised coordination algorithm for multi-objective optimisation problems. B-MOMS extends the max-sum message-passing algorithm for decentralised coordination to compute bounded approximate solutions to multi-objective decentralised constraint optimisation problems (MO-DCOPs). Specifically, we prove the optimality of B-MOMS in acyclic constraint graphs, and derive problem-dependent bounds on its approximation ratio when these graphs contain cycles. Furthermore, we empirically evaluate its performance on a multi-objective extension of the canonical graph colouring problem. In so doing, we demonstrate that, for the settings we consider, the approximation ratio never exceeds 2, and is typically less than 1.5 for less-constrained graphs. Moreover, the runtime required by B-MOMS on the problem instances we considered never exceeds 30 minutes, even for maximally constrained graphs with agents. Thus, B-MOMS brings the problem of multi-objective optimisation well within the boundaries of the limited capabilities of embedded agents.
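The single-objective max-sum message flow that B-MOMS generalises can be sketched on a toy two-agent colouring instance. Everything below (the domain, the pairwise factor, the unary utilities) is a hypothetical illustration; B-MOMS itself propagates vector-valued utilities with bounds rather than scalars.

```python
# A toy single-objective max-sum pass on an acyclic factor graph:
# two agents pick colours, one pairwise factor rewards differing
# colours, and each agent holds a made-up unary utility. B-MOMS
# runs the same message flow with vector-valued utilities.

domain = [0, 1, 2]  # three colours

def factor(xi, xj):
    # Pairwise utility: reward differing colours (graph-colouring flavour).
    return 1.0 if xi != xj else 0.0

# Hypothetical unary preferences for the two agents.
u1 = {0: 0.3, 1: 0.0, 2: 0.1}
u2 = {0: 0.0, 1: 0.2, 2: 0.0}

# Variable-to-factor messages: for a leaf variable, just its unary utility.
q1 = dict(u1)
q2 = dict(u2)

# Factor-to-variable messages: maximise over the other variable.
r1 = {x1: max(factor(x1, x2) + q2[x2] for x2 in domain) for x1 in domain}
r2 = {x2: max(factor(x1, x2) + q1[x1] for x1 in domain) for x2 in domain}

# Each agent decodes its assignment from local utility plus incoming message.
x1_star = max(domain, key=lambda x: u1[x] + r1[x])
x2_star = max(domain, key=lambda x: u2[x] + r2[x])
```

On a tree-structured (acyclic) graph such as this, one inward and one outward message pass suffices for an exact optimum, which is the setting in which B-MOMS is provably optimal.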
Adaptive Ranking Based Constraint Handling for Explicitly Constrained Black-Box Optimization
A novel explicit constraint handling technique for the covariance matrix
adaptation evolution strategy (CMA-ES) is proposed. The proposed constraint
handling exhibits two invariance properties. One is the invariance to arbitrary
element-wise increasing transformation of the objective and constraint
functions. The other is the invariance to arbitrary affine transformation of
the search space. The proposed technique virtually transforms a constrained
optimization problem into an unconstrained optimization problem by considering
an adaptive weighted sum of the ranking of the objective function values and
the ranking of the constraint violations, measured by the Mahalanobis
distance between each candidate solution and its projection onto the boundary of
the constraints. Simulation results are presented and show that the CMA-ES with
the proposed constraint handling exhibits affine invariance and performs
similarly to the CMA-ES on unconstrained counterparts.
Comment: 9 pages
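A minimal sketch of the ranking-based fitness described above, assuming a fixed weight `alpha` and using raw violation values in place of the Mahalanobis measure (the paper adapts the weight online and derives the metric from the CMA covariance matrix):

```python
import numpy as np

def ranks(values):
    # Rank 0 = best (smallest); ties broken by order of appearance.
    order = np.argsort(values, kind="stable")
    r = np.empty(len(values), dtype=int)
    r[order] = np.arange(len(values))
    return r

def combined_ranking(f_vals, violations, alpha=0.5):
    # Weighted sum of objective ranks and constraint-violation ranks.
    # Using ranks rather than raw values gives invariance to any
    # element-wise increasing transformation of f and of the violation
    # measure, which is the first invariance property cited above.
    return alpha * ranks(f_vals) + (1 - alpha) * ranks(violations)

# Hypothetical population of four candidates.
f = np.array([3.0, 1.0, 2.0, 5.0])
v = np.array([0.0, 2.0, 0.0, 1.0])   # 0 means feasible
score = combined_ranking(f, v)
best = int(np.argmin(score))
```

Because selection in CMA-ES depends only on the ordering of candidates, feeding it this combined ranking effectively turns the constrained problem into an unconstrained one, as the abstract describes.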
Constrained Consensus
We present distributed algorithms that can be used by multiple agents to
align their estimates with a particular value over a network with time-varying
connectivity. Our framework is general in that this value can represent a
consensus value among multiple agents or an optimal solution of an optimization
problem, where the global objective function is a combination of local agent
objective functions. Our main focus is on constrained problems where the
estimate of each agent is restricted to lie in a different constraint set.
To highlight the effects of constraints, we first consider a constrained
consensus problem and present a distributed "projected consensus algorithm"
in which agents combine their local averaging operation with projection on
their individual constraint sets. This algorithm can be viewed as a version of
an alternating projection method with weights that are varying over time and
across agents. We establish convergence and convergence rate results for the
projected consensus algorithm. We next study a constrained optimization problem
for optimizing the sum of local objective functions of the agents subject to
the intersection of their local constraint sets. We present a distributed
"projected subgradient algorithm" which involves each agent performing a
local averaging operation, taking a subgradient step to minimize its own
objective function, and projecting on its constraint set. We show that, with an
appropriately selected stepsize rule, the agent estimates generated by this
algorithm converge to the same optimal solution for the cases when the weights
are constant and equal, and when the weights are time-varying but all agents
have the same constraint set.
Comment: 35 pages. Included additional results, removed two subsections, added references, fixed typos
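The projected consensus update can be sketched as follows, assuming scalar estimates, interval constraint sets, and a fixed doubly stochastic weight matrix (the paper allows time-varying weights and general closed convex sets):

```python
import numpy as np

def project(x, lo, hi):
    # Euclidean projection onto an interval constraint set.
    return min(max(x, lo), hi)

# Three agents with interval constraint sets whose intersection is [2, 3].
sets = [(0.0, 3.0), (2.0, 5.0), (1.0, 4.0)]
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])  # doubly stochastic averaging weights

x = np.array([-5.0, 10.0, 7.0])     # arbitrary initial estimates
for _ in range(200):
    x = W @ x                                  # local averaging
    x = np.array([project(xi, lo, hi)          # project onto own set
                  for xi, (lo, hi) in zip(x, sets)])

# The estimates agree and lie in the intersection of all the sets.
```

This is the alternating-projection viewpoint mentioned above: each iteration interleaves an averaging (consensus) operator with the agents' individual projection operators.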
Online Knapsack Problem under Expected Capacity Constraint
The online knapsack problem is considered, where items arrive sequentially,
each with two attributes: value and weight. Each arriving item must be
irrevocably accepted or rejected on its arrival. The objective is to
maximize the sum of the values of the accepted items such that the sum of
their weights stays below a budget/capacity. Conventionally, a hard
budget/capacity constraint is considered, for which a variety of results
are available. In modern
applications, e.g., in wireless networks, data centres, cloud computing, etc.,
enforcing the capacity constraint in expectation is sufficient. With this
motivation, we consider the knapsack problem with an expected capacity
constraint. For the special case of the knapsack problem, called the secretary
problem, where the weight of each item is unity, we propose an algorithm whose
probability of selecting any one of the optimal items is equal to and
provide a matching lower bound. For the general knapsack problem, we propose an
algorithm whose competitive ratio is shown to be that is significantly
better than the best known competitive ratio of for the knapsack
problem with the hard capacity constraint.
Comment: To appear in IEEE INFOCOM 2018, April 2018, Honolulu H
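Since the abstract elides the exact selection probability, the sketch below shows only the classical 1/e observe-then-commit rule for the hard-capacity (capacity-one) secretary problem, the baseline that the expected-capacity variant improves on; the value range and trial count are illustrative:

```python
import math
import random

def secretary(values):
    # Observe the first n/e items without accepting, then irrevocably
    # accept the first later item that beats everything observed.
    n = len(values)
    cutoff = max(1, int(n / math.e))
    best_seen = max(values[:cutoff])
    for v in values[cutoff:]:
        if v > best_seen:
            return v           # accept on arrival, irrevocably
    return values[-1]          # forced to take the last item

random.seed(0)
trials, n, wins = 10_000, 50, 0
for _ in range(trials):
    vals = random.sample(range(1000), n)   # distinct values, random order
    if secretary(vals) == max(vals):
        wins += 1
p = wins / trials   # empirically close to 1/e ~ 0.368
```

The hard constraint forces this all-or-nothing commit; relaxing the capacity to hold only in expectation is what lets the paper's algorithm randomise the acceptance rule and improve the guarantee.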
Distributed Multi-Agent Optimization with State-Dependent Communication
We study distributed algorithms for solving global optimization problems in
which the objective function is the sum of local objective functions of agents
and the constraint set is given by the intersection of local constraint sets of
agents. We assume that each agent knows only his own local objective function
and constraint set, and exchanges information with the other agents over a
randomly varying network topology to update his information state. We assume a
state-dependent communication model over this topology: communication is
Markovian with respect to the states of the agents and the probability with
which the links are available depends on the states of the agents. In this
paper, we study a projected multi-agent subgradient algorithm under
state-dependent communication. The algorithm involves each agent performing a
local averaging to combine his estimate with the other agents' estimates,
taking a subgradient step along his local objective function, and projecting
the estimates on his local constraint set. The state-dependence of the
communication introduces significant challenges and couples the study of
information exchange with the analysis of subgradient steps and projection
errors. We first show that the multi-agent subgradient algorithm, when used with
a constant stepsize, may cause the agent estimates to diverge with
probability one. Under some assumptions on the stepsize sequence, we provide
convergence rate bounds on a "disagreement metric" between the agent estimates.
Our bounds are time-nonhomogeneous in the sense that they depend on the initial
starting time. Despite this, we show that agent estimates reach an almost sure
consensus and converge to the same optimal solution of the global optimization
problem with probability one under different assumptions on the local
constraint sets and the stepsize sequence.
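The three-step update (local averaging, subgradient step, projection) can be sketched as follows, assuming a fixed fully connected network in place of the paper's state-dependent random links, quadratic local objectives, and a diminishing 1/k stepsize:

```python
import numpy as np

# Hypothetical setup: local objectives f_i(x) = (x - t_i)^2, so the
# global optimum of sum_i f_i is the mean of the targets (here 4).
targets = [0.0, 4.0, 8.0]
sets = [(-10.0, 10.0)] * 3          # identical interval constraint sets
W = np.full((3, 3), 1.0 / 3.0)      # uniform averaging weights

def subgrad(x, t):
    return 2.0 * (x - t)            # gradient of (x - t)^2

x = np.array([9.0, -9.0, 5.0])      # initial estimates
for k in range(1, 2001):
    step = 1.0 / k                  # diminishing stepsize
    v = W @ x                       # 1) combine with others' estimates
    v = v - step * np.array([subgrad(vi, t)          # 2) subgradient step
                             for vi, t in zip(v, targets)])
    x = np.array([np.clip(vi, lo, hi)                # 3) project onto own set
                  for vi, (lo, hi) in zip(v, sets)])

# Estimates reach consensus near argmin sum_i (x - t_i)^2 = 4.
```

With a constant stepsize the subgradient perturbations would not die out, which is the intuition behind the divergence result stated above; the diminishing stepsize is what drives both consensus and optimality here.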