"Why didn't you allocate this task to them?" Negotiation-Aware Task Allocation and Contrastive Explanation Generation
Task allocation is an important problem in multi-agent systems. It becomes
more challenging when the team-members are humans with imperfect knowledge
about their teammates' costs and the overall performance metric. While
distributed task-allocation methods let the team-members engage in iterative
dialog to reach a consensus, the process can take a considerable amount of time
and communication. On the other hand, a centralized method that simply outputs
an allocation may result in discontented human team-members who, due to their
imperfect knowledge and limited computation capabilities, perceive the
allocation to be unfair. To address these challenges, we propose a centralized
Artificial Intelligence Task Allocation (AITA) method that simulates a negotiation
and produces a negotiation-aware task allocation that is fair. If a team-member is
unhappy with the proposed allocation, we allow them to question the proposed
allocation using a counterfactual. By reusing parts of the simulated negotiation,
we are able to provide contrastive explanations that reveal only the minimum
information about others' costs needed to refute their foil. With human studies, we
show that (1) the allocation proposed using our method does indeed appear fair
to the majority, and (2) when a counterfactual is raised, explanations
generated are easy to comprehend and convincing. Finally, we empirically study
the effect of different kinds of incompleteness on the explanation-length and
find that underestimation of a teammate's costs often increases it.Comment: First two authors are equal contributor