
    Approximately Truthful Multi-Agent Optimization Using Cloud-Enforced Joint Differential Privacy

    Multi-agent coordination problems often require agents to exchange state information in order to reach some collective goal, such as agreement on a final state value. In some cases, opportunistic agents may deceptively report false state values for their own benefit, e.g., to claim a larger portion of shared resources. Motivated by such cases, this paper presents a multi-agent coordination framework which disincentivizes opportunistic misreporting of state information. This paper focuses on multi-agent coordination problems that can be stated as nonlinear programs, with non-separable constraints coupling the agents. In this setting, an opportunistic agent may be tempted to skew the problem's constraints in its favor to reduce its local cost, and this is exactly the behavior we seek to disincentivize. The framework presented uses a primal-dual approach wherein the agents compute primal updates and a centralized cloud computer computes dual updates. All computations performed by the cloud are carried out in a way that enforces joint differential privacy, which adds noise in order to dilute any agent's influence on the value of its cost function in the problem. We show that this dilution deters agents from intentionally misreporting their states to the cloud, and we present bounds on the possible cost reduction an agent can attain through misreporting its state. This work extends our earlier work on incorporating ordinary differential privacy into multi-agent optimization, and we show that this framework can be modified to provide a disincentive for misreporting states to the cloud. Numerical results are presented to demonstrate convergence of the optimization algorithm under joint differential privacy.

    Comment: 17 pages, 3 figures
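
    The primal-dual structure described in the abstract can be illustrated with a minimal sketch: each agent takes a gradient step on its local Lagrangian, while the cloud performs a projected dual ascent step whose constraint evaluation is perturbed with noise before it is used. The toy problem, step sizes, and Laplace noise scale below are illustrative assumptions, not the paper's exact formulation or privacy mechanism.

```python
import numpy as np

# Sketch of a cloud-mediated primal-dual loop with a noisy dual update.
# Toy problem (assumed): two agents minimize f_i(x_i) = (x_i - t_i)^2
# subject to the coupling constraint g(x) = x_1 + x_2 - c <= 0.

rng = np.random.default_rng(0)

targets = np.array([3.0, 2.0])   # each agent's preferred state t_i (assumed)
cap = 4.0                        # shared resource budget c (assumed)

def grad_f(i, x_i):
    """Gradient of agent i's local cost f_i(x_i) = (x_i - t_i)^2."""
    return 2.0 * (x_i - targets[i])

def constraint(x):
    """Coupling constraint g(x) = x_1 + x_2 - c, required to be <= 0."""
    return x.sum() - cap

x = np.zeros(2)        # agents' primal states
lam = 0.0              # dual variable held by the cloud
alpha, beta = 0.05, 0.05   # primal / dual step sizes (assumed)
noise_scale = 0.05         # Laplace scale standing in for the privacy noise

for k in range(2000):
    # Primal step: each agent descends its local Lagrangian f_i(x_i) + lam * x_i.
    for i in range(2):
        x[i] -= alpha * (grad_f(i, x[i]) + lam)

    # Dual step: the cloud perturbs the constraint evaluation with noise
    # before the projected ascent step onto lam >= 0.
    noisy_g = constraint(x) + rng.laplace(scale=noise_scale)
    lam = max(lam + beta * noisy_g, 0.0)

print("states:", x, "constraint value:", constraint(x))
```

    In this sketch, the noise injected into the dual step weakens the link between any single agent's reported state and the dual prices fed back to it, which mirrors the dilution argument the abstract uses to bound the benefit of misreporting.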