
    Technical Report: Distributed Asynchronous Large-Scale Mixed-Integer Linear Programming via Saddle Point Computation

    We solve large-scale mixed-integer linear programs (MILPs) via distributed asynchronous saddle point computation. This work is motivated by the ability of MILPs to model problems in multi-agent autonomy, e.g., task assignment problems and trajectory planning with collision avoidance constraints in multi-robot systems. To solve a MILP, we relax it with a nonlinear program approximation whose accuracy tightens as the number of agents increases relative to the number of coupled constraints. Next, we form an equivalent Lagrangian saddle point problem, and then we regularize the Lagrangian in both the primal and dual spaces to create a regularized Lagrangian that is strongly-convex-strongly-concave. We then develop a parallelized algorithm to compute saddle points of the regularized Lagrangian. This algorithm partitions problems into blocks, which are either scalars or sub-vectors of the primal or dual decision variables, and it is shown to tolerate asynchrony in the computations and communications of primal and dual variables. Suboptimality bounds and convergence rates are presented for convergence to a saddle point. The suboptimality bound accounts for (i) the regularization error induced by regularizing the Lagrangian and (ii) the suboptimality gap between solutions to the original MILP and its relaxed form. Simulation results illustrate these theoretical developments in practice and show that relaxation and regularization together have only a mild impact on the quality of the solutions obtained. (14 pages, 2 figures)
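
    The following is a minimal sketch (not the paper's implementation) of the regularization-and-saddle-point idea described above: a toy MILP is relaxed by dropping integrality, its Lagrangian is regularized in both the primal and dual spaces, and a projected gradient descent-ascent iteration approximates the resulting saddle point. The problem data, regularization weights, and step size are illustrative assumptions, and the block partitioning used by the paper's distributed algorithm is collapsed into full-vector updates here.

    ```python
    import numpy as np

    # Toy problem (assumed for illustration): relax the MILP
    #     minimize  c'x   s.t.  A x <= b,  x in {0,1}^2
    # by replacing the integrality constraint with the box 0 <= x <= 1.
    c = np.array([-1.0, -2.0])
    A = np.array([[1.0, 1.0]])
    b = np.array([1.5])

    alpha, beta = 0.1, 0.1   # primal / dual regularization weights (assumed)
    step = 0.05              # gradient step size (assumed)

    x = np.zeros(2)          # primal iterate
    lam = np.zeros(1)        # dual iterate (multiplier for A x <= b)

    for _ in range(2000):
        # Regularized Lagrangian:
        #   L_r(x, lam) = c'x + lam'(A x - b) + (alpha/2)||x||^2 - (beta/2)||lam||^2
        grad_x = c + A.T @ lam + alpha * x            # partial gradient in x
        grad_lam = A @ x - b - beta * lam             # partial gradient in lam
        x = np.clip(x - step * grad_x, 0.0, 1.0)      # projected descent on the primal block
        lam = np.maximum(lam + step * grad_lam, 0.0)  # projected ascent on the dual block

    print("approximate regularized saddle point:", x, lam)
    ```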

    Towards Totally Asynchronous Primal-Dual Convex Optimization in Blocks

    We present a parallelized primal-dual algorithm for solving constrained convex optimization problems. The algorithm is "block-based," in that vectors of primal and dual variables are partitioned into blocks, each of which is updated only by a single processor. We consider four possible forms of asynchrony: in updates to primal variables, in updates to dual variables, in communications of primal variables, and in communications of dual variables. We explicitly construct a family of counterexamples to show that asynchronous communication of dual variables must be ruled out, though the other three forms of asynchrony are permitted, all without requiring bounds on delays. A first-order update law is developed and shown to be robust to asynchrony. We then derive convergence rates to a Lagrangian saddle point in terms of the operations agents execute, without specifying any timing or pattern with which they must be executed. These convergence rates contain a synchronous algorithm as a special case and are used to quantify an "asynchrony penalty." Numerical results illustrate these developments.
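
    As a rough illustration of the block-based setup (a sketch under assumed data, not the authors' code), the loop below assigns each primal block and the dual variable to a notional processor for a small quadratic program with one coupling constraint. Primal values reach the dual update only through a fixed delay, mimicking asynchronous primal communication, while the dual value is used immediately, reflecting the restriction on dual communication described in the abstract. The objective, step size, and delay length are assumptions.

    ```python
    import numpy as np
    from collections import deque

    # Toy problem (assumed): minimize (1/2)||x - t||^2  subject to  x1 + x2 <= 1,
    # whose saddle point is x = (0.5, 0.5), lam = 0.5.
    t = np.array([1.0, 1.0])
    step, delay = 0.05, 5              # step size and communication delay (assumed)

    x = np.zeros(2)                    # primal blocks (one per notional processor)
    lam = 0.0                          # dual variable (owned by a third processor)
    stale = deque([x.copy()] * delay)  # delayed primal copies seen by the dual processor

    for _ in range(2000):
        # Primal processors: gradient step on their own block using the current dual.
        x = x - step * ((x - t) + lam)
        # Dual processor: projected ascent using a stale copy of the primal variables.
        x_seen = stale.popleft()
        lam = max(0.0, lam + step * (x_seen.sum() - 1.0))
        stale.append(x.copy())

    print("primal:", x, "dual:", lam)  # should approach roughly (0.5, 0.5) and 0.5
    ```

    Swapping the roles, i.e., delaying lam while keeping x current, is the kind of dual-communication asynchrony that the counterexamples in the abstract rule out.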

    Totally Asynchronous Primal-Dual Convex Optimization in Blocks

    We present a parallelized primal-dual algorithm for solving constrained convex optimization problems. The algorithm is "block-based," in that vectors of primal and dual variables are partitioned into blocks, each of which is updated only by a single processor. We consider four possible forms of asynchrony: in updates to primal variables, in updates to dual variables, in communications of primal variables, and in communications of dual variables. We construct a family of explicit counterexamples to show that asynchronous communication of dual variables must be eliminated, though the other three forms of asynchrony are permitted, all without requiring bounds on delays. A first-order primal-dual update law is developed and shown to be robust to asynchrony. We then derive convergence rates to a Lagrangian saddle point in terms of the operations agents execute, without specifying any timing or pattern with which they must be executed. These convergence rates include an "asynchrony penalty," which we quantify and present ways to mitigate. Numerical results illustrate these developments. (arXiv admin note: text overlap with arXiv:2004.0514)
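
    To make the "asynchrony penalty" concrete, the sketch below reruns the delayed primal-dual iteration from the previous snippet for several communication delays and reports the remaining distance to the saddle point after a fixed number of iterations, giving a crude empirical analogue of the penalty. The toy problem, delays, step size, and iteration count are illustrative assumptions, not the paper's experiments.

    ```python
    import numpy as np
    from collections import deque

    def run(delay, iters=3000, step=0.02):
        """Delayed primal-dual iteration on the toy problem from the previous sketch;
        returns the distance to its known saddle point (0.5, 0.5, 0.5)."""
        t = np.array([1.0, 1.0])
        x, lam = np.zeros(2), 0.0
        stale = deque([x.copy()] * max(delay, 1))
        for _ in range(iters):
            x = x - step * ((x - t) + lam)            # primal blocks, current dual
            x_seen = stale.popleft()                  # stale primal copy
            lam = max(0.0, lam + step * (x_seen.sum() - 1.0))
            stale.append(x.copy())
        return np.linalg.norm(np.append(x, lam) - np.array([0.5, 0.5, 0.5]))

    for d in (1, 5, 10, 20):  # communication delays to compare (assumed values)
        print(f"delay {d:2d}: distance to saddle point after 3000 iterations = {run(d):.2e}")
    ```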

    Asynchronous Multiagent Primal-Dual Optimization

    No full text