
    Max-Weight Revisited: Sequences of Non-Convex Optimisations Solving Convex Optimisations

    We investigate the connections between max-weight approaches and dual subgradient methods for convex optimisation. We find that strong connections exist and we establish a clean, unifying theoretical framework that includes both max-weight and dual subgradient approaches as special cases. Our analysis uses only elementary methods, and is not asymptotic in nature. It also allows us to establish an explicit and direct connection between discrete queue occupancies and Lagrange multipliers.
    Comment: convex optimisation, max-weight scheduling, backpressure, subgradient method
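
    As an illustrative aside (not drawn from the paper itself), the queue/multiplier correspondence can be sketched in a toy single-server system: with unit step size, the projected dual subgradient update on the Lagrange multipliers coincides with the discrete queue recursion, and the max-weight rule serves the link with the largest backlog. The number of links, arrival rates, horizon, and unit service rates below are assumptions made for the sketch.

        import numpy as np

        # Toy single-server system with L links: queue occupancies evolve by
        #   Q[k+1] = max(0, Q[k] + arrivals - service),
        # which is exactly a projected dual subgradient step with unit step size,
        # so the queues play the role of (scaled) Lagrange multipliers.
        rng = np.random.default_rng(0)
        L = 3                                       # number of links (assumed)
        arrival_rates = np.array([0.2, 0.3, 0.4])   # assumed mean arrivals per slot
        Q = np.zeros(L)                             # queue occupancies / multipliers

        for k in range(10_000):
            arrivals = rng.poisson(arrival_rates)
            # Max-weight: serve the link with the largest weight; with unit
            # service rates the weight reduces to the queue occupancy itself.
            service = np.zeros(L)
            service[np.argmax(Q)] = 1.0
            # Queue update == projected dual subgradient step (step size 1).
            Q = np.maximum(0.0, Q + arrivals - service)

        print("final queue occupancies:", Q)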

    Accelerated Backpressure Algorithm

    We develop an Accelerated Back Pressure (ABP) algorithm using Accelerated Dual Descent (ADD), a distributed approximate Newton-like algorithm that only uses local information. Our construction is based on writing the backpressure algorithm as the solution to a network feasibility problem solved via stochastic dual subgradient descent. We apply stochastic ADD in place of the stochastic gradient descent algorithm. We prove that the ABP algorithm guarantees stable queues. Our numerical experiments demonstrate a significant improvement in convergence rate, especially when the packet arrival statistics vary over time.
    Comment: 9 pages, 4 figures. A version of this work with significantly extended proofs is being submitted for journal publication.
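
    For orientation only (this is not the paper's ABP implementation), the baseline that ABP accelerates can be sketched as plain backpressure on a small line network, where the per-node queue updates act as a stochastic dual subgradient step; per the abstract, ABP replaces that plain step with an ADD-computed approximate Newton direction, which is not reproduced here. The topology, rates, and capacities below are illustrative assumptions.

        import numpy as np

        # Plain backpressure on a line network 0 -> 1 -> 2 -> 3 (node 3 is the sink).
        # Each link forwards packets only when its differential backlog is positive;
        # the resulting queue dynamics form a stochastic dual subgradient update
        # on the underlying network feasibility problem.
        rng = np.random.default_rng(1)
        nodes = 4
        Q = np.zeros(nodes)        # backlog at each node (sink backlog stays 0)
        capacity = 1.0             # link capacity per slot (assumed)
        arrival_rate = 0.6         # exogenous arrivals at node 0 (assumed)

        for t in range(5_000):
            Q[0] += rng.poisson(arrival_rate)
            snapshot = Q.copy()    # decisions use start-of-slot backlogs
            for i in range(nodes - 1):
                downstream = snapshot[i + 1] if i + 1 < nodes - 1 else 0.0
                if snapshot[i] - downstream > 0:
                    sent = min(capacity, Q[i])
                    Q[i] -= sent
                    if i + 1 < nodes - 1:   # packets reaching the sink depart
                        Q[i + 1] += sent

        print("final backlogs:", Q)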

    First-Order Methods for Nonsmooth Nonconvex Functional Constrained Optimization with or without Slater Points

    Constrained optimization problems where both the objective and constraints may be nonsmooth and nonconvex arise across many learning and data science settings. In this paper, we show a simple first-order method finds a feasible, ε-stationary point at a convergence rate of O(ε^{-4}) without relying on compactness or Constraint Qualification (CQ). When CQ holds, this convergence is measured by approximately satisfying the Karush-Kuhn-Tucker conditions. When CQ fails, we guarantee the attainment of weaker Fritz-John conditions. As an illustrative example, our method stably converges on piecewise quadratic SCAD regularized problems despite frequent violations of constraint qualification. The considered algorithm is similar to those of "Quadratically regularized subgradient methods for weakly convex optimization with weakly convex constraints" by Ma et al. and "Stochastic first-order methods for convex and nonconvex functional constrained optimization" by Boob et al. (whose guarantees further assume compactness and CQ), iteratively taking inexact proximal steps, computed via an inner loop applying a switching subgradient method to a strongly convex constrained subproblem. Our non-Lipschitz analysis of the switching subgradient method appears to be new and may be of independent interest.
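
    As a hedged illustration of the inner loop's main ingredient (the toy problem, step sizes, and tolerance below are assumptions, not the paper's strongly convex subproblem), a basic switching subgradient method alternates between objective steps when the iterate is nearly feasible and constraint steps otherwise:

        import numpy as np

        # Basic switching subgradient method on a toy problem:
        #   minimize f(x) = |x - 3|  subject to  g(x) = x^2 - 1 <= 0,
        # whose solution is x = 1. Step along a subgradient of f when the
        # constraint is (nearly) satisfied, otherwise along a subgradient of g.
        def f_subgrad(x):
            return np.sign(x - 3.0)        # a subgradient of |x - 3|

        def g(x):
            return x ** 2 - 1.0

        def g_subgrad(x):
            return 2.0 * x                 # gradient of x^2 - 1

        x, tol = 5.0, 1e-3
        for k in range(1, 20_001):
            step = 0.5 / np.sqrt(k)        # diminishing step size (assumed)
            if g(x) <= tol:
                x -= step * f_subgrad(x)   # objective step
            else:
                x -= step * g_subgrad(x)   # constraint ("switching") step

        print(f"x ~ {x:.3f}, f(x) = {abs(x - 3.0):.3f}, g(x) = {g(x):.2e}")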