Adaptive Federated Minimax Optimization with Lower Complexities
Federated learning is a popular distributed and privacy-preserving machine
learning paradigm. Meanwhile, minimax optimization, as an effective
hierarchical optimization framework, is widely applied in machine learning. Recently,
some federated optimization methods have been proposed to solve the distributed
minimax problems. However, these federated minimax methods still suffer from
high gradient and communication complexities. Meanwhile, few algorithms focus
on using adaptive learning rates to accelerate convergence. To fill this gap,
in this paper, we study a class of nonconvex minimax optimization problems and propose an
efficient adaptive federated minimax optimization algorithm (i.e., AdaFGDA) to
solve these distributed minimax problems. Specifically, our AdaFGDA builds on
the momentum-based variance reduced and local-SGD techniques, and it can
flexibly incorporate various adaptive learning rates by using the unified
adaptive matrix. Theoretically, we provide a solid convergence analysis
framework for our AdaFGDA algorithm under the non-i.i.d. setting. Moreover, we
prove that our algorithms obtain a lower gradient (i.e., stochastic first-order
oracle, SFO) complexity and a lower communication complexity in finding an
ε-stationary point of the nonconvex minimax problems. Experimentally, we conduct experiments
on the deep AUC maximization and robust neural network training tasks to verify
the efficiency of our algorithms.
Comment: Submitted to AISTATS-202
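The abstract's core ingredients — a momentum-based variance-reduced gradient estimator combined with an adaptive (matrix-scaled) learning rate for gradient descent ascent — can be illustrated with a hedged, single-machine sketch. The toy objective, step sizes, and momentum parameter below are illustrative assumptions, not the paper's actual AdaFGDA specification; the adaptive matrix is a simple AdaGrad-style diagonal.

```python
import numpy as np

# Sketch (assumed, not the paper's algorithm): a momentum-based
# variance-reduced descent-ascent step with a diagonal adaptive matrix,
# on a toy objective f(x, y) = 0.5*x^2 + x*y - 0.5*y^2 whose
# saddle point is (0, 0). Noise stands in for stochastic gradients.

rng = np.random.default_rng(0)

def grad_f(x, y, noise):
    gx = x + y + noise   # d f / d x
    gy = x - y + noise   # d f / d y
    return gx, gy

x, y = 2.0, -1.0
vx, vy = 0.0, 0.0            # variance-reduced gradient estimators
hx, hy = 1e-8, 1e-8          # accumulators for the diagonal adaptive matrix
alpha, lr = 0.5, 0.1
prev = None

for t in range(200):
    noise = 0.01 * rng.standard_normal()
    gx, gy = grad_f(x, y, noise)
    if prev is None:
        vx, vy = gx, gy
    else:
        # Momentum-based variance reduction (STORM-style correction):
        # re-evaluate the same sample at the previous iterate.
        pgx, pgy = grad_f(prev[0], prev[1], noise)
        vx = gx + (1 - alpha) * (vx - pgx)
        vy = gy + (1 - alpha) * (vy - pgy)
    prev = (x, y)
    # Diagonal adaptive matrix: per-coordinate scaling by accumulated
    # squared estimates, one simple instance of a "unified adaptive matrix".
    hx += vx ** 2
    hy += vy ** 2
    x -= lr * vx / np.sqrt(hx)   # descent step on x
    y += lr * vy / np.sqrt(hy)   # ascent step on y

print(x, y)  # both coordinates should have moved toward the saddle point
```

In a federated version, each client would run such local steps between communication rounds (local-SGD style), with the server averaging iterates and estimators; that orchestration is omitted here.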
Resilient Distributed Optimization Algorithms for Resource Allocation
Distributed algorithms provide flexibility over centralized algorithms for
resource allocation problems, e.g., cyber-physical systems. However, the
distributed nature of these algorithms often makes the systems susceptible to
man-in-the-middle attacks, especially when messages are transmitted between
price-taking agents and a central coordinator. We propose a resilient strategy
for distributed algorithms under the framework of primal-dual distributed
optimization. We formulate a robust optimization model that accounts for
Byzantine attacks on the communication channels between agents and coordinator.
We propose a resilient primal-dual algorithm using state-of-the-art robust
statistics methods. The proposed algorithm is shown to converge to a
neighborhood of the solution of the robust optimization model, where the
neighborhood's radius is proportional to the fraction of attacked channels.
Comment: 15 pages, 1 figure, accepted to CDC 201
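The price-coordination loop the abstract describes can be sketched in miniature: price-taking agents report demands to a coordinator, some channels are attacked, and the coordinator uses a robust statistic (here a trimmed mean) instead of a plain sum when updating the dual price. The utilities, attack model, and trimming fraction below are illustrative assumptions, not the paper's actual algorithm.

```python
import numpy as np

# Sketch (assumed setup): dual-decomposition resource allocation
#   maximize sum_i a_i*log(1 + x_i)  subject to  sum_i x_i <= capacity,
# with Byzantine agents corrupting their reported demands. The coordinator
# replaces the plain sum with a trimmed-mean estimate of total demand.

rng = np.random.default_rng(1)
n, capacity = 10, 5.0
a = rng.uniform(1.0, 2.0, n)   # utility parameters u_i(x) = a_i*log(1+x)

def best_response(lam):
    # Agent i maximizes a_i*log(1+x) - lam*x  =>  x_i = max(a_i/lam - 1, 0)
    return np.maximum(a / max(lam, 1e-6) - 1.0, 0.0)

def trimmed_mean(values, trim=0.2):
    # Robust statistic: drop the top and bottom `trim` fraction, then average
    v = np.sort(values)
    k = int(len(v) * trim)
    return v[k:len(v) - k].mean()

lam, step = 1.0, 0.05
byzantine = {0, 1}             # two compromised agent-to-coordinator channels

for t in range(300):
    reports = best_response(lam)
    for i in byzantine:
        reports[i] = 100.0     # attack: inflate reported demand
    # Robust estimate of total demand; extreme reports get trimmed away
    est_total = n * trimmed_mean(reports)
    # Dual price ascent on the capacity constraint
    lam = max(lam + step * (est_total - capacity), 1e-6)

print(lam)  # price settles where the robust demand estimate meets capacity
```

With a plain sum, the two inflated reports would drive the price arbitrarily high and starve honest agents; the trimmed mean caps the attackers' influence, at the cost of a small bias from also trimming honest extremes — consistent with convergence to a neighborhood rather than the exact solution.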