Cloud-Based Centralized/Decentralized Multi-Agent Optimization with Communication Delays
We present and analyze a computational hybrid architecture for performing
multi-agent optimization. The optimization problems under consideration have
convex objective and constraint functions with mild smoothness conditions
imposed on them. For such problems, we provide a primal-dual algorithm
implemented in the hybrid architecture, which consists of a decentralized
network of agents into which centralized information is occasionally injected,
and we establish its convergence properties. To accomplish this, a central
cloud computer aggregates global information, carries out computations of the
dual variables based on this information, and then distributes the updated dual
variables to the agents. The agents update their (primal) state variables and
also communicate among themselves, with each agent exchanging state
information with some of its neighbors. Throughout, communications with
the cloud are not assumed to be synchronous or instantaneous, and communication
delays are explicitly accounted for in the modeling and analysis of the system.
Experimental results are presented to support the theoretical developments
made.
Comment: 8 pages, 4 figures
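
As a rough illustration of the kind of primal-dual scheme described in this
abstract, the Python sketch below has each agent take a gradient step on its
local primal variable using the latest dual value, while a cloud-style step
aggregates global constraint information and performs projected dual ascent.
All names here (primal_dual_cloud, grad_f, grad_g, alpha, beta) are
illustrative assumptions rather than the paper's notation, and the sketch is
synchronous, ignoring the communication delays that the paper explicitly
models.

```python
def primal_dual_cloud(agents, g_total, x0, mu0=0.0, alpha=0.02, beta=0.05, iters=2000):
    """Toy primal-dual loop for min sum_i f_i(x_i) s.t. sum_i g_i(x_i) <= 0.

    agents: list of (grad_f_i, grad_g_i) callables, one pair per agent (assumed interface).
    g_total: callable returning the aggregated constraint value at the current primal state.
    """
    x = list(x0)   # local primal variables, one per agent
    mu = mu0       # dual variable maintained by the cloud
    for _ in range(iters):
        # Agents: local primal gradient step using the latest dual value received from the cloud.
        x = [xi - alpha * (grad_f(xi) + mu * grad_g(xi))
             for xi, (grad_f, grad_g) in zip(x, agents)]
        # Cloud: aggregate global constraint information and take a projected dual ascent step.
        mu = max(0.0, mu + beta * g_total(x))
    return x, mu

# Toy example: two agents with f_i(x) = (x - c_i)^2 and a coupling constraint x_1 + x_2 <= 1.
agents = [(lambda x: 2.0 * (x - 2.0), lambda x: 1.0),
          (lambda x: 2.0 * (x - 1.0), lambda x: 1.0)]
x, mu = primal_dual_cloud(agents, lambda x: x[0] + x[1] - 1.0, x0=[0.0, 0.0])
print(x, mu)  # approaches x ≈ [1.0, 0.0] and mu ≈ 2.0 for this toy problem
```
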
Variational Fair Clustering
We propose a general variational framework of fair clustering, which
integrates an original Kullback-Leibler (KL) fairness term with a large class
of clustering objectives, including prototype-based and graph-based ones.
Fundamentally different from the existing combinatorial and spectral solutions,
our variational multi-term approach enables control of the trade-off between
the fairness and clustering objectives. We derive a general tight upper bound
based on a concave-convex decomposition of our fairness term, its
Lipschitz-gradient property, and Pinsker's inequality. Our tight upper bound
can be jointly optimized with various clustering objectives, while yielding a
scalable solution with a convergence guarantee. Interestingly, at each
iteration, it performs an independent update for each assignment variable.
Therefore, it can be easily distributed for large-scale datasets. This
scalability is important, as it makes it possible to explore different
trade-off levels between the fairness and clustering objectives. Unlike
spectral relaxation, our formulation does not require computing an eigenvalue
decomposition. We report
comprehensive evaluations and comparisons with state-of-the-art methods over
various fair-clustering benchmarks, which show that our variational formulation
can yield highly competitive solutions in terms of fairness and clustering
objectives.
Comment: Accepted to be published in AAAI 2021. The code is available at:
https://github.com/imtiazziko/Variational-Fair-Clustering
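
As a rough illustration of the kind of composite objective described in this
abstract, the Python sketch below evaluates a prototype-based (K-means-like)
clustering term plus a KL fairness term that compares the dataset-level
demographic proportions with each cluster's proportions. It only evaluates
such an energy under assumed names (fair_clustering_energy, lam, groups); it
is not the authors' concave-convex bound optimizer.

```python
import numpy as np

def fair_clustering_energy(X, S, centers, groups, lam=1.0, eps=1e-12):
    """X: (N, d) points; S: (N, K) soft assignments; centers: (K, d);
    groups: (N,) integer demographic labels; lam: fairness trade-off weight."""
    # Clustering (prototype) term: soft K-means distortion.
    dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)   # (N, K)
    cluster_term = (S * dists).sum()

    # Fairness term: KL between dataset-level group proportions and per-cluster proportions.
    n_groups = int(groups.max()) + 1
    onehot = np.eye(n_groups)[groups]                  # (N, J) group indicators
    target = onehot.mean(0)                            # dataset-level proportions (J,)
    cluster_mass = S.sum(0) + eps                      # (K,)
    props = (S.T @ onehot) / cluster_mass[:, None]     # per-cluster proportions (K, J)
    kl = (target[None, :] * np.log((target[None, :] + eps) / (props + eps))).sum(1)
    return cluster_term + lam * kl.sum()

# Tiny usage example with random data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
groups = rng.integers(0, 2, size=200)
S = rng.dirichlet(np.ones(3), size=200)     # soft assignments over K = 3 clusters
centers = rng.normal(size=(3, 2))
print(fair_clustering_energy(X, S, centers, groups, lam=5.0))
```
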