The recent deployment of multi-agent systems in a wide range of scenarios has
enabled the solution of learning problems in a distributed fashion. In this
context, agents are tasked with collecting local data and then cooperatively
training a model, without directly sharing the data. While distributed learning
offers the advantage of preserving agents' privacy, it also poses several
challenges in terms of designing and analyzing suitable algorithms. This work
focuses specifically on the following challenges motivated by practical
implementation: (i) online learning, where the local data change over time;
(ii) asynchronous agent computations; (iii) unreliable and limited
communications; and (iv) inexact local computations. To tackle these
challenges, we introduce the Distributed Operator Theoretical (DOT) version of
the Alternating Direction Method of Multipliers (ADMM), which we call the
DOT-ADMM Algorithm. We prove that it converges with a linear rate for a large
class of convex learning problems (e.g., linear and logistic regression
problems) toward a bounded neighborhood of the optimal time-varying solution,
and characterize how the neighborhood depends on~(i)--(iv). We
corroborate the theoretical analysis with numerical simulations comparing the
DOT-ADMM Algorithm with other state-of-the-art algorithms, showing that only
the proposed algorithm exhibits robustness to (i)--(iv).