74 research outputs found

    Proximal Multitask Learning over Networks with Sparsity-inducing Coregularization

    Full text link
    In this work, we consider multitask learning problems where clusters of nodes are interested in estimating their own parameter vectors. Cooperation among clusters is beneficial when the optimal models of adjacent clusters share a significant number of similar entries. We propose a fully distributed algorithm for solving this problem. The approach relies on minimizing a global mean-square error criterion regularized by non-differentiable terms that promote cooperation among neighboring clusters. A general diffusion forward-backward splitting strategy is introduced and then specialized to the case of sparsity-promoting regularizers. A closed-form expression for the proximal operator of a weighted sum of $\ell_1$-norms is derived to achieve higher efficiency. We also provide conditions on the step-sizes that ensure convergence of the algorithm in the mean and mean-square error sense. Simulations are conducted to illustrate the effectiveness of the strategy.
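
    As a point of reference, the proximal operator of a single weighted $\ell_1$-norm reduces to elementwise soft-thresholding, which is the building block behind the forward-backward (proximal gradient) iteration described above. The sketch below illustrates that step at one node with an assumed quadratic local cost; it does not reproduce the paper's exact closed form for the weighted sum of $\ell_1$-norms.

```python
import numpy as np

def prox_weighted_l1(x, weights, step):
    """Prox of f(x) = step * sum_i weights[i] * |x[i]|:
    elementwise soft-thresholding with per-entry thresholds."""
    thresh = step * np.asarray(weights)
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

# Forward-backward (proximal gradient) iteration at a single node,
# w <- prox(w - mu * grad f(w)), with an assumed quadratic cost
# 0.5 * ||A w - b||^2; A, b, and all constants are illustrative.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)
w = np.zeros(5)
mu = 0.01                     # step-size; must satisfy stability conditions
weights = np.full(5, 0.5)     # per-entry regularization weights
for _ in range(200):
    grad = A.T @ (A @ w - b)  # forward (gradient) step
    w = prox_weighted_l1(w - mu * grad, weights, mu)  # backward (prox) step
print(w)
```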

    Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

    Full text link
    This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP) in which agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully distributed online learning by agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution approaches the Nash equilibrium in a stable manner to within $O(\mu_{\max})$, for sufficiently small maximum step-size $\mu_{\max}$ and sufficiently large penalty parameters. The operation of the algorithm is illustrated on the network Cournot competition problem.
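
    To make the penalized learning rule concrete, the following is a minimal sketch of a Cournot competition of the kind mentioned above, under assumptions not taken from the paper: linear inverse demand with a random intercept, a shared capacity constraint handled by a quadratic penalty, and heterogeneous constant step-sizes. For simplicity, each agent reads the aggregate output directly rather than learning it through the network.

```python
import numpy as np

# Minimal sketch of penalized stochastic-gradient play in a Cournot game.
# Assumptions (illustrative, not from the paper): linear inverse demand
# P(Q) = a - b*Q with a noisy intercept, a shared capacity constraint
# sum(q) <= C handled by a quadratic penalty, heterogeneous step-sizes,
# and direct access to the aggregate Q.
rng = np.random.default_rng(1)
K = 5                                 # number of firms/agents
c = rng.uniform(0.5, 1.5, K)          # marginal costs
mu = rng.uniform(0.002, 0.005, K)     # heterogeneous constant step-sizes
b_, C, rho = 1.0, 3.0, 10.0           # demand slope, capacity, penalty weight
q = np.zeros(K)
for _ in range(20000):
    a = 10.0 + rng.standard_normal()  # random demand intercept (stochastic)
    Q = q.sum()
    grad_profit = a - b_ * Q - b_ * q - c   # d/dq_k of q_k*P(Q) - c_k*q_k
    grad_pen = 2.0 * rho * max(0.0, Q - C)  # d/dq_k of rho*max(0, Q - C)^2
    q = np.maximum(q + mu * (grad_profit - grad_pen), 0.0)  # ascent, q >= 0
print(q, q.sum())
```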

    Distributed Coupled Multi-Agent Stochastic Optimization

    Full text link
    This work develops effective distributed strategies for the solution of constrained multi-agent stochastic optimization problems with coupled parameters across the agents. In this formulation, each agent is influenced by only a subset of the entries of a global parameter vector or model, and is subject to convex constraints that are only known locally. Problems of this type arise in several applications, most notably in disease propagation models, minimum-cost flow problems, distributed control formulations, and distributed power system monitoring. This work focuses on stochastic settings, where a stochastic risk function is associated with each agent and the objective is to seek the minimizer of the aggregate sum of all risks subject to a set of constraints. Agents are not aware of the statistical distribution of the data and, therefore, can only rely on stochastic approximations in their learning strategies. We derive an effective distributed learning strategy that is able to track drifts in the underlying parameter model. A detailed performance and stability analysis is carried out, showing that the resulting coupled diffusion strategy converges at a linear rate to an $O(\mu)$-neighborhood of the true penalized optimizer.
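
    The coupled structure can be illustrated with a toy two-agent example, sketched below under assumptions not taken from the paper: a three-entry global model in which agent 0 is influenced by entries {0, 1} and agent 1 by entries {1, 2}, streaming least-squares data, a box constraint as the locally known convex set, and uniform averaging of the shared entry as the combination step.

```python
import numpy as np

# Minimal sketch of a coupled diffusion step over a 3-entry global model:
# agent 0 is influenced by entries {0, 1}, agent 1 by entries {1, 2};
# entry 1 is shared. Risks, constraints, and data are illustrative
# assumptions, not the paper's exact formulation.
rng = np.random.default_rng(2)
w_true = np.array([1.0, -0.5, 2.0])
blocks = [np.array([0, 1]), np.array([1, 2])]  # entries each agent sees
w = [np.zeros(2), np.zeros(2)]                 # each agent's local sub-vector
mu = 0.05                                      # small constant step-size
for _ in range(3000):
    for k, idx in enumerate(blocks):
        # Adapt: stochastic gradient from one streaming sample (h, d)
        h = rng.standard_normal(2)
        d = h @ w_true[idx] + 0.1 * rng.standard_normal()
        w[k] -= mu * (h @ w[k] - d) * h
        # Project onto the locally known convex constraint (a box here)
        w[k] = np.clip(w[k], -3.0, 3.0)
    # Combine: the agents sharing entry 1 average their estimates of it
    shared = 0.5 * (w[0][1] + w[1][0])
    w[0][1] = w[1][0] = shared
print(w)
```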

    Networked signal and information processing

    Get PDF
    The article reviews significant advances in networked signal and information processing, which over the last 25 years have enabled the extension of decision making, inference, optimization, control, and learning to the increasingly ubiquitous environments of distributed agents. As these interacting agents cooperate, new collective behaviors emerge from local decisions and actions. Moreover, and significantly, theory and applications show that networked agents, through cooperation and sharing, are able to match the performance of cloud or federated solutions, while offering the potential for improved privacy, increased resilience, and resource savings.

    Data-Reserved Periodic Diffusion LMS With Low Communication Cost Over Networks

    Get PDF
    In this paper, we analyze diffusion strategies in which all nodes attempt to estimate a common parameter vector for achieving distributed estimation in adaptive networks. Under diffusion strategies, each node needs to share processed data with predefined neighbors. Although internode communication has contributed significantly to improving convergence performance, such communication consumes a large amount of power for data transmission. In developing low-power diffusion strategies, it is important to reduce the communication cost without significant degradation of convergence performance. For that purpose, we propose a data-reserved periodic diffusion least-mean-squares (LMS) algorithm in which each node updates and transmits an estimate periodically while reserving its measurement data even during non-update times. By applying these reserved data in an adaptation step at update time, the proposed algorithm mitigates the decline in convergence speed incurred by most conventional periodic schemes. For a period p, the total communication cost is reduced to 1/p of that of the conventional adapt-then-combine (ATC) diffusion LMS algorithm. The loss of combination steps in this process naturally leads to a slight increase in the steady-state error as the period p increases, as is confirmed through mathematical analysis. We also prove an interesting property of the proposed algorithm, namely, that it suffers less degradation of the steady-state error than conventional diffusion in a noisy communication environment. Experimental results show that the proposed algorithm outperforms related conventional algorithms and, in particular, outperforms ATC diffusion LMS over a network with noisy links.
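
    The mechanism can be sketched as follows, under illustrative assumptions (ring network, uniform combination weights, a linear regression data model) that are not taken from the paper: every node buffers its raw measurements at each instant, and once per period p it runs the adaptation step over all reserved samples and then performs a single combination with its neighbors, so communication occurs only at update times.

```python
import numpy as np

# Minimal sketch of the data-reserved periodic idea: each node keeps its
# raw measurements during non-update instants and applies them all in the
# adaptation step at update time, so the combine step (and hence all
# communication) happens only once per period p. The network, data model,
# and combination weights are illustrative assumptions.
rng = np.random.default_rng(3)
N, M, p, mu = 4, 3, 5, 0.02           # nodes, model size, period, step-size
w_true = rng.standard_normal(M)
neighbors = {k: [k, (k + 1) % N, (k - 1) % N] for k in range(N)}  # ring
w = np.zeros((N, M))
buf = [[] for _ in range(N)]          # reserved (u, d) pairs per node
for t in range(1, 2001):
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ w_true + 0.1 * rng.standard_normal()
        buf[k].append((u, d))         # reserve data even when not updating
    if t % p == 0:
        psi = w.copy()
        for k in range(N):
            for u, d in buf[k]:       # adapt with all reserved samples
                psi[k] += mu * (d - u @ psi[k]) * u
            buf[k].clear()
        for k in range(N):            # combine: uniform neighbor averaging
            w[k] = psi[neighbors[k]].mean(axis=0)
print(np.linalg.norm(w - w_true, axis=1))
```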
