    Diffusion Adaptation Strategies for Distributed Optimization and Learning over Networks

    We propose an adaptive diffusion mechanism to optimize a global cost function in a distributed manner over a network of nodes. The cost function is assumed to consist of a collection of individual components. Diffusion adaptation allows the nodes to cooperate and diffuse information in real time; it also helps alleviate the effects of stochastic gradient noise and measurement noise through a continuous learning process. We analyze the mean-square-error performance of the algorithm in some detail, including its transient and steady-state behavior. We also apply the diffusion algorithm to two problems: distributed estimation with sparse parameters and distributed localization. Compared to well-studied incremental methods, diffusion methods do not require the use of a cyclic path over the nodes and are robust to node and link failure. Diffusion methods also endow networks with adaptation abilities that enable the individual nodes to continue learning even when the cost function changes with time. Examples involving such dynamic cost functions with moving targets are common in the context of biological networks.
    Comment: 34 pages, 6 figures, to appear in IEEE Transactions on Signal Processing, 201
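    Below is a minimal sketch of the kind of adapt-then-combine (ATC) diffusion recursion this abstract refers to, applied to a simple LMS-type estimation problem. The topology, combination weights, step-size, and linear data model are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

# Sketch of adapt-then-combine (ATC) diffusion LMS over a network of nodes.
# Assumptions: common linear model d_k = u_k @ w_true + noise, ring topology,
# uniform combination weights, small constant step-size.

rng = np.random.default_rng(0)
N, M = 10, 4                        # number of nodes, parameter dimension
w_true = rng.standard_normal(M)     # common (possibly slowly varying) target

# Row-stochastic combination matrix over a ring-plus-self topology (assumed)
A = np.eye(N) * 0.5
for k in range(N):
    A[k, (k - 1) % N] += 0.25
    A[k, (k + 1) % N] += 0.25

mu = 0.01                           # small constant step-size
w = np.zeros((N, M))                # local estimates

for _ in range(2000):
    # Adaptation step: each node runs a local stochastic-gradient (LMS) update
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)                     # regressor at node k
        d = u @ w_true + 0.1 * rng.standard_normal()   # noisy local measurement
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    # Combination step: each node averages its neighbors' intermediate estimates
    w = A @ psi

print("network mean-square deviation:", np.mean(np.sum((w - w_true) ** 2, axis=1)))
```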

    Distributed state estimation in sensor networks with randomly occurring nonlinearities subject to time delays

    This article is concerned with a new distributed state estimation problem for a class of dynamical systems in sensor networks. The target plant is described by a set of differential equations disturbed by a Brownian motion and randomly occurring nonlinearities (RONs) subject to time delays. The RONs are investigated here to reflect network-induced, randomly occurring regulation of the delayed states on the current ones. Through the available measurement outputs transmitted from the sensors, a distributed state estimator is designed to estimate the states of the target system, where each sensor can communicate with its neighboring sensors according to a given topology described by a directed graph. The state estimation is carried out in a distributed way and is therefore applicable to online applications. By resorting to a Lyapunov functional combined with stochastic analysis techniques, several delay-dependent criteria are established that not only ensure the estimation error to be globally asymptotically stable in the mean square, but also guarantee the existence of the desired estimator gains, which can then be explicitly expressed once certain matrix inequalities are solved. A numerical example is given to verify the designed distributed state estimators.
    This work was supported in part by the National Natural Science Foundation of China under Grants 61028008, 60804028 and 61174136, the Qing Lan Project of Jiangsu Province of China, the Project sponsored by SRF for ROCS of SEM of China, the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant GR/S27658/01, the Royal Society of the UK, and the Alexander von Humboldt Foundation of Germany.
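    As a rough illustration of the estimator structure described above, the sketch below has each sensor correct its state estimate with its own measurement innovation plus a coupling term over neighboring estimates. The discrete-time plant, gains, and ring topology are placeholders; the time delays, Brownian-motion disturbance, RONs, and the matrix-inequality gain design handled in the paper are omitted.

```python
import numpy as np

# Hedged sketch of a distributed state estimator: local innovation correction
# plus consensus-style coupling with neighboring sensors' estimates.
# All matrices and gains below are illustrative placeholders.

rng = np.random.default_rng(1)
n, N = 2, 4                                  # state dimension, number of sensors
A = np.array([[0.95, 0.1], [0.0, 0.9]])      # assumed discrete-time plant
C = np.array([[1.0, 0.0]])                   # each sensor measures the first state
K = 0.5 * np.ones((N, n, 1))                 # innovation gains (placeholder values)
G = 0.1                                      # coupling gain (placeholder)

neighbors = {i: [(i - 1) % N, (i + 1) % N] for i in range(N)}  # ring topology

x = np.array([1.0, -1.0])                    # true state
xhat = np.zeros((N, n))                      # local estimates

for _ in range(200):
    x = A @ x + 0.01 * rng.standard_normal(n)            # plant with process noise
    for i in range(N):
        y_i = C @ x + 0.05 * rng.standard_normal(1)       # local noisy measurement
        innov = (K[i] @ (y_i - C @ xhat[i])).ravel()      # innovation correction
        coupling = G * sum(xhat[j] - xhat[i] for j in neighbors[i])
        xhat[i] = A @ xhat[i] + innov + coupling          # distributed estimator update

print("per-sensor estimation errors:", np.linalg.norm(xhat - x, axis=1))
```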

    Diffusion-Based Adaptive Distributed Detection: Steady-State Performance in the Slow Adaptation Regime

    This work examines the close interplay between cooperation and adaptation for distributed detection schemes over fully decentralized networks. The combined attributes of cooperation and adaptation are necessary to enable networks of detectors to continually learn from streaming data and to continually track drifts in the state of nature when deciding in favor of one hypothesis or another. The results in the paper establish a fundamental scaling law for the steady-state probabilities of miss-detection and false-alarm in the slow adaptation regime, when the agents interact with each other according to distributed strategies that employ small constant step-sizes. The latter are critical to enable continuous adaptation and learning. The work establishes three key results. First, it is shown that the output of the collaborative process at each agent has a steady-state distribution. Second, it is shown that this distribution is asymptotically Gaussian in the slow adaptation regime of small step-sizes. And third, by carrying out a detailed large deviations analysis, closed-form expressions are derived for the decaying rates of the false-alarm and miss-detection probabilities. Interesting insights are gained. In particular, it is verified that as the step-size μ decreases, the error probabilities are driven to zero exponentially fast as functions of 1/μ, and that the error exponents increase linearly in the number of agents. It is also verified that the scaling laws governing errors of detection and errors of estimation over networks behave very differently, with the former having an exponential decay proportional to 1/μ, while the latter scales linearly with decay proportional to μ. It is shown that the cooperative strategy allows each agent to reach the same detection performance, in terms of detection error exponents, as a centralized stochastic-gradient solution.
    Comment: The paper will appear in IEEE Trans. Inf. Theor
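    In symbols, the scaling law described in this abstract can be summarized roughly as follows; the notation (μ for the step-size, S for the number of agents, E for the error exponents) is assumed here, not taken from the paper.

```latex
% Hedged summary of the stated scaling law (assumed notation):
% steady-state detection error probabilities in the slow adaptation regime
P_{\mathrm{fa}} \doteq e^{-\mathcal{E}_{\mathrm{fa}}/\mu}, \qquad
P_{\mathrm{miss}} \doteq e^{-\mathcal{E}_{\mathrm{miss}}/\mu}, \qquad
\mathcal{E}_{\mathrm{fa}},\ \mathcal{E}_{\mathrm{miss}} = \Theta(S).
% Detection errors vanish exponentially in 1/\mu with exponents growing
% linearly in the number of agents S, whereas the mean-square estimation
% error over the same networks only shrinks linearly, on the order of \mu.
```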

    Adaptive Graph Signal Processing: Algorithms and Optimal Sampling Strategies

    The goal of this paper is to propose novel strategies for adaptive learning of signals defined over graphs, which are observed over a (randomly time-varying) subset of vertices. We recast two classical adaptive algorithms in the graph signal processing framework, namely, the least mean squares (LMS) and the recursive least squares (RLS) adaptive estimation strategies. For both methods, a detailed mean-square analysis illustrates the effect of random sampling on the adaptive reconstruction capability and the steady-state performance. Then, several probabilistic sampling strategies are proposed to design the sampling probability at each node in the graph, with the aim of optimizing the tradeoff between steady-state performance, graph sampling rate, and convergence rate of the adaptive algorithms. Finally, a distributed RLS strategy is derived and is shown to be convergent to its centralized counterpart. Numerical simulations carried out over both synthetic and real data illustrate the good performance of the proposed sampling and reconstruction strategies for (possibly distributed) adaptive learning of signals defined over graphs.
    Comment: Submitted to IEEE Transactions on Signal Processing, September 201
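    The sketch below illustrates the flavor of an LMS-style adaptive reconstruction of a bandlimited graph signal from randomly sampled, noisy vertex observations. The random graph, bandwidth, per-vertex sampling probabilities, and step-size are illustrative assumptions, and the optimized sampling designs from the paper are not reproduced.

```python
import numpy as np

# Sketch: LMS-type adaptive reconstruction of a bandlimited graph signal
# observed on a randomly time-varying subset of vertices. Graph, bandwidth F,
# sampling probabilities p, and step-size mu are assumed for illustration.

rng = np.random.default_rng(2)
N, F = 20, 3                          # number of vertices, signal bandwidth

# Random undirected graph, its Laplacian, and the graph Fourier basis
W = (rng.random((N, N)) < 0.2)
W = np.triu(W, 1).astype(float)
W = W + W.T
L = np.diag(W.sum(1)) - W
_, U = np.linalg.eigh(L)
B = U[:, :F] @ U[:, :F].T             # projector onto the bandlimited subspace

x_true = B @ rng.standard_normal(N)   # bandlimited target signal
p = np.full(N, 0.5)                   # per-vertex sampling probabilities
mu = 0.5                              # step-size
xhat = np.zeros(N)

for _ in range(500):
    s = rng.random(N) < p                         # random sampling pattern
    y = x_true + 0.05 * rng.standard_normal(N)    # noisy vertex observations
    err = np.where(s, y - xhat, 0.0)              # error only on sampled vertices
    xhat = xhat + mu * (B @ err)                  # bandlimited LMS update

print("reconstruction MSE:", np.mean((xhat - x_true) ** 2))
```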

    Distributed Clustering and Learning Over Networks

    Distributed processing over networks relies on in-network processing and cooperation among neighboring agents. Cooperation is beneficial when agents share a common objective. However, in many applications agents may belong to different clusters that pursue different objectives. Then, indiscriminate cooperation will lead to undesired results. In this work, we propose an adaptive clustering and learning scheme that allows agents to learn which neighbors they should cooperate with and which other neighbors they should ignore. In doing so, the resulting algorithm enables the agents to identify their clusters and to attain improved learning and estimation accuracy over networks. We carry out a detailed mean-square analysis and assess the error probabilities of Types I and II, i.e., false alarm and mis-detection, for the clustering mechanism. Among other results, we establish that these probabilities decay exponentially with the step-sizes, so that the probability of correct clustering can be made arbitrarily close to one.
    Comment: 47 pages, 6 figure
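    A hedged toy sketch of the idea follows: each agent adapts locally and then combines only with neighbors whose current estimates appear to pursue the same objective. The thresholded proximity test (after a short non-cooperative warm-up) is a simplified stand-in for the paper's clustering mechanism; the two-cluster data model, topology, and parameters are all assumptions.

```python
import numpy as np

# Toy sketch of adaptive clustering and learning over a network: agents that
# pursue different objectives learn to ignore cross-cluster neighbors via a
# simple proximity test on their intermediate estimates (assumed mechanism).

rng = np.random.default_rng(3)
N, M, mu, tau = 8, 2, 0.02, 0.5
targets = [np.array([1.0, 1.0]), np.array([-1.0, 1.0])]
cluster = [0] * 4 + [1] * 4                    # true (unknown) cluster of each agent
neighbors = {k: [(k - 1) % N, k, (k + 1) % N] for k in range(N)}  # ring + self

w = np.zeros((N, M))
for t in range(3000):
    # Local adaptation (LMS) toward each agent's own objective
    psi = np.empty_like(w)
    for k in range(N):
        u = rng.standard_normal(M)
        d = u @ targets[cluster[k]] + 0.1 * rng.standard_normal()
        psi[k] = w[k] + mu * (d - u @ w[k]) * u
    if t < 500:
        w = psi                                # warm-up: no cooperation yet
        continue
    # Combine only with neighbors whose estimates look like our own
    for k in range(N):
        trusted = [l for l in neighbors[k] if np.linalg.norm(psi[l] - psi[k]) < tau]
        w[k] = psi[trusted].mean(axis=0)

for k in range(N):
    print(f"agent {k} (cluster {cluster[k]}): estimate {np.round(w[k], 2)}")
```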

    Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks

    Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that, under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network, and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network, irrespective of the combination topology. Simulation results support the theoretical findings.
    Comment: 37 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, 201
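    For reference, the two LMS-type recursions being compared can be written side by side; the notation below (combination weights a_lk over the neighborhood of node k, constant step-size μ) is assumed rather than quoted from the paper.

```latex
% Consensus strategy (combination and adaptation fused into one step):
w_{k,i} = \sum_{l \in \mathcal{N}_k} a_{lk}\, w_{l,i-1}
          + \mu\, u_{k,i}^{*}\bigl(d_k(i) - u_{k,i} w_{k,i-1}\bigr)
%
% Diffusion (adapt-then-combine) strategy:
\psi_{k,i} = w_{k,i-1} + \mu\, u_{k,i}^{*}\bigl(d_k(i) - u_{k,i} w_{k,i-1}\bigr),
\qquad
w_{k,i} = \sum_{l \in \mathcal{N}_k} a_{lk}\, \psi_{l,i}
%
% The asymmetry in the consensus recursion (the gradient is evaluated at
% w_{k,i-1} while the combination acts on the neighbors' iterates) is what
% can destabilize consensus networks even when each node is individually
% stable, whereas the diffusion recursion keeps the two steps aligned.
```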