    Multi-hop Diffusion LMS for Energy-constrained Distributed Estimation

    We propose a multi-hop diffusion strategy for a sensor network to perform distributed least mean-squares (LMS) estimation under local and network-wide energy constraints. At each iteration of the strategy, each node can combine intermediate parameter estimates from nodes other than its physical neighbors via a multi-hop relay path. We propose a rule to select combination weights for the multi-hop neighbors, which can balance between the transient and the steady-state network mean-square deviations (MSDs). We study two classes of networks: simple networks with a unique transmission path from one node to another, and arbitrary networks utilizing diffusion consultations over at most two hops. We propose a method to optimize each node's information neighborhood subject to local energy budgets and a network-wide energy budget for each diffusion iteration. This optimization requires knowledge of the network topology and of each node's noise and data variance profiles, and is performed offline before the diffusion process. In addition, we develop a fully distributed and adaptive algorithm that approximately optimizes the information neighborhood of each node under local energy budget constraints only, in the case where diffusion consultations are performed over at most a predefined number of hops. Numerical results suggest that our proposed multi-hop diffusion strategy achieves the same steady-state MSD as the existing one-hop adapt-then-combine diffusion algorithm, but with a lower energy budget. Comment: 14 pages, 12 figures. Submitted for publication.
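
    To make the baseline concrete, here is a minimal numpy sketch of the one-hop adapt-then-combine (ATC) diffusion LMS that the multi-hop strategy extends; the data layout, combination matrix, and step size are illustrative assumptions, and the paper's multi-hop weight selection rule is not reproduced here.

```python
import numpy as np

def atc_diffusion_lms(U, D, A, mu, iters):
    """One-hop adapt-then-combine (ATC) diffusion LMS (baseline sketch).

    U  : N x iters x M array; U[k, i] is node k's regressor at iteration i
    D  : N x iters array; D[k, i] is node k's scalar measurement
    A  : N x N combination matrix; A[l, k] is the weight node k places on
         neighbor l's intermediate estimate (columns sum to one; A[l, k] > 0
         only if l is a one-hop neighbor of k)
    mu : adaptation step size
    """
    N, _, M = U.shape
    w = np.zeros((N, M))        # current estimates, one row per node
    psi = np.zeros((N, M))      # intermediate (post-adapt) estimates
    for i in range(iters):
        # Adapt: each node runs a local LMS update on its own data
        for k in range(N):
            e = D[k, i] - U[k, i] @ w[k]
            psi[k] = w[k] + mu * e * U[k, i]
        # Combine: each node averages its neighbors' intermediate estimates
        w = A.T @ psi
    return w
```

    The multi-hop variant would widen the support of each column of A to nodes reachable over a relay path (up to a hop limit) and choose those weights to trade off transient against steady-state MSD, which is the selection rule the paper contributes.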

    A Multitask Diffusion Strategy with Optimized Inter-Cluster Cooperation

    We consider a multitask estimation problem where nodes in a network are divided into several connected clusters, with each cluster performing a least-mean-squares estimation of a different random parameter vector. Inspired by the adapt-then-combine diffusion strategy, we propose a multitask diffusion strategy whose mean stability can be ensured whenever each individual node is stable in the mean, regardless of the inter-cluster cooperation weights. In addition, the proposed strategy achieves asymptotically unbiased estimation when the parameters have the same mean. We also develop an inter-cluster cooperation weight selection scheme that allows each node in the network to locally optimize its inter-cluster cooperation weights. Numerical results demonstrate that our approach leads to a lower average steady-state network mean-square deviation than weights selected by various other commonly adopted methods in the literature. Comment: 30 pages, 8 figures, submitted to IEEE Journal of Selected Topics in Signal Processing.
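
    One common way to realize such inter-cluster cooperation, sketched below under assumptions consistent with the abstract, is to add a coupling term that pulls each node's adaptation toward its inter-cluster neighbors' estimates while combining only within the cluster; the weight matrices and coupling strength eta here are illustrative, not the paper's optimized scheme.

```python
import numpy as np

def multitask_atc_step(w, U, D, A_intra, B_inter, mu, eta):
    """One illustrative multitask ATC iteration (assumed form).

    w        : N x M current estimates, one row per node
    U, D     : N x M regressors and length-N measurements for this iteration
    A_intra  : combination weights, nonzero only within a cluster
               (columns sum to one)
    B_inter  : cooperation weights, nonzero only across cluster boundaries
    mu, eta  : adaptation step size and inter-cluster coupling strength
    """
    N, M = w.shape
    psi = np.empty((N, M))
    for k in range(N):
        e = D[k] - U[k] @ w[k]                      # local innovation
        coupling = sum(B_inter[l, k] * (w[l] - w[k]) for l in range(N))
        psi[k] = w[k] + mu * e * U[k] + eta * coupling
    return A_intra.T @ psi                          # combine within clusters
```

    Restricting the combine step to intra-cluster weights is consistent with the abstract's claim that mean stability can be argued node by node, independently of how the inter-cluster weights B_inter are chosen.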

    Self-weighted Multiple Kernel Learning for Graph-based Clustering and Semi-supervised Classification

    Multiple kernel learning (MKL) methods are generally believed to perform better than single kernel methods. However, some empirical studies show that this is not always true: combining multiple kernels may yield even worse performance than using a single kernel. There are two possible reasons for this failure: (i) most existing MKL methods assume that the optimal kernel is a linear combination of base kernels, which may not hold true; and (ii) some kernel weights are inappropriately assigned due to noise and carelessly designed algorithms. In this paper, we propose a novel MKL framework based on two intuitive assumptions: (i) each kernel is a perturbation of a consensus kernel; and (ii) a kernel that is close to the consensus kernel should be assigned a large weight. Notably, the proposed method can automatically assign an appropriate weight to each kernel without introducing the additional parameters that existing methods require. The proposed approach is integrated into a unified framework for graph-based clustering and semi-supervised classification. Experiments on multiple benchmark datasets verify the superiority of the proposed framework. Comment: Accepted by IJCAI 2018. Code is available.
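
    A minimal sketch of the self-weighting idea described above: alternately form a consensus kernel as a weighted average of the base kernels, then re-weight each kernel by its inverse distance to that consensus. The exact update rule (the 1/(2||.||_F) factor and the simplex normalization) is an assumption inferred from the abstract, not the paper's stated algorithm.

```python
import numpy as np

def self_weighted_consensus(kernels, iters=20, eps=1e-12):
    """Alternate between a consensus kernel and per-kernel weights.

    kernels : list of n x n base kernel (Gram) matrices
    Weight rule (assumed): w_i proportional to 1 / (2 * ||K_i - K||_F),
    so kernels closer to the consensus K receive larger weights, with
    no extra trade-off hyperparameter to tune.
    """
    r = len(kernels)
    w = np.full(r, 1.0 / r)                  # start from uniform weights
    for _ in range(iters):
        # Consensus kernel: weighted average of the base kernels
        K = sum(wi * Ki for wi, Ki in zip(w, kernels))
        # Re-weight each kernel by inverse distance to the consensus
        d = np.array([np.linalg.norm(Ki - K, 'fro') for Ki in kernels])
        w = 1.0 / (2.0 * d + eps)
        w /= w.sum()                         # normalize onto the simplex
    return K, w
```

    Because the weights fall directly out of the distances, nothing beyond the base kernels themselves needs to be tuned, which matches the abstract's claim of weighting without additional parameters.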

    Joint Centrality Distinguishes Optimal Leaders in Noisy Networks

    We study the performance of a network of agents tasked with tracking an external unknown signal in the presence of stochastic disturbances and under the condition that only a limited subset of agents, known as leaders, can measure the signal directly. We investigate the optimal leader selection problem for a prescribed maximum number of leaders, where the optimal leader set minimizes total system error, defined as the steady-state variance about the external signal. In contrast to previously established greedy algorithms for optimal leader selection, our results rely on an expression of total system error in terms of properties of the underlying network graph. We demonstrate that the performance of any given set of leaders depends on their influence as determined by a new graph measure of the centrality of a set. We define the joint centrality of a set of nodes in a network graph such that a leader set with maximal joint centrality is an optimal leader set. In the case of a single leader, we prove that the optimal leader is the node with maximal information centrality. In the case of multiple leaders, we show that the nodes in the optimal leader set balance high information centrality with coverage of the graph. For special cases of graphs, we solve explicitly for optimal leader sets. We illustrate with examples. Comment: Conditionally accepted to IEEE TCN
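
    For the single-leader case, the abstract states that the optimal leader is the node of maximal information centrality. Below is a small numpy sketch that computes Stephenson-Zelen information centrality from the graph Laplacian and picks the arg-max; the 4-node path graph is only an illustrative example, not one from the paper.

```python
import numpy as np

def information_centrality(L):
    """Information centrality of every node (Stephenson-Zelen form).

    L : n x n graph Laplacian of a connected, undirected graph.
    With C = (L + J)^{-1}, J the all-ones matrix, node i's centrality is
    IC(i) = n / (n * C[i, i] + trace(C) - 2 * sum_j C[i, j]).
    """
    n = L.shape[0]
    C = np.linalg.inv(L + np.ones((n, n)))
    return n / (n * np.diag(C) + np.trace(C) - 2 * C.sum(axis=1))

# Illustrative example: pick the single optimal leader on a 4-node path.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
leader = int(np.argmax(information_centrality(L)))  # one of the middle nodes
```

    For multiple leaders, the abstract's joint centrality would replace this per-node score with a set-valued measure that also rewards coverage of the graph; that measure is not reproduced here.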