
    Diffusion Adaptation over Networks under Imperfect Information Exchange and Non-stationary Data

    Adaptive networks rely on in-network and collaborative processing among distributed agents to deliver enhanced performance in estimation and inference tasks. Information is exchanged among the nodes, usually over noisy links. The combination weights that are used by the nodes to fuse information from their neighbors play a critical role in influencing the adaptation and tracking abilities of the network. This paper first investigates the mean-square performance of general adaptive diffusion algorithms in the presence of various sources of imperfect information exchange, quantization errors, and model non-stationarities. Among other results, the analysis reveals that link noise over the regression data modifies the dynamics of the network evolution in a distinct way and leads to biased estimates in steady-state. The analysis also reveals how the network mean-square performance depends on the combination weights. We use these observations to show how the combination weights can be optimized and adapted. Simulation results illustrate the theoretical findings and match well with theory. Comment: 36 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, June 201
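    As a rough illustration of the setting above, the Python sketch below runs an adapt-then-combine (ATC) diffusion LMS recursion in which the intermediate estimates exchanged between nodes are corrupted by additive link noise. The function name atc_diffusion_lms, the uniform combination weights, and the simulation parameters are illustrative assumptions, not the paper's exact construction.

        import numpy as np

        def atc_diffusion_lms(A, w_o, mu=0.01, noise_std=0.1, link_noise_std=0.05,
                              iters=2000, seed=0):
            """ATC diffusion LMS over a network defined by combination matrix A.

            A   : (N, N) left-stochastic combination matrix (columns sum to 1)
            w_o : (M,) unknown parameter vector the nodes try to estimate
            """
            rng = np.random.default_rng(seed)
            N, M = A.shape[0], w_o.size
            w = np.zeros((N, M))                  # current estimate at each node
            msd = np.zeros(iters)                 # network mean-square deviation

            for i in range(iters):
                # Adaptation step: each node uses its own streaming data.
                psi = np.empty_like(w)
                for k in range(N):
                    u = rng.standard_normal(M)    # regression vector
                    d = u @ w_o + noise_std * rng.standard_normal()
                    e = d - u @ w[k]
                    psi[k] = w[k] + mu * e * u

                # Combination step: fuse neighbors' intermediate estimates,
                # corrupted by additive link noise (the imperfect exchange).
                noisy_psi = psi + link_noise_std * rng.standard_normal(psi.shape)
                for k in range(N):
                    w[k] = sum(A[l, k] * noisy_psi[l] for l in range(N))

                msd[i] = np.mean(np.sum((w - w_o) ** 2, axis=1))
            return w, msd

        # Example: 10 nodes with uniform (averaging) combination weights.
        N, M = 10, 4
        A = np.full((N, N), 1.0 / N)
        w_o = np.ones(M)
        w, msd = atc_diffusion_lms(A, w_o)
        print("steady-state MSD (dB):", 10 * np.log10(msd[-200:].mean()))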

    Diffusion Strategies Outperform Consensus Strategies for Distributed Estimation over Adaptive Networks

    Adaptive networks consist of a collection of nodes with adaptation and learning abilities. The nodes interact with each other on a local level and diffuse information across the network to solve estimation and inference tasks in a distributed manner. In this work, we compare the mean-square performance of two main strategies for distributed estimation over networks: consensus strategies and diffusion strategies. The analysis in the paper confirms that under constant step-sizes, diffusion strategies allow information to diffuse more thoroughly through the network and this property has a favorable effect on the evolution of the network: diffusion networks are shown to converge faster and reach lower mean-square deviation than consensus networks, and their mean-square stability is insensitive to the choice of the combination weights. In contrast, and surprisingly, it is shown that consensus networks can become unstable even if all the individual nodes are stable and able to solve the estimation task on their own. When this occurs, cooperation over the network leads to a catastrophic failure of the estimation task. This phenomenon does not occur for diffusion networks: we show that stability of the individual nodes always ensures stability of the diffusion network irrespective of the combination topology. Simulation results support the theoretical findings. Comment: 37 pages, 7 figures, to appear in IEEE Transactions on Signal Processing, 201
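    The contrast between the two strategies shows up in their single-iteration updates. The sketch below uses a simplified LMS setting; the function names, constant step size, and uniform combination weights are illustrative assumptions. The consensus step combines the neighbors' previous iterates while adapting around the node's own previous estimate, whereas the ATC diffusion step adapts first and then fuses the freshly adapted intermediate estimates.

        import numpy as np

        def consensus_lms_step(w, A, U, D, mu):
            # Consensus LMS: combine the neighbors' previous iterates, but compute
            # the adaptation (error) term around the node's own previous estimate.
            w_new = np.empty_like(w)
            for k in range(w.shape[0]):
                mixed = A[:, k] @ w                 # sum_l a_{lk} w_l(i-1)
                e = D[k] - U[k] @ w[k]
                w_new[k] = mixed + mu * e * U[k]
            return w_new

        def diffusion_atc_step(w, A, U, D, mu):
            # ATC diffusion LMS: every node first adapts with its own data,
            # then fuses the neighbors' freshly adapted intermediate estimates.
            psi = np.empty_like(w)
            for k in range(w.shape[0]):
                e = D[k] - U[k] @ w[k]
                psi[k] = w[k] + mu * e * U[k]
            return np.stack([A[:, k] @ psi for k in range(w.shape[0])])

        # One synthetic iteration: N nodes, M-dimensional regressions.
        N, M, mu = 10, 4, 0.05
        rng = np.random.default_rng(1)
        A = np.full((N, N), 1.0 / N)                # uniform combination weights
        w_o = np.ones(M)
        w = np.zeros((N, M))
        U = rng.standard_normal((N, M))
        D = U @ w_o + 0.1 * rng.standard_normal(N)
        print(consensus_lms_step(w, A, U, D, mu)[0])
        print(diffusion_atc_step(w, A, U, D, mu)[0])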

    Robust Distributed Parameter Estimation in Wireless Sensor Networks

    Fully distributed wireless sensor networks (WSNs) without a fusion center have advantages such as scalability in network size and energy efficiency in communications. Each sensor shares its data only with its neighbors and then achieves global consensus quantities through in-network processing. This dissertation considers robust distributed parameter estimation methods, seeking global consensus on the parameters of adaptive learning algorithms and on statistical quantities. A diffusion adaptation strategy with nonlinear transmission is proposed. The nonlinearity is motivated by the need for bounded transmit power, since sensors must iteratively communicate with each other in an energy-efficient manner. Despite the nonlinearity, it is shown that the algorithm performs close to the linear case with the added advantage of power savings. The dissertation also discusses convergence properties of the algorithm in the mean and in the mean-square sense. Often, the average is used to measure the central tendency of sensed data over a network. When there are outliers in the data, however, the average can be highly biased. Robust alternatives are the median, the mode, and the trimmed mean. Quantiles generalize the median and can also be used to compute the trimmed mean. A consensus-based distributed quantile estimation algorithm is proposed and applied, through simulation, to finding the trimmed mean, the median, maximum or minimum values, and to the identification of outliers. It is shown that the estimated quantities are asymptotically unbiased and converge toward the sample quantile in the mean-square sense. Step-size sequences with proper decay rates are also discussed for the convergence analysis. Another measure of central tendency is the mode, which represents the most probable value and is also robust to outliers and other contaminations in the data. The proposed distributed mode estimation algorithm achieves a global mode by recursively shifting the conditional mean of the measurement data until it converges to a stationary point of the estimated density function. It is also possible to estimate the mode by using a grid vector together with a kernel density estimator: the density is estimated at each grid point, and the points are updated until they converge to a global mode. (Doctoral Dissertation, Electrical Engineering, 201)
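    A minimal sketch of a consensus-based quantile recursion in the spirit described above: each node applies a decaying-step-size stochastic-approximation correction toward the target quantile and then averages with its neighbors. The function name, the step-size schedule, and the contaminated data stream are illustrative assumptions rather than the dissertation's exact algorithm.

        import numpy as np

        def distributed_quantile(A, p, sample_stream, iters=5000, c=1.0, decay=0.8, seed=0):
            """Estimate the p-th quantile of the network-wide data distribution.

            A             : (N, N) combination matrix (columns sum to 1)
            p             : target quantile in (0, 1), e.g. 0.5 for the median
            sample_stream : callable(rng, N) returning one new sample per node
            """
            rng = np.random.default_rng(seed)
            N = A.shape[0]
            q = np.zeros(N)                           # local quantile estimates
            for i in range(iters):
                x = sample_stream(rng, N)             # one fresh measurement per node
                step = c / (i + 1) ** decay           # decaying step size
                # Local correction toward the p-th quantile, then consensus
                # averaging with the neighbors' corrected estimates.
                corrected = q + step * (p - (x <= q).astype(float))
                q = A.T @ corrected                   # q_k <- sum_l a_{lk} corrected_l
            return q

        # Example: 8 nodes observing exponential data contaminated with outliers;
        # the median estimate is robust to the occasional large values.
        N = 8
        A = np.full((N, N), 1.0 / N)
        stream = lambda rng, n: np.where(rng.random(n) < 0.05,
                                         rng.exponential(50.0, n),   # outliers
                                         rng.exponential(1.0, n))
        print("median estimates:", distributed_quantile(A, 0.5, stream))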