146 research outputs found

    Lossy network correlated data gathering with high-resolution coding

    Sensor networks measuring correlated data are considered, where the task is to gather data from the network nodes to a sink. A specific scenario is addressed, where data at the nodes are lossy coded with high resolution, and the information measured by the nodes has to be reconstructed at the sink within both total and individual distortion bounds. The first problem considered is to find the optimal transmission structure and the rate-distortion allocations at the various spatially located nodes, so as to minimize the total power consumption cost of the network, assuming fixed node positions. The optimal transmission structure is the shortest path tree, and in the high-resolution case the problems of rate and distortion allocation separate: first the distortion allocation is found as a function of the transmission structure, and second, for a given distortion allocation, the rate allocation is computed. The second problem addressed is the case where the node positions can be chosen, by finding the optimal node placement for two different targets of interest, namely total power minimization and network lifetime maximization. Finally, a node placement solution that provides a tradeoff between the two metrics is proposed.
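The abstract identifies the shortest path tree as the optimal transmission structure for fixed node positions. As an illustration, a minimal sketch of computing such a tree with Dijkstra's algorithm on a toy network; the node names and edge costs are hypothetical, not from the paper:

```python
import heapq

def shortest_path_tree(adj, sink):
    """Dijkstra from the sink: returns parent pointers forming the
    shortest path tree and each node's path cost to the sink."""
    dist = {sink: 0.0}
    parent = {sink: None}
    heap = [(0.0, sink)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                parent[v] = u
                heapq.heappush(heap, (nd, v))
    return parent, dist

# Toy 4-node network; edge weights model per-link transmission cost.
adj = {
    "sink": [("a", 1.0), ("b", 4.0)],
    "a": [("sink", 1.0), ("b", 1.0), ("c", 5.0)],
    "b": [("sink", 4.0), ("a", 1.0), ("c", 1.0)],
    "c": [("a", 5.0), ("b", 1.0)],
}
parent, dist = shortest_path_tree(adj, "sink")
```

Once the tree is fixed, the paper's two-step separation applies: allocate distortions given the tree, then compute rates for that distortion allocation.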

    Network correlated data gathering with explicit communication: NP-completeness and algorithms

    We consider the problem of correlated data gathering by a network with a sink node and a tree-based communication structure, where the goal is to minimize the total cost of transporting the information collected by the nodes to the sink. For source coding of correlated data, we consider a joint entropy-based coding model with explicit communication, where coding is simple but optimizing the transmission structure is difficult. We first formulate the optimization problem in the general case, and then study a network setting where the entropy conditioning at the nodes does not depend on the amount of side information, but only on its availability. We prove that even in this simple case the optimization problem is NP-hard. We propose efficient, scalable, and distributed heuristic approximation algorithms for solving this problem and show by numerical simulations that the total transmission cost can be significantly improved over direct transmission or the shortest path tree. We also present an approximation algorithm that provides a tree transmission structure with total cost within a constant factor of the optimum.
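The availability-only conditioning model above can be sketched with a toy cost evaluator: leaves code at full entropy, nodes that receive side information from their children code at a conditional entropy, and every node's bits pay for the whole path to the sink. The rates R and r and the example tree are illustrative assumptions, not values from the paper:

```python
def tree_cost(parent, weights, R, r):
    """Total transmission cost of a gathering tree.

    parent:  child -> parent map (the sink has parent None)
    weights: weight of the edge (node, parent[node])
    Leaves code at full entropy R; interior nodes overhear their
    children's data and code at conditional entropy r <= R.
    Each node's bits traverse its entire path to the sink.
    """
    children = {}
    for c, p in parent.items():
        if p is not None:
            children.setdefault(p, []).append(c)

    def path_weight(u):
        total = 0.0
        while parent[u] is not None:
            total += weights[u]
            u = parent[u]
        return total

    cost = 0.0
    for u, p in parent.items():
        if p is None:
            continue  # the sink transmits nothing
        rate = r if u in children else R
        cost += rate * path_weight(u)
    return cost

# Toy chain sink <- a <- b with unit edge weights.
parent = {"sink": None, "a": "sink", "b": "a"}
weights = {"a": 1.0, "b": 1.0}
cost = tree_cost(parent, weights, R=2.0, r=1.0)
```

The NP-hardness result concerns searching over such trees: the tree shape changes which nodes get side information, so rate assignment and routing are coupled.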

    Networked Slepian-Wolf: theory, algorithms, and scaling laws

    Consider a set of correlated sources located at the nodes of a network, and a set of sinks that are the destinations for some of the sources. The minimization of cost functions that are the product of a function of the rate and a function of the path weight is considered, for both the data-gathering scenario, which is relevant in sensor networks, and general traffic matrices, relevant for general networks. The minimization is achieved by jointly optimizing a) the transmission structure, which is shown to consist in general of a superposition of trees, and b) the rate allocation across the source nodes, which is done by Slepian-Wolf coding. The overall minimization can be achieved in two concatenated steps. First, the optimal transmission structure is found, which in general amounts to finding a Steiner tree, and second, the optimal rate allocation is obtained by solving an optimization problem with cost weights determined by the given optimal transmission structure, and with linear constraints given by the Slepian-Wolf rate region. For the case of data gathering, the optimal transmission structure is fully characterized and a closed-form solution for the optimal rate allocation is provided. For the general case of an arbitrary traffic matrix, the problem of finding the optimal transmission structure is NP-complete. For large networks, in some simplified scenarios, the total costs associated with Slepian-Wolf coding and explicit communication (conditional encoding based on explicitly communicated side information) are compared. Finally, the design of decentralized algorithms for the optimal rate allocation is analyzed.
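For data gathering, the closed-form rate allocation picks a corner point of the Slepian-Wolf region: nodes are sorted by path weight to the sink, the nearest node codes at full entropy, and each farther node conditions on all nearer ones (small rates go on expensive paths). A sketch under a toy decaying conditional-entropy model; the entropy model and all numbers are assumptions for illustration:

```python
def sw_rate_allocation(dists, cond_entropy):
    """Corner-point Slepian-Wolf allocation for data gathering.

    dists:           node -> shortest-path weight to the sink
    cond_entropy(k): H(X | k nearer sources) -- model-dependent
    Sorting by distance puts the full-entropy rate on the cheapest
    path and the most-conditioned rate on the most expensive one.
    """
    order = sorted(dists, key=dists.get)
    rates = {node: cond_entropy(k) for k, node in enumerate(order)}
    total_cost = sum(rates[n] * dists[n] for n in dists)
    return rates, total_cost

# Toy model: conditioning on each additional nearer source halves
# the rate (an assumption, not taken from the paper).
h = lambda k: 4.0 * 0.5 ** k
rates, cost = sw_rate_allocation({"a": 1.0, "b": 2.0, "c": 3.0}, h)
```

Any corner point of the Slepian-Wolf region is achievable without inter-node communication, which is what makes this two-step separation work.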

    Accurate Graph Filtering in Wireless Sensor Networks

    Wireless sensor networks (WSNs) are considered a major enabling technology for the Internet of Things (IoT) paradigm. The emerging field of graph signal processing can also contribute to enabling the IoT by providing key tools, such as graph filters, for processing the data associated with sensor devices. Graph filters can be run over WSNs in a distributed manner by means of a number of communication exchanges among the nodes. However, WSNs are often affected by interference and noise, which leads to modeling these networks as directed, random, and time-varying graph topologies. Most existing works neglect this problem by making the unrealistic assumption that a packet sent between two neighboring nodes has the same probability of link activation in both directions. This work focuses on the problem of graph filtering in random asymmetric WSNs. We first show that filtering with finite impulse response graph filters (node-invariant and node-variant) requires equal connectivity probabilities for all links in order to obtain unbiased filtering, which cannot be achieved in practice in random WSNs. We then characterize the graph filtering error and present an efficient strategy for conducting graph filtering tasks over random WSNs with node-variant graph filters by maximizing accuracy, that is, ensuring a small bias-variance trade-off. To enforce the desired accuracy, we optimize the filter coefficients and design a cross-layer distributed scheduling algorithm at the MAC layer. Extensive numerical experiments show the efficiency of the proposed solution, as well as of the cross-layer distributed scheduling algorithm, for the denoising application. Comment: 15 pages, 8 figures, submitted to IEEE Internet of Things Journal
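A node-invariant finite impulse response graph filter applies a polynomial of the graph shift operator to the signal, y = Σ_k h_k S^k x. A minimal centralized sketch; in a real WSN each application of S would be one round of neighbor exchanges, which is where random link failures enter:

```python
def graph_filter(S, h, x):
    """Node-invariant FIR graph filter y = sum_k h[k] * S^k x.

    S: shift operator as a dense n x n list-of-lists matrix
    h: filter taps [h0, h1, ..., hK]
    x: graph signal, one value per node
    A filter of order K needs K shift applications, i.e. K
    communication rounds in a distributed implementation.
    """
    n = len(x)
    y = [h[0] * xi for xi in x]
    z = list(x)
    for k in range(1, len(h)):
        z = [sum(S[i][j] * z[j] for j in range(n)) for i in range(n)]  # z <- S z
        for i in range(n):
            y[i] += h[k] * z[i]
    return y

# Toy example: adjacency shift of a 3-node path graph, order-1 filter.
S = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
y = graph_filter(S, [1.0, 0.5], [1.0, 0.0, 0.0])
```

In the random asymmetric setting studied by the paper, each realized shift is a random directed subgraph of S, which is what biases the filter output unless link activation probabilities are equal.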

    An Online Multiple Kernel Parallelizable Learning Scheme

    The performance of reproducing kernel Hilbert space-based methods is known to be sensitive to the choice of the reproducing kernel. Choosing an adequate reproducing kernel can be challenging and computationally demanding, especially in data-rich tasks without prior information about the solution domain. In this paper, we propose a learning scheme that scalably combines several single-kernel online methods to reduce the kernel-selection bias. The proposed learning scheme applies to any task formulated as a regularized empirical risk minimization convex problem. More specifically, our learning scheme is based on a multi-kernel learning formulation that can be applied to widen any single-kernel solution space, thus increasing the possibility of finding higher-performance solutions. In addition, it is parallelizable, allowing the computational load to be distributed across different computing units. We show experimentally that the proposed learning scheme outperforms each of the combined single-kernel online methods in terms of the cumulative regularized least squares cost metric.
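One way such a scheme can be sketched: run several single-kernel online learners in parallel and combine their predictions with multiplicative weight updates driven by each learner's instantaneous loss. This is an illustrative construction under assumptions, not the paper's exact formulation; the kernel widths, step sizes, and weighting rule are all hypothetical:

```python
import math

def gauss_kernel(w):
    """Gaussian kernel with (assumed) inverse-width parameter w."""
    return lambda a, b: math.exp(-w * (a - b) ** 2)

class OnlineKernelLMS:
    """Single-kernel functional gradient descent (kernel LMS)."""
    def __init__(self, kernel, step=0.5):
        self.kernel, self.step = kernel, step
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * self.kernel(c, x)
                   for a, c in zip(self.alphas, self.centers))

    def update(self, x, y):
        err = y - self.predict(x)
        self.centers.append(x)
        self.alphas.append(self.step * err)
        return err

class MultiKernelCombiner:
    """Convex combination of single-kernel learners, reweighted
    multiplicatively; each learner's update is independent, so the
    per-learner work can run on separate computing units."""
    def __init__(self, learners, eta=0.5):
        self.learners = learners
        self.weights = [1.0 / len(learners)] * len(learners)
        self.eta = eta

    def predict(self, x):
        return sum(w * l.predict(x)
                   for w, l in zip(self.weights, self.learners))

    def update(self, x, y):
        losses = [l.update(x, y) ** 2 for l in self.learners]
        self.weights = [w * math.exp(-self.eta * e)
                        for w, e in zip(self.weights, losses)]
        s = sum(self.weights)
        self.weights = [w / s for w in self.weights]

# Stream a toy constant target through two kernel widths.
mkl = MultiKernelCombiner([OnlineKernelLMS(gauss_kernel(0.5)),
                           OnlineKernelLMS(gauss_kernel(5.0))])
for _ in range(50):
    for x in (0.0, 0.5, 1.0):
        mkl.update(x, 1.0)
```

The combiner never has to pick one kernel in advance: poorly matched kernels lose weight online, which is the bias-reduction idea the abstract describes.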

    Tracking of Quantized Signals Based on Online Kernel Regression

    Kernel-based approaches have achieved noticeable success as non-parametric regression methods under the framework of stochastic optimization. However, most kernel-based methods in the literature are not suited to tracking sequentially streamed quantized data samples from dynamic environments. This shortcoming has two main causes: first, poor versatility in tracking variables that may change unpredictably over time, primarily because of a lack of flexibility in choosing the functional cost that best suits the associated regression problem; second, indifference to the smoothness of the underlying physical signal generating the samples. This work introduces a novel algorithm consisting of an online regression problem that accounts for these two drawbacks and a stochastic proximal method that exploits its structure. In addition, we provide tracking guarantees by analyzing the dynamic regret of our algorithm. Finally, we present experimental results that support our theoretical analysis and show that our algorithm performs favorably compared to the state of the art.
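The ingredients above can be sketched as a uniform quantizer feeding a stochastic proximal step on a robust absolute-error cost: the prox of |θ − y| moves the estimate toward each quantized sample by at most a fixed amount, which tolerates quantization noise while still tracking drift. The quantizer step, cost choice, and parameters are illustrative assumptions, not the paper's exact algorithm:

```python
def quantize(x, step=0.25):
    """Uniform quantizer (assumed sample model)."""
    return step * round(x / step)

def prox_abs(theta, y, lam):
    """Proximal step on lam * |theta - y|: soft move toward y,
    capped at lam per sample (exact prox of the absolute cost)."""
    d = theta - y
    if abs(d) <= lam:
        return y
    return theta - lam * (1 if d > 0 else -1)

def track(samples, lam=0.2, theta0=0.0):
    """Online tracking of a drifting level from quantized samples
    via repeated stochastic proximal steps."""
    theta = theta0
    path = []
    for y in samples:
        theta = prox_abs(theta, quantize(y), lam)
        path.append(theta)
    return path

# Toy stream: a constant level observed through the quantizer.
path = track([1.0] * 10)
```

The capped per-sample move is what a dynamic-regret analysis rewards: the estimate neither overreacts to a single coarse sample nor freezes when the underlying level moves.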

    Quantization in Graph Convolutional Neural Networks
