
    Sampling of graph signals via randomized local aggregations

    Sampling of signals defined over the nodes of a graph is one of the crucial problems in graph signal processing. While in classical signal processing sampling is a well-defined operation, when we consider a graph signal many new challenges arise and defining an efficient sampling strategy is not straightforward. Recently, several works have addressed this problem. The most common techniques select a subset of nodes to reconstruct the entire signal. However, such methods often require knowledge of the signal support and computation of the sparsity basis before sampling. Instead, in this paper we propose a new approach to this issue. We introduce a novel technique that combines localized sampling with compressed sensing. We first choose a subset of nodes and then, for each node of the subset, we compute random linear combinations of signal coefficients localized at the node itself and its neighborhood. The proposed method provides theoretical guarantees in terms of reconstruction and stability to noise for any graph and any orthonormal basis, even when the support is not known.
    Comment: IEEE Transactions on Signal and Information Processing over Networks, 201
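The sampling step described in the abstract can be sketched in a few lines. The following is a hypothetical illustration only: the ring graph, the chosen node subset, and the Gaussian combination weights are all assumptions for the sake of the example, not the paper's actual construction or its reconstruction procedure.

```python
import numpy as np

# Toy illustration of localized randomized aggregation sampling:
# pick a subset of nodes and, at each one, take a random linear
# combination of the signal values on the node and its 1-hop neighborhood.

rng = np.random.default_rng(0)

# Small ring graph on N nodes (adjacency matrix) -- an assumed toy graph.
N = 8
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1

x = rng.standard_normal(N)      # graph signal to be sampled
sampled = [0, 3, 6]             # assumed subset of sampling nodes

measurements = []
for v in sampled:
    nbhd = np.flatnonzero(A[v]).tolist() + [v]   # neighborhood including v
    weights = rng.standard_normal(len(nbhd))     # random combination weights
    measurements.append(weights @ x[nbhd])       # one scalar measurement

y = np.array(measurements)      # one aggregated measurement per sampled node
```

Reconstruction would then proceed with a standard compressed-sensing solver applied to these aggregated measurements, which is where the paper's guarantees come in.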

    Convolutional Neural Network Architectures for Signals Supported on Graphs

    Two architectures that generalize convolutional neural networks (CNNs) for the processing of signals supported on graphs are introduced. We start with the selection graph neural network (GNN), which replaces linear time-invariant filters with linear shift-invariant graph filters to generate convolutional features and reinterprets pooling as a possibly nonlinear subsampling stage where nearby nodes pool their information in a set of preselected sample nodes. A key component of the architecture is to remember the position of sampled nodes to permit computation of convolutional features at deeper layers. The second architecture, dubbed aggregation GNN, diffuses the signal through the graph and stores the sequence of diffused components observed by a designated node. This procedure effectively aggregates all components into a stream of information having temporal structure, to which the convolution and pooling stages of regular CNNs can be applied. A multinode version of aggregation GNNs is further introduced for operation in large-scale graphs. An important property of selection and aggregation GNNs is that they reduce to conventional CNNs when particularized to time signals reinterpreted as graph signals on a circulant graph. Comparative numerical analyses are performed in a source localization application over synthetic and real-world networks. Performance is also evaluated for an authorship attribution problem and text category classification. Multinode aggregation GNNs are consistently the best-performing GNN architecture.
    Comment: Submitted to IEEE Transactions on Signal Processing
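The aggregation idea in this abstract can be sketched concretely. This is a minimal toy sketch, assuming a random symmetric matrix as a stand-in shift operator and arbitrary filter taps; it only illustrates how diffusion at one node yields a time-like sequence, not the paper's full architecture.

```python
import numpy as np

# Aggregation sketch: repeatedly diffuse the signal with the graph shift
# operator and record the value observed at one designated node, yielding
# a sequence with temporal structure to which a regular 1D CNN can apply.

rng = np.random.default_rng(1)
N, K = 6, 4                      # graph size and number of diffusions (assumed)
S = rng.random((N, N))
S = (S + S.T) / 2                # symmetric stand-in graph shift operator
x = rng.standard_normal(N)       # input graph signal
node = 0                         # designated aggregation node

seq = []
z = x.copy()
for _ in range(K):
    seq.append(z[node])          # component observed at the designated node
    z = S @ z                    # one diffusion step through the graph
seq = np.array(seq)              # sequence with temporal structure

# A regular 1D convolution (the CNN stage) can now act on the sequence.
h = np.array([0.5, 0.5])         # illustrative filter taps
features = np.convolve(seq, h, mode="valid")
```

In the actual architecture this sequence would feed full convolution and pooling layers; the single `np.convolve` call stands in for that stage.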

    Spectral Estimation for Graph Signals Using Reed-Solomon Decoding

    Spectral estimation, coding theory, and compressed sensing are three important sub-fields of signal processing and information theory. Although these fields developed fairly independently, several important connections between them have been identified. One notable connection between Reed-Solomon (RS) decoding, spectral estimation, and Prony's method of curve fitting was observed by Wolf in 1967. With the recent developments in the area of Graph Signal Processing (GSP), where the signals of interest have high-dimensional and irregular structure, a natural and important question is whether these connections can be extended to spectral estimation for graph signals. Recently, Marques et al. have shown that a bandlimited graph signal that is k-sparse in the Graph Fourier Transform (GFT) domain can be reconstructed from 2k measurements obtained using a dynamic sampling strategy. Inspired by this work, we establish a connection between coding theory and GSP and propose a sparse recovery algorithm for graph signals using methods similar to the Berlekamp-Massey algorithm and Forney's algorithm for decoding RS codes. In other words, we develop an equivalent of RS decoding for graph signals. The time complexity of the recovery algorithm is O(k^2), which is independent of the number of nodes N in the graph. The proposed framework has applications in infrastructure networks such as communication networks and power grids, including maximization of the power efficiency of a multiple-access communication channel and anomaly detection in sensor networks.
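The "2k measurements suffice for a k-sparse spectrum" claim rests on a Prony-style recurrence, which is the same structure Berlekamp-Massey exploits. The sketch below is an assumed toy instance, not the paper's algorithm: the two "active graph frequencies" `lam` and amplitudes `a` are made up, and the 2k samples stand in for diffused observations at one node.

```python
import numpy as np

# Prony-style recovery from 2k samples of a k-sparse spectral model.
# y[m] = sum_i a_i * lam_i**m satisfies a linear recurrence of order k,
# whose annihilating polynomial has the active frequencies as roots --
# the same structure the Berlekamp-Massey algorithm exploits.

k = 2
lam = np.array([0.9, -0.5])   # assumed active graph frequencies
a = np.array([2.0, 1.0])      # assumed amplitudes
m = np.arange(2 * k)
y = (a[None, :] * lam[None, :] ** m[:, None]).sum(axis=1)  # 2k samples

# Solve for the annihilating-filter coefficients p with
# y[n] + p[0]*y[n-1] + p[1]*y[n-2] = 0 for n = k, ..., 2k-1.
T = np.column_stack([y[k - 1 - j : 2 * k - 1 - j] for j in range(k)])
p = np.linalg.solve(T, -y[k : 2 * k])

# Roots of z^k + p[0] z^(k-1) + ... recover the active frequencies.
roots = np.roots(np.concatenate(([1.0], p)))   # -> {0.9, -0.5}
```

Once the active frequencies are known, the amplitudes follow from a small Vandermonde solve (the role Forney's algorithm plays in RS decoding); only k-by-k systems appear, which is where the O(k^2) complexity comes from.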

    Graph Neural Networks

    The theme of this dissertation is machine learning on graph data. Graphs are generic models of signal structure that play a crucial role in tackling problems in a diverse array of fields, including smart grids, sensor networks, and robot swarms. Thus, developing machine learning models that can successfully learn from graph data is a promising area of research with high potential impact. This dissertation focuses particularly on the topic of graph neural networks (GNNs) as the main machine learning model for successfully addressing problems involving graph data. GNNs are nonlinear representation maps that exploit the underlying graph structure to improve learning and achieve better performance. One of the key properties of GNNs is that they are local and distributed mathematical models, making them particularly relevant for problems involving physical networks. The overarching objective of this dissertation is to characterize the representation space of GNNs. This entails several research directions. First, we define a mathematical framework that provides the general tools and lays the groundwork for the analysis and design of concrete GNN models. Second, we derive fundamental properties and theoretical insights that serve as a foundation for understanding the success observed when employing GNNs in practical problems involving graph data. Third, we explore new application domains that are naturally suited for the use of GNNs based on the properties they exhibit. We leverage graph signal processing (GSP) and its key concepts of graph filtering and the graph frequency domain to provide a general mathematical framework for characterizing GNNs. We derive the properties of permutation equivariance and stability to perturbations of the graph support and use these to explain the improved performance of GNNs over linear graph filters. We also show how these two properties help explain the scalability and transferability of GNNs. We explore the use of GNNs in learning decentralized controllers and showcase their success in the problem of flocking.
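The permutation equivariance property mentioned above can be checked numerically for a polynomial graph filter, the building block that GNNs compose with nonlinearities. The sketch below is an illustration of the property, not code from the dissertation; the shift operator, signal, and filter taps are all assumed toy values.

```python
import numpy as np

# Permutation equivariance of a polynomial graph filter
# H(S) x = sum_k h_k S^k x: relabeling the nodes of the graph and the
# input signal relabels the output in exactly the same way,
# since (P S P^T)^k (P x) = P S^k x for any permutation matrix P.

rng = np.random.default_rng(2)
N = 5
S = rng.random((N, N))
S = (S + S.T) / 2                 # symmetric stand-in graph shift operator
x = rng.standard_normal(N)        # graph signal
h = [1.0, 0.5, 0.25]              # assumed filter taps

def graph_filter(S, x, h):
    """Apply the polynomial graph filter sum_k h[k] * S^k to x."""
    out, z = np.zeros_like(x), x.copy()
    for hk in h:
        out += hk * z
        z = S @ z                 # next power of the shift applied to x
    return out

P = np.eye(N)[rng.permutation(N)]            # random permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)    # filter on the relabeled graph
rhs = P @ graph_filter(S, x, h)              # relabeled filter output
print(np.allclose(lhs, rhs))                 # prints True
```

Because GNN layers apply such filters followed by pointwise nonlinearities, which commute with permutations, the equivariance of the filter lifts to the whole network; this is the mechanism behind the scalability and transferability claims in the abstract.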