Graph learning under spectral sparsity constraints
Graph inference plays an essential role in machine learning, pattern
recognition, and classification. Signal-processing-based approaches in the
literature generally assume some variational property of the observed data on
the graph. We make a case for inferring graphs on which the observed data has
high variation. We propose a signal-processing-based inference model that
allows for wideband frequency variation in the data, together with an
algorithm for graph inference. The proposed algorithm consists of two steps:
1) learning orthogonal eigenvectors of a graph from the data; 2) recovering
the adjacency matrix of the graph topology from the learned eigenvectors. The
first step is solved by an iterative algorithm with a closed-form update at
each iteration. In the second step, the adjacency matrix is inferred from the
eigenvectors by solving a convex optimization problem. Numerical results on
synthetic data show that the proposed algorithm can effectively capture the
meaningful graph topology from observed data under the wideband assumption.
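The two-step pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: step 1 stands in for the iterative closed-form update with a plain eigendecomposition of the sample covariance, and step 2 replaces the convex program with a crude symmetric projection; the function names and the eigenvalue choice are assumptions.

```python
import numpy as np

def learn_eigenvectors(X):
    # Step 1 (sketch): obtain orthonormal graph eigenvectors from the
    # sample covariance of the observations X (nodes x samples).
    # The paper uses an iterative algorithm with a closed-form update;
    # a single eigendecomposition is used here only for illustration.
    C = X @ X.T / X.shape[1]
    _, U = np.linalg.eigh(C)
    return U  # columns are orthonormal

def recover_adjacency(U, eigenvalues):
    # Step 2 (sketch): rebuild a symmetric adjacency matrix from the
    # eigenvectors and a chosen set of eigenvalues, then zero the
    # diagonal and clip negative weights. The paper solves a convex
    # optimization problem instead of this naive projection.
    A = U @ np.diag(eigenvalues) @ U.T
    np.fill_diagonal(A, 0.0)
    return np.clip(A, 0.0, None)

# Toy usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.standard_normal((8, 200))        # 8 nodes, 200 observations
U = learn_eigenvectors(X)
A = recover_adjacency(U, rng.uniform(0.0, 1.0, size=8))
```

The result is a valid (symmetric, nonnegative, hollow-diagonal) weight matrix, which is the minimal sanity check any adjacency-recovery step must pass.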
Fast Graph Convolutional Recurrent Neural Networks
This paper proposes a Fast Graph Convolutional Recurrent Neural Network (FGRNN)
architecture to predict sequences with an underlying graph structure. The
proposed architecture addresses the limitations of the standard recurrent
neural network (RNN), namely, vanishing and exploding gradients, causing
numerical instabilities during training. State-of-the-art architectures that
combine gated RNN architectures, such as Long Short-Term Memory (LSTM) and
Gated Recurrent Unit (GRU) with graph convolutions are known to improve the
numerical stability during the training phase, but at the expense of the model
size involving a large number of training parameters. FGRNN addresses this
problem by adding a weighted residual connection with only two extra training
parameters as compared to the standard RNN. Numerical experiments on a real
3D point cloud dataset corroborate the proposed architecture.
Comment: 5 pages. Submitted to the Asilomar Conference on Signals, Systems, and
Computers
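The weighted residual connection the abstract describes can be sketched as a single recurrent cell: a graph convolution (multiplication by a graph shift operator) feeds a standard tanh RNN update, and two extra scalar parameters mix the update with the previous hidden state. The cell structure, names, and the use of a one-hop shift operator are assumptions for illustration, not the paper's exact architecture.

```python
import numpy as np

def fgrnn_cell(X_t, H_prev, S, W, U, b, alpha, beta):
    # One recurrent step of a sketched FGRNN-style cell.
    #   X_t    : node features at time t, shape (n_nodes, f_in)
    #   H_prev : previous hidden state,   shape (n_nodes, f_hid)
    #   S      : graph shift operator (e.g. normalized adjacency),
    #            shape (n_nodes, n_nodes)
    #   W, U, b: standard RNN weights and bias
    #   alpha, beta: the two extra scalar training parameters that
    #                weight the residual connection (assumed form)
    H_tilde = np.tanh(S @ X_t @ W + S @ H_prev @ U + b)
    return alpha * H_tilde + beta * H_prev

# Toy usage: 5 graph nodes, 3 input features, 4 hidden units.
rng = np.random.default_rng(1)
n, f_in, f_hid = 5, 3, 4
S = rng.uniform(size=(n, n)); S = (S + S.T) / 2   # symmetric shift operator
X_t = rng.standard_normal((n, f_in))
H_prev = rng.standard_normal((n, f_hid))
W = rng.standard_normal((f_in, f_hid))
U = rng.standard_normal((f_hid, f_hid))
b = np.zeros(f_hid)
H_t = fgrnn_cell(X_t, H_prev, S, W, U, b, alpha=0.5, beta=0.9)
```

With `beta` near 1 and `alpha` small, the cell behaves like an identity skip connection, which is the mechanism gated architectures use to keep gradients from vanishing while FGRNN pays for it with only two scalars.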