Learning Laplacian Matrix in Smooth Graph Signal Representations
The construction of a meaningful graph plays a crucial role in the success of
many graph-based representations and algorithms for handling structured data,
especially in the emerging field of graph signal processing. However, a
meaningful graph is not always readily available from the data, nor easy to
define depending on the application domain. In particular, it is often
desirable in graph signal processing applications that a graph is chosen such
that the data admit certain regularity or smoothness on the graph. In this
paper, we address the problem of learning graph Laplacians, which is equivalent
to learning graph topologies, such that the input data form graph signals with
smooth variations on the resulting topology. To this end, we adopt a factor
analysis model for the graph signals and impose a Gaussian probabilistic prior
on the latent variables that control these signals. We show that the Gaussian
prior leads to an efficient representation that favors the smoothness property
of the graph signals. We then propose an algorithm for learning graphs that
enforces such property and is based on minimizing the variations of the signals
on the learned graph. Experiments on both synthetic and real-world data
demonstrate that the proposed graph learning framework can efficiently infer
meaningful graph topologies from signal observations under the smoothness
prior.
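The smoothness criterion at the heart of this framework is the Laplacian quadratic form, x^T L x = sum over edges of w_ij (x_i - x_j)^2, which the learning objective minimizes over candidate Laplacians. A minimal numpy sketch of that measure (the toy weights and signals are illustrative, not the paper's learned ones):

```python
import numpy as np

# Graph Laplacian quadratic form: x^T L x = sum_{i<j} w_ij (x_i - x_j)^2,
# the smoothness measure that graph learning under a smoothness prior minimizes.
W = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])          # symmetric weighted adjacency (toy example)
L = np.diag(W.sum(axis=1)) - W        # combinatorial Laplacian

def smoothness(x, L):
    return float(x @ L @ x)

x_smooth = np.array([1.0, 1.0, 1.0])  # constant signal: perfectly smooth
x_rough = np.array([1.0, -1.0, 1.0])  # sign flips across the strong edges

print(smoothness(x_smooth, L))  # 0.0 -- constant signals lie in the nullspace of L
print(smoothness(x_rough, L))   # 12.0 -- 1*(1-(-1))^2 + 2*((-1)-1)^2
```

Signals with large variation across heavy edges score high, so minimizing this quantity over L (with suitable constraints) favors topologies on which the observed data vary slowly.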
Structural Analysis of Network Traffic Matrix via Relaxed Principal Component Pursuit
The network traffic matrix is widely used in network operation and
management. It is therefore of crucial importance to analyze the components and
the structure of the network traffic matrix, for which several mathematical
approaches such as Principal Component Analysis (PCA) were proposed. In this
paper, we first argue that PCA performs poorly for analyzing a traffic matrix
that is polluted by large-volume anomalies, and then propose a new
decomposition model for the network traffic matrix. According to this model, we
carry out the structural analysis by decomposing the network traffic matrix
into three sub-matrices, namely, the deterministic traffic, the anomaly traffic
and the noise traffic matrix, which is similar to the Robust Principal
Component Analysis (RPCA) problem previously studied in [13]. Based on the
Relaxed Principal Component Pursuit (Relaxed PCP) method and the Accelerated
Proximal Gradient (APG) algorithm, we present an iterative approach for
decomposing a traffic matrix, and demonstrate its efficiency and flexibility by
experimental results. Finally, we further discuss several features of the
deterministic and noise traffic. Our study develops a novel method for the
problem of structural analysis of the traffic matrix, which is robust against
pollution of large-volume anomalies.
Comment: Accepted to Elsevier Computer Networks.
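The decomposition above can be sketched with the two proximal operators that drive PCP-style methods: singular value thresholding for the low-rank (deterministic) part and entrywise soft-thresholding for the sparse (anomaly) part, with the remainder treated as noise. This is a simplified alternating-minimization sketch, not the paper's accelerated APG variant, and the parameters would need tuning in practice:

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, tau):
    # Entrywise soft-thresholding: proximal operator of the l1 norm.
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def relaxed_pcp(M, lam=None, mu=0.1, iters=200):
    # Split M into deterministic (low-rank) A, anomaly (sparse) E, and
    # noise N = M - A - E, by block coordinate descent on
    #   min ||A||_* + lam * ||E||_1 + (1 / (2*mu)) * ||M - A - E||_F^2.
    if lam is None:
        lam = 1.0 / np.sqrt(max(M.shape))
    A = np.zeros_like(M)
    E = np.zeros_like(M)
    for _ in range(iters):
        A = svt(M - E, mu)          # exact minimizer in A for fixed E
        E = soft(M - A, lam * mu)   # exact minimizer in E for fixed A
    return A, E, M - A - E

# Toy traffic matrix: rank-1 baseline plus one large anomaly.
M = np.outer(np.ones(4), np.array([1., 2., 3., 4.]))
M[0, 0] += 10.0
A, E, N = relaxed_pcp(M)
```

Each step is the closed-form proximal update for its block, so the objective decreases monotonically; the APG algorithm used in the paper accelerates this scheme with momentum.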
On the Stability of Graph Convolutional Neural Networks under Edge Rewiring
Graph neural networks are experiencing a surge of popularity within the
machine learning community due to their ability to adapt to non-Euclidean
domains and instil inductive biases. Despite this, their stability, i.e., their
robustness to small perturbations in the input, is not yet well understood.
Although there exist some results showing the stability of graph neural
networks, most take the form of an upper bound on the magnitude of change due
to a perturbation in the graph topology. However, the change in the graph
topology captured by existing bounds tends not to be expressed in terms of
structural properties, limiting our understanding of the model's robustness
properties. In this work, we develop an interpretable upper bound elucidating
that graph neural networks are stable to rewiring between high-degree nodes.
This bound, and further research on bounds of a similar type, provide further
understanding of the stability properties of graph neural networks.
Comment: To appear at the 46th International Conference on Acoustics, Speech
and Signal Processing (ICASSP 2021).
Understanding stock market instability via graph auto-encoders
Understanding stock market instability is a key question in financial
management as practitioners seek to forecast breakdowns in asset co-movements
which expose portfolios to rapid and devastating collapses in value. The
structure of these co-movements can be described as a graph where companies are
represented by nodes and edges capture correlations between their price
movements. Learning a timely indicator of co-movement breakdowns (manifested as
modifications in the graph structure) is central in understanding both
financial stability and volatility forecasting. We propose to use the edge
reconstruction accuracy of a graph auto-encoder (GAE) as an indicator for how
spatially homogeneous connections between assets are, which, based on financial
network literature, we use as a proxy to infer market volatility. Our
experiments on the S&P 500 over the 2015-2022 period show that higher GAE
reconstruction error values are correlated with higher volatility. We also show
that out-of-sample autoregressive modeling of volatility is improved by the
addition of the proposed measure. Our paper contributes to the literature of
machine learning in finance particularly in the context of understanding stock
market instability.
Comment: Submitted to the Glinda workshop of the NeurIPS 2022 conference.
Keywords: Graph Based Learning, Graph Neural Networks, Graph Autoencoder,
Stock Market Information, Volatility Forecasting.
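The graph construction step described above, with companies as nodes and correlations between price movements as edges, can be sketched as follows. The synthetic returns, the shared-factor model, and the 0.5 correlation threshold are all illustrative assumptions; the paper's GAE training on such graphs is not shown:

```python
import numpy as np

# Asset co-movement graph: nodes are companies, edges link pairs whose
# return series are strongly correlated. (Illustrative synthetic data.)
rng = np.random.default_rng(0)
T, n = 250, 4                        # 250 trading days, 4 hypothetical assets
common = rng.normal(size=T)          # shared market factor driving co-movement
returns = np.stack([0.7 * common + 0.3 * rng.normal(size=T) for _ in range(n)],
                   axis=1)           # (T, n) return matrix

C = np.corrcoef(returns, rowvar=False)   # n x n correlation matrix
A = (np.abs(C) > 0.5).astype(float)      # threshold into an adjacency matrix
np.fill_diagonal(A, 0.0)                 # no self-loops

print(A.sum() / 2)                       # number of edges in the graph
```

A breakdown in co-movements shows up as edges disappearing or rewiring between snapshots of this graph, which is the structural change the GAE's edge reconstruction error is used to detect.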
Laplacian-regularized graph bandits: algorithms and theoretical analysis
We consider a stochastic linear bandit problem with multiple users, where the relationship between users is captured by an underlying graph and user preferences are represented as smooth signals on the graph. We introduce a novel bandit algorithm where the smoothness prior is imposed via the random-walk graph Laplacian, which leads to a single-user cumulative regret scaling as Õ(Ψd√T) with time horizon T, feature dimensionality d, and a scalar parameter Ψ ∈ (0, 1) that depends on the graph connectivity. This is an improvement over Õ(d√T) in LinUCB [Li et al., 2010], where the user relationship is not taken into account. In terms of network regret (the sum of cumulative regret over n users), the proposed algorithm leads to a scaling of Õ(Ψd√(nT)), which is a significant improvement over Õ(nd√T) in the state-of-the-art algorithm Gob.Lin [Cesa-Bianchi et al., 2013]. To improve scalability, we further propose a simplified algorithm with linear computational complexity with respect to the number of users, while maintaining the same regret. Finally, we present a finite-time analysis of the proposed algorithms, and demonstrate their advantage in comparison with state-of-the-art graph-based bandit algorithms on both synthetic and real-world data.
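The statistical core of such an algorithm, stripped of the UCB exploration rule, is a joint least-squares estimate of all users' preference vectors with a graph-Laplacian penalty coupling them. A batch sketch under illustrative assumptions (the function name and the plain combinatorial Laplacian are mine; the paper uses the random-walk Laplacian inside a sequential bandit loop):

```python
import numpy as np

def laplacian_regularized_ls(X, r, users, L, d, alpha=1.0):
    """Jointly estimate per-user parameters Theta in R^{n x d} by solving
        min_Theta sum_t (r_t - x_t^T theta_{u_t})^2 + alpha * tr(Theta^T L Theta),
    where the Laplacian term penalizes variation of the theta_u over the
    user graph. X: (m, d) contexts, r: (m,) rewards, users: (m,) user ids,
    L: (n, n) user-graph Laplacian."""
    n = L.shape[0]
    A = alpha * np.kron(L, np.eye(d))      # smoothness penalty on stacked Theta
    b = np.zeros(n * d)
    for x, reward, u in zip(X, r, users):
        e = np.zeros(n * d)
        e[u * d:(u + 1) * d] = x           # context placed in user u's block
        A += np.outer(e, e)                # data-fit term of the normal equations
        b += reward * e
    theta = np.linalg.solve(A + 1e-8 * np.eye(n * d), b)  # tiny ridge for safety
    return theta.reshape(n, d)
```

With two users connected by an edge and observations from only one of them, the penalty pulls the unobserved user's estimate toward its neighbor, which is exactly the information-sharing effect that improves the regret over running LinUCB independently per user.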
On the impact of sample size in reconstructing graph signals
Reconstructing a signal on a graph from observations on a subset of the vertices is a fundamental problem in the field of graph signal processing. It is often assumed that adding observations to an observation set will reduce the expected reconstruction error. We show that, under the setting of noisy observations and least-squares reconstruction, this is not always the case, characterising the behaviour both theoretically and experimentally.
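The least-squares reconstruction setting above can be sketched concretely: a bandlimited signal lives in the span of the K lowest graph-frequency eigenvectors, and is recovered from noisy samples on a vertex subset by least squares in that basis. The path graph, sample set, and noise level here are illustrative choices:

```python
import numpy as np

# Least-squares reconstruction of a bandlimited graph signal from noisy
# samples on a subset of vertices (toy path graph).
n, K = 6, 2
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # path-graph Laplacian
L[0, 0] = L[-1, -1] = 1.0                             # boundary degrees
evals, U = np.linalg.eigh(L)                          # graph Fourier basis
UK = U[:, :K]                                         # K lowest-frequency modes

rng = np.random.default_rng(1)
x = UK @ np.array([2.0, -1.0])                        # true bandlimited signal
S = np.array([0, 2, 4])                               # sampled vertices
y = x[S] + 0.01 * rng.normal(size=S.size)             # noisy observations

c_hat = np.linalg.lstsq(UK[S], y, rcond=None)[0]      # LS spectral coefficients
x_hat = UK @ c_hat                                    # reconstructed signal
err = np.linalg.norm(x_hat - x)
print(err)                                            # small reconstruction error
```

The error depends on the conditioning of the sampled rows UK[S]: adding a noisy observation changes that conditioning as well as the noise budget, which is why an extra sample need not reduce the expected error.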
Learning Hypergraphs From Signals With Dual Smoothness Prior
The construction of a meaningful hypergraph topology is the key to processing
signals with high-order relationships that involve more than two entities.
Learning the hypergraph structure from the observed signals to capture the
intrinsic relationships among the entities becomes crucial when a hypergraph
topology is not readily available in the datasets. There are two challenges
that lie at the heart of this problem: 1) how to handle the huge search space
of potential hyperedges, and 2) how to define meaningful criteria to measure
the relationship between the signals observed on nodes and the hypergraph
structure. In this paper, to address the first challenge, we adopt the
assumption that the ideal hypergraph structure can be derived from a learnable
graph structure that captures the pairwise relations within signals. Further,
we propose a hypergraph learning framework with a novel dual smoothness prior
that reveals a mapping between the observed node signals and the hypergraph
structure, whereby each hyperedge corresponds to a subgraph with both node
signal smoothness and edge signal smoothness in the learnable graph structure.
Finally, we conduct extensive experiments to evaluate the proposed framework on
both synthetic and real-world datasets. Experiments show that our proposed
framework can efficiently infer meaningful hypergraph topologies from observed
signals.
Comment: We have polished the paper and fixed some typos, and the correct
number of target hyperedges is given to the baseline in this version.
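The dual smoothness prior above ties each hyperedge to a subgraph of a learnable pairwise graph on which the node signals are smooth. One ingredient of such a framework can be sketched as a scoring rule: measure the signal variation of a candidate hyperedge on the subgraph it induces. The function name and scoring rule are illustrative, not the paper's full learning algorithm:

```python
import numpy as np

def hyperedge_smoothness(X, W, nodes):
    """Node-signal smoothness of a candidate hyperedge, measured on the
    subgraph it induces in the pairwise graph W.
    X: (n, m) matrix of m signals on n nodes; W: (n, n) pairwise weights;
    nodes: vertex list of the candidate hyperedge. Lower is smoother."""
    sub = np.ix_(nodes, nodes)
    Ws = W[sub]                                 # induced subgraph weights
    Ls = np.diag(Ws.sum(axis=1)) - Ws           # Laplacian of induced subgraph
    Xs = X[nodes]                               # signals restricted to the hyperedge
    return float(np.trace(Xs.T @ Ls @ Xs))      # sum of x^T Ls x over signals

# Nodes 0-2 carry identical signals, node 3 an outlier: the coherent triple
# scores 0 while any hyperedge mixing in node 3 scores high.
X = np.array([[1., 2.], [1., 2.], [1., 2.], [5., 0.]])
W = np.ones((4, 4)) - np.eye(4)
print(hyperedge_smoothness(X, W, [0, 1, 2]))   # 0.0
print(hyperedge_smoothness(X, W, [0, 1, 3]))   # 40.0
```

Ranking candidate hyperedges by such a score is one way to prune the combinatorially large hyperedge search space that the abstract identifies as the first challenge.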