Magnetic Eigenmaps for the visualization of directed networks
We propose a framework for the visualization of directed networks relying on the eigenfunctions of the magnetic Laplacian, called here Magnetic Eigenmaps. The magnetic Laplacian is a complex deformation of the well-known combinatorial Laplacian. Features such as density of links and directionality patterns are revealed by plotting the phases of the first magnetic eigenvectors. An interpretation of the magnetic eigenvectors is given in connection with the angular synchronization problem. Illustrations of our method are given for both artificial and real networks.

The authors thank the following organizations.
• EU: The research leading to these results has received funding from the European Research Council under the European Union’s Seventh Framework Programme (FP7/2007–2013)/ERC AdG A-DATADRIVE-B (290923). This paper reflects only the authors’ views; the Union is not liable for any use that may be made of the contained information.
• Research Council KUL: GOA/10/09 MaNet, CoE PFV/10/002 (OPTEC), BIL12/11T; PhD/Postdoc grants.
• Flemish Government: FWO: G.0377.12 (Structured systems), G.088114N (Tensor based data similarity); PhD/Postdoc grants. IWT: SBO POM (100031); PhD/Postdoc grants.
• iMinds Medical Information Technologies SBO 2014.
• Belgian Federal Science Policy Office: IUAP P7/19 (DYSCO, Dynamical systems, control and optimization, 2012–2017).
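The construction can be sketched in a few lines (a minimal illustration assuming the usual definition L_g = D - W ∘ exp(iΘ), where W is the symmetrized weight matrix, Θ the antisymmetric phase matrix, and g the charge parameter; the toy 3-cycle and parameter choice are ours):

```python
import numpy as np

def magnetic_laplacian(A, g=0.25):
    """Magnetic Laplacian L_g = D - W * exp(i*Theta) for a directed adjacency A.

    W symmetrizes the weights, Theta encodes edge direction in a phase, and
    the charge parameter g controls how strongly directionality is weighted.
    """
    W = (A + A.T) / 2.0                      # symmetrized weights
    Theta = 2.0 * np.pi * g * (A - A.T)      # antisymmetric phase matrix
    D = np.diag(W.sum(axis=1))               # degree matrix of W
    return D - W * np.exp(1j * Theta)        # Hermitian by construction

# Toy directed 3-cycle: 0 -> 1 -> 2 -> 0
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
L = magnetic_laplacian(A, g=1/3)
evals, evecs = np.linalg.eigh(L)             # eigh: L is Hermitian
phases = np.angle(evecs[:, 0])               # phases of the first eigenvector
```

For the directed 3-cycle with g = 1/3 the smallest eigenvalue is exactly zero, and the phases of the corresponding eigenvector are equally spaced around the circle (up to a global rotation), exposing the cyclic directionality pattern.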
Complex Networks from Classical to Quantum
Recent progress in applying complex network theory to problems in quantum
information has resulted in a beneficial crossover. Complex network methods
have successfully been applied to transport and entanglement models while
information physics is setting the stage for a theory of complex systems with
quantum information-inspired methods. Novel quantum induced effects have been
predicted in random graphs---where edges represent entangled links---and
quantum computer algorithms have been proposed to offer enhancement for several
network problems. Here we review the results at the cutting edge, pinpointing
the similarities and the differences found at the intersection of these two
fields.

Comment: 12 pages, 4 figures, REVTeX 4-1, accepted version
A Graph Convolution for Signed Directed Graphs
There are several types of graphs according to the nature of the data.
Directed graphs have link directions, and signed graphs have link types such
as positive and negative. Signed directed graphs, which have both, are the most
complex and informative. Graph convolutions for signed directed graphs have
received little attention so far: although many graph convolutions have been
proposed, most are designed for undirected or unsigned graphs. In this paper, we
investigate a spectral graph convolution network for signed directed graphs. We
propose a novel complex Hermitian adjacency matrix that encodes graph
information via complex numbers. The complex numbers represent link direction,
sign, and connectivity via the phases and magnitudes. Then, we define a
magnetic Laplacian with the Hermitian matrix and prove its positive
semidefinite property. Finally, we introduce the Signed Directed Graph
Convolution Network (SD-GCN). To the best of our knowledge, it is the first
spectral convolution for graphs with signs. Moreover, unlike existing
convolutions designed for a specific graph type, the proposed model is general
and can be applied to any graph type, whether undirected, directed, or signed. The
performance of the proposed model was evaluated with four real-world graphs. It
outperforms all the other state-of-the-art graph convolutions in the task of
link sign prediction.

Comment: Preprint version
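To make the phase/magnitude encoding concrete, here is a toy Hermitian adjacency for a signed directed graph (our own illustrative convention, not necessarily the exact SD-GCN construction; the charge parameter q and the example graph are made up):

```python
import numpy as np

def signed_hermitian(A, q=0.25):
    """Toy Hermitian encoding of a signed directed adjacency A (entries in
    {-1, 0, +1}): magnitude = connectivity, sign = link type, phase = direction."""
    S = np.sign(A + A.T)                           # symmetric sign pattern
    W = (np.abs(A) + np.abs(A).T) / 2.0            # symmetric connectivity
    Theta = 2 * np.pi * q * (np.abs(A) - np.abs(A).T)  # antisymmetric phase
    return S * W * np.exp(1j * Theta)              # Hermitian by construction

# 0 -(+)-> 1 and 1 -(-)-> 2
A = np.array([[0, 1,  0],
              [0, 0, -1],
              [0, 0,  0]], dtype=float)
H = signed_hermitian(A)
D = np.diag(np.abs(H).sum(axis=1))                 # degrees from magnitudes
L = D - H                                          # signed magnetic Laplacian
```

Positive semidefiniteness of this toy L follows from Gershgorin's theorem: each diagonal entry equals the absolute row sum of the off-diagonal part, so every eigenvalue is non-negative.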
HyperMagNet: A Magnetic Laplacian based Hypergraph Neural Network
In data science, hypergraphs are natural models for data exhibiting multi-way
relations, whereas graphs capture only pairwise ones. Nonetheless, many proposed
hypergraph neural networks effectively reduce hypergraphs to undirected graphs
via symmetrized matrix representations, potentially losing important
information. We propose an alternative approach to hypergraph neural networks
in which the hypergraph is represented as a non-reversible Markov chain. We use
this Markov chain to construct a complex Hermitian Laplacian matrix, the
magnetic Laplacian, which serves as the input to our proposed hypergraph
neural network. We study HyperMagNet for the task of node classification, and
demonstrate its effectiveness over graph-reduction based hypergraph neural
networks.

Comment: 9 pages, 1 figure
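One minimal reading of the non-reversible Markov chain idea uses edge-dependent vertex weights; everything below (incidence matrix, weights, charge parameter g) is an assumption for illustration, not the paper's exact construction:

```python
import numpy as np

# Toy incidence matrix: 4 vertices (rows) x 2 hyperedges (columns)
Hinc = np.array([[1, 0],
                 [1, 1],
                 [0, 1],
                 [1, 0]], dtype=float)

# Edge-dependent vertex weights (supported on incident pairs only); with
# non-uniform weights the vertex-edge-vertex walk is generally non-reversible
gamma = np.array([[2, 0],
                  [1, 3],
                  [0, 1],
                  [1, 0]], dtype=float)

d_v = Hinc.sum(axis=1)                       # number of incident edges per vertex
P_ve = Hinc / d_v[:, None]                   # pick an incident hyperedge uniformly
P_ev = (gamma / gamma.sum(axis=0)).T         # pick a vertex by its edge weight
P = P_ve @ P_ev                              # vertex-to-vertex Markov chain

# From P, a magnetic Laplacian follows exactly as for directed graphs
g = 0.25
Wsym = (P + P.T) / 2.0
Theta = 2 * np.pi * g * (P - P.T)
L = np.diag(Wsym.sum(axis=1)) - Wsym * np.exp(1j * Theta)
```

The asymmetry of P is what survives the construction: it lands in the phases of L rather than being symmetrized away.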
Directed Network Laplacians and Random Graph Models
We consider spectral methods that uncover hidden structures in directed
networks. We develop a general framework that allows us to associate methods
based on optimization formulations with maximum likelihood problems on random
graphs. We focus on two existing spectral approaches that build and analyse
Laplacian-style matrices via the minimization of frustration and trophic
incoherence. These algorithms aim to reveal directed periodic and linear
hierarchies, respectively. We show that reordering nodes using the two
algorithms, or mapping them onto a specified lattice, is associated with new
classes of directed random graph models. Using this random graph setting, we
are able to compare the two algorithms on a given network and quantify which
structure is more likely to be present. We illustrate the approach on synthetic
and real networks, and discuss practical implementation issues.
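For the trophic side, the levels minimizing trophic incoherence solve a small linear system; the sketch below follows the standard formulation (function names are ours):

```python
import numpy as np

def trophic_levels(A):
    """Trophic levels h solving (D_in + D_out - A - A^T) h = d_in - d_out.

    The solution is defined up to a constant shift, fixed here by min(h) = 0.
    """
    d_in, d_out = A.sum(axis=0), A.sum(axis=1)
    Lam = np.diag(d_in + d_out) - A - A.T
    h = np.linalg.lstsq(Lam, d_in - d_out, rcond=None)[0]  # min-norm solution
    return h - h.min()

def trophic_incoherence(A, h):
    """Weighted mean-square deviation from the ideal step h_v = h_u + 1."""
    u, v = np.nonzero(A)
    return float((A[u, v] * (h[v] - h[u] - 1.0) ** 2).sum() / A.sum())

# A perfect 3-level chain 0 -> 1 -> 2 is exactly coherent
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]], dtype=float)
h = trophic_levels(A)          # [0, 1, 2]
F = trophic_incoherence(A, h)  # 0 for a perfect hierarchy
```

Nodes sorted by h give the linear-hierarchy reordering; the frustration-based method plays the analogous role for periodic structure.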
Graph Signal Processing: Overview, Challenges and Applications
Research in Graph Signal Processing (GSP) aims to develop tools for
processing data defined on irregular graph domains. In this paper we first
provide an overview of core ideas in GSP and their connection to conventional
digital signal processing. We then summarize recent developments in basic GSP
tools, including methods for sampling, filtering, and graph learning. Next, we
review progress in several application areas of GSP, including
processing and analysis of sensor network data, biological data, and
applications to image processing and machine learning. We finish by providing a
brief historical perspective to highlight how concepts recently developed in
GSP build on top of prior research in other areas.

Comment: To appear, Proceedings of the IEEE
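The core GSP pipeline mentioned above, transforming a graph signal into the Laplacian eigenbasis and filtering there, fits in a few lines (a minimal sketch on a path graph; the signal and cutoff are arbitrary choices of ours):

```python
import numpy as np

# Path graph on 8 nodes; the Laplacian's eigenvectors give the GFT basis
n = 8
W = np.zeros((n, n))
i = np.arange(n - 1)
W[i, i + 1] = W[i + 1, i] = 1.0              # unit weights along the path
L = np.diag(W.sum(axis=1)) - W
freqs, U = np.linalg.eigh(L)                 # graph frequencies, Fourier basis

rng = np.random.default_rng(0)
x = np.sin(np.linspace(0, np.pi, n)) + 0.3 * rng.normal(size=n)  # noisy signal
x_hat = U.T @ x                              # forward graph Fourier transform
keep = freqs <= freqs[n // 2]                # ideal low-pass filter mask
x_smooth = U @ (keep * x_hat)                # filter, then inverse transform
```

Discarding high-frequency components can only decrease the Laplacian quadratic form x^T L x, the usual measure of graph-signal smoothness.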
Learning Interpretable Features of Graphs and Time Series Data
Graphs and time series are two of the most ubiquitous representations of modern data. Representation learning of real-world graphs and time-series data is a key component of downstream supervised and unsupervised machine learning tasks such as classification, clustering, and visualization. Because of the inherent high dimensionality, representation learning, i.e., low-dimensional vector-based embedding of graphs and time-series data, is very challenging. Learning interpretable features makes the feature roles transparent and facilitates downstream analytics tasks, in addition to maximizing the performance of the downstream machine learning models. In this thesis, we leverage tensor (multidimensional array) decomposition to generate interpretable and low-dimensional feature spaces for graphs and time-series data drawn from three domains: social networks, neuroscience, and heliophysics. We present theoretical models and empirical results on node embedding of social networks, biomarker embedding on fMRI-based brain networks, and prediction and visualization of multivariate time-series-based flaring and non-flaring solar events.
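As a toy illustration of why such factors are interpretable (a rank-1 example of ours, not the thesis's actual pipeline): each factor vector assigns one weight per entity in its mode, e.g. per node or per time step.

```python
import numpy as np

# Build an exactly rank-1 tensor X[i,j,k] = a_i * b_j * c_k; in a CP model
# each factor vector gives one interpretable weight per entity in its mode
rng = np.random.default_rng(0)
a, b, c = rng.random(4), rng.random(5), rng.random(6)
X = np.einsum('i,j,k->ijk', a, b, c)

# The mode-1 factor is the top left singular vector of the mode-1 unfolding
X1 = X.reshape(4, -1)                        # unfold to 4 x (5*6)
u, s, vt = np.linalg.svd(X1, full_matrices=False)
a_hat = u[:, 0] * np.sign(u[0, 0])           # fix sign (true factors positive)
```

Here a_hat recovers the mode-1 factor up to normalization; higher-rank CP decompositions generalize this idea with one such vector per component and mode.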