Localizing genuine multiparty entanglement in noisy stabilizer states
Characterizing large noisy multiparty quantum states using genuine multiparty
entanglement is a challenging task. In this paper, we calculate lower bounds of
genuine multiparty entanglement localized over a chosen multiparty subsystem of
multi-qubit stabilizer states in both the noiseless and the noisy scenario. In the
absence of noise, adopting a graph-based technique, we perform the calculation
for arbitrary graph states as representatives of the stabilizer states, and
show that the number of graph operations required for the calculation scales
polynomially with the system size. As demonstrations, we compute the localized
genuine multiparty entanglement over subsystems of large graphs having linear,
ladder, and square structures. We also extend the calculation to graph states
subjected to single-qubit Markovian or non-Markovian Pauli noise on all qubits,
and demonstrate, for a lower bound of the localizable genuine multiparty
entanglement corresponding to a specific Pauli measurement setup, the
existence of a critical noise strength beyond which all post-measurement
states are biseparable. Due to the local unitary connection between
stabilizer states and graph states, the calculation is also applicable to
arbitrarily large noisy stabilizer states. We demonstrate this by
considering a toric
code defined on a square lattice, and computing a lower bound of localizable
genuine multiparty entanglement over a non-trivial loop of the code. As in the
case of graph states, we show the existence of a critical noise strength here
as well, and discuss its interesting features.
Comment: 36 pages, 21 figures, 2 tables
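
For context, the graph-based technique leans on the standard single-qubit
Pauli measurement rules for graph states: a Pauli-Z measurement simply
deletes the measured vertex, and a Pauli-Y measurement applies a local
complementation at the vertex before deleting it. Below is a minimal Python
sketch of these two rules using networkx; the function names and the example
path graph are illustrative, not taken from the paper.

```python
# Sketch of the standard graph-state measurement rules (Hein et al.):
# a Pauli-Z measurement deletes the vertex; a Pauli-Y measurement locally
# complements the vertex's neighborhood, then deletes it. Names here are
# illustrative, not from the paper.
import itertools
import networkx as nx

def local_complement(g: nx.Graph, v) -> nx.Graph:
    """Complement the edges among the neighbors of v."""
    h = g.copy()
    for a, b in itertools.combinations(list(g.neighbors(v)), 2):
        if h.has_edge(a, b):
            h.remove_edge(a, b)
        else:
            h.add_edge(a, b)
    return h

def measure_z(g: nx.Graph, v) -> nx.Graph:
    """Pauli-Z measurement: delete the measured vertex."""
    h = g.copy()
    h.remove_node(v)
    return h

def measure_y(g: nx.Graph, v) -> nx.Graph:
    """Pauli-Y measurement: local complementation, then vertex deletion."""
    return measure_z(local_complement(g, v), v)

# Example: a linear (path) graph state on 6 qubits, echoing the paper's
# linear-structure demonstration.
line = nx.path_graph(6)
print(sorted(measure_y(line, 2).edges()))
```

Both routines touch only the measured vertex's neighborhood, which is
consistent with the polynomial scaling of the overall calculation.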
Hierarchical organization of functional connectivity in the mouse brain: a complex network approach
This paper contributes to the study of brain functional connectivity from
the perspective of complex network theory. More
specifically, we apply graph theoretical analyses to provide evidence of the
modular structure of the mouse brain and to shed light on its hierarchical
organization. We propose a novel percolation analysis and apply it to a
resting-state functional MRI data set from 41 mice. This
approach reveals a robust hierarchical structure of modules persistent across
different subjects. Importantly, we test this approach against a statistical
benchmark (or null model) which constrains only the distributions of empirical
correlations. Our results unambiguously show that the hierarchical character of
the mouse brain modular structure is not trivially encoded into this
lower-order constraint. Finally, we investigate the modular structure of the
mouse brain by computing the Minimal Spanning Forest, a technique that
identifies subnetworks characterized by the strongest internal correlations.
This approach represents a faster alternative to other community detection
methods and provides a means to rank modules on the basis of the strength of
their internal edges.
Comment: 11 pages, 9 figures
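
To illustrate the forest-based module extraction, the sketch below builds a
maximum spanning tree over a synthetic correlation matrix and removes its
weakest edges, leaving a forest of modules held together by their strongest
internal correlations. This is one common construction under our own
assumptions; the paper's exact Minimal Spanning Forest procedure may differ
in detail, and the data here are purely synthetic.

```python
# Sketch of spanning-forest module extraction on a correlation matrix:
# keep the strongest-correlation tree edges, then cut the weakest ones
# to split the tree into a forest of modules.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
n = 20                                 # stand-in for brain regions
x = rng.standard_normal((n, 200))      # surrogate time series
corr = np.corrcoef(x)                  # empirical correlation matrix

g = nx.Graph()
for i in range(n):
    for j in range(i + 1, n):
        g.add_edge(i, j, weight=corr[i, j])

# The maximum spanning tree retains the strongest internal correlations;
# removing its weakest edges yields a forest whose components are modules.
mst = nx.maximum_spanning_tree(g, weight="weight")
edges = sorted(mst.edges(data="weight"), key=lambda e: e[2])
for u, v, _ in edges[:4]:              # cut the 4 weakest tree edges
    mst.remove_edge(u, v)

modules = list(nx.connected_components(mst))
print(len(modules), "modules:", modules)
```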
Sign and Basis Invariant Networks for Spectral Graph Representation Learning
Many machine learning tasks involve processing eigenvectors derived from
data. Especially valuable are Laplacian eigenvectors, which capture useful
structural information about graphs and other geometric objects. However,
ambiguities arise when computing eigenvectors: for each eigenvector v, the
sign-flipped -v is also an eigenvector. More generally, higher-dimensional
eigenspaces contain infinitely many choices of basis eigenvectors. These
ambiguities make it a challenge to process eigenvectors and eigenspaces in a
consistent way. In this work we introduce SignNet and BasisNet -- new neural
architectures that are invariant to all requisite symmetries and hence process
collections of eigenspaces in a principled manner. Our networks are universal,
i.e., they can approximate any continuous function of eigenvectors with the
proper invariances. They are also theoretically strong for graph representation
learning -- they can approximate any spectral graph convolution, can compute
spectral invariants that go beyond message passing neural networks, and can
provably simulate previously proposed graph positional encodings. Experiments
show the strength of our networks for molecular graph regression, learning
expressive graph representations, and learning implicit neural representations
on triangle meshes. Our code is available at
https://github.com/cptq/SignNet-BasisNet
Comment: 35 pages
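
The sign-invariance construction at the heart of SignNet feeds each
eigenvector v and its negation -v through a shared network and sums the two
outputs, so the result cannot depend on the arbitrary sign choice; a second
network then aggregates across eigenvectors. A minimal PyTorch sketch of
this idea follows; the layer sizes and module structure are simplified
stand-ins, not the released architecture (see the linked repository for
that).

```python
# Minimal sketch of the sign-invariant idea behind SignNet: a shared
# network phi is applied to both v and -v and the outputs are summed,
# so every eigenvector sign flip leaves the result unchanged.
import torch
import torch.nn as nn

class SignInvariantNet(nn.Module):
    def __init__(self, k: int, hidden: int = 32, out: int = 16):
        super().__init__()
        # phi processes one eigenvector (a value per node), shared across all k
        self.phi = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden))
        # rho mixes the k sign-invariant channels into a node embedding
        self.rho = nn.Linear(k * hidden, out)

    def forward(self, eigvecs: torch.Tensor) -> torch.Tensor:
        # eigvecs: (n_nodes, k) Laplacian eigenvectors
        v = eigvecs.T.unsqueeze(-1)            # (k, n_nodes, 1)
        h = self.phi(v) + self.phi(-v)         # sign invariance: f(v) = f(-v)
        h = h.permute(1, 0, 2).reshape(eigvecs.shape[0], -1)
        return self.rho(h)                     # (n_nodes, out)

net = SignInvariantNet(k=4)
vecs = torch.randn(10, 4)
assert torch.allclose(net(vecs), net(-vecs))   # invariant to sign flips
```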
Distributed Graph Neural Network Training with Periodic Stale Representation Synchronization
Despite the recent success of Graph Neural Networks, it remains challenging
to train a GNN on large graphs with millions of nodes and billions of edges,
which are prevalent in many graph-based applications. Traditional
sampling-based methods accelerate GNN training by dropping edges and nodes,
which impairs graph integrity and model performance. In contrast,
distributed GNN algorithms accelerate GNN training by utilizing multiple
computing devices and can be classified into two types: "partition-based"
methods enjoy low communication costs but suffer from information loss due to
dropped edges, while "propagation-based" methods avoid information loss but
suffer from prohibitive communication overhead caused by the neighbor
explosion. To jointly address these problems, this paper proposes DIGEST
(DIstributed Graph reprEsentation SynchronizaTion), a novel distributed GNN
training framework that synergizes the complementary strength of both
categories of existing methods. We propose to allow each device to utilize the
stale representations of its neighbors in other subgraphs during subgraph
parallel training. This way, our method preserves global graph information
from neighbors, avoiding information loss while keeping communication costs
low. Our
convergence analysis demonstrates that DIGEST enjoys a state-of-the-art
convergence rate. Extensive experimental evaluation on large, real-world graph
datasets shows that DIGEST achieves speedups of up to 21.82x without
compromising performance, compared to state-of-the-art distributed GNN
training frameworks.
Comment: Preprint, 20 pages, 9 figures
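
To make the stale-representation idea concrete, the sketch below keeps a
local cache of boundary-neighbor embeddings that is read on every training
step but refreshed only periodically, standing in for the periodic
synchronization across devices. This is a single-process illustration with
our own class name and refresh policy, not code from the paper.

```python
# Sketch of periodic stale-representation synchronization in the spirit
# of DIGEST: read cached ("stale") embeddings for neighbors owned by
# other partitions, and refresh the cache every `sync_every` steps.
import torch

class StaleEmbeddingCache:
    def __init__(self, num_nodes: int, dim: int, sync_every: int = 10):
        self.table = torch.zeros(num_nodes, dim)  # last synchronized embeddings
        self.sync_every = sync_every
        self.step = 0

    def read(self, remote_ids: torch.Tensor) -> torch.Tensor:
        # Cheap local read of possibly stale neighbor representations,
        # instead of communicating with other devices every step.
        return self.table[remote_ids]

    def maybe_sync(self, node_ids: torch.Tensor, fresh: torch.Tensor):
        # Periodically overwrite the cache with freshly computed embeddings
        # (in a real deployment this would be a cross-device exchange).
        self.step += 1
        if self.step % self.sync_every == 0:
            self.table[node_ids] = fresh.detach()

cache = StaleEmbeddingCache(num_nodes=100, dim=8)
boundary = torch.tensor([3, 7, 42])           # neighbors in other partitions
h_neighbors = cache.read(boundary)            # stale but communication-free
cache.maybe_sync(boundary, torch.randn(3, 8)) # refresh on schedule
```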