struc2gauss: Structure Preserving Network Embedding via Gaussian Embedding
Network embedding (NE) plays a principal role in network mining, due to its
ability to map nodes to efficient low-dimensional embedding vectors. However,
two major limitations exist in state-of-the-art NE methods: structure
preservation and uncertainty modeling. Almost all previous methods represent a
node as a point in space and focus on local structural information, i.e.,
neighborhood information. However, neighborhood information does not capture
global structural information, and point-vector representations fail to model
the uncertainty of node representations. In this paper, we propose a
new NE framework, struc2gauss, which learns node representations in the space
of Gaussian distributions and performs network embedding based on global
structural information. struc2gauss first employs a given node similarity
metric to measure the global structural information, then generates structural
context for nodes and finally learns node representations via Gaussian
embedding. Different structural similarity measures of networks and energy
functions of Gaussian embedding are investigated. Experiments conducted on both
synthetic and real-world data sets demonstrate that struc2gauss effectively
captures global structural information where state-of-the-art network embedding
methods fail, outperforms other methods on the structure-based clustering task,
and provides more information about the uncertainty of node representations.
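A minimal sketch of the Gaussian-embedding step described above, assuming
diagonal covariances and a KL-divergence energy with a margin-based ranking
loss (common choices for Gaussian embedding; the exact energy functions
struc2gauss investigates may differ):

    import numpy as np

    rng = np.random.default_rng(0)

    def kl_energy(mu_i, var_i, mu_j, var_j):
        # KL(N_i || N_j) between diagonal-covariance Gaussians: an
        # asymmetric "energy", small when distribution j covers
        # distribution i.
        return 0.5 * (np.sum(var_i / var_j)
                      + np.sum((mu_j - mu_i) ** 2 / var_j)
                      - mu_i.size
                      + np.sum(np.log(var_j / var_i)))

    # Each node is a mean vector plus a diagonal covariance; the
    # variance acts as the node's uncertainty.
    n_nodes, dim = 100, 16
    mu = rng.normal(scale=0.1, size=(n_nodes, dim))
    var = np.ones((n_nodes, dim))  # must stay positive during training

    def triplet_loss(i, pos, neg, margin=1.0):
        # Ranking loss over (node, structural-context, negative) triples:
        # push positive pairs to lower energy than negative pairs.
        e_pos = kl_energy(mu[i], var[i], mu[pos], var[pos])
        e_neg = kl_energy(mu[i], var[i], mu[neg], var[neg])
        return max(0.0, margin + e_pos - e_neg)

    print(triplet_loss(0, 1, 2))

The structural-context pairs fed to triplet_loss would come from the chosen
node similarity metric, in place of the random-walk neighborhoods used by
purely local methods.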
Representation Learning for Scale-free Networks
Network embedding aims to learn low-dimensional representations of the
vertexes in a network while preserving the structure and inherent properties of
the network. Existing network embedding works primarily focus on preserving
the microscopic structure, such as the first- and second-order proximity of
vertexes, while the macroscopic scale-free property is largely ignored. The
scale-free property means that vertex degrees follow a heavy-tailed
distribution (i.e., only a few vertexes have high degrees) and is a critical
property of real-world networks such as social networks. In this paper, we
study the problem of learning representations for scale-free networks. We first
theoretically analyze the difficulty of embedding and reconstructing a
scale-free network in Euclidean space by converting our problem to the
sphere packing problem. Then, we propose the "degree penalty" principle for
designing scale-free-property-preserving network embedding algorithms: punish
the proximity between high-degree vertexes. We introduce two implementations of
our principle, utilizing spectral techniques and a skip-gram model,
respectively. Extensive experiments on six datasets show that our algorithms
are able to not only reconstruct the heavy-tailed degree distribution but also
outperform state-of-the-art embedding models in various network mining tasks,
such as vertex classification and link prediction.
Comment: 8 figures; accepted by AAAI 201
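A rough sketch of the "degree penalty" principle, assuming it is realized as a
regularizer added to a standard embedding objective; the paper's two concrete
instantiations (spectral and skip-gram) are not reproduced here, and
degree_penalty is an illustrative helper:

    import numpy as np

    def degree_penalty(Z, A, beta=1.0):
        # Z: (n, k) embedding matrix; A: (n, n) adjacency matrix.
        # Weights each pairwise inner-product proximity by
        # (d_i * d_j)^beta, so placing two hubs close together costs
        # more than placing two low-degree vertexes close together.
        deg = A.sum(axis=1)
        w = np.outer(deg, deg) ** beta
        np.fill_diagonal(w, 0.0)  # ignore self-pairs
        return float(np.sum(w * (Z @ Z.T)))

Added to a reconstruction or skip-gram loss with a trade-off coefficient,
minimizing this term discourages the crowding of hub vertexes that makes
heavy-tailed degree distributions hard to reconstruct in Euclidean space.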
Quantum Theory is a Quasi-stochastic Process Theory
There is a long history of representing a quantum state using a
quasi-probability distribution: a distribution allowing negative values. In
this paper we extend such representations to deal with quantum channels. The
result is a convex, strongly monoidal, functorial embedding of the category of
trace preserving completely positive maps into the category of quasi-stochastic
matrices. This establishes quantum theory as a subcategory of quasi-stochastic
processes. Such an embedding is induced by a choice of minimal informationally
complete POVMs. We show that any two such embeddings are naturally isomorphic.
The embedding preserves the dagger structure of the categories if and only if
the POVMs are symmetric, giving a new use of SIC-POVMs, objects that are of
foundational interest in the QBism community. We also study general convex
embeddings of quantum theory and prove a dichotomy: such an embedding is either
trivial or faithful.
Comment: In Proceedings QPL 2017, arXiv:1802.0973
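In symbols, a reconstruction of the embedding from the abstract's wording,
assuming a minimal informationally complete POVM $\{M_i\}_{i=1}^{d^2}$ on a
$d$-dimensional system with dual frame $\{D_j\}$ satisfying
$\operatorname{Tr}(M_i D_j) = \delta_{ij}$: a state $\rho$ and a channel
$\Phi$ are sent to

    p_i = \operatorname{Tr}(M_i \rho), \qquad
    S_{ij} = \operatorname{Tr}\bigl(M_i \, \Phi(D_j)\bigr).

Since $\sum_i M_i = I$ and $\Phi$ is trace preserving, each column of $S$ sums
to one ($\sum_i S_{ij} = \operatorname{Tr}(\Phi(D_j)) = \operatorname{Tr}(D_j)
= 1$), yet entries may be negative, hence "quasi-stochastic"; composition of
channels becomes matrix multiplication, which is the functoriality of the
embedding.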
A study of the classification of low-dimensional data with supervised manifold learning
Supervised manifold learning methods learn data representations by preserving
the geometric structure of data while enhancing the separation between data
samples from different classes. In this work, we propose a theoretical study of
supervised manifold learning for classification. We consider nonlinear
dimensionality reduction algorithms that yield linearly separable embeddings of
training data and present generalization bounds for this type of algorithm. A
necessary condition for satisfactory generalization performance is that the
embedding allow the construction of a sufficiently regular interpolation
function relative to the separation margin of the embedding. We show that
for supervised embeddings satisfying this condition, the classification error
decays at an exponential rate with the number of training samples. Finally, we
examine the separability of supervised nonlinear embeddings that aim to
preserve the low-dimensional geometric structure of data based on graph
representations. The proposed analysis is supported by experiments on several
real data sets.
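The margin condition above can be probed empirically. A minimal sketch,
assuming scikit-learn and binary labels; embedding_margin is an illustrative
helper, not from the paper:

    import numpy as np
    from sklearn.svm import LinearSVC

    def embedding_margin(Z, y):
        # Z: (n, k) embedded training data; y: binary class labels.
        # Fit a near-hard-margin linear SVM (large C) and return
        # 2 / ||w||, the geometric margin of the separating hyperplane.
        clf = LinearSVC(C=1e6, max_iter=100_000).fit(Z, y)
        w = clf.coef_.ravel()
        return 2.0 / np.linalg.norm(w)

A large C approximates the maximum-margin separator, so 2 / ||w|| estimates
the separation margin that the generalization bounds relate to the regularity
of the interpolation function.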