Deep Metric Learning Assisted by Intra-variance in A Semi-supervised View of Learning
Deep metric learning aims to construct an embedding space where samples of
the same class are close to each other, while samples of different classes are
far away from each other. Most existing deep metric learning methods attempt to
maximize the difference between inter-class features, obtaining semantically
related information by increasing the distance between samples of different
classes in the embedding space. However, compressing all positive samples
together while creating large margins between different classes inadvertently
destroys the local structure among similar samples. When the intra-class
variance contained in this local structure is ignored, the learned embedding
space generalizes poorly to unseen classes, so the network overfits the
training set and performs poorly on the test set. To address these issues,
this paper designs a self-supervised generative-assisted ranking framework
that provides a semi-supervised view of intra-class variance learning for
typical supervised deep metric learning. Specifically, this paper performs
sample synthesis with different intensities and diversities for samples
satisfying certain conditions, simulating the complex transformations of
intra-class samples. An intra-class ranking loss function, based on the idea
of self-supervised learning, then constrains the network to maintain the
intra-class distribution during training and thereby capture the subtle
intra-class variance. With this approach, a more realistic embedding space can
be obtained
in which global and local structures of samples are well preserved, thus
enhancing the effectiveness of downstream tasks. Extensive experiments on four
benchmarks have shown that this approach surpasses state-of-the-art methods.
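The abstract only outlines the synthesis-plus-ranking idea; as a rough, hypothetical sketch (not the authors' implementation), the core constraint could look like the following PyTorch fragment, where noise intensities stand in for the paper's generative sample synthesis and the margin value is an assumption.

```python
import torch
import torch.nn.functional as F

def synthesize_positives(anchor_emb, intensities=(0.1, 0.2, 0.4)):
    """Create synthetic intra-class samples by perturbing the anchor
    embedding with noise of increasing intensity (a stand-in for the
    paper's generative sample synthesis)."""
    return [anchor_emb + s * torch.randn_like(anchor_emb) for s in intensities]

def intra_class_ranking_loss(anchor_emb, margin=0.05):
    """Hypothetical intra-class ranking loss: synthetic samples produced
    with larger perturbation intensity should lie farther from the anchor,
    so consecutive distances must respect that ordering by a margin."""
    synthetic = synthesize_positives(anchor_emb)
    dists = [F.pairwise_distance(anchor_emb, s) for s in synthetic]
    loss = 0.0
    for d_near, d_far in zip(dists[:-1], dists[1:]):
        # penalize violations of the expected intra-class ordering
        loss = loss + F.relu(d_near - d_far + margin).mean()
    return loss

# usage: embeddings from any backbone, e.g. a CNN followed by a projection head
anchor = torch.randn(32, 128)  # batch of 32 anchor embeddings
print(intra_class_ranking_loss(anchor))
```

Such a term would be added to a standard supervised metric-learning loss, so the global inter-class margins and the local intra-class ordering are optimized together.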
Structural Deep Embedding for Hyper-Networks
Network embedding has recently attracted considerable attention in data mining.
Existing network embedding methods mainly focus on networks with pairwise
relationships. In the real world, however, the relationships among data points
could go beyond pairwise, i.e., three or more objects are involved in each
relationship represented by a hyperedge, thus forming hyper-networks. These
hyper-networks pose great challenges to existing network embedding methods when
the hyperedges are indecomposable, that is to say, no subset of the nodes in a
hyperedge can form another hyperedge. These indecomposable hyperedges are
especially common in heterogeneous networks. In this paper, we propose a novel
Deep Hyper-Network Embedding (DHNE) model to embed hyper-networks with
indecomposable hyperedges. More specifically, we theoretically prove that any
linear similarity metric in embedding space commonly used in existing methods
cannot maintain the indecomposability property in hyper-networks, and thus
propose a new deep model to realize a non-linear tuplewise similarity function
while preserving both local and global proximities in the formed embedding
space. We conduct extensive experiments on four different types of
hyper-networks, including a GPS network, an online social network, a drug
network and a semantic network. The empirical results demonstrate that our
method can significantly and consistently outperform the state-of-the-art
algorithms. Comment: Accepted by AAAI 18.
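The key claim, that a linear similarity over node embeddings cannot capture indecomposable hyperedges, motivates scoring whole tuples with a non-linear function. A minimal sketch of such a tuplewise scorer, assuming size-3 hyperedges and arbitrary layer sizes (not the full DHNE architecture), might look like this:

```python
import torch
import torch.nn as nn

class TuplewiseSimilarity(nn.Module):
    """Minimal sketch of a non-linear tuplewise similarity: the embeddings
    of all nodes in a (here, size-3) hyperedge are concatenated and scored
    by a small MLP, so the score is not a sum of pairwise linear terms."""
    def __init__(self, emb_dim=64, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 * emb_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),  # probability that the tuple forms a hyperedge
        )

    def forward(self, a, b, c):
        return self.mlp(torch.cat([a, b, c], dim=-1)).squeeze(-1)

# usage with dummy embeddings for a batch of 16 candidate hyperedges
model = TuplewiseSimilarity()
a, b, c = (torch.randn(16, 64) for _ in range(3))
scores = model(a, b, c)  # trainable with binary cross-entropy against observed hyperedges
```

Because the MLP mixes all three embeddings non-linearly, the score of a full hyperedge need not be implied by the scores of its subsets, which is exactly the behaviour a linear metric cannot provide.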
Distributed execution of bigraphical reactive systems
The bigraph embedding problem is crucial for many results and tools about
bigraphs and bigraphical reactive systems (BRS). Current algorithms for
computing bigraphical embeddings are centralized, i.e. designed to run locally
with a complete view of the guest and host bigraphs. In order to deal with
large bigraphs, and to parallelize reactions, we present a decentralized
algorithm, which distributes both state and computation over several concurrent
processes. This allows for distributed, parallel simulations where
non-interfering reactions can be carried out concurrently; nevertheless, even
in the worst case the complexity of this distributed algorithm is no worse than
that of a centralized algorithm.
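The abstract does not spell out the algorithm; purely as a hypothetical illustration of the scheduling idea (reactions whose matched parts of the host bigraph do not overlap can fire concurrently), candidate matches could be grouped into non-interfering waves:

```python
from concurrent.futures import ThreadPoolExecutor

def non_interfering(match_a, match_b):
    """Two candidate reactions interfere if their matched node sets overlap
    (a simplification of overlap between bigraph embeddings)."""
    return match_a["nodes"].isdisjoint(match_b["nodes"])

def schedule(matches):
    """Greedily group candidate matches into waves of pairwise
    non-interfering reactions."""
    waves = []
    for m in matches:
        for wave in waves:
            if all(non_interfering(m, other) for other in wave):
                wave.append(m)
                break
        else:
            waves.append([m])
    return waves

def apply_reaction(match):
    # placeholder for rewriting the matched redex into the reactum
    return f"fired reaction on nodes {sorted(match['nodes'])}"

matches = [{"nodes": {1, 2}}, {"nodes": {3}}, {"nodes": {2, 4}}]
with ThreadPoolExecutor() as pool:
    for wave in schedule(matches):
        # every reaction in a wave touches disjoint parts of the host bigraph
        print(list(pool.map(apply_reaction, wave)))
```

A real BRS implementation would distribute both the match state and these waves across separate processes rather than threads within one process, as the paper's decentralized algorithm does.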