Multimodal Network Alignment
A multimodal network encodes relationships between the same set of nodes in
multiple settings, and network alignment is a powerful tool for transferring
information and insight between a pair of networks. We propose a method for
multimodal network alignment that computes a matrix indicating the alignment,
but produces the result directly as a low-rank factorization. We
then propose new methods to compute approximate maximum weight matchings of
low-rank matrices to produce an alignment. We evaluate our approach by applying
it on synthetic networks and use it to de-anonymize a multimodal transportation
network.
Comment: 14 pages, 6 figures, SIAM Data Mining 201
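The core computational idea can be illustrated with a minimal sketch. This is not the paper's algorithm, only a toy version of the setting it describes: a similarity matrix is available solely through its low-rank factors U and V (X = U @ V.T), and an approximate maximum-weight matching is extracted without ever materializing X in full. The greedy strategy and the top-5-per-row pruning below are illustrative assumptions.

```python
# Toy sketch: greedy approximate maximum-weight matching from
# low-rank factors U (n x k) and V (m x k), where the full score
# matrix X = U @ V.T is reconstructed one row at a time.
import numpy as np

def greedy_lowrank_matching(U, V):
    """Return a dict mapping row i -> column j, greedily taking the
    highest-weight available pairs first."""
    n = U.shape[0]
    used_cols = set()
    matching = {}
    candidates = []
    for i in range(n):
        row = U[i] @ V.T                     # one row of X, on demand
        for j in np.argsort(row)[::-1][:5]:  # keep top-5 scores per row
            candidates.append((row[j], i, j))
    # Greedy pass: highest-weight candidate pairs first.  Greedy is a
    # classical 1/2-approximation for maximum-weight matching.
    for score, i, j in sorted(candidates, reverse=True):
        if i not in matching and j not in used_cols:
            matching[i] = j
            used_cols.add(j)
    return matching
```

On an identity-like score matrix (U = V = I), the greedy pass recovers the exact matching i -> i; on general low-rank inputs it only approximates the optimum, which is the trade-off the abstract's exact low-rank matching methods aim to improve on.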
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping
We introduce a model for bidirectional retrieval of images and sentences
through a multi-modal embedding of visual and natural language data. Unlike
previous models that directly map images or sentences into a common embedding
space, our model works on a finer level and embeds fragments of images
(objects) and fragments of sentences (typed dependency tree relations) into a
common space. In addition to a ranking objective seen in previous work, this
allows us to add a new fragment alignment objective that learns to directly
associate these fragments across modalities. Extensive experimental evaluation
shows that reasoning on both the global level of images and sentences and the
finer level of their respective fragments significantly improves performance on
image-sentence retrieval tasks. Additionally, our model provides interpretable
predictions, since the inferred inter-modal fragment alignment is explicit.
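The ranking objective mentioned above can be sketched as a standard bidirectional max-margin loss over a batch of paired embeddings. The function below is an illustrative assumption, not the paper's code: S[i, j] is the similarity of image i and sentence j, diagonal entries are the true pairs, and every mismatched pair is pushed at least a margin below its matched counterpart in both retrieval directions.

```python
# Hedged sketch of a bidirectional max-margin ranking loss over a
# batch of image/sentence embeddings (toy version, not the paper's
# fragment-level objective).
import numpy as np

def bidirectional_ranking_loss(img, sent, margin=0.1):
    """img, sent: (n, d) embeddings; row i of each is a true pair."""
    S = img @ sent.T                # similarity of every pair
    pos = np.diag(S)               # matched-pair scores
    # image -> sentence: wrong sentences should score at least
    # `margin` below the true one; sentence -> image symmetrically.
    cost_i2s = np.maximum(0.0, margin + S - pos[:, None])
    cost_s2i = np.maximum(0.0, margin + S - pos[None, :])
    np.fill_diagonal(cost_i2s, 0.0)   # true pairs incur no cost
    np.fill_diagonal(cost_s2i, 0.0)
    return cost_i2s.sum() + cost_s2i.sum()
```

The paper's fragment alignment objective operates at a finer granularity (object fragments vs. dependency-relation fragments), but the margin structure is the same flavor: matched items are ranked above mismatched ones in both directions.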
Memory-Efficient Global Refinement of Decision-Tree Ensembles and its Application to Face Alignment
Ren et al. recently introduced a method for aggregating multiple decision
trees into a strong predictor by interpreting a path taken by a sample down
each tree as a binary vector and performing linear regression on top of these
vectors stacked together. They provided experimental evidence that the method
offers advantages over the usual approaches for combining decision trees
(random forests and boosting). The method truly shines when the regression
target is a large vector with correlated dimensions, such as a 2D face shape
represented with the positions of several facial landmarks. However, we argue
that their basic method is not applicable in many practical scenarios due to
large memory requirements. This paper shows how this issue can be solved
through the use of quantization and architectural changes of the predictor that
maps decision tree-derived encodings to the desired output.
Comment: BMVC Newcastle 201
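The encoding scheme the abstract builds on can be sketched compactly. This is a deliberately simplified illustration, assuming depth-1 stumps in place of real decision trees: each tree maps a sample to a one-hot leaf indicator, the indicators from all trees are concatenated into one binary vector, and a single (ridge-regularized) linear regression is fit globally on top. Real trees and high-dimensional targets such as stacked 2D landmark coordinates would replace the toy pieces below.

```python
# Toy sketch of "trees as binary encoders + one global linear model".
# Each depth-1 stump stands in for a decision tree: a sample lands in
# its left or right leaf, giving two indicator columns per stump.
import numpy as np

def stump_encode(X, splits):
    """splits: list of (feature_index, threshold). Returns the stacked
    binary leaf-indicator matrix, shape (n_samples, 2 * n_stumps)."""
    cols = []
    for f, t in splits:
        right = (X[:, f] > t).astype(float)
        cols.append(1.0 - right)   # left-leaf indicator
        cols.append(right)         # right-leaf indicator
    return np.column_stack(cols)

def fit_global_linear(X, Y, splits, reg=1e-3):
    """Ridge regression on the binary encoding:
    W = (Phi^T Phi + reg * I)^{-1} Phi^T Y."""
    Phi = stump_encode(X, splits)
    A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
    return np.linalg.solve(A, Phi.T @ Y)

def predict(X, splits, W):
    return stump_encode(X, splits) @ W
```

The memory problem the paper addresses is visible even here: with many deep trees the binary encoding (and hence W) grows very large, which is what motivates quantizing W and restructuring the predictor.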