Search Efficient Binary Network Embedding
Traditional network embedding primarily focuses on learning a dense vector
representation for each node, which encodes network structure and/or node
content information, such that off-the-shelf machine learning algorithms can be
easily applied to the vector-format node representations for network analysis.
However, the learned dense vector representations are inefficient for
large-scale similarity search, which requires finding the nearest neighbors
measured by Euclidean distance in a continuous vector space. In this paper, we
propose a search efficient binary network embedding algorithm called BinaryNE
to learn a sparse binary code for each node, by simultaneously modeling node
context relations and node attribute relations through a three-layer neural
network. BinaryNE learns binary node representations efficiently through an
online learning algorithm based on stochastic gradient descent. The learned
binary encoding not only reduces the memory needed to represent each node, but
also allows fast bit-wise comparisons, supporting much quicker node search than
Euclidean or other continuous distance measures. Our experiments and
comparisons show that BinaryNE not only delivers more than 23 times faster
search speed, but also provides comparable or better search quality than
traditional continuous vector-based network embedding methods.
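The claimed speedup comes from swapping Euclidean nearest-neighbor search for Hamming distance over packed bit codes, where a single XOR plus a popcount compares many bits at once. A minimal sketch of such a bit-wise search (plain NumPy; the toy data, shapes, and the `hamming_search` helper are illustrative assumptions, not the authors' BinaryNE code):

```python
import numpy as np

# Hypothetical setup: n nodes, each with a d-bit binary code, packed
# into bytes so one XOR compares eight bits at a time.
rng = np.random.default_rng(0)
n, d = 10_000, 128
codes = rng.integers(0, 2, size=(n, d), dtype=np.uint8)
packed = np.packbits(codes, axis=1)            # shape (n, d // 8)

def hamming_search(query_bits, packed_codes, k=10):
    """Return indices of the k codes nearest to the query in Hamming distance."""
    q = np.packbits(query_bits)
    diff = np.bitwise_xor(packed_codes, q)     # differing bits, byte-packed
    dists = np.unpackbits(diff, axis=1).sum(axis=1)
    return np.argsort(dists)[:k]

top = hamming_search(codes[42], packed, k=5)   # code 42 ranks first (distance 0)
```

Packing the 0/1 codes with np.packbits is what makes the comparison bit-wise; a production index would use a hardware popcount rather than unpackbits, but the ranking is the same.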
End-to-End Cross-Modality Retrieval with CCA Projections and Pairwise Ranking Loss
Cross-modality retrieval encompasses retrieval tasks where the fetched items
are of a different type than the search query, e.g., retrieving pictures
relevant to a given text query. The state-of-the-art approach to cross-modality
retrieval relies on learning a joint embedding space of the two modalities,
where items from either modality are retrieved using nearest-neighbor search.
In this work, we introduce a neural network layer based on Canonical
Correlation Analysis (CCA) that learns better embedding spaces by analytically
computing projections that maximize correlation. In contrast to previous
approaches, the CCA Layer (CCAL) allows us to combine existing objectives for
embedding space learning, such as pairwise ranking losses, with the optimal
projections of CCA. We show the effectiveness of our approach for
cross-modality retrieval in three different scenarios (text-to-image,
audio-sheet-music and zero-shot retrieval), surpassing both Deep CCA and a
multi-view network using freely learned projections optimized by a pairwise
ranking loss, especially when little training data is available (the code for
all three methods is released at: https://github.com/CPJKU/cca_layer).
Comment: Preliminary version of a paper published in the International Journal
of Multimedia Information Retrieval
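For orientation, the projections the CCA layer computes analytically are the classical ones: whiten each view, take the SVD of the cross-covariance, and map both views into a shared space where correlation is maximal. A batch NumPy sketch under those textbook assumptions (the function name and the `reg` ridge term are illustrative choices; the released CCAL code at the repository above is the authoritative implementation):

```python
import numpy as np

def cca_projections(X, Y, k, reg=1e-4):
    """Analytic CCA: projections A, B maximizing corr(X @ A, Y @ B).

    X: (n, dx) and Y: (n, dy) paired views; k: number of components.
    reg is a small ridge term for numerical stability (an assumption,
    not a value taken from the paper).
    """
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Sxx = X.T @ X / (n - 1) + reg * np.eye(X.shape[1])
    Syy = Y.T @ Y / (n - 1) + reg * np.eye(Y.shape[1])
    Sxy = X.T @ Y / (n - 1)

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    Sxx_i, Syy_i = inv_sqrt(Sxx), inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(Sxx_i @ Sxy @ Syy_i)
    A = Sxx_i @ U[:, :k]       # projects X into the shared space
    B = Syy_i @ Vt.T[:, :k]    # projects Y into the shared space
    return A, B
```

After projection, nearest-neighbor retrieval runs on X @ A and Y @ B; the paper's contribution is wrapping this analytic step in a layer so it can be combined with objectives such as a pairwise ranking loss.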
Ramified rectilinear polygons: coordinatization by dendrons
Simple rectilinear polygons (i.e. rectilinear polygons without holes or
cutpoints) can be regarded as finite rectangular cell complexes coordinatized
by two finite dendrons. The intrinsic $l_1$-metric is thus inherited from the
product of the two finite dendrons via an isometric embedding. The rectangular
cell complexes that share this same embedding property are called ramified
rectilinear polygons. The links of vertices in these cell complexes may be
arbitrary bipartite graphs, in contrast to simple rectilinear polygons where
the links of points are either 4-cycles or paths of length at most 3. Ramified
rectilinear polygons are particular instances of rectangular complexes obtained
from cube-free median graphs, or equivalently simply connected rectangular
complexes with triangle-free links. The underlying graphs of finite ramified
rectilinear polygons can be recognized among graphs in linear time by a
Lexicographic Breadth-First-Search. Whereas the symmetry of a simple
rectilinear polygon is very restricted (with automorphism group being a
subgroup of the dihedral group $D_4$), ramified rectilinear polygons are
universal: every finite group is the automorphism group of some ramified
rectilinear polygon.
Comment: 27 pages, 6 figures
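The recognition result rests on Lexicographic Breadth-First-Search. A readable partition-refinement sketch of plain LexBFS (quadratic rather than the paper's linear-time bound, and covering only the vertex ordering, not the subsequent recognition checks; the adjacency-dict input format is an assumption):

```python
def lex_bfs(adj):
    """LexBFS ordering of a graph given as {vertex: set_of_neighbors}.

    Standard partition refinement: repeatedly take a vertex from the
    lexicographically largest slice, then split every remaining slice
    into neighbors (kept in front) and non-neighbors.
    """
    order = []
    slices = [list(adj)]               # one slice per distinct label
    while slices:
        v = slices[0].pop(0)           # vertex with the largest label
        if not slices[0]:
            slices.pop(0)
        order.append(v)
        refined = []
        for s in slices:
            hit = [u for u in s if u in adj[v]]
            miss = [u for u in s if u not in adj[v]]
            if hit:
                refined.append(hit)    # neighbors of v move ahead
            if miss:
                refined.append(miss)
        slices = refined
    return order

# e.g. lex_bfs({0: {1}, 1: {0, 2}, 2: {1}}) -> [0, 1, 2]
```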