Composite Correlation Quantization for Efficient Multimodal Retrieval
Efficient similarity retrieval from large-scale multimodal databases is
pervasive in modern search engines and social networks. To support queries
across content modalities, the system should enable cross-modal correlation and
computation-efficient indexing. While hashing methods have shown great
potential in achieving this goal, current attempts generally fail to learn
isomorphic hash codes in a seamless scheme: they first embed multiple
modalities into a continuous isomorphic space and then separately threshold the
embeddings into binary codes, which incurs a substantial loss of retrieval
accuracy. In this
paper, we approach seamless multimodal hashing by proposing a novel Composite
Correlation Quantization (CCQ) model. Specifically, CCQ jointly finds
correlation-maximal mappings that transform different modalities into
isomorphic latent space, and learns composite quantizers that convert the
isomorphic latent features into compact binary codes. An optimization framework
is devised to preserve both intra-modal similarity and inter-modal correlation
through minimizing both reconstruction and quantization errors, which can be
trained from both paired and partially paired data in linear time. A
comprehensive set of experiments clearly shows the superior effectiveness and
efficiency of CCQ against state-of-the-art hashing methods for both
unimodal and cross-modal retrieval.
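The two stages the abstract describes (correlation-maximal mappings into a shared latent space, then composite quantization into compact codes) can be sketched as follows. This is an illustrative toy, not the authors' CCQ algorithm: the linear maps are fit by least squares to a stand-in latent target, and the quantizer is a product-quantization-style codebook assignment with random codebooks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired data: 200 samples in two modalities (e.g. 64-d image, 32-d text).
X_img = rng.normal(size=(200, 64))
X_txt = rng.normal(size=(200, 32))

# Stand-in for the jointly learned correlation-maximal mappings: fit linear
# maps from each modality to a shared 8-d latent target by least squares.
k = 8
Z = rng.normal(size=(200, k))          # illustrative shared latent target
W_img, _, _, _ = np.linalg.lstsq(X_img, Z, rcond=None)
W_txt, _, _, _ = np.linalg.lstsq(X_txt, Z, rcond=None)
Z_img = X_img @ W_img                  # image embeddings in the latent space
Z_txt = X_txt @ W_txt                  # text embeddings in the latent space

def quantize(Z, codebooks):
    """Assign each latent subvector to its nearest codeword per subspace."""
    M = len(codebooks)
    sub = np.split(Z, M, axis=1)
    return np.stack(
        [np.argmin(((s[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
         for s, C in zip(sub, codebooks)], axis=1)

# Composite quantizer stand-in: 2 subspaces x 4 codewords each (random here;
# CCQ learns codebooks by minimizing reconstruction + quantization error).
M, K = 2, 4
codebooks = [rng.normal(size=(K, k // M)) for _ in range(M)]
codes_img = quantize(Z_img, codebooks)
codes_txt = quantize(Z_txt, codebooks)

# Both modalities now share one compact code space for cross-modal lookup.
print(codes_img.shape, codes_txt.shape)  # (200, 2) (200, 2)
```

Because both modalities are quantized against the same codebooks, an image query and a text query land in one code space, which is what makes cross-modal retrieval with a single index possible.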
Acquiring Word-Meaning Mappings for Natural Language Interfaces
This paper focuses on a system, WOLFIE (WOrd Learning From Interpreted
Examples), that acquires a semantic lexicon from a corpus of sentences paired
with semantic representations. The lexicon learned consists of phrases paired
with meaning representations. WOLFIE is part of an integrated system that
learns to transform sentences into representations such as logical database
queries. Experimental results are presented demonstrating WOLFIE's ability to
learn useful lexicons for a database interface in four different natural
languages. The usefulness of the lexicons learned by WOLFIE is compared to
that of lexicons acquired by a similar system, with results favorable to WOLFIE. A second
set of experiments demonstrates WOLFIE's ability to scale to larger and more
difficult, albeit artificially generated, corpora. In natural language
acquisition, it is difficult to gather the annotated data needed for supervised
learning; however, unannotated data is fairly plentiful. Active learning
methods attempt to select for annotation and training only the most informative
examples, and therefore are potentially very useful in natural language
applications. However, most results to date for active learning have only
considered standard classification tasks. To reduce annotation effort while
maintaining accuracy, we apply active learning to semantic lexicons. We show
that active learning can significantly reduce the number of annotated examples
required to achieve a given level of performance.
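The active-learning idea in the abstract (annotate only the most informative examples) is commonly implemented as uncertainty sampling. The sketch below is a generic stand-in, not WOLFIE's own selection heuristic: it scores a pool of unannotated examples by the margin between the model's top two candidate meanings and sends the most ambiguous ones for annotation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy pool of 100 unannotated phrases; each row is a hypothetical model's
# probability over 3 candidate meaning representations for that phrase.
probs = rng.dirichlet(np.ones(3), size=100)

# Margin-based uncertainty sampling: the smaller the gap between the top
# two candidate meanings, the more informative annotating that example is.
sorted_probs = np.sort(probs, axis=1)
margin = sorted_probs[:, -1] - sorted_probs[:, -2]

batch = np.argsort(margin)[:5]   # 5 most ambiguous examples to annotate next
print(batch)
```

Each round, the selected batch is annotated, the lexicon learner is retrained, and the pool is re-scored; this loop is what lets accuracy be maintained with far fewer annotated examples.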
AlignFlow: Cycle Consistent Learning from Multiple Domains via Normalizing Flows
Given datasets from multiple domains, a key challenge is to efficiently
exploit these data sources for modeling a target domain. Variants of this
problem have been studied in many contexts, such as cross-domain translation
and domain adaptation. We propose AlignFlow, a generative modeling framework
that models each domain via a normalizing flow. The use of normalizing flows
allows for a) flexibility in specifying learning objectives via adversarial
training, maximum likelihood estimation, or a hybrid of the two methods; and b)
learning and exact inference of a shared representation in the latent space of
the generative model. We derive a uniform set of conditions under which
AlignFlow is marginally-consistent for the different learning objectives.
Furthermore, we show that AlignFlow guarantees exact cycle consistency in
mapping datapoints from a source domain to target and back to the source
domain. Empirically, AlignFlow outperforms relevant baselines on image-to-image
translation and unsupervised domain adaptation and can be used to
simultaneously interpolate across the various domains using the learned
representation.
Comment: AAAI 202
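The exact cycle consistency the abstract guarantees follows directly from invertibility: translating A to B composes A's flow with the inverse of B's flow, so the round trip is the identity by construction. The sketch below uses a toy one-parameter affine "flow" per domain (real normalizing flows stack many such invertible layers); the parameters are illustrative, not learned.

```python
import numpy as np

class AffineFlow:
    """Toy invertible flow z = a * x + b mapping a domain to a shared latent."""
    def __init__(self, a, b):
        self.a, self.b = a, b

    def forward(self, x):   # domain -> shared latent space
        return self.a * x + self.b

    def inverse(self, z):   # shared latent space -> domain
        return (z - self.b) / self.a

f_A = AffineFlow(a=2.0, b=1.0)    # flow for domain A (illustrative values)
f_B = AffineFlow(a=0.5, b=-3.0)   # flow for domain B

x_A = np.array([0.7, -1.2, 4.0])
x_B = f_B.inverse(f_A.forward(x_A))     # translate A -> B via shared latent
x_back = f_A.inverse(f_B.forward(x_B))  # translate back B -> A

print(np.allclose(x_A, x_back))  # True: the cycle is exact, not approximate
```

Unlike CycleGAN-style models, which must penalize cycle error in the loss, here no cycle-consistency term is needed: invertibility makes the round trip exact for any parameter values.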