Query-guided End-to-End Person Search
Person search has recently gained attention as the novel task of finding a
person, provided as a cropped sample, from a gallery of non-cropped images
in which several other people are also visible. We believe that i. person
detection and re-identification should be pursued in a joint optimization
framework and that ii. the person search should leverage the query image
extensively (e.g. emphasizing unique query patterns). However, so far, no prior
art realizes this. We introduce a novel query-guided end-to-end person search
network (QEEPS) to address both aspects. We build on a recent joint
detection and re-identification work, OIM [37]. We extend this with i. a
query-guided Siamese squeeze-and-excitation network (QSSE-Net) that uses global
context from both the query and gallery images, ii. a query-guided region
proposal network (QRPN) to produce query-relevant proposals, and iii. a
query-guided similarity subnetwork (QSimNet), to learn a query-guided
reidentification score. QEEPS is the first end-to-end query-guided detection
and re-id network. On both the most recent CUHK-SYSU [37] and PRW [46]
datasets, we outperform the previous state-of-the-art by a large margin.Comment: Accepted as poster in CVPR 201
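The query-guided squeeze-and-excitation idea described above can be sketched in a few lines: global-average-pool the query and gallery feature maps, concatenate them as joint context, and drive a bottleneck MLP whose sigmoid output rescales the gallery channels. This is only an illustrative numpy sketch; the shapes, weights, and reduction ratio are arbitrary assumptions, not the paper's actual QSSE-Net configuration.

```python
import numpy as np

def qsse_block(query_feat, gallery_feat, w1, w2):
    """Query-guided squeeze-and-excitation (illustrative sketch).

    query_feat, gallery_feat: (C, H, W) feature maps.
    w1: (C//r, 2C) bottleneck weights; w2: (C, C//r) expansion weights.
    Returns the gallery map rescaled channel-wise by query-conditioned gates.
    """
    # Squeeze: global average pool both maps into channel descriptors.
    zq = query_feat.mean(axis=(1, 2))          # (C,)
    zg = gallery_feat.mean(axis=(1, 2))        # (C,)
    z = np.concatenate([zq, zg])               # (2C,) joint global context
    # Excitation: bottleneck MLP, ReLU then sigmoid gating.
    h = np.maximum(w1 @ z, 0.0)                # (C//r,)
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))        # (C,) channel gates in (0, 1)
    # Re-weight gallery channels by the query-conditioned gates.
    return gallery_feat * s[:, None, None]

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                        # illustrative sizes
q = rng.standard_normal((C, H, W))
g = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, 2 * C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
out = qsse_block(q, g, w1, w2)
print(out.shape)  # (8, 4, 4)
```

Because the gates lie in (0, 1), the block can only attenuate gallery channels, emphasizing those that the query's global context marks as relevant.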
End-to-End Neural Ad-hoc Ranking with Kernel Pooling
This paper proposes K-NRM, a kernel based neural model for document ranking.
Given a query and a set of documents, K-NRM uses a translation matrix that
models word-level similarities via word embeddings, a new kernel-pooling
technique that uses kernels to extract multi-level soft match features, and a
learning-to-rank layer that combines those features into the final ranking
score. The whole model is trained end-to-end. The ranking layer learns desired
feature patterns from the pairwise ranking loss. The kernels transfer the
feature patterns into soft-match targets at each similarity level and enforce
them on the translation matrix. The word embeddings are tuned accordingly so
that they can produce the desired soft matches. Experiments on a commercial
search engine's query log demonstrate the improvements of K-NRM over prior
feature-based and neural state-of-the-art methods, and explain the source of
K-NRM's advantage: its kernel-guided embedding encodes a similarity metric
tailored for matching query words to document words, and provides effective
multi-level soft matches.
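The pipeline described above can be sketched directly: a cosine translation matrix over word embeddings, RBF kernels that softly count matches at each similarity level, and a log-sum pooling that yields one feature per kernel. A minimal numpy sketch, with illustrative kernel means, width, and dimensions rather than K-NRM's trained values:

```python
import numpy as np

def kernel_pooling(query_emb, doc_emb, mus, sigma=0.1):
    """K-NRM-style kernel pooling (illustrative sketch).

    query_emb: (n, d) query word embeddings; doc_emb: (m, d) document word
    embeddings; mus: (k,) kernel means spanning [-1, 1].
    Returns one soft-match feature per kernel.
    """
    # Translation matrix: cosine similarity between every word pair.
    qn = query_emb / np.linalg.norm(query_emb, axis=1, keepdims=True)
    dn = doc_emb / np.linalg.norm(doc_emb, axis=1, keepdims=True)
    M = qn @ dn.T                                                # (n, m)
    # RBF kernels: soft count of matches at each similarity level.
    K = np.exp(-((M[..., None] - mus) ** 2) / (2 * sigma ** 2))  # (n, m, k)
    # Sum over document words, take logs, then sum over query words.
    phi = np.log(K.sum(axis=1) + 1e-10).sum(axis=0)              # (k,)
    return phi

rng = np.random.default_rng(0)
q = rng.standard_normal((3, 16))       # 3 query words, 16-dim embeddings
d = rng.standard_normal((7, 16))       # 7 document words
mus = np.linspace(-0.9, 1.0, 11)       # illustrative kernel means
phi = kernel_pooling(q, d, mus)
print(phi.shape)  # (11,)
```

The learning-to-rank layer then maps `phi` to a scalar score (e.g. a learned linear combination); training end-to-end on a pairwise loss back-propagates through the kernels into the embeddings, which is how the soft-match targets shape the translation matrix.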
AMC: Attention guided Multi-modal Correlation Learning for Image Search
Given a user's query, traditional image search systems rank images according
to their relevance to a single modality (e.g., image content or surrounding
text). Nowadays, an increasing number of images on the Internet are available
with associated meta data in rich modalities (e.g., titles, keywords, tags,
etc.), which can be exploited for better similarity measure with queries. In
this paper, we leverage visual and textual modalities for image search by
learning their correlation with the input query. Depending on the intent of
the query, an attention mechanism can adaptively balance the importance of
different modalities. We propose a novel Attention guided Multi-modal
Correlation (AMC) learning method which consists of a jointly learned hierarchy
of intra- and inter-attention networks. Conditioned on the query's intent,
intra-attention networks (i.e., visual intra-attention network and language
intra-attention network) attend on informative parts within each modality; a
multi-modal inter-attention network promotes the importance of the most
query-relevant modalities. In experiments, we evaluate AMC models on the search
logs from two real-world image search engines and show a significant boost in
the ranking of user-clicked images in search results. Additionally, we extend
AMC models to the caption ranking task on the COCO dataset and achieve
competitive results compared with recent state-of-the-art methods.
Comment: CVPR 2017
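The inter-attention step described above can be sketched as a query-conditioned softmax over modality embeddings: the query's affinity to each modality sets its weight, and the weighted sum gives the fused multi-modal representation. A minimal numpy sketch with illustrative dot-product scoring and dimensions; the real AMC networks are learned and also include per-modality intra-attention:

```python
import numpy as np

def inter_attention(query_vec, modality_vecs):
    """Query-conditioned inter-attention over modalities (illustrative).

    query_vec: (d,) query embedding; modality_vecs: (k, d), one embedding
    per modality (e.g. image content, title text, tags).
    Returns the attention weights and the fused representation.
    """
    # Affinity of the query to each modality (dot-product scoring).
    scores = modality_vecs @ query_vec        # (k,)
    # Softmax: query-relevant modalities receive higher weight.
    e = np.exp(scores - scores.max())
    alpha = e / e.sum()                       # (k,) non-negative, sums to 1
    # Weighted combination of modality embeddings.
    fused = alpha @ modality_vecs             # (d,)
    return alpha, fused

rng = np.random.default_rng(0)
q = rng.standard_normal(8)
mods = rng.standard_normal((3, 8))  # e.g. visual, title, tag embeddings
alpha, fused = inter_attention(q, mods)
print(fused.shape)  # (8,)
```

Ranking then reduces to comparing the query embedding against `fused` (e.g. by inner product), so modalities irrelevant to this particular query contribute little to the similarity.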