Towards Fast Adaptation of Pretrained Contrastive Models for Multi-channel Video-Language Retrieval
Multi-channel video-language retrieval requires models to understand
information from different channels (e.g., video+question, video+speech) to
correctly link a video with a textual response or query. Fortunately,
contrastive multimodal models such as CLIP have been shown to be highly
effective at aligning entities in images/videos and text, while contrastive
text models such as SimCSE have recently been studied extensively for their
strong ability to produce discriminative sentence embeddings. However, there
is no clear way to quickly adapt these two lines of work to multi-channel
video-language retrieval with limited data and resources. In this paper, we
identify a principled model
design space with two axes: how to represent videos and how to fuse video and
text information. Based on categorization of recent methods, we investigate the
options of representing videos using continuous feature vectors or discrete
text tokens; for the fusion method, we explore the use of a multimodal
transformer or a pretrained contrastive text model. We extensively evaluate the
four combinations on five video-language datasets. Surprisingly, we find that
discrete text tokens coupled with a pretrained contrastive text model yield
the best performance, even outperforming the state of the art on the iVQA
and How2QA datasets without additional training on millions of video-text pairs.
Further analysis shows that this is because representing videos as text tokens
captures the key visual information, and text tokens are naturally aligned with
text models, which are strong retrievers after the contrastive pretraining
process. This empirical analysis establishes a solid foundation for future
research on affordable and upgradable multimodal intelligence.
Comment: To appear in CVPR 2023. The code will be released at
https://github.com/XudongLinthu/upgradable-multimodal-intelligenc
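The winning combination above can be sketched in a few lines. The real method embeds text with a pretrained contrastive text model such as SimCSE; in this illustration a toy bag-of-words encoder stands in for it, and the example videos and queries are invented, so only the retrieval logic (videos represented as discrete text tokens, ranked against the query in a shared text space by cosine similarity) is shown.

```python
# Sketch: "videos as text tokens + contrastive text model" retrieval.
# A bag-of-words Counter stands in for a SimCSE-style sentence encoder.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in encoder: word counts instead of contrastive embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Each video is represented by discrete text tokens (e.g., frame captions
# plus transcribed speech), concatenated into one string. Invented data.
videos = {
    "v1": "a man slices onions in a kitchen speech today we make soup",
    "v2": "a dog catches a frisbee in a park",
}

def retrieve(query: str) -> str:
    q = embed(query)
    return max(videos, key=lambda vid: cosine(q, embed(videos[vid])))

print(retrieve("how to make soup"))  # → v1
```

Because both sides live in the same text space, no multimodal fusion transformer is needed, which is what makes the adaptation cheap.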
What Users Ask a Search Engine: Analyzing One Billion Russian Question Queries
We analyze the question queries submitted to a large commercial web search engine to get insights about what people ask, and to better tailor the search results to the users' needs. Based on a dataset of about one billion question queries submitted during the year 2012, we investigate askers' querying behavior with the support of automatic query categorization. While the importance of question queries is likely to increase, at present they only make up 3–4% of the total search traffic. Since questions are such a small part of the query stream and are more likely to be unique than shorter queries, clickthrough information is typically rather sparse. Thus, query categorization methods based on the categories of clicked web documents do not work well for questions. As an alternative, we propose a robust question query classification method that uses the labeled questions from a large community question answering (CQA) platform as a training set. The resulting classifier is then transferred to the web search questions. Even though questions on CQA platforms tend to be different from web search questions, our categorization method proves competitive with strong baselines with respect to classification accuracy. To show the scalability of our proposed method, we apply the classifiers to about one billion question queries and discuss the trade-offs between performance and accuracy that different classification models offer. Our findings reveal what people ask a search engine and also how this contrasts with behavior on a CQA platform.
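The transfer setup described above can be sketched as follows: a classifier is fit on topically labeled CQA questions and then applied to unlabeled search-engine question queries. A Laplace-smoothed multinomial Naive Bayes over word counts stands in for the stronger models the paper compares, and the training examples and category labels below are invented for illustration.

```python
# Sketch: train on labeled CQA questions, classify web search queries.
import math
from collections import Counter, defaultdict

cqa_training = [  # (question from a CQA platform, its category label)
    ("what food is safe for a puppy", "pets"),
    ("how often should i feed my cat", "pets"),
    ("how do i reset my router password", "tech"),
    ("why does my laptop overheat", "tech"),
]

# Fit per-category word counts and class priors.
word_counts = defaultdict(Counter)
cat_counts = Counter()
vocab = set()
for question, cat in cqa_training:
    tokens = question.split()
    word_counts[cat].update(tokens)
    cat_counts[cat] += 1
    vocab.update(tokens)

def classify(query: str) -> str:
    # Laplace-smoothed multinomial Naive Bayes, applied to a search query.
    scores = {}
    for cat in cat_counts:
        total = sum(word_counts[cat].values())
        score = math.log(cat_counts[cat] / len(cqa_training))
        for tok in query.split():
            score += math.log((word_counts[cat][tok] + 1) / (total + len(vocab)))
        scores[cat] = score
    return max(scores, key=scores.get)

print(classify("my laptop fan is loud"))  # → tech
```

Since the classifier never needs clickthrough data, it sidesteps the sparsity problem the abstract identifies for long, unique question queries.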
A Large-Scale Community Questions Classification Accounting for Category Similarity: An Exploratory?
The paper reports on a large-scale topical categorization of questions from the Russian community question answering (CQA) service [email protected]. We used a data set containing all the questions (more than 11 million) asked by [email protected] users in 2012. This is the first study on question categorization dealing with non-English data of this size. The study focuses on adjusting the category structure in order to obtain more robust classification results. We investigate several approaches to measuring similarity between categories: the share of identical questions, language models, and user activity. The results show that the proposed approach is promising.
Funding: Russian Foundation for Basic Research (RFBR), grant 14-07-00589.
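The first of the category-similarity measures listed above, the share of identical questions posted under two categories, can be sketched as a set-overlap computation. The question sets and category names below are invented; the language-model and user-activity measures the paper also investigates are not shown.

```python
# Sketch: category similarity as the share of verbatim-identical questions.
def jaccard(a: set, b: set) -> float:
    # Overlap of identical questions between two categories.
    return len(a & b) / len(a | b) if a | b else 0.0

# Invented example data: questions observed under each category.
questions_by_category = {
    "Animals":   {"why do cats purr", "how long do dogs live", "why is the sky blue"},
    "Nature":    {"why is the sky blue", "why do leaves turn red"},
    "Computers": {"how do i reset my password"},
}

sim = jaccard(questions_by_category["Animals"], questions_by_category["Nature"])
print(round(sim, 2))  # → 0.25
```

Categories whose similarity exceeds a chosen threshold would be candidates for merging when the category structure is adjusted.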
Semantic memory redux: an experimental test of hierarchical category representation
Four experiments investigated the classic issue in semantic memory of whether people organize categorical information in hierarchies and use inference to retrieve information from them, as proposed by Collins & Quillian (1969). Past evidence has focused on RT to confirm sentences such as "All birds are animals" or "Canaries breathe." However, confounding variables such as familiarity and associations between the terms have led to contradictory results. Our experiments avoided such problems by teaching subjects novel materials. Experiment 1 tested an implicit hierarchical structure in the features of a set of studied objects (e.g., all brown objects were large). Experiment 2 taught subjects nested categories of artificial bugs. In Experiment 3, subjects learned a tree structure of novel category hierarchies. In all three, the results differed from the predictions of the hierarchical inference model. In Experiment 4, subjects learned a hierarchy by means of paired associates of novel category names. Here we finally found the RT signature of hierarchical inference. We conclude that it is possible to store information in a hierarchy and retrieve it via inference, but it is difficult and is avoided whenever possible. The results are more consistent with feature comparison models than with hierarchical models of semantic memory.
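The hierarchical inference model being tested above can be sketched concretely: each property is stored once at the most general node that has it, and a sentence like "Canaries breathe" is verified by walking up the hierarchy, with predicted RT growing with the number of links traversed. The toy hierarchy below uses the classic Collins & Quillian canary/bird/animal example, not the novel materials from the experiments.

```python
# Sketch of the Collins & Quillian (1969) hierarchical inference model.
# Properties live at the most general node; verification walks upward.
hierarchy = {"canary": "bird", "bird": "animal", "animal": None}
properties = {
    "canary": {"is yellow"},
    "bird":   {"has wings"},
    "animal": {"breathes"},
}

def verify(concept: str, prop: str):
    # Return the number of links traversed to find the property
    # (the model's RT predictor), or None if it is never found.
    links = 0
    node = concept
    while node is not None:
        if prop in properties[node]:
            return links
        node = hierarchy[node]
        links += 1
    return None

print(verify("canary", "is yellow"))  # → 0
print(verify("canary", "breathes"))   # → 2
```

The experiments ask whether human RTs actually track this link count; only Experiment 4's paired-associate hierarchy produced that signature.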