Adapting Visual Question Answering Models for Enhancing Multimodal Community Q&A Platforms
Question categorization and expert retrieval methods have been crucial for
information organization and accessibility in community question answering
(CQA) platforms. Research in this area, however, has dealt with only the text
modality. With the increasing multimodal nature of web content, we focus on
extending these methods for CQA questions accompanied by images. Specifically,
we leverage the success of representation learning for text and images in the
visual question answering (VQA) domain, and adapt the underlying concept and
architecture for automated category classification and expert retrieval on
image-based questions posted on Yahoo! Chiebukuro, the Japanese counterpart of
Yahoo! Answers.
To the best of our knowledge, this is the first work to tackle the
multimodality challenge in CQA, and to adapt VQA models for tasks on a more
ecologically valid source of visual questions. Our analysis of the differences
between visual QA and community QA data drives our proposal of novel
augmentations of an attention method tailored for CQA, and use of auxiliary
tasks for learning better grounding features. Our final model markedly
outperforms the text-only and VQA model baselines for both tasks of
classification and expert retrieval on real-world multimodal CQA data.Comment: Submitted for review at CIKM 201
Evaluating Collaborative Information Seeking Interfaces with a Search-Oriented Inspection Method and Re-framed Information Seeking Theory
Despite the many implicit references to the social contexts of search within Information Seeking and Retrieval research, there has been relatively little work that has specifically investigated the additional requirements for collaborative information seeking interfaces. Here, we re-assess a recent analytical inspection framework, designed for individual information seeking, and then apply it to evaluate a recent collaborative information seeking interface: SearchTogether. The framework was built upon two models of solitary information seeking, and so as part of the re-assessment we first re-frame the models for collaborative contexts. We re-frame a model of search tactics, providing revised definitions that consider known collaborators. We then re-frame a model of user profiles to analyse support for different group dynamics. After presenting an analysis of SearchTogether, we reflect on its accuracy, showing that the framework identified 8 known truths, 8 new insights, and no known-to-be-untrue insights into the design. We conclude that the framework a) can still be applied to collaborative information seeking interfaces; b) can successfully produce additional requirements for collaborative information seeking interfaces; and c) can successfully model different dynamics of collaborating searchers.
What Users Ask a Search Engine: Analyzing One Billion Russian Question Queries
We analyze the question queries submitted to a large commercial web search engine to get insights about what people ask, and to better tailor the search results to the users' needs. Based on a dataset of about one billion question queries submitted during the year 2012, we investigate askers' querying behavior with the support of automatic query categorization. While the importance of question queries is likely to increase, at present they only make up 3–4% of the total search traffic. Since questions are such a small part of the query stream and are more likely to be unique than shorter queries, clickthrough information is typically rather sparse. Thus, query categorization methods based on the categories of clicked web documents do not work well for questions. As an alternative, we propose a robust question query classification method that uses the labeled questions from a large community question answering (CQA) platform as a training set. The resulting classifier is then transferred to the web search questions. Even though questions on CQA platforms tend to be different from web search questions, our categorization method proves competitive with strong baselines with respect to classification accuracy. To show the scalability of our proposed method we apply the classifiers to about one billion question queries and discuss the trade-offs between performance and accuracy that different classification models offer. Our findings reveal what people ask a search engine and also how this contrasts behavior on a CQA platform.
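The transfer setup the abstract describes can be sketched simply: train a text classifier on labeled CQA questions, then apply it to question queries from a search log. The toy data, categories, and the bag-of-words pipeline below are illustrative assumptions; the paper's actual features, taxonomy, and models differ.

```python
# Hedged sketch: fit a category classifier on labeled CQA questions,
# then transfer it to web search question queries. Training data and
# category labels here are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-in for labeled CQA training data (question -> category).
cqa_questions = [
    "how do I bake a chocolate cake",
    "best recipe for sourdough bread",
    "why does my laptop overheat",
    "how to speed up a slow computer",
]
cqa_labels = ["food", "food", "computers", "computers"]

# Simple TF-IDF + logistic regression pipeline.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(cqa_questions, cqa_labels)

# Transfer step: classify question queries drawn from the search log.
search_queries = ["how to fix laptop fan noise", "how long to bake bread"]
predictions = list(clf.predict(search_queries))
print(predictions)
```

At the scale the paper reports (about one billion queries), the practical concern is the performance/accuracy trade-off across model families, which a lightweight linear pipeline like this one illustrates only at the cheap end.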