
    Interaction history based answer formulation for question answering

    With the rapid growth of information access methodologies, question answering has drawn considerable attention. Although it has emerged as an interesting new research domain, work remains concentrated on question processing and answer extraction; later steps such as answer ranking, formulation, and presentation are not treated in depth. A weakness we found in this area is that the answers a particular user has already acquired are not considered when processing new questions. As a result, current systems cannot link a question such as "When was Apple founded?" with a previously processed question "When was Microsoft founded?" to generate an answer of the form "Apple was founded one year after Microsoft, in 1976". In this paper we present an approach to question answering that devises an answer from the questions the system has already processed for a particular user, which we term the user's interaction history. Our approach combines question processing, relation extraction, and knowledge representation with inference models. We focus primarily on acquiring knowledge and building a scalable user model that formulates future answers from the answers the same user has already received. An evaluation we carried out on TREC resources shows that the proposed technique is promising and effective for question answering.
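
    The linking step the abstract describes can be illustrated with a small sketch: a per-user store of extracted facts is consulted when a new answer arrives, and a comparative answer is formulated whenever an earlier fact shares the same relation. This is a minimal illustration under assumed structure, not the paper's system; the InteractionHistory class, its (relation, entity) fact layout, and the year-difference template are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class InteractionHistory:
    """Hypothetical per-user store of (relation, entity) -> value facts
    extracted from previously answered questions."""
    facts: dict = field(default_factory=dict)

    def formulate(self, entity: str, relation: str, value: int) -> str:
        # Look for an earlier answer with the same relation to compare against.
        for (rel, prior_entity), prior_value in self.facts.items():
            if rel == relation and prior_entity != entity:
                diff = value - prior_value
                when = (f"{abs(diff)} year(s) later than" if diff > 0
                        else f"{abs(diff)} year(s) earlier than")
                self.facts[(relation, entity)] = value
                return f"{entity} was {relation} {when} {prior_entity}, in {value}."
        self.facts[(relation, entity)] = value
        return f"{entity} was {relation} in {value}."

history = InteractionHistory()
print(history.formulate("Microsoft", "founded", 1975))  # plain answer
print(history.formulate("Apple", "founded", 1976))      # comparative answer
```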

    AnsForm: answer formulation for question-answering

    The goal of Question-Answering (QA) systems is to find short, correct answers to open-domain questions by searching a large collection of documents. This research focuses on finding patterns for formulating a "complete" and "natural" answer to a question, given the short answer. Such patterns are important because they can be used to enhance existing QA systems so that answers are found and presented to the user in a more natural way. Based on the answer-formulation patterns compiled from a survey carried out at the beginning of this project, the work of producing long, natural answers to specific types of questions is reduced to pattern matching; further research could process and analyze the context of the question and the given short answer in order to provide more relevant, natural, and correct answers. The first chapter of this report gives a general description of the nature and scope of the project; the second introduces the fields of Natural Language Generation (NLG) and Question-Answering (QA) as background to our system, which we expect to serve as a resource for increasing the ability of QA systems to answer questions on a wide variety of topics in grammatical and natural ways. The third chapter presents the process, analysis, and results of the survey used to find the answer-formulation patterns. The fourth chapter describes the system, its requirements, scope, analysis, and results, and chapter five evaluates the system and gives conclusions and future research avenues.
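
    The pattern-matching reduction described above can be sketched as a list of question templates paired with answer templates. The patterns below are made-up stand-ins for the survey-derived inventory; only the mechanism (match a question pattern, instantiate an answer template with the short answer) follows the abstract.

```python
import re

# Illustrative formulation patterns; the real inventory comes from the survey.
PATTERNS = [
    (re.compile(r"^when was (?P<x>.+?) (?P<verb>\w+)\?$", re.I),
     "{x} was {verb} {answer}."),
    (re.compile(r"^who (?P<verb>\w+) (?P<x>.+?)\?$", re.I),
     "{answer} {verb} {x}."),
    (re.compile(r"^where is (?P<x>.+?)\?$", re.I),
     "{x} is located in {answer}."),
]

def formulate(question: str, short_answer: str) -> str:
    """Turn a short factoid answer into a full natural-language sentence
    by matching the question against a formulation pattern."""
    for pattern, template in PATTERNS:
        match = pattern.match(question.strip())
        if match:
            return template.format(answer=short_answer, **match.groupdict())
    return short_answer  # no pattern matched: fall back to the short answer

print(formulate("When was Concordia University founded?", "in 1974"))
# -> "Concordia University was founded in 1974."
```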

    Understanding Video Scenes through Text: Insights from Text-based Video Question Answering

    Researchers have extensively studied the field of vision and language, discovering that both visual and textual content are crucial for understanding scenes effectively. In particular, comprehending text in videos holds great significance, requiring both scene-text understanding and temporal reasoning. This paper explores two recently introduced datasets, NewsVideoQA and M4-ViteVQA, which aim to address video question answering based on textual content. The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories such as vlogging, traveling, and shopping. We analyze the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required to answer the questions. Additionally, the study includes experiments with BERT-QA, a text-only model, which demonstrates performance comparable to the original methods on both datasets, indicating shortcomings in the formulation of these datasets. Furthermore, we examine the domain-adaptation aspect by training on M4-ViteVQA and evaluating on NewsVideoQA and vice versa, thereby shedding light on the challenges and potential benefits of out-of-domain training.
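
    The text-only baseline idea can be illustrated as follows: aggregate the OCR text of a clip's frames into a single context and run an extractive QA model over it, ignoring the pixels entirely. This sketch assumes the HuggingFace transformers library and a generic SQuAD-tuned checkpoint; it is not the paper's exact BERT-QA configuration, and the OCR context is invented for illustration.

```python
from transformers import pipeline

# A common SQuAD-tuned extractive QA model (assumed, not the paper's setup).
qa = pipeline("question-answering",
              model="distilbert-base-cased-distilled-squad")

# Stand-in for OCR tokens recognised across the frames of one news clip.
ocr_context = ("BREAKING NEWS Floods hit coastal towns "
               "Relief camps open at Green Valley School "
               "Helpline 1-800-555-0199")

# Answer using only the recognised text, with no visual features at all.
result = qa(question="Where are the relief camps open?",
            context=ocr_context)
print(result["answer"], result["score"])
```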

    Ask Your Neurons: A Neural-based Approach to Answering Questions about Images

    We address a question answering task on real-world images that is set up as a Visual Turing Test. By combining the latest advances in image representation and natural language processing, we propose Neural-Image-QA, an end-to-end formulation of this problem in which all parts are trained jointly. In contrast to previous efforts, we face a multi-modal problem where the language output (answer) is conditioned on visual and natural language input (image and question). Neural-Image-QA doubles the performance of the previous best approach on this problem. We provide additional insights by analyzing how much information is contained in the language part alone, for which we provide a new human baseline. To study human consensus, which relates to the ambiguities inherent in this challenging task, we propose two novel metrics and collect additional answers, which extend the original DAQUAR dataset to DAQUAR-Consensus. (Comment: ICCV'15 Oral)
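
    The end-to-end formulation can be sketched as a visual embedding prepended to the question token embeddings and fed through an LSTM whose final state drives the answer prediction. This is a minimal illustration of the general CNN+LSTM recipe, not the authors' exact architecture: all dimensions are illustrative, and classifying over a fixed answer set is a common simplification of the paper's word-by-word answer decoding.

```python
import torch
import torch.nn as nn

class ImageQA(nn.Module):
    """Minimal CNN+LSTM sketch: an image embedding and the question tokens
    are fed to an LSTM whose final state predicts the answer."""
    def __init__(self, vocab_size=10000, n_answers=1000,
                 img_dim=2048, embed_dim=512, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)  # project CNN features
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.classify = nn.Linear(hidden_dim, n_answers)

    def forward(self, img_feats, question_ids):
        # Prepend the image embedding to the question token embeddings so
        # the answer is conditioned on both modalities.
        v = self.img_proj(img_feats).unsqueeze(1)   # (B, 1, E)
        q = self.embed(question_ids)                # (B, T, E)
        _, (h, _) = self.lstm(torch.cat([v, q], dim=1))
        return self.classify(h[-1])                 # (B, n_answers) logits

model = ImageQA()
logits = model(torch.randn(2, 2048), torch.randint(0, 10000, (2, 8)))
print(logits.shape)  # torch.Size([2, 1000])
```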

    Compact Tensor Pooling for Visual Question Answering

    Performing high-level cognitive tasks requires the integration of feature maps with drastically different structure. In Visual Question Answering (VQA), image descriptors have spatial structure, while lexical inputs inherently follow a temporal sequence. The recently proposed Multimodal Compact Bilinear pooling (MCB) forms the outer product, via count-sketch approximation, of the visual and textual representations at each spatial location. While this procedure preserves spatial information locally, the outer products are taken independently for each fiber of the activation tensor and therefore do not include spatial context. In this work, we introduce the multi-dimensional sketch (MD-sketch), a novel extension of count-sketch to tensors. Using this new formulation, we propose Multimodal Compact Tensor Pooling (MCT) to fully exploit the global spatial context during bilinear pooling operations. In contrast to MCB, our approach preserves spatial context by directly convolving the MD-sketch of the visual tensor features with the text vector feature using a higher-order FFT. Furthermore, we apply MCT incrementally at each step of the question embedding and accumulate the multimodal vectors with a second LSTM layer before the final answer is chosen.
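
    The count-sketch and FFT machinery underlying this line of work can be sketched for the vector (per-location MCB) case: the circular convolution of two count sketches approximates the count sketch of the outer product, so the bilinear feature never has to be materialized. Dimensions and hash functions below are illustrative; MCT's MD-sketch generalizes this projection from vectors to full tensors.

```python
import numpy as np

def count_sketch(x, h, s, d):
    """Project vector x to d dims: y[h[i]] += s[i] * x[i]."""
    y = np.zeros(d)
    np.add.at(y, h, s * x)
    return y

rng = np.random.default_rng(0)
n, d = 512, 1024                                           # input dim, sketch dim
h_v, h_t = rng.integers(0, d, n), rng.integers(0, d, n)    # hash indices
s_v, s_t = rng.choice([-1, 1], n), rng.choice([-1, 1], n)  # random signs

v, t = rng.standard_normal(n), rng.standard_normal(n)  # visual / text features

# Convolve the two sketches via FFT: elementwise product in the frequency
# domain equals circular convolution, approximating the sketched outer product.
phi = np.fft.irfft(np.fft.rfft(count_sketch(v, h_v, s_v, d)) *
                   np.fft.rfft(count_sketch(t, h_t, s_t, d)), n=d)
print(phi.shape)  # (1024,) compact bilinear feature
```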