Contrastive Representation Learning for Conversational Question Answering over Knowledge Graphs
This paper addresses the task of conversational question answering (ConvQA)
over knowledge graphs (KGs). The majority of existing ConvQA methods rely on
full supervision signals with a strict assumption of the availability of gold
logical forms of queries to extract answers from the KG. However, creating such
a gold logical form is not viable for each potential question in a real-world
scenario. Hence, in the case of missing gold logical forms, the existing
information retrieval-based approaches use weak supervision via heuristics or
reinforcement learning, formulating ConvQA as a KG path ranking problem.
Despite missing gold logical forms, an abundance of conversational contexts,
such as entire dialog history with fluent responses and domain information, can
be incorporated to effectively reach the correct KG path. This work proposes a
contrastive representation learning-based approach to rank KG paths
effectively. Our approach solves two key challenges. Firstly, it allows weak
supervision-based learning that omits the necessity of gold annotations.
Second, it incorporates the conversational context (entire dialog history and
domain information) to jointly learn its homogeneous representation with KG
paths to improve contrastive representations for effective path ranking. We
evaluate our approach on standard datasets for ConvQA, on which it
significantly outperforms existing baselines on all domains and overall.
Specifically, in some cases, the Mean Reciprocal Rank (MRR) and Hit@5 ranking
metrics improve by absolute 10 and 18 points, respectively, compared to the
state-of-the-art performance.
Comment: 31st ACM International Conference on Information and Knowledge Management (CIKM 2022)
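As a rough illustration (not the authors' actual implementation), a contrastive path-ranking objective of this kind can be sketched as an InfoNCE-style loss: the joint embedding of the question and conversational context is pulled toward the correct KG path and pushed away from negative candidates. All function names, the similarity choice, and the toy vectors below are assumptions.

```python
import numpy as np

def info_nce_loss(query_vec, path_vecs, positive_idx, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for ranking KG paths.

    query_vec:    embedding of the question plus conversational context
    path_vecs:    embeddings of candidate KG paths (one positive, rest negatives)
    positive_idx: index of the weakly supervised positive path
    """
    # Cosine similarity between the query and each candidate path
    q = query_vec / np.linalg.norm(query_vec)
    p = path_vecs / np.linalg.norm(path_vecs, axis=1, keepdims=True)
    sims = p @ q / temperature
    # Softmax cross-entropy against the positive path
    sims -= sims.max()  # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[positive_idx])

# Toy example: the positive path is most similar to the query,
# so the loss is small; ranking a negative as positive yields a larger loss.
query = np.array([1.0, 0.0])
paths = np.array([[0.9, 0.1],    # positive candidate
                  [0.0, 1.0],    # negative candidate
                  [-1.0, 0.0]])  # negative candidate
loss = info_nce_loss(query, paths, positive_idx=0)
```

Minimizing this loss jointly shapes the query and path representations so that ranking candidates by similarity at inference time recovers the correct path, without requiring gold logical forms.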
Comparative analysis of knowledge representation and reasoning requirements across a range of life sciences textbooks.
Background: Using knowledge representation for biomedical projects is now commonplace. In previous work, we represented the knowledge found in a college-level biology textbook in a fashion useful for answering questions. We showed that embedding the knowledge representation and question-answering abilities in an electronic textbook helped to engage student interest and improve learning. A natural question that arises from this success, and this paper's primary focus, is whether a similar approach is applicable across a range of life science textbooks. To answer that question, we considered four different textbooks, ranging from a below-introductory college biology text to an advanced, graduate-level neuroscience textbook. For these textbooks, we investigated the following questions: (1) To what extent is knowledge shared between the different textbooks? (2) To what extent can the same upper ontology be used to represent the knowledge found in different textbooks? (3) To what extent can the questions of interest for a range of textbooks be answered by using the same reasoning mechanisms?

Results: Our existing modeling and reasoning methods apply especially well both to a textbook that is comparable in level to the text studied in our previous work (i.e., an introductory-level text) and to a textbook at a lower level, suggesting potential for a high degree of portability. Even for the overlapping knowledge found across the textbooks, the level of detail covered in each textbook was different, which requires that the representations be customized for each textbook. We also found that for advanced textbooks, representing models and scientific reasoning processes was particularly important.

Conclusions: With some additional work, our representation methodology would be applicable to a range of textbooks. The requirements for knowledge representation are common across textbooks, suggesting that a shared semantic infrastructure for the life sciences is feasible. Because our representation overlaps heavily with those already being used for biomedical ontologies, this work suggests a natural pathway to include such representations as part of the life sciences curriculum at different grade levels.
Ask Me Anything: Free-form Visual Question Answering Based on Knowledge from External Sources
We propose a method for visual question answering which combines an internal
representation of the content of an image with information extracted from a
general knowledge base to answer a broad range of image-based questions. This
allows more complex questions to be answered using the predominant neural
network-based approach than has previously been possible. It particularly
allows questions to be asked about the contents of an image, even when the
image itself does not contain the whole answer. The method constructs a textual
representation of the semantic content of an image, and merges it with textual
information sourced from a knowledge base, to develop a deeper understanding of
the scene viewed. Priming a recurrent neural network with this combined
information, and the submitted question, leads to a very flexible visual
question answering approach. We are specifically able to answer questions posed
in natural language, that refer to information not contained in the image. We
demonstrate the effectiveness of our model on two publicly available datasets,
Toronto COCO-QA and MS COCO-VQA, and show that it produces the best reported
results in both cases.
Comment: Accepted to IEEE Conf. Computer Vision and Pattern Recognition
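The priming step described above can be sketched at a high level: the image's textual description, retrieved knowledge-base snippets, and the question are combined into a single token stream that seeds the recurrent answer generator. The tokenization, separator marker, and example strings below are illustrative assumptions, not the paper's exact pipeline.

```python
# Hypothetical sketch of the combined input construction: image content,
# external knowledge, and the question are merged into one sequence.
def build_priming_sequence(image_caption, kb_facts, question):
    """Concatenate the image description, knowledge-base facts, and the
    question into one token stream for a recurrent answer generator."""
    parts = [image_caption] + kb_facts + [question]
    tokens = []
    for part in parts:
        tokens.extend(part.lower().split())
        tokens.append("<sep>")  # segment boundary marker between sources
    return tokens

seq = build_priming_sequence(
    "a group of people playing frisbee in a park",
    ["frisbee: a gliding toy disc used in catch games"],
    "what sport is being played?",
)
```

Because the knowledge-base snippet is part of the primed sequence, the model can answer questions whose answer is not visible in the image itself, which is the key capability the abstract highlights.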
A Benchmark to Understand the Role of Knowledge Graphs on Large Language Model's Accuracy for Question Answering on Enterprise SQL Databases
Enterprise applications of Large Language Models (LLMs) hold promise for
question answering on enterprise SQL databases. However, the extent to which
LLMs can accurately respond to enterprise questions in such databases remains
unclear, given the absence of suitable Text-to-SQL benchmarks tailored to
enterprise settings. Additionally, the potential of Knowledge Graphs (KGs) to
enhance LLM-based question answering by providing business context is not well
understood. This study aims to evaluate the accuracy of LLM-powered question
answering systems in the context of enterprise questions and SQL databases,
while also exploring the role of knowledge graphs in improving accuracy. To
achieve this, we introduce a benchmark comprising an enterprise SQL schema in
the insurance domain, a range of enterprise queries spanning reporting and
metrics, and a contextual layer incorporating an ontology and mappings that
define a knowledge graph. Our primary finding reveals that question answering
using GPT-4, with zero-shot prompts directly on SQL databases, achieves an
accuracy of 16%. Notably, this accuracy increases to 54% when questions are
posed over a Knowledge Graph representation of the enterprise SQL database.
Therefore, investing in a Knowledge Graph representation provides higher accuracy
for LLM-powered question answering systems.
Comment: 34 pages
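One plausible reading of why the contextual layer helps is that the ontology and mappings translate cryptic physical schema names into business terms before the LLM sees them. The sketch below illustrates that idea with an invented insurance-style schema and mapping dictionary; all table, column, and concept names are hypothetical, not the benchmark's actual schema.

```python
# Hypothetical sketch: an ontology/mapping layer exposes business context
# that a bare SQL schema lacks, producing a prompt-friendly schema summary.
sql_schema = {
    "tbl_pol": ["pol_id", "eff_dt", "prem_amt"],
}

# Contextual layer: maps cryptic physical names to ontology concepts
ontology_mappings = {
    "tbl_pol": "Policy",
    "pol_id": "policy identifier",
    "eff_dt": "policy effective date",
    "prem_amt": "annual premium amount",
}

def schema_context(schema, mappings):
    """Render a schema with business-term annotations for an LLM prompt."""
    lines = []
    for table, cols in schema.items():
        concept = mappings.get(table, table)
        lines.append(f"Table {table} represents: {concept}")
        for col in cols:
            lines.append(f"  {col} = {mappings.get(col, col)}")
    return "\n".join(lines)

context = schema_context(sql_schema, ontology_mappings)
```

Prepending such a rendered context to the question gives the model the business meaning of each column, which is the kind of signal a knowledge-graph layer supplies and a raw `information_schema` dump does not.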