3 research outputs found
GETT-QA: Graph Embedding based T2T Transformer for Knowledge Graph Question Answering
In this work, we present an end-to-end Knowledge Graph Question Answering
(KGQA) system named GETT-QA. GETT-QA uses T5, a popular text-to-text
pre-trained language model. The model takes a question in natural language as
input and produces a simpler form of the intended SPARQL query. In the simpler
form, the model does not directly produce entity and relation IDs. Instead, it
produces corresponding entity and relation labels. The labels are grounded to
KG entity and relation IDs in a subsequent step. To further improve the
results, we instruct the model to produce a truncated version of the KG
embedding for each entity. The truncated KG embedding enables a finer search
for disambiguation purposes. We find that T5 is able to learn the truncated KG
embeddings without any change of loss function, improving KGQA performance. As
a result, we report strong results on the LC-QuAD 2.0 and SimpleQuestions-Wikidata
datasets for end-to-end KGQA over Wikidata.
Comment: 16 pages, single-column format; accepted at the ESWC 2023 research track.
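The disambiguation step described above can be sketched in a few lines: when a predicted entity label (e.g. "Berlin") matches several KG entities, the truncated embedding the model emits is compared against the stored truncated embeddings of each candidate, and the closest one is chosen. A minimal sketch, assuming cosine similarity as the comparison metric; the IDs and vectors below are purely illustrative, not taken from the paper:

```python
import math

def truncate(vec, k=3):
    """Keep only the first k dimensions of a KG embedding."""
    return vec[:k]

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def disambiguate(predicted, candidates, k=3):
    """Return the candidate entity ID whose truncated KG embedding is
    closest to the truncated embedding the model predicted."""
    p = truncate(predicted, k)
    return max(candidates, key=lambda qid: cosine(p, truncate(candidates[qid], k)))

# Toy candidate set for the label "Berlin" (vectors are made up).
candidates = {
    "Q64":     [0.9, 0.1, 0.0],  # Berlin, the city
    "Q821244": [0.1, 0.8, 0.3],  # Berlin, the band
}
predicted = [0.85, 0.15, 0.05]   # truncated embedding emitted by the model

print(disambiguate(predicted, candidates))  # → Q64
```

In the actual system the candidates would come from a label-lookup service over Wikidata and the embeddings from a pretrained KG embedding model; the point of truncation is that T5 only has to generate a short numeric prefix, which is still enough to separate candidates sharing a label.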
A systematic literature review on Wikidata
A review of the current status of research on Wikidata, in particular of articles that either describe applications of Wikidata or provide empirical evidence, in order to uncover the topics of interest, the fields that are benefiting from its applications, and the researchers and institutions leading the work.
Demoing Platypus – A Multilingual Question Answering Platform for Wikidata
In this paper we present Platypus, a natural language question answering system over Wikidata. Our platform can answer complex queries in several languages, using hybrid grammatical and template-based techniques. Our demo allows users either to select sample questions or to formulate their own, in any of the three languages that we currently support. A user can also try out our Twitter bot, which replies to any tweet sent to its account.