Negative Statements Considered Useful
Knowledge bases (KBs), pragmatic collections of knowledge about notable entities, are an important asset in applications such as search, question answering and dialogue. Rooted in a long tradition in knowledge representation, all popular KBs only store positive information, while they abstain from taking any stance towards statements not contained in them. In this paper, we make the case for explicitly stating interesting statements which are not true. Negative statements would be important to overcome current limitations of question answering, yet due to their potential abundance, any effort towards compiling them needs a tight coupling with ranking. We introduce two approaches towards compiling negative statements. (i) In peer-based statistical inferences, we compare entities with highly related entities in order to derive potential negative statements, which we then rank using supervised and unsupervised features. (ii) In query-log-based text extraction, we use a pattern-based approach for harvesting search engine query logs. Experimental results show that both approaches hold promising and complementary potential. Along with this paper, we publish the first datasets on interesting negative information, containing over 1.1M statements for 100K popular Wikidata entities.
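The peer-based idea can be illustrated with a minimal sketch: statements that are frequent among an entity's peers but absent for the entity itself become negative-statement candidates, ranked by peer frequency (one unsupervised signal; the paper's actual ranking also uses supervised features). All entity and statement names below are toy examples, not from the published dataset.

```python
# Minimal sketch of peer-based statistical inference for negative
# statement candidates. Assumption: statements are (property, object)
# pairs and peers are pre-selected highly related entities.
from collections import Counter

def negative_candidates(entity_props, peer_props_list):
    """entity_props: set of (property, object) pairs for the target entity.
    peer_props_list: one such set per peer entity."""
    counts = Counter()
    for peer in peer_props_list:
        for stmt in peer:
            counts[stmt] += 1
    n = len(peer_props_list)
    # Keep statements the entity lacks; score = fraction of peers asserting it.
    scored = [(stmt, counts[stmt] / n)
              for stmt in counts if stmt not in entity_props]
    return sorted(scored, key=lambda x: -x[1])

# Toy example: an entity compared against two peers.
einstein = {("award", "Nobel Prize in Physics")}
peers = [
    {("award", "Nobel Prize in Physics"), ("position", "professor")},
    {("position", "professor"), ("member_of", "Royal Society")},
]
print(negative_candidates(einstein, peers))
```

The highest-ranked candidate is the statement asserted by all peers but missing for the target entity, which matches the intuition that widely shared peer properties an entity lacks are the most interesting potential negations.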
Discovering Implicational Knowledge in Wikidata
Knowledge graphs have recently become the state-of-the-art tool for
representing the diverse and complex knowledge of the world. Examples include
the proprietary knowledge graphs of companies such as Google, Facebook, IBM, or
Microsoft, but also freely available ones such as YAGO, DBpedia, and Wikidata.
A distinguishing feature of Wikidata is that the knowledge is collaboratively
edited and curated. While this greatly enhances the scope of Wikidata, it also
makes it impossible for a single individual to grasp complex connections
between properties or understand the global impact of edits in the graph. We
apply Formal Concept Analysis to efficiently identify comprehensible
implications that are implicitly present in the data. Although the complex
structure of data modelling in Wikidata is not amenable to a direct approach,
we overcome this limitation by extracting contextual representations of parts
of Wikidata in a systematic fashion. We demonstrate the practical feasibility
of our approach through several experiments and show that the results may lead
to the discovery of interesting implicational knowledge. Besides providing a
method for obtaining large real-world data sets for FCA, we sketch potential
applications in offering semantic assistance for editing and curating Wikidata
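The core FCA operation can be sketched in a few lines: given a formal context (objects with attribute sets), an implication "A implies B" holds when every object possessing all attributes in A also possesses those in B, i.e. B lies in the closure of A. The toy context below uses made-up Wikidata-style property names and is only an illustration of the closure operator, not the paper's extraction pipeline.

```python
# Minimal sketch of attribute closure in a formal context, the basic
# operation behind implication discovery in Formal Concept Analysis.
def closure(attrs, context):
    """Attributes common to all objects that have every attribute in attrs.
    context maps object ids to their attribute sets."""
    objs = [o for o, a in context.items() if attrs <= a]
    if not objs:
        # No object has all of attrs: everything follows vacuously.
        return set().union(*context.values())
    result = set(context[objs[0]])
    for o in objs[1:]:
        result &= context[o]
    return result

# Toy context with hypothetical Wikidata-style properties.
context = {
    "Q1": {"instance_of_human", "date_of_birth", "place_of_birth"},
    "Q2": {"instance_of_human", "date_of_birth"},
    "Q3": {"instance_of_city", "country"},
}
# Here date_of_birth implies instance_of_human, since every object
# carrying date_of_birth also carries instance_of_human.
print(closure({"date_of_birth"}, context))
```

An implication A → B is then read off whenever B is contained in closure(A); scaling this to Wikidata is exactly where the paper's systematic extraction of contextual representations comes in.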
Enriching Knowledge Bases with Counting Quantifiers
Information extraction traditionally focuses on extracting relations between
identifiable entities. Yet, texts often also contain counting information,
stating that a subject is in a specific relation with a number of objects,
without mentioning the objects themselves, for example, "California is
divided into 58 counties". Such
counting quantifiers can help in a variety of tasks such as query answering or
knowledge base curation, but are neglected by prior work. This paper develops
the first full-fledged system for extracting counting information from text,
called CINEX. We employ distant supervision using fact counts from a knowledge
base as training seeds, and develop novel techniques for dealing with several
challenges: (i) non-maximal training seeds due to the incompleteness of
knowledge bases, (ii) sparse and skewed observations in text sources, and (iii)
high diversity of linguistic patterns. Experiments with five human-evaluated
relations show that CINEX can achieve 60% average precision for extracting
counting information. In a large-scale experiment, we demonstrate the potential
for knowledge base enrichment by applying CINEX to 2,474 frequent relations in
Wikidata. CINEX can assert the existence of 2.5M facts for 110 distinct
relations, which is 28% more than the existing Wikidata facts for these
relations. Comment: 16 pages, The 17th International Semantic Web Conference (ISWC 2018)
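The flavor of counting-quantifier extraction can be shown with a deliberately simplified pattern matcher. CINEX itself uses distant supervision and learned sequence models rather than a single regular expression; the pattern and relation shape below are illustrative assumptions only.

```python
import re

# Toy pattern-based sketch of counting-quantifier extraction:
# matches phrases like "<Subject> is divided into <number> <noun>".
PATTERN = re.compile(
    r"(?P<subj>[A-Z][\w ]+?) is divided into (?P<count>\d+) (?P<obj>\w+)")

def extract_counts(text):
    """Return (subject, count, object-noun) triples found in text."""
    return [(m.group("subj"), int(m.group("count")), m.group("obj"))
            for m in PATTERN.finditer(text)]

print(extract_counts("California is divided into 58 counties."))
```

Even this toy version shows why the task is useful for KB curation: the extracted count (58) can be compared against the number of facts a knowledge base actually stores for the relation, flagging incompleteness.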
Ensuring Readability and Data-fidelity using Head-modifier Templates in Deep Type Description Generation
A type description is a succinct noun compound which helps human and machines
to quickly grasp the informative and distinctive information of an entity.
Entities in most knowledge graphs (KGs) still lack such descriptions, thus
calling for automatic methods to supplement such information. However, existing
generative methods either overlook the grammatical structure or make factual
mistakes in generated texts. To solve these problems, we propose a
head-modifier template-based method to ensure the readability and data fidelity
of generated type descriptions. We also propose a new dataset and two automatic
metrics for this task. Experiments show that our method improves substantially
compared with baselines and achieves state-of-the-art performance on both
datasets. Comment: ACL 201
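The head-modifier idea itself is simple to sketch: a type description is a noun compound whose head noun carries the type and whose modifiers add distinctive detail, so enforcing that structure at generation time guards both readability and data fidelity. The template shape and example values below are hypothetical, not the paper's model.

```python
# Minimal sketch of a head-modifier template for type descriptions:
# [pre-modifiers] head [post-modifiers]. The head noun fixes the type;
# modifiers supply the distinctive information.
def fill_template(head, pre_modifiers=(), post_modifiers=()):
    parts = list(pre_modifiers) + [head]
    desc = " ".join(parts)
    if post_modifiers:
        desc += " " + " ".join(post_modifiers)
    return desc

print(fill_template("university",
                    pre_modifiers=("private", "research"),
                    post_modifiers=("in Chicago",)))
```

In the paper the slot contents are predicted by a neural model; the template merely constrains its output to well-formed noun compounds.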
Answering Complex Questions by Joining Multi-Document Evidence with Quasi Knowledge Graphs
Direct answering of questions that involve multiple entities and relations is a challenge for text-based QA. This problem is most pronounced when answers can be found only by joining evidence from multiple documents. Curated knowledge graphs (KGs) may yield good answers, but are limited by their inherent incompleteness and potential staleness. This paper presents QUEST, a method that can answer complex questions directly from textual sources on-the-fly, by computing similarity joins over partial results from different documents. Our method is completely unsupervised, avoiding training-data bottlenecks and being able to cope with rapidly evolving ad hoc topics and formulation style in user questions. QUEST builds a noisy quasi KG with node and edge weights, consisting of dynamically retrieved entity names and relational phrases. It augments this graph with types and semantic alignments, and computes the best answers by an algorithm for Group Steiner Trees. We evaluate QUEST on benchmarks of complex questions, and show that it substantially outperforms state-of-the-art baselines.
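The Group Steiner Tree objective behind QUEST can be approximated with a simple distance-based relaxation: rank each candidate answer node by the summed shortest-path cost of reaching at least one terminal node per question-keyword group (lower is better). This is only a sketch of the objective on a toy quasi KG with made-up nodes; QUEST's actual algorithm computes Group Steiner Trees, not per-candidate shortest paths.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances in a weighted undirected graph given as
    {node: [(neighbor, weight), ...]}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def score_answers(graph, groups, candidates):
    """Rank candidates by summed cost of reaching one terminal per group."""
    scores = {}
    for cand in candidates:
        dist = dijkstra(graph, cand)
        scores[cand] = sum(
            min(dist.get(t, float("inf")) for t in grp) for grp in groups)
    return sorted(scores.items(), key=lambda x: x[1])

# Toy quasi KG; edge weights stand in for extraction confidence (lower = better).
graph = {
    "Einstein": [("Nobel Prize", 1.0), ("Ulm", 1.0)],
    "Nobel Prize": [("Einstein", 1.0), ("Curie", 1.0)],
    "Ulm": [("Einstein", 1.0)],
    "Curie": [("Nobel Prize", 1.0)],
}
groups = [{"Nobel Prize"}, {"Ulm"}]  # one terminal group per question keyword
print(score_answers(graph, groups, ["Einstein", "Curie"]))
```

The candidate that connects cheaply to every keyword group wins, mirroring the intuition that a good answer node sits at the center of a low-cost tree spanning the question's evidence.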