Context-aware Path Ranking for Knowledge Base Completion
Knowledge base (KB) completion aims to infer missing facts from existing ones
in a KB. Among various approaches, path ranking (PR) algorithms have received
increasing attention in recent years. PR algorithms enumerate paths between
entity pairs in a KB and use those paths as features to train a model for
missing fact prediction. Owing to their good predictive performance and high
model interpretability, several such methods have been proposed. However, most
existing methods suffer from scalability problems (high RAM consumption) and
feature explosion (training on an exponentially large number of features). This paper
proposes a Context-aware Path Ranking (C-PR) algorithm to solve these problems
by introducing a selective path exploration strategy. C-PR learns global
semantics of entities in the KB using word embeddings and leverages this
knowledge of entity semantics to enumerate contextually relevant paths via
bidirectional random walks. Experimental results on three large KBs show that
the path features discovered by C-PR, though fewer in number, not only improve
predictive performance but are also more interpretable than those produced by
existing baselines.
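To make the selective exploration idea concrete, the following Python sketch enumerates relation paths with embedding-guided bidirectional random walks. It is a minimal sketch under stated assumptions, not the paper's implementation: the toy triples, the random entity vectors, the mean-of-pair context vector, and the cosine relevance threshold are all illustrative stand-ins.

    import random
    from collections import defaultdict

    import numpy as np

    def cosine(u, v):
        return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9))

    def invert(rel):
        # "_inv_r" marks traversing relation r against its direction.
        return rel[5:] if rel.startswith("_inv_") else "_inv_" + rel

    class ContextAwareWalker:
        def __init__(self, triples, entity_vecs, relevance_threshold=0.0):
            # Adjacency over both directions so walks can traverse inverse edges.
            self.adj = defaultdict(list)
            for h, r, t in triples:
                self.adj[h].append((r, t))
                self.adj[t].append((invert(r), h))
            self.vecs = entity_vecs
            self.tau = relevance_threshold

        def walk(self, start, context_vec, max_len):
            """One random walk that only expands contextually relevant neighbors."""
            steps, node = [], start
            for _ in range(max_len):
                options = [(r, t) for r, t in self.adj[node]
                           if cosine(self.vecs[t], context_vec) >= self.tau]
                if not options:
                    break
                r, node = random.choice(options)
                steps.append((r, node))
            return steps

        def enumerate_paths(self, source, target, max_len=4, n_walks=500):
            """Bidirectional walks that meet at a common entity yield a path."""
            ctx = (self.vecs[source] + self.vecs[target]) / 2.0  # query context
            paths = set()
            for _ in range(n_walks):
                fwd = self.walk(source, ctx, max_len // 2)
                bwd = self.walk(target, ctx, max_len - max_len // 2)
                fwd_nodes = [source] + [n for _, n in fwd]
                for i, (_, n) in enumerate(bwd):
                    if n in fwd_nodes:
                        j = fwd_nodes.index(n)  # stitch at the meeting entity
                        rels = [r for r, _ in fwd[:j]] + \
                               [invert(r) for r, _ in reversed(bwd[:i + 1])]
                        paths.add(tuple(rels))
                        break
            return paths

    # Toy usage: with pruning disabled (threshold -1), the bidirectional walks
    # recover the two-hop relation path born_in -> city_of.
    triples = [("obama", "born_in", "honolulu"), ("honolulu", "city_of", "usa")]
    rng = np.random.default_rng(0)
    vecs = {e: rng.standard_normal(8) for e in ("obama", "honolulu", "usa")}
    walker = ContextAwareWalker(triples, vecs, relevance_threshold=-1.0)
    print(walker.enumerate_paths("obama", "usa"))  # {('born_in', 'city_of')}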
Knowledge and Reasoning for Image Understanding
Image Understanding is a long-established discipline in computer vision, encompassing a body of advanced image processing techniques used to locate (“where”), characterize, and recognize (“what”) objects, regions, and their attributes in an image. However, the notion of “understanding” (and the goal of artificially intelligent machines) goes beyond factual recall of the recognized components and includes reasoning and thinking beyond what can be seen (or perceived). Understanding is often evaluated by asking questions of increasing difficulty. Thus, the expected functionalities of an intelligent Image Understanding system can be expressed in terms of the functionalities required to answer questions about an image. Answering questions about images requires primarily three components: image understanding, question (natural language) understanding, and reasoning based on knowledge. Any question that asks beyond what can be directly seen requires modeling of commonsense (or background/ontological/factual) knowledge and reasoning.
Knowledge and reasoning have seen scarce use in image understanding applications. In this thesis, we demonstrate the utility of incorporating background knowledge and using explicit reasoning in image understanding applications. We first present a comprehensive survey of previous work that utilized background knowledge and reasoning in understanding images; this survey outlines the limited use of commonsense knowledge in high-level applications. We then present a set of vision- and reasoning-based methods for several applications and show that these approaches benefit, in terms of accuracy and interpretability, from the explicit use of knowledge and reasoning. We propose novel knowledge representations of images, knowledge acquisition methods, and a new implementation of an efficient probabilistic logical reasoning engine that can utilize publicly available commonsense knowledge to solve applications such as visual question answering and image puzzles. Additionally, we identify the need for new datasets that explicitly require external commonsense knowledge to solve. We propose the new task of Image Riddles, which requires a combination of vision and reasoning based on ontological knowledge, and we collect a sufficiently large dataset to serve as an ideal testbed for vision and reasoning research. Lastly, we propose end-to-end deep architectures that combine vision, knowledge, and reasoning modules and achieve large performance boosts over state-of-the-art methods.
Doctoral dissertation, Computer Science.
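As a rough illustration of the three components named above, here is a toy Python sketch that combines detector-style scene facts with background commonsense through a single hand-written rule; the triple format, the facts, and the rule are assumptions for illustration, not the thesis's representations or its probabilistic reasoning engine.

    # 1) Image understanding: facts a detector might extract from one image.
    scene_facts = {("person", "holds", "bottle"), ("bottle", "on", "table")}

    # 2) Background knowledge: commonsense facts no single image can supply.
    background = {("bottle", "contains", "liquid")}

    # 3) Reasoning: if X holds Y and Y contains Z, infer that X can access Z.
    def apply_rule(facts):
        derived = set(facts)
        for x, r1, y in facts:
            for y2, r2, z in facts:
                if r1 == "holds" and r2 == "contains" and y == y2:
                    derived.add((x, "can_access", z))
        return derived

    facts = apply_rule(scene_facts | background)
    # "Beyond what can be seen": no image region shows liquid, yet we infer it.
    print([f for f in facts if f[1] == "can_access"])
    # -> [('person', 'can_access', 'liquid')]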
Visually Grounded Commonsense Knowledge Acquisition
Large-scale commonsense knowledge bases empower a broad range of AI
applications, where commonsense knowledge extraction (CKE) is a fundamental
and challenging problem. CKE from text is known to suffer from the inherent
sparsity and reporting bias of commonsense in text. Visual
perception, on the other hand, contains rich commonsense knowledge about
real-world entities, e.g., (person, can_hold, bottle), which can serve as
promising sources for acquiring grounded commonsense knowledge. In this work,
we present CLEVER, which formulates CKE as a distantly supervised
multi-instance learning problem, where models learn to summarize commonsense
relations from a bag of images about an entity pair without any human
annotation on image instances. To address the problem, CLEVER leverages
vision-language pre-training models for deep understanding of each image in the
bag, and selects informative instances from the bag to summarize commonsense
entity relations via a novel contrastive attention mechanism. Comprehensive
experimental results in held-out and human evaluation show that CLEVER can
extract commonsense knowledge of promising quality, outperforming pre-trained
language model-based methods by 3.9 AUC and 6.4 mAUC points. The predicted
commonsense scores show strong correlation with human judgment with a 0.78
Spearman coefficient. Moreover, the extracted commonsense can also be grounded
into images with reasonable interpretability. The data and code can be
obtained at https://github.com/thunlp/CLEVER.
Comment: Accepted by AAAI 2023.
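The bag-level aggregation can be sketched in a few lines of numpy: attend over instance features for an entity pair, pool them, and score a candidate relation. The shapes, the softmax attention form, and the linear scorer below are assumptions chosen for brevity; CLEVER's actual model is built on vision-language pre-training with a contrastive attention mechanism.

    import numpy as np

    def softmax(x):
        e = np.exp(x - x.max())
        return e / e.sum()

    def score_relation(bag, rel_query, rel_weights):
        """bag: (n_images, d) features for images of one entity pair.
        rel_query: (d,) attention query for the candidate relation.
        rel_weights: (d,) linear scoring weights for that relation."""
        attn = softmax(bag @ rel_query)   # informative instances get high weight
        pooled = attn @ bag               # (d,) attention-weighted bag summary
        return float(pooled @ rel_weights), attn

    rng = np.random.default_rng(0)
    bag = rng.standard_normal((5, 16))    # e.g. 5 images of (person, bottle)
    score, attn = score_relation(bag, rng.standard_normal(16),
                                 rng.standard_normal(16))
    print(round(score, 3), attn.round(2))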
Knowledge Base Population using Semantic Label Propagation
A crucial aspect of a knowledge base population system that extracts new
facts from text corpora is the generation of training data for its relation
extractors. In this paper, we present a method that maximizes the effectiveness
of newly trained relation extractors at a minimal annotation cost. Manual
labeling can be significantly reduced by distant supervision, a method
to construct training data automatically by aligning a large text corpus with
an existing knowledge base of known facts. For example, all sentences
mentioning both 'Barack Obama' and 'US' may serve as positive training
instances for the relation born_in(subject,object). However, distant
supervision typically results in a highly noisy training set: many training
sentences do not really express the intended relation. We propose to combine
distant supervision with minimal manual supervision in a technique called
feature labeling, to eliminate noise from the large and noisy initial training
set, resulting in a significant increase in precision. We further improve on
this approach by introducing the Semantic Label Propagation method, which uses
the similarity between low-dimensional representations of candidate training
instances to extend the training set, increasing recall while
maintaining high precision. Our proposed strategy for generating training data
is studied and evaluated on an established test collection designed for
knowledge base population tasks. The experimental results show that the
Semantic Label Propagation strategy leads to substantial performance gains when
compared to existing approaches, while requiring an almost negligible manual
annotation effort.
Comment: Submitted to Knowledge-Based Systems, special issue on Knowledge Bases for Natural Language Processing.
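The extension step of Semantic Label Propagation can be sketched as nearest-seed lookup over low-dimensional representations, with a similarity threshold guarding precision; the random stand-in embeddings and the threshold value in this Python sketch are assumptions, not the paper's configuration.

    import numpy as np

    def cosine_matrix(a, b):
        a = a / np.linalg.norm(a, axis=1, keepdims=True)
        b = b / np.linalg.norm(b, axis=1, keepdims=True)
        return a @ b.T

    def propagate(seed_vecs, seed_labels, cand_vecs, threshold=0.8):
        """Return (candidate_index, label) pairs for accepted candidates."""
        sims = cosine_matrix(cand_vecs, seed_vecs)  # (n_cand, n_seed)
        nearest = sims.argmax(axis=1)               # closest labeled seed
        accepted = sims.max(axis=1) >= threshold    # precision guard
        return [(int(i), seed_labels[nearest[i]])
                for i in np.flatnonzero(accepted)]

    rng = np.random.default_rng(0)
    seeds = rng.standard_normal((4, 32))            # manually vetted instances
    labels = ["born_in", "born_in", "none", "none"]
    cands = seeds[:2] + 0.05 * rng.standard_normal((2, 32))  # near-duplicates
    print(propagate(seeds, labels, cands))  # [(0, 'born_in'), (1, 'born_in')]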
Combining Representation Learning with Logic for Language Processing
The current state-of-the-art in many natural language processing and
automated knowledge base completion tasks is held by representation learning
methods which learn distributed vector representations of symbols via
gradient-based optimization. They require little or no hand-crafted features,
thus avoiding the need for most preprocessing steps and task-specific
assumptions. However, in many cases representation learning requires a large
amount of annotated training data to generalize well to unseen data. Such
labeled training data is provided by human annotators who often use formal
logic as the language for specifying annotations. This thesis investigates
different combinations of representation learning methods with logic for
reducing the need for annotated training data, and for improving
generalization.
Comment: PhD thesis, University College London, submitted and accepted in 2017.
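One recurring recipe in this line of work can be illustrated concretely: encode an implication rule, say born_in(x, y) implies lives_in(x, y), as a differentiable hinge penalty over a bilinear scorer and minimize it jointly with the ordinary fact-prediction loss. The DistMult-style scorer, the single rule, and the dimensions in this sketch are illustrative assumptions, not the thesis's exact formulation.

    import numpy as np

    rng = np.random.default_rng(0)
    n_ent, dim = 20, 8
    E = rng.normal(scale=0.1, size=(n_ent, dim))   # entity embeddings
    r_born = rng.normal(scale=0.1, size=dim)       # premise relation vector
    r_lives = rng.normal(scale=0.1, size=dim)      # conclusion relation vector

    def score(s, rel, o):
        """Bilinear (DistMult-style) plausibility of the triple (s, rel, o)."""
        return float(E[s] @ (rel * E[o]))

    def rule_loss(pairs):
        """Hinge penalty: lives_in(x, y) should score >= born_in(x, y)."""
        return sum(max(0.0, score(x, r_born, y) - score(x, r_lives, y))
                   for x, y in pairs)

    # Added to the ordinary training objective and minimized by gradient descent.
    print(rule_loss([(0, 1), (2, 3), (4, 5)]))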