Generative Adversarial Zero-Shot Relational Learning for Knowledge Graphs
Large-scale knowledge graphs (KGs) have become increasingly important in
current information systems. To expand the coverage of KGs, previous studies on
knowledge graph completion need to collect adequate training instances for
newly-added relations. In this paper, we consider a novel formulation,
zero-shot learning, to avoid this cumbersome curation. For newly-added
relations, we attempt to learn their semantic features from their text
descriptions and hence recognize the facts of unseen relations with no examples
being seen. For this purpose, we leverage Generative Adversarial Networks
(GANs) to establish the connection between text and knowledge graph domain: The
generator learns to generate plausible relation embeddings from noisy text
descriptions alone. Under this setting, zero-shot learning is naturally
converted to a traditional supervised classification task. Empirically, our
method is model-agnostic: it can potentially be applied to any KG embedding
model, and it consistently yields performance improvements on the NELL and Wiki
datasets.
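The core idea of the abstract above, converting zero-shot relation recognition into a supervised-style decision rule over generated embeddings, can be sketched as follows. This is a minimal illustration, not the paper's method: the "generator" is a single random linear map standing in for the adversarially trained GAN generator, and all text-description embeddings are random placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

TEXT_DIM, NOISE_DIM, REL_DIM = 16, 4, 8

# Hypothetical generator weights: a single linear layer standing in for the
# GAN generator that, in the paper, is trained adversarially.
W = rng.normal(size=(TEXT_DIM + NOISE_DIM, REL_DIM)) * 0.1

def generate_relation_embedding(text_emb, rng):
    """Map a (noisy) text-description embedding to a relation embedding."""
    noise = rng.normal(size=NOISE_DIM)
    return np.concatenate([text_emb, noise]) @ W

# Two unseen relations, each known only through a text-description embedding
# (random placeholders here, not real NELL/Wiki relations).
text_embs = {r: rng.normal(size=TEXT_DIM) for r in ("founded_by", "located_in")}
rel_embs = {r: generate_relation_embedding(t, rng) for r, t in text_embs.items()}

def classify(query_emb):
    # Zero-shot recognition becomes an ordinary supervised-style decision:
    # pick the unseen relation whose generated embedding is most similar.
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(rel_embs, key=lambda r: cos(query_emb, rel_embs[r]))

# A query embedding close to "founded_by" is classified as that relation.
query = rel_embs["founded_by"] + 0.01 * rng.normal(size=REL_DIM)
predicted = classify(query)
```

Once the generator can produce a reasonable embedding for any relation described in text, no labeled triples for that relation are needed at classification time, which is exactly what makes the setting zero-shot.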
Generalized Relation Learning with Semantic Correlation Awareness for Link Prediction
Developing link prediction models to automatically complete knowledge graphs
has recently been the focus of significant research interest. The current
methods for the link prediction task have two natural problems: 1) the relation
distributions in KGs are usually unbalanced, and 2) there are many unseen
relations that occur in practical situations. These two problems limit the
training effectiveness and practical applications of the existing link
prediction models. We advocate a holistic understanding of KGs and we propose
in this work a unified Generalized Relation Learning framework (GRL) to address
the above two problems, which can be plugged into existing link prediction
models. GRL conducts generalized relation learning, which is aware of
semantic correlations between relations that serve as a bridge to connect
semantically similar relations. After training with GRL, the closeness of
semantically similar relations in vector space and the discrimination of
dissimilar relations are improved. We perform comprehensive experiments on six
benchmarks to demonstrate the superior capability of GRL in the link prediction
task. In particular, GRL is found to enhance existing link prediction
models, making them insensitive to unbalanced relation distributions and
capable of learning unseen relations.
Comment: Preprint of accepted AAAI 2021 paper.
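The "semantic correlations as a bridge" idea can be illustrated with a small hypothetical sketch: a long-tail or unseen relation is represented as a similarity-weighted mixture of seen relation embeddings. The relation names, similarity scores, and embeddings below are invented placeholders, not GRL's actual components or training procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM = 8

# Hypothetical seen-relation embeddings (in GRL these would come from the
# underlying link prediction model; here they are random stand-ins).
seen = {r: rng.normal(size=DIM) for r in ("capital_of", "city_in", "born_in")}

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def embed_unseen(sim_to_seen):
    # Semantic correlations act as the bridge: an unseen relation is
    # expressed as a similarity-weighted mix of semantically close seen ones.
    names = list(seen)
    weights = softmax(np.array([sim_to_seen[n] for n in names]))
    return sum(w * seen[n] for w, n in zip(weights, names))

# "located_in" is assumed semantically close to "city_in" and "capital_of",
# and unrelated to "born_in" (scores are illustrative, not learned).
emb = embed_unseen({"capital_of": 2.0, "city_in": 2.5, "born_in": -1.0})
```

Because the mixture weights concentrate on semantically similar seen relations, relations with few or no training triples still receive embeddings close to their neighbors in vector space, which is the effect the abstract describes.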
Towards Unstructured Knowledge Integration in Natural Language Processing
In recent decades, Artificial Intelligence has witnessed multiple breakthroughs in deep learning. In particular, purely data-driven approaches have enabled a wide variety of successful applications thanks to the large availability of data. Nonetheless, the integration of prior knowledge is still required to compensate for specific issues such as lack of generalization from limited data, fairness, robustness, and biases.
In this thesis, we analyze the methodology of integrating knowledge into deep learning models in the field of Natural Language Processing (NLP). We start by remarking on the importance of knowledge integration. We highlight the possible shortcomings of these approaches and investigate the implications of integrating unstructured textual knowledge.
We introduce Unstructured Knowledge Integration (UKI) as the process of integrating unstructured knowledge into machine learning models. We discuss UKI in the field of NLP, where knowledge is represented in a natural language format. We identify UKI as a complex process composed of multiple sub-processes, different knowledge types, and knowledge integration properties that must be guaranteed. We remark on the challenges of integrating unstructured textual knowledge and draw connections with well-known research areas in NLP.
We provide a unified vision of structured knowledge extraction (KE) and UKI by identifying KE as a sub-process of UKI.
We investigate some challenging scenarios where structured knowledge is not a feasible prior assumption and formulate each task from the point of view of UKI. We adopt simple yet effective neural architectures and discuss the challenges of such an approach.
Finally, we identify KE as a form of symbolic representation. From this perspective, we remark on the need to define sophisticated UKI processes that verify the validity of knowledge integration. To this end, we foresee frameworks capable of combining symbolic and sub-symbolic representations for learning as a solution.