10 research outputs found

    Towards A Question Answering System over Temporal Knowledge Graph Embedding

    Question answering (QA) over knowledge graphs is a vital topic in information retrieval. Questions with temporal intent are a special case that has received only limited attention so far. In this paper, we study the use of temporal knowledge graph embeddings (TKGEs) for temporal QA. First, we propose a microservice-based architecture for building temporal QA systems on top of pre-trained TKGE models. Second, we present a Bayesian model averaging (BMA) ensemble method, in which the results of link prediction on several separate TKGE models are combined to find better answers. Within a system built on the microservice-based architecture, experiments on two benchmark datasets show that BMA provides better results than the individual models.
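    Below is a minimal sketch of how such a BMA ensemble might combine candidate-answer scores from several pre-trained TKGE link predictors. The weighting by validation log-likelihoods, the function names, and the toy numbers are illustrative assumptions, not the paper's implementation.

        import numpy as np

        def bma_combine(candidate_scores, val_log_likelihoods):
            """Combine per-model scores for candidate answer entities via
            Bayesian model averaging (BMA).

            candidate_scores: list of 1-D arrays, one per TKGE model, each giving a
                plausibility score for every candidate entity (same ordering).
            val_log_likelihoods: per-model validation log-likelihood (or a proxy)
                used to approximate posterior model weights -- an assumption here.
            """
            # Posterior model weights p(M_k | D) proportional to exp(log-likelihood).
            w = np.exp(val_log_likelihoods - np.max(val_log_likelihoods))
            w = w / w.sum()

            # Turn each model's raw scores into a distribution over candidates,
            # then average the distributions with the model weights.
            probs = []
            for s in candidate_scores:
                e = np.exp(s - s.max())
                probs.append(e / e.sum())
            return sum(wk * pk for wk, pk in zip(w, probs))

        # Illustrative use: three TKGE models scoring five candidate entities.
        scores = [np.array([2.0, 1.1, 0.3, -0.5, 0.9]),
                  np.array([1.5, 1.6, 0.2, 0.0, 0.4]),
                  np.array([0.8, 2.2, 0.1, -0.2, 0.5])]
        val_ll = np.array([-120.0, -118.5, -119.2])  # assumed validation log-likelihoods
        posterior = bma_combine(scores, val_ll)
        print("best answer index:", int(posterior.argmax()))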

    A hybrid method to select morphometric features using tensor completion and F-score rank for gifted children identification

    Gifted children are able to learn in a more advanced way than their peers, probably due to neurophysiological differences in the communication efficiency of neural pathways. Topological features contribute to understanding the correlation between brain structure and intelligence. Despite decades of neuroscience research using MRI, methods based on brain-region connectivity patterns are limited by MRI artifacts, which motivates revisiting MRI morphometric features and using them to identify gifted children directly instead of relying on brain connectivity. However, the small, high-dimensional morphometric feature dataset with outliers makes finding good classification models challenging. To this end, a hybrid method is proposed that combines tensor completion and feature selection to handle outliers and then select discriminative features. The proposed method achieves a classification accuracy of 93.1%, higher than existing algorithms, making it suitable for small MRI datasets with outliers in supervised classification scenarios.
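    As a rough illustration of the feature-selection half of this pipeline, the sketch below ranks morphometric features by the classical F-score, assuming a tensor-completion step has already repaired outliers beforehand. The data, label encoding, and top-k cutoff are made up for the example and are not taken from the paper.

        import numpy as np

        def f_score_rank(X, y):
            """Rank features by the classical F-score: ratio of between-class
            separation to within-class variance, computed per feature.
            X: (n_samples, n_features) morphometric features, assumed already
               cleaned by a prior tensor-completion step.
            y: binary labels (1 = gifted, 0 = control) -- illustrative encoding.
            """
            pos, neg = X[y == 1], X[y == 0]
            mean_all, mean_pos, mean_neg = X.mean(0), pos.mean(0), neg.mean(0)
            between = (mean_pos - mean_all) ** 2 + (mean_neg - mean_all) ** 2
            within = pos.var(0, ddof=1) + neg.var(0, ddof=1)
            scores = between / (within + 1e-12)       # avoid division by zero
            return np.argsort(scores)[::-1]           # feature indices, best first

        # Keep the top-k discriminative features for a downstream classifier.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(30, 200))                # small, high-dimensional toy data
        y = rng.integers(0, 2, size=30)
        top_features = f_score_rank(X, y)[:20]
        X_selected = X[:, top_features]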

    TKGQA dataset: using question answering to guide and validate the evolution of temporal knowledge graph

    Temporal knowledge graphs can be used to represent the current state of the world, and as daily events happen, updating the temporal knowledge graph so that it stays consistent with the state of the world becomes very important. However, there is currently no reliable method to accurately validate the update and evolution of knowledge graphs. A recent development in text summarisation uses question answering to both guide and fact-check summarisation quality. The same process can be applied to the temporal knowledge graph update process. To the best of our knowledge, there is currently no dataset that connects temporal knowledge graphs with documents and question–answer pairs. In this paper, we propose the TKGQA dataset, consisting of over 5000 financial news documents related to M&A. Each document has extracted facts, question–answer pairs, and before-and-after temporal knowledge graphs that capture the state of temporal knowledge and any changes caused by the facts extracted from the document. As we parse each document, we use question answering to check and guide the update process of the temporal knowledge graph.
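    The sketch below shows, under heavy simplifying assumptions, what QA-guided validation of a temporal knowledge graph update might look like: facts are plain (head, relation, tail, time) tuples, questions are answered by lookup, and the company names and facts are invented for the example. The actual dataset and QA models are far richer than this.

        def apply_facts(kg, facts):
            """Return an updated copy of the temporal KG with the extracted facts added."""
            return set(kg) | set(facts)

        def answer(kg, head, relation, time):
            """Toy QA by lookup: entities asserted for (head, relation) at the given time."""
            return {t for (h, r, t, ts) in kg if h == head and r == relation and ts == time}

        # 'Before' graph, facts extracted from an M&A news document, and the 'after' graph.
        kg_before = {("CompanyA", "owns", "CompanyB", "2020")}
        extracted = [("CompanyA", "acquired", "CompanyC", "2021"),
                     ("CompanyA", "owns", "CompanyC", "2021")]
        kg_after = apply_facts(kg_before, extracted)

        # A question-answer pair fact-checks the update: the question is paraphrased
        # here as a structured query, paired with its expected answer set.
        query, expected = ("CompanyA", "owns", "2021"), {"CompanyC"}
        assert answer(kg_after, *query) == expected, "update failed QA validation"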

    ChronoR: Rotation Based Temporal Knowledge Graph Embedding

    Despite the importance and abundance of temporal knowledge graphs, most current research has focused on reasoning over static graphs. In this paper, we study the challenging problem of inference over temporal knowledge graphs, in particular the task of temporal link prediction. In general, this is a difficult task due to data non-stationarity, data heterogeneity, and complex temporal dependencies. We propose Chronological Rotation embedding (ChronoR), a novel model for learning representations of entities, relations, and time. Learning dense representations is frequently used as an efficient and versatile way to perform reasoning on knowledge graphs. The proposed model learns a k-dimensional rotation transformation parametrized by relation and time, such that after each fact's head entity is transformed by the rotation, it falls near its corresponding tail entity. By using high-dimensional rotation as its transformation operator, ChronoR captures rich interaction between the temporal and multi-relational characteristics of a temporal knowledge graph. Experimentally, we show that ChronoR outperforms many state-of-the-art methods on benchmark datasets for temporal knowledge graph link prediction.
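    A simplified sketch of the idea: the head entity is rotated by an angle that depends on both relation and time, and the fact is scored by how close the rotated head lands to the tail. The block-of-2-D-rotations parametrization and the additive combination of relation and time angles below are illustrative stand-ins, not ChronoR's exact k-dimensional formulation.

        import numpy as np

        def rotate(vec, angles):
            """Apply a block-diagonal rotation: consecutive pairs of dimensions of
            `vec` are rotated by the corresponding angle (vec has 2 * len(angles) entries)."""
            v = vec.reshape(-1, 2)
            c, s = np.cos(angles), np.sin(angles)
            out = np.stack([c * v[:, 0] - s * v[:, 1],
                            s * v[:, 0] + c * v[:, 1]], axis=1)
            return out.reshape(-1)

        def score(head, tail, rel_angles, time_angles):
            """Higher is better: negative distance between the rotated head and the tail."""
            return -np.linalg.norm(rotate(head, rel_angles + time_angles) - tail)

        # Toy embeddings: 4-dimensional entities, two rotation angles per relation/time.
        rng = np.random.default_rng(1)
        head, tail = rng.normal(size=4), rng.normal(size=4)
        rel, time = rng.normal(size=2), rng.normal(size=2)
        print(score(head, tail, rel, time))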

    Learning from History: Modeling Temporal Knowledge Graphs with Sequential Copy-Generation Networks

    Large knowledge graphs often grow to store temporal facts that model the dynamic relations or interactions of entities along the timeline. Since such temporal knowledge graphs often suffer from incompleteness, it is important to develop time-aware representation learning models that help to infer the missing temporal facts. While temporal facts are typically evolving, many facts show a repeated pattern along the timeline, such as economic crises and diplomatic activities. This observation indicates that a model could potentially learn much from the known facts that appeared in history. To this end, we propose a new representation learning model for temporal knowledge graphs, namely CyGNet, based on a novel time-aware copy-generation mechanism. CyGNet is not only able to predict future facts from the whole entity vocabulary, but is also capable of identifying facts with repetition and accordingly predicting such future facts with reference to known facts from the past. We evaluate the proposed method on the knowledge graph completion task using five benchmark datasets. Extensive experiments demonstrate the effectiveness of CyGNet for predicting future facts with repetition as well as de novo fact prediction. (Accepted at AAAI 2021.)
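    The sketch below illustrates a copy-generation style prediction in the spirit of CyGNet: a "generation" distribution over the full entity vocabulary is blended with a "copy" distribution restricted to entities seen in past occurrences of the same (subject, relation) pair. The fixed mixing weight, the reuse of the same logits for the copy branch, and the toy vocabulary are assumptions; CyGNet learns these components.

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def copy_generation_predict(gen_logits, history_entities, alpha=0.5):
            """Blend generation-mode probabilities over all entities with copy-mode
            probabilities restricted to entities that repeated in history.
            alpha is an assumed fixed mixing weight for illustration."""
            gen_probs = softmax(gen_logits)
            copy_logits = np.full_like(gen_logits, -1e9)   # mask out unseen entities
            idx = list(history_entities)
            copy_logits[idx] = gen_logits[idx]             # renormalize over history only
            copy_probs = softmax(copy_logits)
            return alpha * copy_probs + (1 - alpha) * gen_probs

        # Five candidate entities; entities 1 and 3 appeared with this (subject, relation)
        # pair in earlier snapshots, so the copy mode boosts them.
        logits = np.array([0.2, 1.0, -0.3, 0.8, 0.1])
        probs = copy_generation_predict(logits, history_entities={1, 3})
        print("predicted entity:", int(probs.argmax()))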

    Time-varying graph representation learning via higher-order skip-gram with negative sampling

    Representation learning models for graphs are a successful family of techniques that project nodes into feature spaces that can be exploited by other machine learning algorithms. Since many real-world networks are inherently dynamic, with interactions among nodes changing over time, these techniques can be defined both for static and for time-varying graphs. Here, we show how the skip-gram embedding approach can be generalized to perform implicit tensor factorization on different tensor representations of time-varying graphs. We show that higher-order skip-gram with negative sampling (HOSGNS) is able to disentangle the role of nodes and time, with a small fraction of the number of parameters needed by other approaches. We empirically evaluate our approach using time-resolved face-to-face proximity data, showing that the learned representations outperform state-of-the-art methods when used to solve downstream tasks such as network reconstruction. Good performance on predicting the outcome of dynamical processes such as disease spreading shows the potential of this method to estimate contagion risk, providing early risk awareness based on contact tracing data.
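    As a very rough sketch of the idea behind higher-order skip-gram with negative sampling, the snippet below scores a (node, context, time-slice) co-occurrence with a trilinear, CP-style product of three embedding vectors and takes one logistic-loss gradient step. The embedding size, sampling scheme, and learning rate are arbitrary choices for illustration, not the paper's training setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n_nodes, n_times, dim = 50, 10, 16
        U = rng.normal(scale=0.1, size=(n_nodes, dim))   # node embeddings
        V = rng.normal(scale=0.1, size=(n_nodes, dim))   # context embeddings
        W = rng.normal(scale=0.1, size=(n_times, dim))   # time-slice embeddings

        def sgd_step(i, j, k, label, lr=0.05):
            """One logistic-regression step on the trilinear score <U_i, V_j, W_k>;
            label is 1 for an observed co-occurrence, 0 for a negative sample."""
            u, v, w = U[i].copy(), V[j].copy(), W[k].copy()
            score = np.sum(u * v * w)
            g = 1.0 / (1.0 + np.exp(-score)) - label     # d(loss)/d(score)
            U[i] -= lr * g * v * w
            V[j] -= lr * g * u * w
            W[k] -= lr * g * u * v

        # One positive (node, context, time) sample plus a few uniform negatives.
        sgd_step(3, 7, 2, label=1)
        for _ in range(5):
            sgd_step(3, int(rng.integers(n_nodes)), 2, label=0)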

    GTRL: An Entity Group-Aware Temporal Knowledge Graph Representation Learning Method

    Temporal Knowledge Graph (TKG) representation learning embeds entities and event types into a continuous low-dimensional vector space by integrating temporal information, which is essential for downstream tasks such as event prediction and question answering. Existing methods stack multiple graph convolution layers to model the influence of distant entities, leading to the over-smoothing problem. To alleviate the problem, recent studies incorporate reinforcement learning to obtain paths that contribute to modeling the influence of distant entities. However, due to the limited number of hops, these studies fail to capture the correlation between entities that are far apart or even unreachable. To this end, we propose GTRL, an entity Group-aware Temporal knowledge graph Representation Learning method. GTRL is the first work that incorporates entity group modeling to capture the correlation between entities while stacking only a finite number of layers. Specifically, an entity group mapper is proposed to generate entity groups from entities in a learnable way. Based on entity groups, an implicit correlation encoder is introduced to capture implicit correlations between any pair of entity groups. In addition, hierarchical GCNs are exploited to accomplish message aggregation and representation updating on the entity group graph and the entity graph. Finally, GRUs are employed to capture the temporal dependency in TKGs. Extensive experiments on three real-world datasets demonstrate that GTRL achieves state-of-the-art performance on the event prediction task, outperforming the best baseline by an average of 13.44%, 9.65%, 12.15%, and 15.12% in MRR, Hits@1, Hits@3, and Hits@10, respectively. (Accepted by TKDE.)
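    A toy sketch of the entity-group idea: below, entities are softly assigned to groups by a learned assignment matrix (a DiffPool-style coarsening used here as a stand-in for GTRL's entity group mapper), giving a group-level feature matrix and a group-to-group graph on which a group GCN or implicit-correlation encoder could operate. The shapes, random weights, and helper names are assumptions for the example.

        import numpy as np

        def softmax_rows(x):
            e = np.exp(x - x.max(axis=1, keepdims=True))
            return e / e.sum(axis=1, keepdims=True)

        def entity_group_pool(X, A, W_assign):
            """Softly assign entities to groups and coarsen the entity graph:
            S maps entities to groups; X_group / A_group are the group-level
            features and adjacency used by the higher level of the hierarchy."""
            S = softmax_rows(X @ W_assign)      # (n_entities, n_groups) assignment
            X_group = S.T @ X                   # group features
            A_group = S.T @ A @ S               # group-to-group connectivity
            return S, X_group, A_group

        rng = np.random.default_rng(0)
        n_entities, dim, n_groups = 20, 8, 4
        X = rng.normal(size=(n_entities, dim))                   # entity embeddings
        A = (rng.random((n_entities, n_entities)) < 0.2) * 1.0   # toy entity graph
        S, X_group, A_group = entity_group_pool(X, A, rng.normal(size=(dim, n_groups)))
        print(X_group.shape, A_group.shape)   # (4, 8) and (4, 4)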

    HyperQuaternionE: A hyperbolic embedding model for qualitative spatial and temporal reasoning

    Qualitative spatial/temporal reasoning (QSR/QTR) plays a key role in research on human cognition, e.g., as it relates to navigation, as well as in work on robotics and artificial intelligence. Although previous work has mainly focused on various spatial and temporal calculi, more recently representation learning techniques such as embeddings have been applied to reasoning and inference tasks such as query answering and knowledge base completion. These subsymbolic and learnable representations are well suited for handling the noise and efficiency problems that have plagued prior work. However, applying embedding techniques to spatial and temporal reasoning has received little attention to date. In this paper, we explore two research questions: (1) How do embedding-based methods perform empirically compared to traditional reasoning methods on QSR/QTR problems? (2) If the embedding-based methods are better, what causes this superiority? In order to answer these questions, we first propose a hyperbolic embedding model, called HyperQuaternionE, to capture varying properties of relations (such as symmetry and anti-symmetry), to learn inverse relations and relation compositions (i.e., composition tables), and to model hierarchical structures over entities induced by transitive relations. We conduct various experiments on two synthetic datasets to demonstrate the advantages of our proposed embedding-based method over existing embedding models as well as traditional reasoners with respect to entity inference and relation inference. Additionally, our qualitative analysis reveals that our method is able to learn conceptual neighborhoods implicitly. We conclude that the success of our method is attributable to its ability to model composition tables and learn conceptual neighbors, which are among the core building blocks of QSR/QTR.
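    As a small, purely illustrative complement, the snippet below uses ordinary Euclidean unit quaternions to show the algebraic properties the abstract highlights: composing two relation rotations via the Hamilton product and modeling the inverse relation with the conjugate. The hyperbolic geometry, learned embeddings, and actual spatial/temporal calculi of HyperQuaternionE are not reproduced here; the relation quaternions are hypothetical stand-ins.

        import numpy as np

        def hamilton(q, r):
            """Hamilton product of quaternions (w, x, y, z); composes two rotations."""
            w1, x1, y1, z1 = q
            w2, x2, y2, z2 = r
            return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                             w1*x2 + x1*w2 + y1*z2 - z1*y2,
                             w1*y2 - x1*z2 + y1*w2 + z1*x2,
                             w1*z2 + x1*y2 - y1*x2 + z1*w2])

        def rot_quat(axis, angle):
            """Unit quaternion for a rotation of `angle` about `axis` (a hypothetical
            stand-in for a learned relation embedding)."""
            axis = np.asarray(axis, float)
            axis /= np.linalg.norm(axis)
            return np.concatenate([[np.cos(angle / 2)], np.sin(angle / 2) * axis])

        r1 = rot_quat([0, 0, 1], np.pi / 6)            # relation A
        r2 = rot_quat([0, 0, 1], np.pi / 3)            # relation B
        composed = hamilton(r1, r2)                    # entry of a 'composition table'
        r1_inverse = r1 * np.array([1, -1, -1, -1])    # conjugate models the inverse relation
        print(np.allclose(hamilton(r1, r1_inverse), [1, 0, 0, 0]))  # True: A then A-inverse is identity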