    Time-Aware Probabilistic Knowledge Graphs

    The emergence of open information extraction as a tool for constructing and expanding knowledge graphs has aided the growth of temporal data, for instance in YAGO, NELL, and Wikidata. While YAGO and Wikidata maintain the valid time of facts, NELL records the time point at which a fact was retrieved from some Web corpus. Collectively, these knowledge graphs (KGs) store facts extracted from Wikipedia and other sources. Due to the imprecise nature of the extraction tools used to build and expand KGs such as NELL, the facts in the KG are weighted (a confidence value represents the correctness of a fact). Additionally, NELL can be considered a transaction-time KG because every fact is associated with an extraction date. On the other hand, YAGO and Wikidata use the valid-time model because they maintain facts together with their validity time (temporal scope). In this paper, we propose a bitemporal model (combining the transaction-time and valid-time models) for maintaining and querying bitemporal probabilistic knowledge graphs. We study coalescing and the scalability of marginal and MAP inference. Moreover, we show that the complexity of reasoning tasks in atemporal probabilistic KGs carries over to the bitemporal setting. Finally, we report the evaluation results of the proposed model.
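
    To make the bitemporal model concrete, the following is a minimal Python sketch of how such a fact could be represented, together with one possible coalescing policy. The class layout and the max-confidence merge rule are illustrative assumptions, not the paper's implementation.

    ```python
    from dataclasses import dataclass
    from datetime import date
    from typing import List

    @dataclass(frozen=True)
    class BitemporalFact:
        """A weighted KG triple carrying both time dimensions.

        valid_from / valid_to : when the fact holds in the real world (valid time)
        extracted_on          : when the extractor recorded it (transaction time)
        confidence            : extractor's belief that the fact is correct
        """
        subject: str
        predicate: str
        obj: str
        valid_from: date
        valid_to: date
        extracted_on: date
        confidence: float

    def coalesce(facts: List[BitemporalFact]) -> List[BitemporalFact]:
        """Merge facts about the same triple whose valid-time intervals
        overlap, keeping the maximum confidence (one possible policy;
        the paper studies coalescing in more depth)."""
        facts = sorted(facts, key=lambda f: (f.subject, f.predicate, f.obj, f.valid_from))
        merged: List[BitemporalFact] = []
        for f in facts:
            if (merged
                    and (merged[-1].subject, merged[-1].predicate, merged[-1].obj)
                        == (f.subject, f.predicate, f.obj)
                    and f.valid_from <= merged[-1].valid_to):
                last = merged.pop()
                merged.append(BitemporalFact(
                    f.subject, f.predicate, f.obj,
                    last.valid_from, max(last.valid_to, f.valid_to),
                    max(last.extracted_on, f.extracted_on),
                    max(last.confidence, f.confidence)))
            else:
                merged.append(f)
        return merged
    ```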

    Towards Log-Linear Logics with Concrete Domains

    We present $\mathcal{MEL}^{++}$ (M denotes Markov logic networks), an extension of the log-linear description logic $\mathcal{EL}^{++}$-LL with concrete domains, nominals, and instances. We use Markov logic networks (MLNs) to find the most probable, classified, and coherent $\mathcal{EL}^{++}$ ontology from an $\mathcal{MEL}^{++}$ knowledge base. In particular, we develop a novel way to deal with concrete domains (also known as datatypes) by extending the MLN cutting plane inference (CPI) algorithm.
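
    For readers unfamiliar with CPI, the sketch below shows the generic cutting-plane loop that the paper extends: in a log-linear logic each uncertain axiom carries a weight, and the most probable coherent ontology maximizes the total weight of the selected axioms subject to logical coherence. `solve_relaxation` and `find_incoherence_constraints` are hypothetical placeholders standing in for an optimizer and a reasoner; the paper's contribution is handling concrete domains inside such a loop.

    ```python
    from typing import Callable, Dict, FrozenSet, List, Set

    Axiom = str                    # e.g. "Student SubClassOf Person"
    Constraint = FrozenSet[Axiom]  # axioms that must not all be selected together

    def cutting_plane_inference(
        weights: Dict[Axiom, float],
        find_incoherence_constraints: Callable[[Set[Axiom]], List[Constraint]],
        solve_relaxation: Callable[[Dict[Axiom, float], List[Constraint]], Set[Axiom]],
    ) -> Set[Axiom]:
        """Skeleton of MAP inference via cutting planes (CPI).

        Repeatedly solves a relaxed weight-maximization problem, asks a
        reasoner for coherence violations in the proposed axiom set, and
        adds the violations as constraints until none remain.
        """
        constraints: List[Constraint] = []
        while True:
            selection = solve_relaxation(weights, constraints)
            violated = find_incoherence_constraints(selection)
            if not violated:
                return selection  # coherent, and optimal w.r.t. the relaxation
            constraints.extend(violated)
    ```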

    Political Text Scaling Meets Computational Semantics

    During the last fifteen years, automatic text scaling has become one of the key tools of the Text as Data community in political science. Prominent text scaling algorithms, however, rely on the assumption that latent positions can be captured just by leveraging information about word frequencies in the documents under study. We challenge this traditional view and present a new, semantically aware text scaling algorithm, SemScale, which combines recent developments in computational linguistics with unsupervised graph-based clustering. We conduct an extensive quantitative analysis over a collection of speeches from the European Parliament in five different languages and from two different legislative terms, and show that a scaling approach relying on semantic document representations is often better at capturing known underlying political dimensions than the established frequency-based (i.e., symbolic) scaling method. We further validate our findings through a series of experiments focused on text preprocessing and feature selection, document representation, scaling of party manifestos, and a supervised extension of our algorithm. To catalyze further research on this new branch of text scaling methods, we release a Python implementation of SemScale with all included data sets and evaluation procedures.
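
    As an illustration of what semantically aware, graph-based scaling can look like, here is a minimal sketch that assumes averaged word-embedding document vectors and spectral ordering on a kNN similarity graph; SemScale's released implementation may differ in both respects.

    ```python
    import numpy as np

    def scale_documents(doc_embeddings: np.ndarray, k: int = 10) -> np.ndarray:
        """Assign each document a position on a latent 1-D scale.

        doc_embeddings : (n_docs, dim) matrix of semantic document vectors,
                         e.g. averaged word embeddings (an assumption here).
        Returns an (n_docs,) array of scale positions.
        """
        # Cosine similarity between all document pairs.
        norms = np.linalg.norm(doc_embeddings, axis=1, keepdims=True)
        unit = doc_embeddings / norms
        sim = unit @ unit.T

        # Sparsify: keep each document's k most similar neighbours (kNN graph).
        adj = np.zeros_like(sim)
        for i in range(len(sim)):
            nn = np.argsort(sim[i])[-(k + 1):]  # includes i itself
            adj[i, nn] = sim[i, nn]
        adj = np.maximum(adj, adj.T)            # symmetrize
        np.fill_diagonal(adj, 0.0)

        # Positions = Fiedler vector of the graph Laplacian (spectral ordering).
        lap = np.diag(adj.sum(axis=1)) - adj
        eigvals, eigvecs = np.linalg.eigh(lap)
        return eigvecs[:, 1]                    # second-smallest eigenvector
    ```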

    Reasoning and Change Management in Modular Ontologies

    The benefits of modular representations are well known from many areas of computer science. In this paper, we concentrate on the benefits of modular ontologies with respect to the local containment of terminological reasoning. We define an architecture for modular ontologies that supports local reasoning by compiling implied subsumption relations. We further address the problem of guaranteeing the integrity of a modular ontology in the presence of local changes. We propose a strategy for analyzing changes and guiding the process of updating compiled information.
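
    The compiled-subsumption idea can be sketched as follows; the class design, the push-based update, and the change check are illustrative assumptions rather than the paper's architecture. The point the sketch makes is that subsumption queries are answered from local plus compiled relations only, and recompilation crosses module boundaries only when something actually changed.

    ```python
    from typing import Callable, Dict, List, Set, Tuple

    Subsumption = Tuple[str, str]  # (subclass, superclass)

    class Module:
        """An ontology module that compiles (caches) implied subsumptions."""

        def __init__(self, name: str, derive: Callable[[], Set[Subsumption]]):
            self.name = name
            self.derive = derive                        # local reasoner for this module
            self.compiled: Set[Subsumption] = derive()  # cached implied subsumptions
            self.dependents: List["Module"] = []
            self.external: Dict[str, Set[Subsumption]] = {}  # views of other modules

        def on_local_change(self) -> None:
            """Recompile after a local edit; notify dependents only if the
            compiled interface actually changed (integrity maintenance)."""
            new = self.derive()
            if new != self.compiled:
                self.compiled = new
                for dep in self.dependents:
                    dep.external[self.name] = new       # push the updated compilation

        def local_subsumes(self, sub: str, sup: str) -> bool:
            """Answer subsumption using only local + compiled external relations."""
            known = self.compiled.union(*self.external.values())
            return (sub, sup) in known
    ```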

    Ontological Engineering for the Cadastral Domain

    A purely logic-based approach to approximate matching of Semantic Web Services

    Most current approaches to the matchmaking of Semantic Web services use hybrid strategies consisting of logic-based and non-logic-based similarity measures (or even no logic-based similarity at all). This is mainly because purely logic-based matchers achieve good precision but very low recall. We present a purely logic-based matcher implementation based on approximate subsumption and extend this approach to take additional information about the taxonomy of the background ontology into account. Our aim is to provide a purely logic-based matchmaker implementation that also achieves reasonable recall without a large impact on precision.
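
    A hedged sketch of taxonomy-aware, logic-based matching in the spirit described: classic degree-of-match levels refined by taxonomy distance. The `subsumes` and `depth` oracles and the numeric scores are assumptions for illustration, not the authors' matcher.

    ```python
    from typing import Callable

    def match_score(requested: str, offered: str,
                    subsumes: Callable[[str, str], bool],
                    depth: Callable[[str], int]) -> float:
        """Rank an offered service concept against a requested concept.

        subsumes(a, b) : True iff concept a subsumes concept b (reasoner call)
        depth(c)       : depth of concept c in the background taxonomy
        Returns a score in [0, 1]; higher is a better match.
        """
        if subsumes(requested, offered) and subsumes(offered, requested):
            return 1.0                                  # exact match
        if subsumes(requested, offered):                # plug-in: offer more specific
            base = 0.8
        elif subsumes(offered, requested):              # subsumes: offer more general
            base = 0.6
        else:
            return 0.0                                  # logical mismatch
        # Taxonomy-aware refinement: penalize concepts that sit far apart
        # in the background ontology (similar in spirit to the extension
        # the abstract describes).
        distance = abs(depth(requested) - depth(offered))
        return base / (1 + distance)
    ```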

    Designing an AI-enabled Bundling Generator in an Automotive Case Study

    Procurement and marketing are the main boundary-spanning functions of an organization. Some studies highlight that procurement is less likely to benefit from artificial intelligence, emphasizing instead its potential in other functions, e.g., marketing. We conduct a case study of the bundling problem in the automotive industry, following the design science approach and taking the perspective of the buying organization, thereby contributing to both theory and practice. Drawing on information processing theory, we create a practical tool that augments the skills of expert buyers with a recommendation engine, enabling better decisions and further cost savings. We thereby add to the literature on spend analysis, which has mainly looked backward, using historical purchasing orders and invoices to infer future saving potentials; our study supplements this approach with forward-looking planning data, with its inherent challenges of precision and information richness.
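
    As a rough illustration of how forward-looking planning data could feed a bundling recommendation, here is a minimal sketch; the grouping keys, threshold, and data model are hypothetical, not the case study's engine.

    ```python
    from collections import defaultdict
    from dataclasses import dataclass
    from typing import Dict, List, Tuple

    @dataclass
    class PlannedDemand:
        part_family: str   # e.g. a commodity group
        quarter: str       # planning period, e.g. "2024-Q3"
        volume: float      # forecast volume (planning data is imprecise)

    def recommend_bundles(demands: List[PlannedDemand],
                          min_volume: float) -> Dict[Tuple[str, str], float]:
        """Suggest bundling candidates: planned demands in the same part
        family and period whose pooled volume clears a negotiation threshold."""
        pooled: Dict[Tuple[str, str], float] = defaultdict(float)
        for d in demands:
            pooled[(d.part_family, d.quarter)] += d.volume
        return {key: vol for key, vol in pooled.items() if vol >= min_volume}
    ```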