
    Goal-Driven Query Answering for Existential Rules with Equality

    Inspired by magic sets for Datalog, we present a novel goal-driven approach for answering queries over terminating existential rules with equality (also known as TGDs and EGDs). Our technique improves the performance of query answering by pruning the consequences that are not relevant to the query. This is challenging in our setting because equalities can potentially affect all predicates in a dataset. We address this problem by combining the existing singularization technique with two new ingredients: an algorithm for identifying the rules relevant to a query and a new magic sets algorithm. We show empirically that our technique can significantly improve the performance of query answering, and that it can mean the difference between answering a query in a few seconds or not being able to process the query at all.
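A minimal sketch of the rule-relevance idea described above: starting from the query predicate, walk the rule set backwards and keep only rules whose heads can contribute to the query. The predicates and rule representation are illustrative assumptions, not the paper's actual algorithm.

```python
def relevant_rules(rules, query_pred):
    """rules: list of (head_pred, [body_preds]) pairs.
    Returns the subset of rules reachable backwards from query_pred."""
    needed = {query_pred}   # predicates that may influence the query
    selected = []
    changed = True
    while changed:
        changed = False
        for rule in rules:
            head, body = rule
            if head in needed and rule not in selected:
                selected.append(rule)
                for pred in body:
                    if pred not in needed:
                        needed.add(pred)
                        changed = True
    return selected

rules = [
    ("ancestor", ["parent"]),
    ("ancestor", ["parent", "ancestor"]),
    ("sibling",  ["parent"]),            # irrelevant to the query below
    ("cousin",   ["sibling", "parent"]),
]
print(relevant_rules(rules, "ancestor"))
# keeps only the two ancestor rules
```

A goal-driven evaluator would then materialize consequences only for the rules this step retains, which is where the pruning benefit comes from.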

    Parallel Neurosymbolic Integration with Concordia

    Parallel neurosymbolic architectures have been applied effectively in NLP by distilling knowledge from a logic theory into a deep model. However, prior art faces several limitations, including supporting restricted forms of logic theories and relying on the assumption of independence between the logic and the deep network. We present Concordia, a framework overcoming the limitations of prior art. Concordia is agnostic both to the deep network and the logic theory, offering support for a wide range of probabilistic theories. Our framework supports supervised training of both components and unsupervised training of the neural component. Concordia has been successfully applied to tasks beyond NLP and data classification, improving the accuracy of the state of the art on collective activity detection, entity linking and recommendation tasks. Comment: Fortieth International Conference on Machine Learning, 16 pages (including appendix).
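One common way such distillation is set up, sketched below as an assumption rather than Concordia's actual training objective: mix a supervised cross-entropy term on gold labels with a KL term pulling the network's distribution toward the distribution implied by the logic theory. The mixing weight and toy distributions are made up for illustration.

```python
import math

def kl(p, q):
    """KL divergence between two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def distillation_loss(neural_probs, gold_index, logic_probs, alpha=0.5):
    """Blend a supervised loss with a logic-distillation term."""
    ce = -math.log(neural_probs[gold_index])        # supervised term
    return (1 - alpha) * ce + alpha * kl(logic_probs, neural_probs)

loss = distillation_loss(
    neural_probs=[0.7, 0.2, 0.1],    # network's predicted distribution
    gold_index=0,                    # gold label (supervised signal)
    logic_probs=[0.9, 0.05, 0.05],   # distribution implied by the logic theory
)
print(round(loss, 4))
```

With alpha=1 the neural component trains purely against the logic theory, which is one way an unsupervised regime for the neural component can be realized.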

    Principled and Efficient Motif Finding for Structure Learning of Lifted Graphical Models

    Structure learning is a core problem in AI, central to the fields of neuro-symbolic AI and statistical relational learning. It consists in automatically learning a logical theory from data. The basis for structure learning is mining repeating patterns in the data, known as structural motifs. Finding these patterns reduces the exponential search space and therefore guides the learning of formulas. Despite the importance of motif learning, it is still not well understood. We present the first principled approach for mining structural motifs in lifted graphical models (languages that blend first-order logic with probabilistic models), which uses a stochastic process to measure the similarity of entities in the data. Our first contribution is an algorithm, which depends on two intuitive hyperparameters: one controlling the uncertainty in the entity similarity measure, and one controlling the softness of the resulting rules. Our second contribution is a preprocessing step where we perform hierarchical clustering on the data to reduce the search space to the most relevant data. Our third contribution is an O(n ln n) (in the number of entities in the data) algorithm for clustering structurally-related data. We evaluate our approach using standard benchmarks and show that we outperform state-of-the-art structure learning approaches by up to 6% in terms of accuracy and up to 80% in terms of runtime. Comment: Submitted to AAAI23. 9 pages. Appendix included.
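A toy sketch of the preprocessing idea: group entities whose local structure looks alike before mining motifs. Here "structure" is approximated by each entity's sorted relation signature, and sorting the signatures gives an O(n ln n) grouping; the signature choice is an assumption for illustration, not the paper's actual similarity measure.

```python
from itertools import groupby

def cluster_by_signature(entity_facts):
    """entity_facts: dict entity -> list of relation names it appears in.
    Returns clusters of entities sharing the same relation signature."""
    signed = sorted(
        (tuple(sorted(rels)), e) for e, rels in entity_facts.items()
    )  # O(n log n) in the number of entities
    return [
        sorted(e for _, e in group)
        for _, group in groupby(signed, key=lambda t: t[0])
    ]

facts = {
    "alice": ["teaches", "advises"],
    "bob":   ["advises", "teaches"],
    "carol": ["enrolled"],
}
print(cluster_by_signature(facts))
# → [['alice', 'bob'], ['carol']]
```

Motif mining would then search within clusters rather than over all entity pairs, which is where the search-space reduction comes from.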

    Beyond the grounding bottleneck: Datalog techniques for inference in probabilistic logic programs

    State-of-the-art inference approaches in probabilistic logic programming typically start by computing the relevant ground program with respect to the queries of interest, and then use this program for probabilistic inference using knowledge compilation and weighted model counting. We propose an alternative approach that uses efficient Datalog techniques to integrate knowledge compilation with forward reasoning over a non-ground program. This effectively eliminates the grounding bottleneck that has so far prohibited the application of probabilistic logic programming in query answering scenarios over knowledge graphs, while also providing fast approximations on classical benchmarks in the field.
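A toy illustration of the underlying idea (not the paper's system): forward chaining that tracks, for each derived fact, a DNF over the probabilistic input facts supporting it, from which the fact's probability can be read off. The predicates and probabilities below are invented for the example, and the probability is computed by brute-force world enumeration rather than weighted model counting.

```python
from itertools import product

# Probabilistic input facts with their probabilities (illustrative).
prob_facts = {"edge(a,b)": 0.8, "edge(b,c)": 0.5}

# path(X,Y) <- edge(X,Y);  path(X,Z) <- path(X,Y), edge(Y,Z)
# Each derived fact carries a DNF: a set of clauses (sets of input facts).
support = {f: frozenset([frozenset([f])]) for f in prob_facts}
support["path(a,b)"] = support["edge(a,b)"]
support["path(b,c)"] = support["edge(b,c)"]
support["path(a,c)"] = frozenset(
    c1 | c2 for c1 in support["path(a,b)"] for c2 in support["edge(b,c)"]
)

def probability(dnf):
    """Probability that some clause of the DNF holds, by enumerating worlds."""
    facts = sorted(prob_facts)
    total = 0.0
    for world in product([True, False], repeat=len(facts)):
        truth = dict(zip(facts, world))
        if any(all(truth[f] for f in clause) for clause in dnf):
            w = 1.0
            for f in facts:
                w *= prob_facts[f] if truth[f] else 1 - prob_facts[f]
            total += w
    return total

print(probability(support["path(a,c)"]))  # 0.8 * 0.5 = 0.4
```

Real systems replace the enumeration with knowledge compilation; the point of the sketch is that the formulas are built during forward reasoning, so no separate grounding phase is needed.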

    Gradual transition detection using color coherence and other criteria in a video shot meta-segmentation framework

    Shot segmentation provides the basis for almost all high-level video content analysis approaches, validating it as one of the major prerequisites for efficient video semantic analysis, indexing and retrieval. The successful detection of both gradual and abrupt transitions is necessary to this end. In this paper a new gradual transition detection algorithm is proposed, based on novel criteria such as color coherence change that exhibit less sensitivity to local or global motion than previously proposed ones. These criteria, each of which could serve as a standalone gradual transition detection approach, are then combined using a machine learning technique, resulting in a meta-segmentation scheme. Besides significantly improved performance, an advantage of the proposed scheme is that there is no need for threshold selection, as opposed to what would be the case if any of the proposed features were used by themselves, as is typically the case in the relevant literature. Performance evaluation and comparison with four other popular algorithms reveals the effectiveness of the proposed technique. Index Terms: video shot segmentation, gradual transition, color coherence change, meta-segmentation
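A minimal sketch of a color-coherence-style criterion (a hypothetical simplification, not the paper's exact definition): compare successive frames by the overlap of their quantized color histograms; a sustained drop in overlap over a window of frames suggests a gradual transition.

```python
def histogram(frame, bins=4):
    """Normalized intensity histogram; frame is a flat list of values in [0, 255]."""
    h = [0] * bins
    for pixel in frame:
        h[pixel * bins // 256] += 1
    n = len(frame)
    return [count / n for count in h]

def coherence(frame1, frame2):
    """Histogram intersection in [0, 1]; 1 means identical color content."""
    return sum(min(a, b) for a, b in zip(histogram(frame1), histogram(frame2)))

frame_a = [10, 20, 30, 200, 210, 220]
frame_b = [10, 20, 30, 200, 210, 220]     # same content: no transition
frame_c = [100, 110, 120, 130, 140, 150]  # dissolve toward mid-tones
print(coherence(frame_a, frame_b))  # 1.0
print(coherence(frame_a, frame_c))  # 0.0
```

In a meta-segmentation scheme, a score like this would be one of several features fed to a learned classifier, which is what removes the need to pick a detection threshold by hand.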

    MapRepair: Mapping and Repairing under Policy Views

    Mapping design is overwhelming for end users, who have to check both the correctness of the mappings and the possible information disclosure over the exported source instance. In this demonstration, we focus on the latter problem by proposing a novel practical solution to ensure that a mapping faithfully complies with a set of privacy restrictions specified as source policy views. We showcase MapRepair, which guides the user through the tasks of visualizing the results of the data exchange process with and without the privacy restrictions. MapRepair leverages formal privacy guarantees and is inherently data-independent, i.e. if a set of criteria are satisfied by the mapping statement, then it guarantees that both the mapping and the underlying instances do not leak sensitive information. Furthermore, MapRepair can also automatically repair an input mapping w.r.t. a set of policy views in case of information leakage. We build on various demonstration scenarios, including synthetic and real-world instances and mappings.