39 research outputs found

    The Challenge of Unifying Semantic and Syntactic Inference Restrictions

    No full text
    While syntactic inference restrictions do not play an important role for SAT, they are an essential reasoning technique for more expressive logics, such as first-order logic or fragments thereof. In particular, they can result in short proofs or model representations. On the other hand, semantically guided inference systems enjoy important properties, such as the generation of solely non-redundant clauses. I discuss to what extent the two paradigms may be unified.
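
    To make the contrast concrete, the toy sketch below implements one classic syntactic restriction, ordered resolution on ground clauses, where two clauses may only be resolved on an atom that is maximal in both premises; the integer clause encoding and the atom ordering are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of a *syntactic* inference restriction on ground clauses:
# ordered resolution only resolves on an atom that is maximal in both premises.
# Clauses are frozensets of integer literals (negative = negated atom).

def max_atom(clause):
    return max(abs(lit) for lit in clause)

def ordered_resolvents(c1, c2):
    """Yield resolvents of two ground clauses, restricted to maximal atoms."""
    for lit in c1:
        if -lit in c2 and abs(lit) == max_atom(c1) and abs(lit) == max_atom(c2):
            yield frozenset((c1 - {lit}) | (c2 - {-lit}))

# Atoms are ordered 1 < 2 < 3: {1, 3} and {-3, 2} may resolve on 3,
# but {1, 2} and {-1, 3} may not resolve on 1 (1 is not maximal in either).
print(list(ordered_resolvents(frozenset({1, 3}), frozenset({-3, 2}))))
print(list(ordered_resolvents(frozenset({1, 2}), frozenset({-1, 3}))))
```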

    Efficient Sketching Algorithm for Sparse Binary Data

    Full text link
    Recent advancements in the WWW, IoT, social networks, e-commerce, etc. have generated large volumes of data. These datasets are mostly high-dimensional and sparse. Many fundamental subroutines of common data analytic tasks, such as clustering, classification, ranking, and nearest-neighbour search, scale poorly with the dimension of the dataset. In this work, we address this problem and propose a sketching (alternatively, dimensionality reduction) algorithm -- BinSketch (Binary Data Sketch) -- for sparse binary datasets. BinSketch preserves the binary form of the dataset after sketching and maintains estimates for multiple similarity measures, such as Jaccard, Cosine, and Inner-Product similarities and Hamming distance, on the same sketch. We present a theoretical analysis of our algorithm and complement it with extensive experimentation on several real-world datasets. We compare the performance of our algorithm with state-of-the-art algorithms on mean-square-error and ranking tasks. Our algorithm offers comparable accuracy while achieving a significant speedup in dimensionality reduction time with respect to the other candidate algorithms. Our proposal is simple and easy to implement, and can therefore be adopted in practice.
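
    The sketch below illustrates the general idea of compressing a sparse binary vector by OR-ing its bits into randomly assigned buckets and estimating similarity directly from the compressed binary vectors; the bucket count, the random map, and the naive inner-product estimator are assumptions for illustration and do not reproduce the paper's corrected estimators.

```python
# Illustrative binary sketching (not the paper's exact BinSketch construction):
# each of the d original dimensions is hashed to one of k buckets, a bucket is
# the OR of the bits mapped to it, and similarity is estimated from the sketches.
import numpy as np

rng = np.random.default_rng(0)
d, k = 10_000, 256                      # original and sketch dimensions
bucket = rng.integers(0, k, size=d)     # random map: dimension -> bucket

def sketch(x):
    """Compress a sparse binary vector, given as a set of indices that are 1."""
    s = np.zeros(k, dtype=bool)
    for i in x:
        s[bucket[i]] = True
    return s

def est_inner_product(sa, sb):
    # Naive estimate: number of buckets set in both sketches. For sparse
    # inputs bucket collisions are rare, so this roughly approximates <a, b>;
    # the paper's corrected estimator is not reproduced here.
    return int(np.sum(sa & sb))

a = set(rng.choice(d, size=30, replace=False))
b = set(rng.choice(d, size=30, replace=False)) | set(list(a)[:10])
print("true <a,b>:", len(a & b), " estimated:", est_inner_product(sketch(a), sketch(b)))
```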

    280 Birds with One Stone: Inducing Multilingual Taxonomies from Wikipedia using Character-level Classification

    Get PDF
    We propose a simple yet effective approach to inducing multilingual taxonomies from Wikipedia. Given an English taxonomy, our approach leverages the interlanguage links of Wikipedia, followed by character-level classifiers, to induce high-precision, high-coverage taxonomies in other languages. Through experiments, we demonstrate that our approach significantly outperforms the state-of-the-art, heuristics-heavy approaches for six languages. As a consequence of our work, we release what is presumably the largest and most accurate multilingual taxonomic resource, spanning over 280 languages.
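
    As a rough illustration of character-level classification for taxonomy induction, the sketch below trains a character n-gram classifier to decide whether a (term, candidate category) pair forms a valid is-a edge; the toy training pairs, the "[SEP]" input format, and the TfidfVectorizer/LogisticRegression pipeline are assumptions, not the authors' pipeline.

```python
# Hedged sketch: a character n-gram classifier over (term, candidate parent)
# pairs, labelled 1 for a valid is-a edge and 0 otherwise.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example is "term [SEP] candidate parent"; label 1 = valid is-a edge.
train_pairs = [
    ("jaguar [SEP] felines", 1),
    ("jaguar [SEP] british brands", 0),
    ("berlin [SEP] cities in germany", 1),
    ("berlin [SEP] german musicians", 0),
]
texts, labels = zip(*train_pairs)

model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # character n-grams
    LogisticRegression(max_iter=1000),
)
model.fit(list(texts), list(labels))
print(model.predict(["munich [SEP] cities in germany"]))
```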

    Taxonomy Induction using Hypernym Subsequences

    Get PDF
    We propose a novel, semi-supervised approach towards domain taxonomy induction from an input vocabulary of seed terms. Unlike all previous approaches, which typically extract direct hypernym edges for terms, our approach utilizes a novel probabilistic framework to extract hypernym subsequences. Taxonomy induction from the extracted subsequences is cast as an instance of the minimum-cost flow problem on a carefully designed directed graph. Through experiments, we demonstrate that our approach outperforms state-of-the-art taxonomy induction approaches across four languages. Importantly, we also show that our approach is robust to the presence of noise in the input vocabulary. To the best of our knowledge, no previous approach has been empirically shown to be robust to noise in the input vocabulary.
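
    For intuition on the minimum-cost flow formulation, the sketch below uses networkx to route one unit of flow from a seed term to a taxonomy root over candidate hypernym edges, so that the cheapest (most confident) chain of edges is selected; the graph, capacities, and costs are toy assumptions rather than the paper's actual construction.

```python
# Hedged illustration of edge selection via minimum-cost flow with networkx:
# nodes are candidate taxonomy terms, edge weights encode (negated) confidence
# in hypernym candidates, and the flow from a term to the root picks a path.
import networkx as nx

G = nx.DiGraph()

# The seed term must route one unit of flow to the root "animals".
G.add_node("jaguar", demand=-1)          # source of 1 unit
G.add_node("animals", demand=1)          # sink of 1 unit

# Candidate hypernym edges with costs (lower = more confident).
G.add_edge("jaguar", "felines", capacity=1, weight=1)
G.add_edge("jaguar", "mammals", capacity=1, weight=3)
G.add_edge("felines", "mammals", capacity=1, weight=1)
G.add_edge("mammals", "animals", capacity=1, weight=1)
G.add_edge("jaguar", "animals", capacity=1, weight=5)

flow = nx.min_cost_flow(G)
chosen = [(u, v) for u, nbrs in flow.items() for v, f in nbrs.items() if f > 0]
print(chosen)   # edges carrying flow form the induced hypernym chain
```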