
    An ontology for software component matching

    The Web is likely to be a central platform for software development in the future. We investigate how Semantic Web technologies, in particular ontologies, can be utilised to support software component development in a Web environment. We use description logics, which underlie Semantic Web ontology languages such as DAML+OIL, to develop an ontology for matching requested and provided components. A link between modal logic and description logics proves invaluable for providing reasoning support for component and service behaviour.
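The matching idea sketched in this abstract can be illustrated with a toy example. This is not the paper's DAML+OIL ontology or its description-logic machinery; the names (`subsumes`, `matches`) and the encoding of conditions as sets of atomic constraints are illustrative assumptions. The key point it demonstrates is the contravariant matching rule: a provided component satisfies a request if it requires no more (weaker precondition) and guarantees no less (stronger postcondition).

```python
# Toy sketch (not the paper's DL ontology): conditions are frozensets of
# atomic constraints, so a conjunction is "stronger" iff it is a superset.

def subsumes(weaker: frozenset, stronger: frozenset) -> bool:
    """A conjunction of constraints is stronger iff it contains the weaker one."""
    return weaker <= stronger

def matches(provided: dict, requested: dict) -> bool:
    # Contravariant on preconditions, covariant on postconditions.
    return (subsumes(provided["pre"], requested["pre"])
            and subsumes(requested["post"], provided["post"]))

requested = {"pre": frozenset({"sorted_input"}),
             "post": frozenset({"element_found"})}
provided = {"pre": frozenset(),  # works on any input: weaker precondition
            "post": frozenset({"element_found", "index_returned"})}

print(matches(provided, requested))  # True: weaker pre, stronger post
```

In a real description-logic setting, `subsumes` would be concept subsumption decided by a DL reasoner rather than a subset test, but the direction of the checks is the same.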

    Deduction over Mixed-Level Logic Representations for Text Passage Retrieval

    A system is described that uses a mixed-level representation of (part of) the meaning of natural language documents (based on standard Horn Clause Logic) and a variable-depth search strategy that distinguishes between the different levels of abstraction in the knowledge representation to locate specific passages in the documents. Mixed-level representations as well as variable-depth search strategies are applicable in fields outside that of NLP. Comment: 8 pages, Proceedings of the Eighth International Conference on Tools with Artificial Intelligence (TAI'96), Los Alamitos C
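The variable-depth idea can be sketched as follows. This is an illustrative toy, not the paper's system: the fact base, passage numbers, and field names are my assumptions. Each passage is stored at two levels, a structured predicate form (when a deep parse succeeded) and a bag of words, and the search prefers the deep level, falling back to the shallow one.

```python
# Toy sketch of mixed-level retrieval: try the deep (logical-form) level
# first, fall back to the shallow (word-overlap) level.

facts = [
    {"passage": 12, "deep": ("sell", "acme", "widgets"),
     "shallow": {"acme", "sells", "widgets"}},
    {"passage": 47, "deep": None,  # only a shallow parse was available
     "shallow": {"widgets", "market", "growth"}},
]

def retrieve(query_pred, query_words):
    # Level 1: exact match on the extracted logical form.
    hits = [f["passage"] for f in facts if f["deep"] == query_pred]
    if hits:
        return ("deep", hits)
    # Level 2: word overlap on the shallow representation.
    hits = [f["passage"] for f in facts if query_words & f["shallow"]]
    return ("shallow", hits)

print(retrieve(("sell", "acme", "widgets"), {"acme", "widgets"}))
# ('deep', [12])
```

In the Horn-clause setting of the paper, level 1 would be resolution against clause representations rather than tuple equality, but the control strategy (deep first, shallow as fallback) is the point being illustrated.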

    Probabilistic abductive logic programming using Dirichlet priors

    Probabilistic programming is an area of research that aims to develop general inference algorithms for probabilistic models expressed as probabilistic programs whose execution corresponds to inferring the parameters of those models. In this paper, we introduce a probabilistic programming language (PPL) based on abductive logic programming for performing inference in probabilistic models involving categorical distributions with Dirichlet priors. We encode these models as abductive logic programs enriched with probabilistic definitions and queries, and show how to execute and compile them to Boolean formulas. Using the latter, we perform generalized inference using one of two proposed Markov Chain Monte Carlo (MCMC) sampling algorithms: an adaptation of uncollapsed Gibbs sampling from related work and a novel collapsed Gibbs sampling (CGS). We show that CGS converges faster than the uncollapsed version on a latent Dirichlet allocation (LDA) task using synthetic data. On similar data, we compare our PPL with LDA-specific algorithms and other PPLs. We find that all methods, except one, perform similarly and that the more expressive the PPL, the slower it is. We illustrate applications of our PPL on real data in two variants of LDA models (Seed and Cluster LDA), and in the repeated insertion model (RIM). In the latter, our PPL yields similar conclusions to inference with EM for Mallows models.
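For readers unfamiliar with the baseline the paper builds on, here is a minimal, generic collapsed Gibbs sampler for LDA. This is the standard textbook algorithm, not the paper's abductive-logic compilation pipeline; the function name and hyperparameter defaults are illustrative. Collapsing integrates out the topic and word distributions, so only the discrete topic assignments are resampled.

```python
import random

# Minimal collapsed Gibbs sampler for LDA (generic textbook version).
# Each assignment z is resampled from the collapsed conditional
#   p(z = k) ∝ (n_dk + alpha) * (n_kw + beta) / (n_k + V * beta)
def lda_cgs(docs, K, V, alpha=0.1, beta=0.01, iters=200, seed=0):
    rng = random.Random(seed)
    n_dk = [[0] * K for _ in docs]        # document-topic counts
    n_kw = [[0] * V for _ in range(K)]    # topic-word counts
    n_k = [0] * K                         # topic totals
    z = []                                # current topic assignments
    for d, doc in enumerate(docs):        # random initialisation
        zs = []
        for w in doc:
            k = rng.randrange(K)
            zs.append(k); n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]               # remove the current assignment
                n_dk[d][k] -= 1; n_kw[k][w] -= 1; n_k[k] -= 1
                weights = [(n_dk[d][t] + alpha) * (n_kw[t][w] + beta)
                           / (n_k[t] + V * beta) for t in range(K)]
                k = rng.choices(range(K), weights)[0]
                z[d][i] = k               # add the new assignment back
                n_dk[d][k] += 1; n_kw[k][w] += 1; n_k[k] += 1
    return n_kw

# Synthetic two-topic corpus: words 0-1 co-occur, words 2-3 co-occur.
docs = [[0, 1, 0, 1]] * 5 + [[2, 3, 2, 3]] * 5
n_kw = lda_cgs(docs, K=2, V=4)
```

The paper's contribution is reaching this kind of collapsed sampler through compilation of abductive logic programs to Boolean formulas, rather than hand-deriving it per model as above.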

    Conceptual thinking in Hegel’s Science of Logic

    After Frege’s logicism and Russell’s logical atomism, analytic philosophy inherited a series of assumptions concerning the existence of a generic domain of individual entities whose identity and elementary determinations are already defined. These “individuals” exist only in ideal “possible worlds” and are nothing but structured sets or pure mathematical sets. In contrast to such purely abstract models, Hegel analyses the role of conceptual distinctions and the corresponding missing inferences in the real world. Here all objects are spatially and temporally finite. Even if real things move according to certain forms, they are only moments in an overall process. These forms, however, are not objects of direct empirical observation but presuppose successful and repeatable actions and speech acts. As a result, no semantics of world-directed reference can do without Hegel’s categories, which reach far beyond the tools of a purely relational mathematical logic.

    Semantics, Modelling, and the Problem of Representation of Meaning -- a Brief Survey of Recent Literature

    Over the past 50 years many have debated what representation should be used to capture the meaning of natural language utterances. Recently, new requirements for such representations have emerged in research. Here I survey some of the interesting representations suggested to address these new needs. Comment: 15 pages, no figures

    Specifying Logic Programs in Controlled Natural Language

    Writing specifications for computer programs is not easy since one has to take into account the disparate conceptual worlds of the application domain and of software development. To bridge this conceptual gap we propose controlled natural language as a declarative and application-specific specification language. Controlled natural language is a subset of natural language that can be accurately and efficiently processed by a computer, but is expressive enough to allow natural usage by non-specialists. Specifications in controlled natural language are automatically translated into Prolog clauses, hence become formal and executable. The translation uses a definite clause grammar (DCG) enhanced by feature structures. Inter-text references of the specification, e.g. anaphora, are resolved with the help of discourse representation theory (DRT). The generated Prolog clauses are added to a knowledge base. We have implemented a prototypical specification system that successfully processes the specification of a simple automated teller machine. Comment: 16 pages, compressed, uuencoded Postscript, published in Proceedings CLNLP 95, COMPULOGNET/ELSNET/EAGLES Workshop on Computational Logic for Natural Language Processing, Edinburgh, April 3-5, 199
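The core translation step, from a sentence in a controlled subset to an executable clause, can be illustrated with a deliberately tiny toy. The paper's system uses a feature-structure DCG in Prolog with DRT-based anaphora resolution; the single sentence pattern, the function name `to_clause`, and the example sentence below are all illustrative assumptions, not the paper's grammar.

```python
import re

# Toy controlled-language translator: accepts only sentences of the shape
# "A <noun> <verb>s a <noun>." and emits a Prolog-style fact as a string.
PATTERN = re.compile(r"A (\w+) (\w+)s a (\w+)\.")

def to_clause(sentence: str) -> str:
    m = PATTERN.fullmatch(sentence)
    if not m:
        # A real controlled-language system would report which construction
        # is unsupported; here we simply reject the sentence.
        raise ValueError("outside the controlled subset: " + sentence)
    subj, verb, obj = m.groups()
    # Each sentence becomes a binary fact relating the two referents.
    return f"{verb}({subj.lower()}, {obj.lower()})."

print(to_clause("A customer inserts a card."))  # insert(customer, card).
```

Rejecting everything outside the pattern is what makes the language "controlled": the grammar is small enough to process deterministically, yet each accepted sentence maps to a formal, executable clause.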