
    Exploiting Term Hiding to Reduce Run-time Checking Overhead

    One of the most attractive features of untyped languages is the flexibility in term creation and manipulation. However, with such power comes the responsibility of ensuring the correctness of these operations. A solution is to add run-time checks to the program via assertions, but this can introduce overheads that are in many cases impractical. While static analysis can greatly reduce such overheads, the gains depend strongly on the quality of the information inferred. Reusable libraries, i.e., library modules that are pre-compiled independently of the client, pose special challenges in this context. We propose a technique that takes advantage of module systems which can hide a selected set of functor symbols to significantly enrich the shape information that can be inferred for reusable libraries, as well as an improved run-time checking approach that leverages the proposed mechanisms to achieve large reductions in overhead, closer to those of static languages, even in the reusable-library context. While the approach is general and system-independent, we present it for concreteness in the context of the Ciao assertion language and combined static/dynamic checking framework. Our method maintains the full expressiveness of the assertion language in this context. In contrast to other approaches, it does not introduce the need to switch the language to a (static) type system, which is known to change the semantics in languages like Prolog. We also study the approach experimentally and evaluate the overhead reduction achieved in the run-time checks. (Comment: 26 pages, 10 figures, 2 tables; an extension of the version accepted at PADL'18, including proofs and extra figures and examples omitted due to space reasons.)
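
    The following Python sketch (not Ciao code; the class and its invariant are invented purely for illustration) conveys the idea the paper exploits: when a module is the only place a term can be constructed, its shape invariant can be checked once at the construction point, and the run-time checks that would otherwise guard every library operation can be dropped.

        # Minimal sketch of the term-hiding idea, assuming construction can
        # be confined to one module. SortedList is a hypothetical example.
        class SortedList:
            """Sortedness is an invariant established at the single
            construction point, so operations need not re-check it."""
            __slots__ = ("_items",)

            def __init__(self, items):
                items = list(items)
                # The only run-time check: performed once, at creation.
                assert all(a <= b for a, b in zip(items, items[1:])), \
                    "shape violation: items must be sorted"
                self._items = items

            def merge(self, other):
                # No entry check needed: clients cannot obtain an unsorted
                # SortedList, since the constructor is the only way in.
                a, b, out, i, j = self._items, other._items, [], 0, 0
                while i < len(a) and j < len(b):
                    if a[i] <= b[j]:
                        out.append(a[i]); i += 1
                    else:
                        out.append(b[j]); j += 1
                merged = SortedList.__new__(SortedList)  # result is sorted
                merged._items = out + a[i:] + b[j:]      # by construction
                return merged

        print(SortedList([1, 3]).merge(SortedList([2, 4]))._items)  # [1, 2, 3, 4]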

    An overview of decision table literature 1982-1995.

    This report gives an overview of the literature on decision tables over the past 15 years. As much as possible, for each reference an author-supplied abstract, a number of keywords, and a classification are provided. In some cases our own comments are added. The purpose of these comments is to show where, how, and why decision tables are used. The literature is classified according to application area, theoretical versus practical character, year of publication, country of origin (not necessarily country of publication), and the language of the document. After a description of the scope of the review, classification results and the classification by topic are presented. The main body of the paper is the ordered list of publications with abstract, classification, and comments.
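
    For readers new to the artifact this survey classifies: a decision table maps each combination of condition outcomes to an action. The toy discount policy below is invented purely for illustration.

        # A toy decision table: two conditions, four rules, one action each.
        # Conditions (rows) vs. rules (columns):
        #                      R1   R2   R3   R4
        #   loyal customer?     Y    Y    N    N
        #   order > 100?        Y    N    Y    N
        #   discount (%)       15   10    5    0
        RULES = {
            (True,  True):  15,
            (True,  False): 10,
            (False, True):   5,
            (False, False):  0,
        }

        def discount(loyal: bool, large_order: bool) -> int:
            # The table is complete and exclusive: every condition
            # combination appears exactly once, so lookup cannot fail.
            return RULES[(loyal, large_order)]

        print(discount(loyal=True, large_order=False))  # -> 10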

    XMG: eXtending MetaGrammars to MCTAG

    In this paper, we introduce an extension of the XMG system (eXtensible MetaGrammar) in order to allow for the description of Multi-Component Tree Adjoining Grammars (MCTAG). In particular, we introduce the XMG formalism and its implementation, and show how the latter makes it possible to extend the system relatively easily to different target formalisms, thus opening the way towards multi-formalism.
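
    The sketch below is a loose Python analogy, not XMG syntax: a metagrammar states grammatical generalizations once, as named tree fragments, and derives elementary trees, or sets of trees for MCTAG, by combining fragments. All structures and names here are invented.

        # Tree fragments as nested dicts: a category plus children.
        def frag(cat, *children):
            return {"cat": cat, "children": list(children)}

        SUBJECT = frag("S", frag("NP"), frag("VP"))   # subject generalization
        OBJECT  = frag("VP", frag("V"), frag("NP"))   # object generalization

        def conjoin(a, b):
            """Very naive node merging: splice b's children at the first
            matching node of a. Real metagrammar compilers instead solve
            underspecified tree descriptions."""
            if a["cat"] == b["cat"]:
                return frag(a["cat"], *(a["children"] + b["children"]))
            return frag(a["cat"], *[conjoin(c, b) for c in a["children"]])

        transitive_tree = conjoin(SUBJECT, OBJECT)  # one elementary tree
        mctag_entry = [transitive_tree]             # MCTAG: a tuple of trees
        print(transitive_tree)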

    Connectionist Inference Models

    The performance of symbolic inference tasks has long been a challenge to connectionists. In this paper, we present an extended survey of this area. Existing connectionist inference systems are reviewed, with particular reference to how they perform variable binding and rule-based reasoning, and whether they involve distributed or localist representations. The benefits and disadvantages of different representations and systems are outlined, and conclusions are drawn regarding the capabilities of connectionist inference systems when compared with symbolic inference systems or when used for cognitive modeling.
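
    As a concrete instance of the variable-binding problem such surveys discuss, the sketch below shows tensor-product binding, one classic connectionist scheme: a role vector and a filler vector are bound by their outer product, bindings are superimposed in a single memory, and a filler is retrieved by projecting the memory onto its role. Dimensions and names are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)

        def unit(n):                       # random unit vector
            v = rng.standard_normal(n)
            return v / np.linalg.norm(v)

        dim = 64
        agent, patient = unit(dim), unit(dim)   # role vectors
        john, mary = unit(dim), unit(dim)       # filler vectors

        # Bind each role to its filler and superimpose both bindings into
        # one activation pattern representing loves(john, mary).
        memory = np.outer(agent, john) + np.outer(patient, mary)

        # Unbinding: project onto a role to recover its filler (only
        # approximately, since random roles are merely near-orthogonal).
        retrieved = agent @ memory
        print("agent filler is john?",
              np.dot(retrieved, john) > np.dot(retrieved, mary))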

    CDAO-Store: Ontology-driven Data Integration for Phylogenetic Analysis

    Background: The Comparative Data Analysis Ontology (CDAO) is an ontology developed, as part of the EvoInfo and EvoIO groups supported by the National Evolutionary Synthesis Center, to provide semantic descriptions of data and transformations commonly found in the domain of phylogenetic analysis. The core concepts of the ontology enable the description of phylogenetic trees and associated character data matrices. Results: Using CDAO as the semantic back-end, we developed a triple-store, named CDAO-Store. CDAO-Store is an RDF-based store of phylogenetic data, including a complete import of TreeBASE. CDAO-Store provides a programmatic interface, in the form of web services, and a web-based front-end, to perform both user-defined and domain-specific queries; domain-specific queries include searching for nearest common ancestors and minimum spanning clades, and filtering multiple trees in the store by size, author, taxa, tree identifier, algorithm, or method. In addition, CDAO-Store provides a visualization front-end, called CDAO-Explorer, which can be used to view both character data matrices and trees extracted from CDAO-Store. CDAO-Store provides import capabilities, enabling the addition of new data to the triple-store; files in PHYLIP, MEGA, nexml, and NEXUS formats can be imported and their CDAO representations added to the triple-store. Conclusions: CDAO-Store is made up of a versatile and integrated set of tools to support phylogenetic analysis. To the best of our knowledge, CDAO-Store is the first semantically-aware repository of phylogenetic data with domain-specific querying capabilities. The portal to CDAO-Store is available at http://www.cs.nmsu.edu/~cdaostore.
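
    A sketch, using Python's rdflib, of the kind of domain-specific query such a store supports, here filtering trees by author. The namespace and property names below are invented stand-ins, not actual CDAO terms; the real vocabulary is the one served by the portal above.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDF

        EX = Namespace("http://example.org/cdao-sketch#")  # hypothetical
        g = Graph()

        # Toy data: two trees, each annotated with an author.
        for name, author in [("tree1", "Smith"), ("tree2", "Jones")]:
            t = EX[name]
            g.add((t, RDF.type, EX.Tree))
            g.add((t, EX.hasAuthor, Literal(author)))

        # "Filter trees in the store by author" as a SPARQL query.
        results = g.query("""
            PREFIX ex: <http://example.org/cdao-sketch#>
            SELECT ?tree WHERE {
                ?tree a ex:Tree ;
                      ex:hasAuthor "Smith" .
            }""")
        for row in results:
            print(row.tree)   # -> http://example.org/cdao-sketch#tree1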

    Computing Preferred Answer Sets by Meta-Interpretation in Answer Set Programming

    Recently, Answer Set Programming (ASP) has been attracting interest as a new paradigm for problem solving. An important aspect which needs to be supported is the handling of preferences between rules, for which several approaches have been presented. In this paper, we consider the problem of implementing preference handling approaches by means of meta-interpreters in Answer Set Programming. In particular, we consider the preferred answer set approaches by Brewka and Eiter, by Delgrande, Schaub and Tompits, and by Wang, Zhou and Lin. We present suitable meta-interpreters for these semantics using DLV, which is an efficient engine for ASP. Moreover, we also present a meta-interpreter for the weakly preferred answer set approach by Brewka and Eiter, which uses the weak constraint feature of DLV as a tool for expressing and solving an underlying optimization problem. We also consider advanced meta-interpreters, which make use of graph-based characterizations and often allow for more efficient computations. Our approach shows the suitability of ASP in general, and of DLV in particular, for fast prototyping. This can be fruitfully exploited for experimenting with new languages and knowledge-representation formalisms. (Comment: 34 pages; appeared as a Technical Report at KBS of the Vienna University of Technology, see http://www.kr.tuwien.ac.at/research/reports.)
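
    To make the base layer of such meta-interpreters concrete, the Python sketch below enumerates the answer sets of a tiny ground normal program via the Gelfond-Lifschitz reduct; the preference semantics of Brewka and Eiter and the others then select among these answer sets, a step omitted here. The example program is invented.

        from itertools import chain, combinations

        # A rule is (head, positive_body, negative_body). The program
        # encodes:  a :- not b.   b :- not a.   c :- a.
        program = [
            ("a", [], ["b"]),
            ("b", [], ["a"]),
            ("c", ["a"], []),
        ]
        atoms = {x for h, pos, neg in program for x in [h, *pos, *neg]}

        def minimal_model(rules):
            """Least model of a negation-free program, by fixpoint iteration."""
            model, changed = set(), True
            while changed:
                changed = False
                for head, pos, _ in rules:
                    if set(pos) <= model and head not in model:
                        model.add(head)
                        changed = True
            return model

        def is_answer_set(candidate):
            # Reduct: delete rules whose negative body meets the candidate,
            # then drop the remaining negative bodies entirely.
            reduct = [(h, pos, []) for h, pos, neg in program
                      if not (set(neg) & candidate)]
            return minimal_model(reduct) == candidate

        subsets = chain.from_iterable(
            combinations(sorted(atoms), r) for r in range(len(atoms) + 1))
        print([set(s) for s in subsets if is_answer_set(set(s))])
        # -> the two answer sets: {'b'} and {'a', 'c'}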

    Perspectives in deductive databases

    I discuss my experiences, some of the work that I have done, and related work that influenced me, concerning deductive databases, over the last 30 years. I divide this time period into three roughly equal parts: 1957–1968, 1969–1978, 1979–present. For the first, I describe how my interest in deductive databases started in 1957, at a time when the field of databases did not even exist. I describe work in the beginning years, leading to the start of deductive databases around 1968 with the work of Cordell Green and Bertram Raphael. The second period saw a great deal of work in theorem proving as well as the introduction of logic programming. The existence and importance of deductive databases as a formal and viable discipline received their impetus at a workshop held in Toulouse, France, in 1977, which culminated in the book Logic and Data Bases. The relationship of deductive databases and logic programming was recognized at that time. During the third period we have seen formal theories of databases come about as an outgrowth of that work, and the recognition that artificial intelligence and deductive databases are closely related, at least through the so-called expert database systems. I expect that the relationships between techniques from formal logic, databases, logic programming, and artificial intelligence will continue to be explored, and that the field of deductive databases will become a more prominent area of computer science in coming years.