
    Proceedings of the Workshop on Linear Logic and Logic Programming

    Declarative programming languages often fail to effectively address many aspects of control and resource management. Linear logic provides a framework for strengthening declarative programming languages to embrace these aspects. Linear logic has been used to provide new analyses of Prolog's operational semantics, including left-to-right/depth-first search and negation-as-failure. It has also been used to design new logic programming languages for handling concurrency and for viewing program clauses as (possibly) limited resources. Such logic programming languages have proved useful in areas such as databases, object-oriented programming, theorem proving, and natural language parsing. This workshop is intended to bring together researchers involved in all aspects of relating linear logic and logic programming. The proceedings include two high-level overviews of linear logic and six contributed papers. Workshop organizers: Jean-Yves Girard (CNRS and University of Paris VII), Dale Miller (chair, University of Pennsylvania, Philadelphia), and Remo Pareschi (ECRC, Munich).
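    The summary above mentions viewing program clauses as (possibly) limited resources. A minimal sketch of that idea, assuming a toy multiset-rewriting reading of linear implication (illustrative only, not any of the cited languages): facts are resources that a rule consumes exactly once, unlike classical Prolog facts, which can be reused indefinitely.

    ```python
    # Sketch of "clauses as limited resources": each rule (consumed, produced)
    # deletes its premises from a multiset of facts when it fires, so a fact
    # cannot justify two conclusions. Names and rule format are illustrative.
    from collections import Counter

    def fire(resources, rules):
        """Apply rules to exhaustion; each firing consumes its premises."""
        resources = Counter(resources)
        changed = True
        while changed:
            changed = False
            for consumed, produced in rules:
                need = Counter(consumed)
                if all(resources[r] >= n for r, n in need.items()):
                    resources -= need          # premises are used up
                    resources += Counter(produced)
                    changed = True
        return resources

    # One coin buys one candy; two coins buy exactly two candies, never three.
    rules = [(["coin"], ["candy"])]
    print(fire(["coin", "coin"], rules))  # Counter({'candy': 2})
    ```

    In a classical reading the fact `coin` would remain available after use; the linear reading makes resource consumption part of the logic itself.
    
    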

    An analysis and implementation of linear derivation strategies

    This study examines the efficacy of six linear derivation strategies: (i) s-linear resolution, (ii) the ME procedure, (iii) t-linear resolution, (iv) SL-resolution, (v) the GC procedure, and (vi) SLM. The analysis focuses on the different restrictions and operations employed in each derivation strategy. The major features found in these six strategies are: the selection function, restrictive ancestor resolution, compulsory ancestor resolution on literals whose atoms are or become identical, compulsory merging operations, reuse of truncated literals, spreading of FALSE literals, the no-tautologies restriction, the restriction that no two non-B-literals have identical atoms, and the use of semantic information to trim irrelevant derivations from the search tree. Detecting loops and minimizing irrelevant derivations are the identified weak points of SLM; two variations of SLM are suggested to rectify these problems. The ME procedure, SL-resolution, the GC procedure, SLM, and one of the suggested variations of SLM were implemented using the Arity/Prolog compiler to produce the ME-TP, SL-TP, GC-TP, SLM-TP, and SLM5-TP theorem provers respectively. In addition to the original features of each derivation strategy, the following search strategies were included in the implementations: the modified consecutively bounded depth-first search, the unit preference strategy, the set-of-support strategy, pure-literal elimination, tautologous-clause elimination, a selection function based on the computed weight of a literal, and a match check. The extension operation used by each theorem prover was extended to include subsumed unit extension and paramodulation. The performance of each theorem prover was determined experimentally on twenty-four selected problems, measured in terms of memory use and execution time, with the ME-TP used as the basis for comparison between the five theorem provers.
The results show that none of the theorem provers consistently performs better than the others. Two of the selected problems were not proved by SL-TP, and one problem was not proved by SLM-TP, due to memory problems. The ME-TP, GC-TP, and SLM5-TP proved all the selected problems. On some problems the ME-TP and GC-TP performed better than SLM5-TP; however, the ME-TP and GC-TP had difficulties on some problems on which SLM5-TP performed well.
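    The strategies compared above share a common core: linear resolution, where each step resolves the current centre clause against an input clause or one of its own ancestors (ancestor resolution). A minimal propositional sketch of that core, with hypothetical names and a naive selection function (none of the cited implementations):

    ```python
    # Propositional linear resolution with ancestor resolution: derive the
    # empty clause from a top clause by resolving on one selected literal
    # per step against an input clause or an ancestor centre clause.
    def linear_refute(center, inputs, ancestors=(), depth=8):
        """Return True if `center` can be refuted within `depth` steps."""
        if not center:
            return True                       # empty clause: refutation found
        if depth == 0:
            return False                      # depth bound exceeded
        lit = next(iter(center))              # naive selection function
        comp = lit[1:] if lit.startswith("~") else "~" + lit
        # Side clauses: input clauses plus ancestors (ancestor resolution).
        for side in list(inputs) + list(ancestors):
            if comp in side:
                resolvent = (center - {lit}) | (side - {comp})
                if linear_refute(frozenset(resolvent), inputs,
                                 ancestors + (center,), depth - 1):
                    return True
        return False

    # Refute ~p given {p, q} and {~q}: resolve to {q}, then to the empty clause.
    inputs = [frozenset({"p", "q"}), frozenset({"~q"})]
    print(linear_refute(frozenset({"~p"}), inputs))  # True
    ```

    The restrictions listed in the abstract (merging, truncation, tautology checks, and so on) all prune this same search space in different ways.
    
    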

    A connectionist representation of first-order formulae with dynamic variable binding

    The relationship between symbolicism and connectionism has been one of the major issues in recent Artificial Intelligence research. An increasing number of researchers from each side have tried to adopt desirable characteristics of the other. These efforts have produced a number of different strategies for interfacing connectionist and symbolic AI. One of them is connectionist symbol processing, which attempts to replicate symbol-processing functionalities using connectionist components. In this direction, this thesis develops a connectionist inference architecture which performs standard symbolic inference on a subclass of first-order predicate calculus. Our primary interest is in understanding how formulae described in a limited form of first-order predicate calculus may be implemented using a connectionist architecture. Our chosen knowledge representation scheme is a subset of first-order Horn clause expressions, that is, a set of universally quantified expressions in first-order predicate calculus. As a focus of attention we develop techniques for compiling first-order Horn clause expressions into a connectionist network. This offers practical benefits but also forces limitations on the scope of the compiled system, since we are, in fact, merging an interpreter into the connectionist networks. The compilation process has to take into account not only the first-order Horn clause expressions themselves but also the strategy which we intend to use for drawing inferences from them. Thus, this thesis explores the extent to which this type of translation can build a connectionist inference model that accommodates the desired symbolic inference. This work first involves constructing efficient connectionist mechanisms to represent basic symbol components, dynamic bindings, and basic symbolic inference procedures, and devising a set of algorithms which automatically translates input descriptions into neural networks using these connectionist mechanisms.
These connectionist mechanisms are built by taking an existing temporal synchrony mechanism and extending it to obtain the features needed to represent and manipulate basic symbol structures. The existing synchrony mechanism represents dynamic bindings very efficiently using temporally synchronous activity between neuron elements, but it has fundamental limitations in supporting standard symbolic inference; the extension addresses these limitations. The ability of the connectionist inference model was tested using various types of first-order Horn clause expressions. The results showed that the proposed connectionist inference model was able to encode significant sets of first-order Horn clause expressions and replicated basic symbolic styles of inference in a connectionist manner. The system successfully demonstrated not only forward chaining but also backward chaining over the networks encoding the input expressions. The results, however, also showed that implementing a connectionist mechanism for full unification among groups of unifying arguments in rules, or for encoding some types of rules, is difficult to achieve in a purely connectionist manner and needs additional mechanisms. In addition, some difficult issues, such as encoding rules with recursive definitions, remain untouched.
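    The temporal synchrony idea above can be sketched very simply: an argument node and an entity node are dynamically bound when they fire in the same phase of an oscillation cycle, so the number of distinct phases bounds the number of simultaneous bindings. The node names below are illustrative, not taken from the thesis.

    ```python
    # Toy model of temporal-synchrony variable binding: each (argument,
    # entity) pair is assigned a distinct firing phase; two nodes represent
    # the same binding iff they fire synchronously (same phase).
    def bind(assignments, n_phases):
        """Assign each (argument, entity) pair its own phase slot."""
        phases = {}
        for phase, (arg, entity) in enumerate(assignments):
            if phase >= n_phases:
                # The phase budget caps how many bindings can coexist.
                raise ValueError("too many simultaneous bindings")
            phases[arg] = phase
            phases[entity] = phase
        return phases

    def bound_together(phases, a, b):
        """Synchronous firing is what 'binding' means in this scheme."""
        return phases.get(a) == phases.get(b)

    phases = bind([("give.giver", "John"), ("give.recipient", "Mary")],
                  n_phases=10)
    print(bound_together(phases, "give.giver", "John"))   # True
    print(bound_together(phases, "give.giver", "Mary"))   # False
    ```

    The limitation noted in the abstract is visible even here: synchrony can say that two slots hold the same filler, but full unification (propagating bindings through nested structure) needs machinery beyond phase matching.
    
    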

    Enhance DBMS capabilities using semantic data modelling approach.

    by Yip Wai Man.
    Thesis (M.Phil.)--Chinese University of Hong Kong, 1990.
    Bibliography: leaves 132-135.

    ABSTRACT
    ACKNOWLEDGEMENTS
    PART I
    Chapter 1 --- OVERVIEW ON SEMANTIC DATA MODELLING APPROACH --- p.1
    Chapter 2 --- SCOPE OF RESEARCH --- p.4
    Chapter 3 --- CONCEPTUAL STRUCTURE OF SAM* --- p.7
    Chapter 3.1 --- Concepts and Associations --- p.7
    Chapter 3.1.1 --- Membership Association --- p.8
    Chapter 3.1.2 --- Aggregation Association --- p.8
    Chapter 3.1.3 --- Generalization Association --- p.9
    Chapter 3.1.4 --- Interaction Association --- p.10
    Chapter 3.1.5 --- Composition Association --- p.11
    Chapter 3.1.6 --- Cross-Product Association --- p.12
    Chapter 3.1.7 --- Summary Association --- p.13
    Chapter 3.2 --- An Example --- p.14
    Chapter 3.3 --- Occurrences --- p.15
    PART II
    Chapter 4 --- SYSTEM OVERVIEW --- p.17
    Chapter 4.1 --- System Objectives --- p.17
    Chapter 4.1.1 --- Data Level --- p.17
    Chapter 4.1.2 --- Meta-Data Level --- p.18
    Chapter 4.2 --- System Characteristics --- p.19
    Chapter 4.3 --- Design Considerations --- p.20
    Chapter 5 --- IMPLEMENTATION CONSIDERATIONS --- p.23
    Chapter 5.1 --- Introduction --- p.23
    Chapter 5.2 --- Data Definition Language for Schema --- p.24
    Chapter 5.3 --- Construction of Directed Acyclic Graph --- p.27
    Chapter 5.4 --- Query Manipulation Language --- p.28
    Chapter 5.4.1 --- Semantic Manipulation Language --- p.29
    Chapter 5.4.1.1 --- Locate Concepts --- p.30
    Chapter 5.4.1.2 --- Retrieve Information About Concepts --- p.30
    Chapter 5.4.1.3 --- Find a Path Between Two Concepts --- p.31
    Chapter 5.4.2 --- Occurrence Manipulation Language --- p.32
    Chapter 5.5 --- Examples --- p.35
    Chapter 6 --- RESULTS AND DISCUSSIONS --- p.41
    Chapter 6.1 --- Allow Non-Homogeneity of Facts about Entities --- p.41
    Chapter 6.2 --- Field Name is Information --- p.42
    Chapter 6.3 --- Description of Group of Information --- p.43
    Chapter 6.4 --- Explicit Description of Interaction --- p.43
    Chapter 6.5 --- Information about Entities --- p.44
    Chapter 6.6 --- Automatically Joining Tables --- p.45
    Chapter 6.7 --- Automatically Union Tables --- p.45
    Chapter 6.8 --- Automatically Select Tables --- p.46
    Chapter 6.9 --- Ambiguity --- p.47
    Chapter 6.10 --- Normalization --- p.47
    Chapter 6.11 --- Update --- p.50
    PART III
    Chapter 7 --- SCHEMA VERIFICATION --- p.55
    Chapter 7.1 --- Introduction --- p.55
    Chapter 7.2 --- Need of Schema Verification --- p.57
    Chapter 7.3 --- Integrity Constraint Handling Vs Schema Verification --- p.58
    Chapter 8 --- AUTOMATIC THEOREM PROVING --- p.60
    Chapter 8.1 --- Overview --- p.60
    Chapter 8.2 --- A Discussion on Some Automatic Theorem Proving Methods --- p.61
    Chapter 8.2.1 --- Resolution --- p.61
    Chapter 8.2.2 --- Natural Deduction --- p.63
    Chapter 8.2.3 --- Tableau Proof Methods --- p.65
    Chapter 8.2.4 --- Connection Method --- p.67
    Chapter 8.3 --- Comparison of Automatic Theorem Proving Methods --- p.70
    Chapter 8.3.1 --- Proof Procedure --- p.70
    Chapter 8.3.2 --- Overhead --- p.70
    Chapter 8.3.3 --- Unification --- p.71
    Chapter 8.3.4 --- Heuristics --- p.72
    Chapter 8.3.5 --- Getting Lost --- p.73
    Chapter 8.4 --- The Choice of Tool for Schema Verification --- p.73
    Chapter 9 --- IMPROVEMENT OF CONNECTION METHOD --- p.77
    Chapter 9.1 --- Motivation of Improving Connection Method --- p.77
    Chapter 9.2 --- Redundancy Handled by the Original Algorithm --- p.78
    Chapter 9.3 --- Design Philosophy of the Improved Version --- p.82
    Chapter 9.4 --- Primary Connection Method Algorithm --- p.83
    Chapter 9.5 --- AND/OR Connection Graph --- p.89
    Chapter 9.6 --- Graph Traversal Procedure --- p.91
    Chapter 9.7 --- Eliminating Redundancy Using AND/OR Connection Graph --- p.94
    Chapter 9.8 --- Further Improvement on Graph Traversal --- p.96
    Chapter 9.9 --- Comparison with Original Connection Method Algorithm --- p.97
    Chapter 9.10 --- Application of Connection Method to Schema Verification --- p.98
    Chapter 9.10.1 --- Express Constraint in Well Formed Formula --- p.98
    Chapter 9.10.2 --- Convert Formula into Negation Normal Form --- p.101
    Chapter 9.10.3 --- Verification --- p.101
    PART IV
    Chapter 10 --- FURTHER DEVELOPMENT --- p.103
    Chapter 10.1 --- Intelligent Front-End --- p.103
    Chapter 10.2 --- On Connection Method --- p.104
    Chapter 10.3 --- Many-Sorted Calculus --- p.104
    Chapter 11 --- CONCLUSION --- p.107
    APPENDICES
    Chapter A --- COMPARISON OF SEMANTIC DATA MODELS --- p.110
    Chapter B --- CONSTRUCTION OF OCCURRENCES --- p.111
    Chapter C --- SYNTAX OF DDL FOR THE SCHEMA --- p.113
    Chapter D --- SYNTAX OF SEMANTIC MANIPULATION LANGUAGE --- p.116
    Chapter E --- TESTING SCHEMA FOR FUND INVESTMENT DBMS --- p.118
    Chapter F --- TESTING SCHEMA FOR STOCK INVESTMENT DBMS --- p.121
    Chapter G --- CONNECTION METHOD --- p.124
    Chapter H --- COMPARISON BETWEEN RESOLUTION AND CONNECTION METHOD --- p.128
    REFERENCES --- p.132
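    Chapters 8 and 9 of the thesis above centre on the connection method as the tool for schema verification. As a rough illustration of the underlying idea (a propositional toy, not the thesis's improved algorithm): a formula in disjunctive normal form is valid iff every path through its clause matrix, taking one literal per clause, contains a connection, i.e. a complementary pair of literals.

    ```python
    # Toy propositional connection method: check every path through a DNF
    # clause matrix for a complementary pair. Exponential in the number of
    # clauses; the thesis's AND/OR connection graph targets this redundancy.
    from itertools import product

    def complementary(a, b):
        return a == "~" + b or b == "~" + a

    def valid_matrix(matrix):
        """matrix: list of clauses (lists of literals), read as a DNF.
        Valid iff every path (one literal per clause) has a connection."""
        for path in product(*matrix):
            if not any(complementary(a, b)
                       for i, a in enumerate(path) for b in path[i + 1:]):
                return False          # an open (connection-free) path
        return True

    # (p & q) | ~p | ~q is valid: every path hits a complementary pair.
    print(valid_matrix([["p", "q"], ["~p"], ["~q"]]))  # True
    print(valid_matrix([["p"], ["q"]]))                # False
    ```

    Enumerating all paths explicitly, as done here, is what the improved graph-traversal procedure of chapter 9 is designed to avoid.
    
    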