
    Proof-graphs for Minimal Implicational Logic

    It is well known that the size of propositional classical proofs can be huge: proof-theoretical studies have established exponential gaps between normal (or cut-free) proofs and their non-normal counterparts. The aim of this work is to reduce the weight of propositional deductions. We present the formalism of proof-graphs for purely implicational logic: graphs of a specific shape intended to capture the logical structure of a deduction. The advantage of this formalism is that formulas can be shared in the reduced proof. In the present paper we give a precise definition of proof-graphs for minimal implicational logic, together with a normalization procedure for these proof-graphs. In contrast to standard tree-like formalisms, our normalization does not increase the number of nodes when applied to the corresponding minimal proof-graph representations. Comment: In Proceedings DCM 2013, arXiv:1403.768
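    The key advantage claimed above, that a graph representation lets identical formulas be shared rather than duplicated, can be sketched with a few lines of hash-consing. This is an illustrative construction, not the paper's actual proof-graph definition; all names are invented for the example.

```python
# Sketch of formula sharing for purely implicational logic.
# Structurally identical subformulas map to one node (hash-consing),
# so a proof represented as a DAG over these nodes never stores the
# same formula twice. Illustrative only, not the paper's formalism.

_pool = {}

def atom(name):
    """Propositional variable; one shared node per name."""
    return _pool.setdefault(('atom', name), ('atom', name))

def imp(a, b):
    """Implication a -> b; equal formulas share one node."""
    return _pool.setdefault(('imp', a, b), ('imp', a, b))

p, q = atom('p'), atom('q')
f1 = imp(p, imp(q, p))   # p -> (q -> p)
f2 = imp(p, imp(q, p))   # built again from scratch

assert f1 is f2          # same node: stored only once in the pool
```

    Because every occurrence of a formula is the same node, a normalization step that would copy a subproof in a tree can instead add an edge to an existing node, which is the intuition behind the non-increasing node count.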

    Width and size of regular resolution proofs

    This paper discusses the minimum width of a regular resolution refutation of a set of clauses. The main result shows that there are examples having small regular resolution refutations for which any regular refutation must contain a large clause. This contrasts with the corresponding results for general resolution refutations. Comment: The article was reformatted using the style file for Logical Methods in Computer Science

    The Systematic Method for Constructing the IDEA BANK Based on the EBL

    This paper describes a method to construct the IDEA BANK automatically. The IDEA BANK is a database of "function-structure modules" used in systematic conceptual design from a Value Engineering perspective. The method, based on the machine-learning technique of explanation-based learning (EBL), was implemented and evaluated for the IDEA BANK on a SUN workstation. The practical implementation of IDEA BANK acquisition is discussed after elucidating the problems and solutions of the EBL technique in engineering design. In the IDEA BANK system, the structural features of an existing article are analyzed by hierarchically organized, domain-specific knowledge to yield a systematic explanation of how it functions and attains its design goals. The explanation results in a generalized version of the Functional Diagram used in Value Engineering, from which "function-structure modules" can be extracted systematically.

    On the Mutual Definability of the Notions of Entailment, Rejection, and Inconsistency

    In this paper, two axiomatic theories, T− and T′, are constructed that are dual to Tarski's theory T+ (1930) of deductive systems based on classical propositional calculus. While in Tarski's theory T+ the primitive notion is the classical consequence function (entailment) Cn+, in the dual theory T− it is replaced by Słupecki's rejection consequence Cn−, and in the dual theory T′ by the family Incons of inconsistent sets. The author proves that the theories T+, T−, and T′ are equivalent.

    The development of subordinate clauses in German and Swedish as L2s: a theoretical and methodological comparison

    In this article, we aim to contribute to the debate about the use of subordination as a measure of language proficiency. We compare two theories of SLA—specifically, processability theory (PT; Pienemann, 1998) and dynamic systems theory (DST; de Bot, Lowie, & Verspoor, 2007)—and, more particularly, how each addresses the development of subordinate clauses. Whereas DST uses measures from the complexity, accuracy, and fluency (CAF) research tradition (see Housen & Kuiken, 2009), PT uses the emergence criterion to describe language development. We focus on the development of subordinate clauses and compare how subordination as such is acquired and how the processing procedures related to a specific subordinate-clause word order are acquired in the interlanguage (IL) of second language learners of German and Swedish. The learners' language use shows that the use of subordination (as measured by a subordination ratio) fluctuates extensively. From the beginning of data collection, all learners use subordinate clauses, but their use of subordinate clauses does not increase linearly over time, which is what DST predicts. When focusing on processability and the emergence of subordinate-clause word order, however, a clear linear developmental sequence can be observed, revealing a clear difference between nonacquisition and acquisition of the subordinate-clause word-order rules. Our learner data additionally reveal different behavior for lexical versus auxiliary or modal verbs.

    Language typology in the UNITYP model: paper presented at the XIVth International Congress of Linguists, August 1987, Berlin, DDR, Plenary Session on Typology

    The aim of this contribution is to embed the question of an antinomy between "integral" vs. "partial" typology, inscribed as the topic of this plenary session, into the comprehensive framework of the dimensional model of the research group on language universals and typology (UNITYP). In this introductory section I shall evoke some cardinal points in the theory of linguistic typology as viewed "from outside", viz. on the basis of striking parallelisms with psychological typology. Section 2 will permit a brief look at the dimensional model of UNITYP. In section 3 I shall present an illustration of a typological treatment on the basis of one particular dimension. In section 4 I shall draw some conclusions, with special reference to the "integral vs. partial" antinomy.

    A definition of redundancy in relational databases

    The relational data model as proposed by Codd is a well-established method for data abstraction. Two essential aspects of this model are the definition of the data structure via the relation scheme and of the data semantics via data dependencies. Various classes of data dependencies have been studied in the past. In the presence of data dependencies, "update dependencies" (or anomalies) and "redundancy" may occur, as first observed by Codd. Normal forms have been proposed as a means to control update anomalies and redundancy. But as the notion of redundancy has never been formally defined, one cannot make any precise statement concerning the presence or absence of redundancy for a given design. In this paper we attempt to provide a formal definition of the notion of redundancy for the case of a single relation, respectively relation scheme. We first give a static semantic definition of redundancy and then present an operational analogue. Intuitively speaking, a relation r contains redundancy if some "part" of the information given in r can be "determined" from the "rest" of r; and a relation scheme with a given set of data dependencies admits redundancy if there is a relation belonging to this scheme that contains redundancy. The paper is organized in six sections. Section 1 contains the definition of the relational model that we use; we make use of partial "relations" that are built from constants and variables. In section 2 we present the semantic definition of redundancy. Section 3 introduces a class of data dependencies, namely implicational dependencies, and a chase procedure for partial relations. Section 4 gives an operational characterization of redundancy. The main theorem in this section is theorem 4.3. It states that a relation r in a class of relations sat(D) contains redundancy if there exists a partial relation q that "contains less information" than r and for which chase_D(q
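    The intuition above, that r contains redundancy when some part of r is determined by the rest, can be sketched for the simplest kind of dependency, a functional dependency. The relation, attributes, and helper below are invented for illustration and are not the paper's formal apparatus.

```python
# Hedged illustration: under the (invented) functional dependency
# Dept -> Loc, the second occurrence of 'Berlin' is redundant. If it
# were deleted, the rest of the relation plus the dependency would
# still determine its value, which is the paper's intuitive criterion.

r = [
    {'Emp': 'ann', 'Dept': 'sales', 'Loc': 'Berlin'},
    {'Emp': 'bob', 'Dept': 'sales', 'Loc': 'Berlin'},  # Loc forced by Dept -> Loc
]

def determined(rel, i, attr, lhs):
    """Is rel[i][attr] determined by the other tuples via the FD lhs -> attr?"""
    for j, t in enumerate(rel):
        if j != i and all(t[a] == rel[i][a] for a in lhs):
            return t[attr] == rel[i][attr]  # the FD forces this value
    return False

assert determined(r, 1, 'Loc', ['Dept'])       # redundancy present
assert not determined(r, 0, 'Emp', ['Dept'])   # Emp is not determined
```

    The paper's operational characterization generalizes this idea: instead of checking one functional dependency directly, it replaces a value by a variable and asks whether the chase under the dependency set D restores it.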