44 research outputs found

    Towards Automated Reasoning in Herbrand Structures

    Herbrand structures have the advantage, computationally speaking, of being guided by the definability of all elements in them. A salient feature of the logics induced by them is that they internally exhibit the induction scheme, thus providing a congenial, computationally-oriented framework for formal inductive reasoning. Nonetheless, their enhanced expressivity renders any effective proof system for them incomplete. Furthermore, the fact that they are not compact poses yet another proof-theoretic challenge. This paper offers several layers for coping with the inherent incompleteness and non-compactness of these logics. First, two types of infinitary proof systems are introduced—one of infinite width and one of infinite height—which manipulate infinite sequents and are sound and complete for the intended semantics. The restriction of these systems to finite sequents induces a completeness result for finite entailments. Then, in search of effectiveness, two finite approximations of these systems are presented and explored. Interestingly, the approximation of the infinite-width system via an explicit induction scheme turns out to be weaker than the effective cyclic fragment of the infinite-height system.
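    The induction scheme that these logics internalize can be illustrated, for a signature with constant 0 and unary successor s, by the standard rule (a textbook illustration, not a rule taken from the paper itself):

    ```latex
    \frac{\varphi(0) \qquad \forall x\,\bigl(\varphi(x) \rightarrow \varphi(s(x))\bigr)}{\forall x\,\varphi(x)}
    ```

    In a Herbrand structure every element is the denotation of some closed term, which is why this scheme holds internally rather than needing to be postulated.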

    Strong Types for Direct Logic

    This article follows on the introductory article “Direct Logic for Intelligent Applications” [Hewitt 2017a]. Strong Types enable new mathematical theorems to be proved, including the Formal Consistency of Mathematics. Also, Strong Types are extremely important in Direct Logic because they block all known paradoxes [Cantini and Bruni 2017]. Blocking known paradoxes makes Direct Logic safer for use in Intelligent Applications by preventing security holes. Inconsistency Robustness is the performance of information systems with pervasively inconsistent information. Inconsistency Robustness of the community of professional mathematicians is their performance repeatedly repairing contradictions over the centuries. In the Inconsistency Robustness paradigm, deriving contradictions has been a progressive development and not a “game stopper.” Contradictions can be helpful instead of being something to be “swept under the rug” by denying their existence, which has been repeatedly attempted by authoritarian theoreticians (beginning with some Pythagoreans). Such denial has delayed mathematical development. This article reports how considerations of Inconsistency Robustness have recently influenced the foundations of mathematics for Computer Science, continuing a tradition of developing the sociological basis for foundations. Mathematics here means the common foundation of all classical mathematical theories, from Euclid to the mathematics used to prove Fermat's Last Theorem [McLarty 2010]. Direct Logic provides categorical axiomatizations of the Natural Numbers, Real Numbers, Ordinal Numbers, Set Theory, and the Lambda Calculus, meaning that, up to a unique isomorphism, there is only one model that satisfies the respective axioms. Good evidence for the consistency of Classical Direct Logic derives from how it blocks the known paradoxes of classical mathematics. Humans have spent millennia devising paradoxes for classical mathematics.
Having a powerful system like Direct Logic is important in computer science because computers must be able to formalize all logical inferences (including inferences about their own inference processes) without requiring recourse to human intervention. Any inconsistency in Classical Direct Logic would be a potential security hole because it could be used to cause computer systems to adopt invalid conclusions. After [Church 1934], logicians faced the following dilemma:
• 1st-order theories cannot be powerful lest they fall into inconsistency because of Church’s Paradox.
• 2nd-order theories contravene the philosophical doctrine that theorems must be computationally enumerable.
The above issues can be addressed by requiring Mathematics to be strongly typed so that:
• Mathematics self-proves that it is “open” in the sense that theorems are not computationally enumerable.
• Mathematics self-proves that it is formally consistent.
• Strong mathematical theories for Natural Numbers, Ordinals, Set Theory, the Lambda Calculus, Actors, etc. are inferentially decidable, meaning that every true proposition is provable and every proposition is either provable or disprovable. Furthermore, theorems of these theories are not enumerable by a provably total procedure.
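The claim that theorems are not enumerable by a provably total procedure rests on a diagonal argument. A minimal sketch of that argument (an illustration of the general diagonalization pattern, not code from the article; the toy enumeration `funcs` is hypothetical) shows that any total enumeration of total functions must miss some total function:

```python
# Sketch of the diagonal argument: given a purported total enumeration e
# of total functions on the naturals, diag(n) = e(n)(n) + 1 is itself total
# but differs from e(i) at input i, so diag cannot appear in the enumeration.

def make_diag(enumeration):
    """Return the diagonal function for a given enumeration of total functions."""
    def diag(n):
        return enumeration(n)(n) + 1
    return diag

# Toy enumeration of three total functions, standing in for a purported
# complete enumeration (the argument shows no such enumeration is complete).
funcs = [lambda n: 0, lambda n: n, lambda n: n * n]
enum = lambda i: funcs[i % len(funcs)]

diag = make_diag(enum)
# diag differs from enum(i) at input i for every index in the toy enumeration.
assert all(diag(i) != enum(i)(i) for i in range(len(funcs)))
```

The same pattern, applied to a provably total enumerator of theorems, yields the "openness" property the bullets above assert.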

    A signal failure? The organisation and management of British railways 1948-1964

    This study offers a reassessment of the organisation and management of British Railways from 1948 to 1964. In examining the impact of the 1948 nationalisation, it considers whether the under-studied alternatives proposed by the railway companies might have been more successful, and whether the Labour government's political imperatives resulted in inadequate preparation for public ownership and modernisation of the transport system. Using an extensive range of government files, including records not available for earlier studies, it argues that the slow process of modernisation was less the consequence of government intervention or financial restrictions, or of general economic conditions, than of deficiencies in railway management - division of authority, weak strategic planning, lack of financial control, ineffective implementation of policies, and inability to alter entrenched attitudes in the workforce and among managers themselves. These management problems resulted in the expensive failure of the 1955 Modernisation Plan. The Conservative government, previously supportive (if with misgivings) of the railway management, now had no option but to impose its own review of the railway system, leading to the controversial 1964 Beeching Report. The Report and the implementation of its recommendations are examined with the purpose of assessing whether Beeching deserves his continuing denigration. The main conclusions are that nationalisation was mishandled, and that thereafter management failings made further government intervention inevitable.

    Theory of abstraction


    Making and breaking families – reading queer reproductions, stratified reproduction and reproductive justice together

    In February 2016 we convened a workshop at UC Berkeley, Making Families: Transnational Surrogacy, Queer Kinship, and Reproductive Justice. We were seeking to bring into direct conversation three theoretical frameworks that have each transformed scholarship and influenced practice around transnational surrogacy and reproduction: ‘stratified reproduction’, ‘reproductive justice’, and ‘queer reproductions’. Given the different intellectual and activist genealogies of these three fields, our aim in the workshop and in this resulting symposium issue was twofold: firstly, to draw out the explicit and implicit contributions of these three areas to understanding and helping shape the changing landscape of transnational surrogacy and assisted reproductive technology (ART), and secondly, to work through apparent tensions among these three approaches so as to forge intellectual and political solidarities that can strengthen scholarship and influence policy.
    Wellcome Trust (grant no. 100606 and grant no. 209829/Z/17/Z); European Commission (FP7-PEOPLE-2013-IOF, grant no. 629341); Spanish Ministry of Economy, Competitiveness and Industry (grant no. CSO2015-64551-C3-1-R)

    A neural-symbolic system for temporal reasoning with application to model verification and learning

    The effective integration of knowledge representation, reasoning and learning into a robust computational model is one of the key challenges in Computer Science and Artificial Intelligence. In particular, temporal models have been fundamental in describing the behaviour of Computational and Neural-Symbolic Systems. Furthermore, knowledge acquisition of correct descriptions of the desired system’s behaviour is a complex task in several domains. Several efforts have been directed towards the development of tools that are capable of learning, describing and evolving software models. This thesis contributes to two major areas of Computer Science, namely Artificial Intelligence (AI) and Software Engineering (SE). Under an AI perspective, we present a novel neural-symbolic computational model capable of representing and learning temporal knowledge in recurrent networks. The model works in an integrated fashion. It enables the effective representation of temporal knowledge, the adaptation of temporal models to a set of desirable system properties and effective learning from examples, which in turn can lead to symbolic temporal knowledge extraction from the corresponding trained neural networks. The model is sound, from a theoretical standpoint, but is also tested in a number of case studies. An extension to the framework is shown to tackle aspects of verification and adaptation under the SE perspective. As regards verification, we make use of established techniques for model checking, which allow the verification of properties described as temporal models and return counter-examples whenever the properties are not satisfied. Our neural-symbolic framework is then extended to deal with different sources of information. This includes the translation of model descriptions into the neural structure, the evolution of such descriptions by learning from counter-examples, and also the learning of new models from simple observation of their behaviour.
In summary, we believe the thesis describes a principled methodology for temporal knowledge representation, learning and extraction, shedding new light on predictive temporal models, not only from a theoretical standpoint, but also with respect to a potentially large number of applications in AI, Neural Computation and Software Engineering, where temporal knowledge plays a fundamental role.
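The translation of temporal rules into recurrent networks can be sketched as follows. This is a hypothetical minimal illustration in the spirit of neural-symbolic translation with threshold units, not the thesis's actual algorithm; the rule encoding and the `step` function are assumptions made for the example. It encodes the temporal rule "q holds at time t+1 if p holds at time t" as a weight from p to q in a recurrent update:

```python
# Hypothetical sketch: one temporal rule, p(t) -> q(t+1), encoded as a
# recurrent step over threshold units. Each atom fires at t+1 iff its
# weighted input from the state at t exceeds the threshold.

def step(state, weights, threshold=0.5):
    """One recurrent update of all atoms given the current truth assignment."""
    return {
        atom: 1 if sum(weights[atom].get(src, 0.0) * val
                       for src, val in state.items()) > threshold else 0
        for atom in weights
    }

# Encode p(t) -> q(t+1): q receives weight 1.0 from p; p has no incoming rule.
weights = {"p": {}, "q": {"p": 1.0}}

state = {"p": 1, "q": 0}      # p holds at time t
state = step(state, weights)  # one time step later, q fires and p decays
```

Unrolling `step` over a run of the network is what allows symbolic temporal knowledge to be read back off the trained weights, and what a model checker would inspect when verifying properties of the encoded model.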