24 research outputs found
Foundations of Software Science and Computation Structures
This open access book constitutes the proceedings of the 23rd International Conference on Foundations of Software Science and Computation Structures, FOSSACS 2020, which took place in Dublin, Ireland, in April 2020, and was held as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2020. The 31 regular papers presented in this volume were carefully reviewed and selected from 98 submissions. The papers cover topics such as categorical models and logics; language theory, automata, and games; modal, spatial, and temporal logics; type theory and proof theory; concurrency theory and process calculi; rewriting theory; semantics of programming languages; program analysis, correctness, transformation, and verification; logics of programming; software specification and refinement; models of concurrent, reactive, stochastic, distributed, hybrid, and mobile systems; emerging models of computation; logical aspects of computational complexity; models of software security; and logical foundations of databases.
Programming Languages and Systems
This open access book constitutes the proceedings of the 31st European Symposium on Programming, ESOP 2022, which was held during April 5-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 21 regular papers presented in this volume were carefully reviewed and selected from 64 submissions. They deal with fundamental issues in the specification, design, analysis, and implementation of programming languages and systems.
Inventing Intelligence: On the History of Complex Information Processing and Artificial Intelligence in the United States in the Mid-Twentieth Century
In the mid-1950s, researchers in the United States melded formal theories of problem solving and intelligence with another powerful new tool for control: the electronic digital computer. Several branches of western mathematical science emerged from this nexus, including computer science (1960s–), data science (1990s–) and artificial intelligence (AI). This thesis offers an account of the origins and politics of AI in the mid-twentieth century United States, which focuses on its imbrications in systems of societal control. In an effort to denaturalize the power relations upon which the field came into being, I situate AI’s canonical origin story in relation to the structural and intellectual priorities of the U.S. military and American industry during the Cold War, circa 1952 to 1961.
This thesis offers a detailed and comparative account of the early careers, research interests, and key outputs of four researchers often credited with laying the foundations for AI and machine learning—Herbert A. Simon, Frank Rosenblatt, John McCarthy and Marvin Minsky. It chronicles the distinct ways in which each sought to formalise and simulate human mental behaviour using digital electronic computers. Rather than assess their contributions as discontinuous with what came before, as in mythologies of AI's genesis, I establish continuities with, and borrowings from, management science and operations research (Simon), Hayekian economics and instrumentalist statistics (Rosenblatt), automatic coding techniques and pedagogy (McCarthy), and cybernetics (Minsky), along with the broadscale mobilization of Cold War-era civilian-led military science generally.
I assess how Minsky’s 1961 paper 'Steps Toward Artificial Intelligence' simultaneously consolidated and obscured these entanglements as it set in motion an initial research agenda for AI in the following two decades. I argue that mind-computer metaphors, and research in complex information processing generally, played an important role in normalizing the small- and large-scale structuring of social behaviour using mathematics in the United States from the second half of the twentieth century onward.
Pseudo-contractions as Gentle Repairs
Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed over the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
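As a rough illustration of the shared idea, here is a minimal propositional sketch. It is not the paper's construction: the paper works over ontologies, and the particular weakening used here (replacing a culprit alpha by alpha-or-not-phi) is just one simple pseudo-contraction chosen for brevity; all names are illustrative.

```python
from itertools import product

ATOMS = ["p", "q"]

def holds(formula, valuation):
    """Evaluate a formula given as a Python boolean expression string."""
    return eval(formula, {}, dict(valuation))

def entails(kb, phi):
    """kb |= phi iff every valuation satisfying all of kb satisfies phi."""
    for bits in product([False, True], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, bits))
        if all(holds(f, v) for f in kb) and not holds(phi, v):
            return False
    return True

def justification(kb, phi):
    """Greedily shrink kb to one minimal subset that still entails phi."""
    just = list(kb)
    for alpha in list(just):
        rest = [f for f in just if f is not alpha]
        if entails(rest, phi):
            just = rest
    return just

def pseudo_contract(kb, phi):
    """Weaken (rather than delete) one culprit per justification until phi
    is no longer entailed. Weakening alpha to (alpha or not phi) disables
    the chosen justification and creates no new ones, so the loop ends."""
    kb = list(kb)
    while entails(kb, phi):
        alpha = justification(kb, phi)[0]
        kb[kb.index(alpha)] = f"(({alpha}) or not ({phi}))"
    return kb

# {p, p -> q} entails q; the repair weakens a sentence instead of dropping it.
print(pseudo_contract(["p", "(not p) or q"], "q"))
```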
The Significance of Evidence-based Reasoning for Mathematics, Mathematics Education, Philosophy and the Natural Sciences
In this multi-disciplinary investigation we show how an evidence-based perspective of quantification---in terms of algorithmic verifiability and algorithmic computability---admits evidence-based definitions of well-definedness and effective computability, which yield two unarguably constructive interpretations of the first-order Peano Arithmetic PA---over the structure N of the natural numbers---that are complementary, not contradictory. The first yields the weak, standard interpretation of PA over N, which is well-defined with respect to assignments of algorithmically verifiable Tarskian truth values to the formulas of PA under the interpretation. The second yields a strong, finitary interpretation of PA over N, which is well-defined with respect to assignments of algorithmically computable Tarskian truth values to the formulas of PA under the interpretation. We situate our investigation within a broad analysis of quantification vis-à-vis:
* Hilbert's epsilon-calculus
* Goedel's omega-consistency
* The Law of the Excluded Middle
* Hilbert's omega-Rule
* An Algorithmic omega-Rule
* Gentzen's Rule of Infinite Induction
* Rosser's Rule C
* Markov's Principle
* The Church-Turing Thesis
* Aristotle's particularisation
* Wittgenstein's perspective of constructive mathematics
* An evidence-based perspective of quantification.
By showing how these are formally inter-related, we highlight the fragility of both the persisting, theistic, classical/Platonic interpretation of quantification grounded in Hilbert's epsilon-calculus, and the persisting, atheistic, constructive/Intuitionistic interpretation of quantification rooted in Brouwer's belief that the Law of the Excluded Middle is non-finitary. We then consider some consequences for mathematics, mathematics education, philosophy, and the natural sciences of an agnostic, evidence-based, finitary interpretation of quantification that challenges classical paradigms in all these disciplines.
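The distinction the abstract turns on can be restated informally as follows (my paraphrase, not the author's exact wording):

```latex
\begin{itemize}
  \item $[F(x)]$ is \emph{algorithmically verifiable} under an interpretation
        if, for any given $n \in \mathbb{N}$, there is some algorithm that
        correctly decides the truth of each formula in the finite sequence
        $F(1), F(2), \ldots, F(n)$.
  \item $[F(x)]$ is \emph{algorithmically computable} if there is a
        \emph{single} algorithm that correctly decides the truth of $F(n)$
        for \emph{every} $n \in \mathbb{N}$.
\end{itemize}
% Every algorithmically computable formula is algorithmically verifiable;
% assigning truth by the first notion gives the weak interpretation, and by
% the second the strong, finitary interpretation described in the abstract.
```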
Physical (A)Causality: Determinism, Randomness and Uncaused Events
Physical indeterminism; Randomness in physics; Physical random number generators; Physical chaos; Self-reflexive knowledge; Acausality in physics; Irreducible randomness
Key agreement: security / division
Some key agreement schemes, such as Diffie--Hellman key agreement, reduce to Rabi--Sherman key agreement, in which Alice sends $ab$ to Charlie, Charlie sends $bc$ to Alice, and they agree on key $abc$, where multiplicative notation here indicates some specialized associative binary operation.
All non-interactive key agreement schemes, where each peer independently determines a single delivery to the other, reduce to this case, because the ability to agree implies the existence of an associative operation. By extending the associative operation’s domain, the key agreement scheme can be enveloped into a mathematical ring, such that all cryptographic values are ring elements, and all key agreement computations are ring multiplications. (A smaller envelope, a semigroup instead of a ring, is also possible.)
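For concreteness, here is a toy sketch of Rabi--Sherman agreement under illustrative assumptions of my own: the associative operation is 2x2 matrix multiplication over a prime field, and all variable names are hypothetical. It is deliberately insecure, as the division sketch further below shows.

```python
import secrets

P = 2**61 - 1  # Mersenne prime modulus for a toy coefficient field

def mat_mul(X, Y):
    """Multiply 2x2 matrices over Z_P: associative (all that agreement
    needs) and generically non-commutative."""
    (a, b), (c, d) = X
    (e, f), (g, h) = Y
    return (((a*e + b*g) % P, (a*f + b*h) % P),
            ((c*e + d*g) % P, (c*f + d*h) % P))

def rand_mat():
    return ((secrets.randbelow(P), secrets.randbelow(P)),
            (secrets.randbelow(P), secrets.randbelow(P)))

b = rand_mat()                  # public middle value
a, c = rand_mat(), rand_mat()   # Alice's and Charlie's secrets

ab = mat_mul(a, b)              # Alice -> Charlie
bc = mat_mul(b, c)              # Charlie -> Alice

k_alice, k_charlie = mat_mul(a, bc), mat_mul(ab, c)
assert k_alice == k_charlie     # a(bc) == (ab)c by associativity
```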
Security relies on the difficulty of division: here, meaning an operator $/$ such that $(ab)/b = a$. Security also relies on the difficulty of the less familiar wedge operation $\wedge$, with $(ab) \wedge (bc) = abc$.
When Rabi--Sherman key agreement is instantiated as Diffie--Hellman key agreement, its multiplication amounts to modular exponentiation; its division amounts to the discrete logarithm problem; and the wedge operation amounts to the computational Diffie--Hellman problem.
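A sketch of that instantiation, with toy parameters of my choosing (a real deployment would use a standardized group or an elliptic curve):

```python
import secrets

# Toy parameters: 2**127 - 1 is prime but far too small for real use.
p, g = 2**127 - 1, 3

a = 2 + secrets.randbelow(p - 3)  # Alice's secret exponent
c = 2 + secrets.randbelow(p - 3)  # Charlie's secret exponent

A = pow(g, a, p)  # plays the role of "ab"
C = pow(g, c, p)  # plays the role of "bc"

k_alice, k_charlie = pow(C, a, p), pow(A, c, p)  # both equal g^(ac) mod p
assert k_alice == k_charlie

# Division here is recovering a from (g, A): the discrete logarithm problem.
# The wedge is computing g^(ac) from (A, C) alone: the CDH problem.
```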
Ring theory is well-developed and implies efficient division algorithms in some specific rings, such as matrix rings over fields. Semigroup theory, though less widely known, also implies efficient division in specific semigroups, such as group-like semigroups.
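Continuing the hypothetical 2x2-matrix toy from the earlier sketch: over a field, division is just multiplication by an inverse, so an eavesdropper who saw $ab$ and $bc$ on the wire recovers the key efficiently (names carried over from that sketch).

```python
def mat_inv(M):
    """Invert a 2x2 matrix over Z_P (P prime) via the adjugate formula;
    pow(det, -1, P) raises ValueError in the rare case det == 0."""
    (a, b), (c, d) = M
    det = (a * d - b * c) % P
    inv_det = pow(det, -1, P)   # modular inverse (Python 3.8+)
    return (((d * inv_det) % P, (-b * inv_det) % P),
            ((-c * inv_det) % P, (a * inv_det) % P))

# The eavesdropper sees ab and knows the public b: "divide" by b.
recovered_a = mat_mul(ab, mat_inv(b))
assert recovered_a == a
# With a in hand, the intercepted bc yields the agreed key abc.
assert mat_mul(recovered_a, bc) == k_alice
```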
The rarity of key agreement schemes with well-established security suggests that easy multiplication with difficult division (and wedges) is elusive.
Reduction of key agreement to ring or semigroup multiplication is not a panacea for cryptanalysis. Nonetheless, novel proposals for key agreement perhaps ought to run the gauntlet of a checklist for vulnerability to well-known division strategies that generalize across several forms of multiplication. Ambitiously applying this process of elimination to a plethora of diverse rings or semigroups might also, if only by a fluke, leave standing a few promising schemes, which might then deserve a more focused cryptanalysis.