Shoggoth: A Formal Foundation for Strategic Rewriting
Rewriting is a versatile and powerful technique used in many domains. Strategic rewriting allows programmers to control the application of rewrite rules by composing individual rewrite rules into complex rewrite strategies. These strategies are semantically complex: they may be nondeterministic, they may raise errors that trigger backtracking, and they may not terminate. Given such semantic complexity, it is necessary to establish a formal understanding of rewrite strategies and to enable reasoning about them in order to answer questions such as: How do we know that a rewrite strategy terminates? How do we know that a rewrite strategy does not fail because we compose two incompatible rewrites? How do we know that a desired property holds after applying a rewrite strategy? In this paper, we introduce Shoggoth: a formal foundation for understanding, analysing, and reasoning about strategic rewriting that is capable of answering these questions. We provide a denotational semantics of System S, a core language for strategic rewriting, and prove its equivalence to our big-step operational semantics, which extends existing work by explicitly accounting for divergence. We further define a location-based weakest-precondition calculus to enable formal reasoning about rewriting strategies, and we prove this calculus sound with respect to the denotational semantics. We show how this calculus can be used in practice to reason about properties of rewriting strategies, including that they terminate, that they are well-composed, and that desired postconditions hold. The semantics and calculus are formalised in Isabelle/HOL, and all proofs are mechanised.
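For intuition, the combinators of a strategy language like System S can be sketched in a few lines. The following Python fragment (with an invented rewrite rule and combinator names; it is not the paper's semantics or its Isabelle/HOL formalization) shows sequencing, left-biased choice where failure triggers backtracking, and repetition, whose loop is exactly where the abstract's termination question arises:

```python
# A minimal sketch of strategy combinators in the style of System S
# (hypothetical rule and combinator names; not the paper's formal semantics).

FAIL = None  # a failed rule application; `choice` backtracks on it

def rule_add_zero(t):
    """Rewrite ('add', x, 0) -> x; fail on any other term."""
    if isinstance(t, tuple) and t[0] == 'add' and t[2] == 0:
        return t[1]
    return FAIL

def seq(s1, s2):
    """Apply s1 then s2; fail if either fails."""
    def s(t):
        r = s1(t)
        return FAIL if r is FAIL else s2(r)
    return s

def choice(s1, s2):
    """Try s1; on failure, backtrack and try s2 on the original term."""
    def s(t):
        r = s1(t)
        return s2(t) if r is FAIL else r
    return s

def try_(s1):
    """Apply s1 if it succeeds, otherwise leave the term unchanged."""
    return choice(s1, lambda t: t)

def repeat(s1):
    """Apply s1 until it fails -- may diverge for a non-terminating strategy."""
    def s(t):
        while True:
            r = s1(t)
            if r is FAIL:
                return t
            t = r
    return s

simplify = repeat(rule_add_zero)
print(simplify(('add', ('add', 7, 0), 0)))  # -> 7
```

The weakest-precondition calculus the paper develops is what lets one prove, for example, that `repeat` applied to a given rule cannot loop forever.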
LIPIcs, Volume 251, ITCS 2023, Complete Volume
Current and Future Challenges in Knowledge Representation and Reasoning
Knowledge Representation and Reasoning is a central, longstanding, and active
area of Artificial Intelligence. Over the years it has evolved significantly;
more recently it has been challenged and complemented by research in areas such
as machine learning and reasoning under uncertainty. In July 2022 a Dagstuhl
Perspectives workshop was held on Knowledge Representation and Reasoning. The
goal of the workshop was to describe the state of the art in the field,
including its relation with other areas, its shortcomings and strengths,
together with recommendations for future progress. We developed this manifesto
based on the presentations, panels, working groups, and discussions that took
place at the Dagstuhl Workshop. It is a declaration of our views on Knowledge
Representation: its origins, goals, milestones, and current foci; its relation
to other disciplines, especially to Artificial Intelligence; and on its
challenges, along with key priorities for the next decade.
Planning problems as types, plans as programs: a dependent types infrastructure for verification and reasoning about automated plans in Agda
Historically, the Artificial Intelligence and programming language fields have had a mutually beneficial relationship. Typically, theoretical results in the programming language field have practical utility in the Artificial Intelligence field. One example of this, with roots in both declarative languages and theorem proving, is AI planning. In recent years, new programming languages have been developed that are founded on dependent type theory. These languages are not only more expressive than traditional programming languages but are also able to represent and prove mathematical properties within the language. This thesis will explore how dependently typed languages can benefit the AI planning field. On one side, it will show how AI planning languages can be enriched with more expressivity and stronger verification guarantees. On the other, it will show that AI planning is an ideal field in which to illustrate the practical utility of largely theoretical aspects of programming language theory. The thesis accomplishes this by implementing multiple inference systems for plan validation in the dependently typed programming language Agda. Importantly, these inference systems are automated and embody the Curry-Howard correspondence, in which plans are not only proof terms but also executable functions. The thesis then shows how the dependently typed implementations of the inference systems can be further utilised to add enriched constraints over plan validation.
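The core of plan validation can be sketched in a few lines, here in Python rather than Agda and with a hypothetical one-block blocks-world fragment (the thesis's dependently typed inference systems are far richer, and carry the proof in the type): a plan is checked by threading a state through STRIPS-style actions and testing the goal at the end.

```python
def validate(plan, state, goal, actions):
    """Check a plan against a STRIPS-style action theory: every action's
    preconditions must hold when it fires, and the goal must hold at the end."""
    for name in plan:
        pre, add, delete = actions[name]
        if not pre <= state:          # precondition violated: invalid plan
            return False
        state = (state - delete) | add
    return goal <= state

# Hypothetical fragment: each action maps to (preconditions, add list, delete list).
actions = {
    'pickup':  ({'handempty', 'clear_a'}, {'holding_a'}, {'handempty', 'clear_a'}),
    'putdown': ({'holding_a'}, {'handempty', 'clear_a', 'ontable_a'}, {'holding_a'}),
}
print(validate(['pickup', 'putdown'],
               {'handempty', 'clear_a'}, {'ontable_a'}, actions))  # -> True
```

In the dependently typed setting, a successful run of such a checker is itself a proof term witnessing the plan's correctness.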
On Complexity Bounds and Confluence of Parallel Term Rewriting
We revisit parallel-innermost term rewriting as a model of parallel
computation on inductive data structures and provide a corresponding notion of
runtime complexity parametric in the size of the start term. We propose
automatic techniques to derive both upper and lower bounds on parallel
complexity of rewriting that enable a direct reuse of existing techniques for
sequential complexity. Our approach to finding lower bounds requires confluence of
the parallel-innermost rewrite relation, so we also provide effective
sufficient criteria for proving confluence. The applicability and the precision
of the method are demonstrated by the relatively light effort in extending the
program analysis tool AProVE and by experiments on numerous benchmarks from the
literature. Comment: Under submission to Fundamenta Informaticae. arXiv admin note: substantial text overlap with arXiv:2208.0100
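A minimal sketch of the rewrite relation this abstract studies, using an invented toy TRS (doubling on unary numbers) rather than the paper's techniques or benchmarks: a parallel-innermost step contracts every innermost redex at once, so the parallel complexity of a term with two independent redex positions is half its sequential innermost rewrite count.

```python
def rewrite_root(t):
    """Toy TRS (an illustrative assumption): d(0) -> 0, d(s(x)) -> s(s(d(x)))."""
    if isinstance(t, tuple) and t[0] == 'd':
        arg = t[1]
        if arg == '0':
            return '0'
        if isinstance(arg, tuple) and arg[0] == 's':
            return ('s', ('s', ('d', arg[1])))
    return None

def par_innermost_step(t):
    """One parallel-innermost step: rewrite every innermost redex at once.
    Returns (new_term, rewrote_anything)."""
    if isinstance(t, tuple):
        kids = [par_innermost_step(c) for c in t[1:]]
        if any(changed for _, changed in kids):
            # a proper subterm held an innermost redex, so the root is not innermost
            return (t[0],) + tuple(c for c, _ in kids), True
        r = rewrite_root(t)
        if r is not None:
            return r, True
    return t, False

def par_complexity(t):
    """Number of parallel-innermost steps until a normal form is reached."""
    steps, changed = 0, True
    while changed:
        t, changed = par_innermost_step(t)
        steps += changed
    return steps

# Two independent d-redexes: 2 parallel steps, versus 4 sequential rewrites.
print(par_complexity(('pair', ('d', ('s', '0')), ('d', ('s', '0')))))  # -> 2
```

The paper's point is that bounds on exactly this step count can be derived by reusing machinery built for sequential runtime complexity.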
Formalizing Functions as Processes
We present the first formalization of Milner’s classic translation of the λ-calculus into the π-calculus. It is a challenging result with respect to variables, names, and binders, as it requires one to relate variables and binders of the λ-calculus with names and binders in the π-calculus. We formalize it in Abella, merging the set of variables and the set of names, thus circumventing the challenge and obtaining a neat formalization. For the translation itself, we follow Accattoli’s factoring of Milner’s result via the linear substitution calculus, which is a λ-calculus with explicit substitutions and contextual rewriting rules, mediating between the λ-calculus and the π-calculus. A further aim of the formalization is to investigate to what extent the use of contexts in Accattoli’s refinement can be formalized.
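For readers unfamiliar with the translation, one common textbook presentation of Milner's lazy encoding can be transcribed as a purely syntactic function. The Python sketch below uses our own ASCII notation and fresh-name convention (neither is the Abella formalization's treatment of binders, which is the whole difficulty the paper addresses):

```python
import itertools

def fresh(counter):
    """Generate a fresh channel name (our own convention: v0, v1, ...)."""
    return f"v{next(counter)}"

def enc(term, u, counter):
    """One common presentation of Milner's lazy encoding, rendered as strings:
    x'<u> is output on x, u(x,v) is input on u, new v is restriction, ! is
    replication. Terms are ('var', x), ('lam', x, M), or ('app', M, N)."""
    tag = term[0]
    if tag == 'var':   # [[x]]u = x'<u>
        return f"{term[1]}'<{u}>"
    if tag == 'lam':   # [[lam x. M]]u = u(x,v).[[M]]v
        x, body = term[1], term[2]
        v = fresh(counter)
        return f"{u}({x},{v}).{enc(body, v, counter)}"
    if tag == 'app':   # [[M N]]u = new v.([[M]]v | new x.(v'<x,u> | !x(w).[[N]]w))
        m, n = term[1], term[2]
        v, x, w = fresh(counter), fresh(counter), fresh(counter)
        return (f"new {v}.({enc(m, v, counter)} | "
                f"new {x}.({v}'<{x},{u}> | !{x}({w}).{enc(n, w, counter)}))")
    raise ValueError(tag)

identity = ('lam', 'x', ('var', 'x'))
print(enc(identity, 'u', itertools.count()))  # -> u(x,v0).x'<v0>
```

Such a string-level translation sidesteps exactly the binder-relating problem that a formal proof about the encoding must confront.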
No Unification Variable Left Behind: Fully Grounding Type Inference for the HDM System
The Hindley-Damas-Milner (HDM) system provides polymorphism, a key feature of functional programming languages such as Haskell and OCaml. It does so through a type inference algorithm, whose soundness and completeness have been well-studied and proven both manually (on paper) and mechanically (in a proof assistant). Earlier research has focused on the problem of inferring the type of a top-level expression. Yet, in practice, we also may wish to infer the type of subexpressions, either for the sake of elaboration into an explicitly-typed target language, or for reporting those types back to the programmer. One key difference between these two problems is the treatment of underconstrained types: in the former, unification variables that do not affect the overall type need not be instantiated. However, in the latter, instantiating all unification variables is essential, because unification variables are internal to the algorithm and should not leak into the output.
We present an algorithm for the HDM system that explicitly tracks the scope of all unification variables. In addition to solving the subexpression type reconstruction problem described above, it can be used as a basis for elaboration algorithms, including those that implement elaboration-based features such as type classes. The algorithm implements input and output contexts, as well as the novel concept of full contexts, which significantly simplifies the state-passing of traditional algorithms. The algorithm has been formalised and proven sound and complete using the Coq proof assistant.
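The grounding concern can be illustrated with a small first-order unification sketch (a Python analogue with our own representation; not the paper's algorithm or its Coq formalization): after constraint solving, a final pass applies the substitution everywhere and replaces any leftover unification variable with a fresh quantified type variable, so that no metavariable leaks into the output.

```python
def is_meta(t):
    """Unification variables are strings starting with '?' (our convention)."""
    return isinstance(t, str) and t.startswith('?')

def walk(t, subst):
    """Chase a metavariable through the substitution."""
    while is_meta(t) and t in subst:
        t = subst[t]
    return t

def occurs(m, t, subst):
    """Occurs check: does metavariable m appear in t under subst?"""
    t = walk(t, subst)
    if t == m:
        return True
    return isinstance(t, tuple) and any(occurs(m, c, subst) for c in t[1:])

def unify(t1, t2, subst):
    """First-order unification over types like 'int' or ('->', a, b).
    Returns an extended substitution, or None on failure."""
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if is_meta(t1):
        return None if occurs(t1, t2, subst) else {**subst, t1: t2}
    if is_meta(t2):
        return unify(t2, t1, subst)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst)
            if subst is None:
                return None
        return subst
    return None

def ground(t, subst, names):
    """Fully ground a type: apply subst everywhere, then replace each
    remaining unification variable with a fresh rigid type variable
    (a, b, ...), so no metavariable escapes into the output."""
    t = walk(t, subst)
    if is_meta(t):
        if t not in names:
            names[t] = chr(ord('a') + len(names))
        return names[t]
    if isinstance(t, tuple):
        return (t[0],) + tuple(ground(c, subst, names) for c in t[1:])
    return t

# The inferred type of \x -> \y -> x before grounding: ?0 -> ?1 -> ?0.
print(ground(('->', '?0', ('->', '?1', '?0')), {}, {}))
# -> ('->', 'a', ('->', 'b', 'a'))
```

The underconstrained `?0` and `?1` here are exactly the variables that a top-level-only algorithm could leave uninstantiated, and that subexpression reconstruction must ground.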
Tools and Algorithms for the Construction and Analysis of Systems
This open access book constitutes the proceedings of the 28th International Conference on Tools and Algorithms for the Construction and Analysis of Systems, TACAS 2022, which was held during April 2-7, 2022, in Munich, Germany, as part of the European Joint Conferences on Theory and Practice of Software, ETAPS 2022. The 46 full papers and 4 short papers presented in this volume were carefully reviewed and selected from 159 submissions. The proceedings also contain 16 tool papers of the affiliated competition SV-COMP and 1 paper consisting of the competition report. TACAS is a forum for researchers, developers, and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility, and efficiency of tools and algorithms for building computer-controlled systems.
Assuming Data Integrity and Empirical Evidence to The Contrary
Background: Not all respondents to surveys apply their minds or understand
the posed questions; as such, they provide answers which lack coherence,
threatening the integrity of the research. Casual inspection and limited
research of the 10-item Big Five Inventory (BFI-10), included in the dataset of
the World Values Survey (WVS), suggested that random responses may be
common.
Objective: To specify the percentage of cases in the BFI-10 which include
incoherent or contradictory responses, and to test the extent to which the
removal of these cases improves the quality of the dataset.
Method: The WVS data on the BFI-10, measuring the Big Five Personality (B5P), in South Africa (N=3 531), was used. Incoherent or contradictory responses were removed. Then the cases from the cleaned-up dataset were analysed for their theoretical validity.
Results: Only 1 612 (45.7%) cases were identified as not including incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative.
Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to do a proper assessment of the psychometric properties of instruments before they use them, particularly in a cross-cultural setting.
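A screening rule in the spirit of the method can be sketched as follows (Python; the item names and the exact coherence criterion are illustrative assumptions, not the authors' procedure): a respondent is flagged when a positively keyed item and its reverse-keyed partner both receive the same extreme answer.

```python
# Pairings of a positively keyed item with its reverse-keyed partner, one per
# Big Five trait (hypothetical names, not the exact BFI-10 keying).
PAIRS = [('ext_pos', 'ext_rev'), ('agr_pos', 'agr_rev'), ('con_pos', 'con_rev'),
         ('neu_pos', 'neu_rev'), ('ope_pos', 'ope_rev')]

def is_coherent(resp, pairs=PAIRS):
    """Flag as contradictory any respondent who gives the same extreme answer
    (1 or 5 on the 5-point scale) to both an item and its reversal."""
    return all(not (resp[p] == resp[r] and resp[p] in (1, 5))
               for p, r in pairs)

def clean(dataset):
    """Keep only cases without incoherent or contradictory responses."""
    return [r for r in dataset if is_coherent(r)]

neutral = {k: 3 for pair in PAIRS for k in pair}
contradictory = dict(neutral, ext_pos=5, ext_rev=5)
print(is_coherent(neutral), is_coherent(contradictory))  # -> True False
```

Under such a rule, endorsing both "is outgoing" and "is reserved" at the strongest level counts as incoherent, which is the kind of case the study removed before re-testing the factor structure.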
Analysing Parallel Complexity of Term Rewriting
We revisit parallel-innermost term rewriting as a model of parallel
computation on inductive data structures and provide a corresponding notion of
runtime complexity parametric in the size of the start term. We propose
automatic techniques to derive both upper and lower bounds on parallel
complexity of rewriting that enable a direct reuse of existing techniques for
sequential complexity. The applicability and the precision of the method are
demonstrated by the relatively light effort in extending the program analysis
tool AProVE and by experiments on numerous benchmarks from the literature. Comment: Extended authors' accepted manuscript for a paper accepted for publication in the Proceedings of the 32nd International Symposium on Logic-based Program Synthesis and Transformation (LOPSTR 2022). 27 pages