A Spatial-Epistemic Logic for Reasoning about Security Protocols
Reasoning about security properties involves reasoning about where the
information of a system is located, and how it evolves over time. While most
security analysis techniques need to cope with some notions of information
locality and knowledge propagation, usually they do not provide a general
language for expressing arbitrary properties involving local knowledge and
knowledge transfer. Building on this observation, we introduce a framework for
security protocol analysis based on dynamic spatial logic specifications. Our
computational model is a variant of existing pi-calculi, while specifications
are expressed in a dynamic spatial logic extended with an epistemic operator.
We present the syntax and semantics of the model and logic, and discuss the
expressiveness of the approach, showing it complete for passive attackers. We
also prove that generic Dolev-Yao attackers may be mechanically determined for
any deterministic finite protocol, and discuss how this result may be used to
reason about security properties of open systems. Finally, we present a
model-checking algorithm for our logic, which has been implemented as an
extension to the SLMC system.
Comment: In Proceedings SecCo 2010, arXiv:1102.516
Implicit complexity for coinductive data: a characterization of corecurrence
We propose a framework for reasoning about programs that manipulate
coinductive data as well as inductive data. Our approach is based on using
equational programs, which support a seamless combination of computation and
reasoning, and using productivity (fairness) as the fundamental assertion,
rather than bisimulation. The latter is expressible in terms of the former. As
an application to this framework, we give an implicit characterization of
corecurrence: a function is definable using corecurrence iff its productivity
is provable using coinduction for formulas in which data-predicates do not
occur negatively. This is an analog, albeit in weaker form, of a
characterization of recurrence (i.e. primitive recursion) in [Leivant, Unipolar
induction, TCS 318, 2004].
Comment: In Proceedings DICE 2011, arXiv:1201.034
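The productivity assertion above can be made concrete. As a rough sketch in Python (not the paper's equational-program setting, and with illustrative names of our own), a corecursive stream definition is productive when every finite prefix of its output is computable; generators make this observable:

```python
from itertools import islice

def nats():
    # Corecursive stream of naturals: each step yields an element
    # before continuing, so the definition is productive.
    n = 0
    while True:
        yield n
        n += 1

def merge(xs, ys):
    # Productive merge of two streams: alternately emits one
    # element from each argument stream.
    while True:
        yield next(xs)
        yield next(ys)

# Productivity means any finite observation terminates:
prefix = list(islice(nats(), 5))
```

The equational programs of the paper play the role these generators play here; the characterization result concerns which such definitions have provable productivity.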
Nominal C-Unification
Nominal unification is an extension of first-order unification that takes
into account the \alpha-equivalence relation generated by binding operators,
following the nominal approach. We propose a sound and complete procedure for
nominal unification with commutative operators, or nominal C-unification for
short, which has been formalised in Coq. The procedure transforms nominal
C-unification problems into finite families of simpler fixpoint problems,
whose solutions can be generated by algebraic techniques on combinatorics of
permutations.
Comment: Pre-proceedings paper presented at the 27th International Symposium
on Logic-Based Program Synthesis and Transformation (LOPSTR 2017), Namur,
Belgium, 10-12 October 2017 (arXiv:1708.07854)
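To give a feel for the branching that commutative symbols introduce, here is a minimal first-order sketch in Python; the nominal ingredients (atoms, permutations, freshness constraints) are omitted, and the term encoding and function names are ours. Unifying under a commutative symbol yields a finite family of candidate unifiers rather than a single most general one:

```python
def subst_apply(s, t):
    # Apply substitution s to term t (variables are '?'-prefixed strings,
    # applications are tuples ('sym', arg1, arg2, ...)).
    if isinstance(t, str):
        return subst_apply(s, s[t]) if t in s else t
    return (t[0],) + tuple(subst_apply(s, a) for a in t[1:])

def occurs(v, t):
    if isinstance(t, str):
        return v == t
    return any(occurs(v, a) for a in t[1:])

def unify(eqs, s=None, comm=('f',)):
    # Return a complete set of C-unifiers for the equation list eqs,
    # treating the symbols in comm as commutative.
    s = dict(s or {})
    if not eqs:
        return [s]
    (l, r), rest = eqs[0], eqs[1:]
    l, r = subst_apply(s, l), subst_apply(s, r)
    if l == r:
        return unify(rest, s, comm)
    if isinstance(l, str) and l.startswith('?'):
        if occurs(l, r):
            return []
        s[l] = r
        return unify(rest, s, comm)
    if isinstance(r, str) and r.startswith('?'):
        return unify([(r, l)] + rest, s, comm)
    if isinstance(l, tuple) and isinstance(r, tuple) and l[0] == r[0]:
        if len(l) != len(r):
            return []
        sols = unify(list(zip(l[1:], r[1:])) + rest, s, comm)
        if l[0] in comm and len(l) == 3:
            # Commutative branch: also try the swapped argument pairing.
            sols += unify([(l[1], r[2]), (l[2], r[1])] + rest, s, comm)
        return sols
    return []
```

For `f(?x, a) =? f(a, ?y)` with `f` commutative this returns two unifiers, `{?x -> a, ?y -> a}` and `{?x -> ?y}`. In the paper's setting, the residue left by such branching takes the form of fixpoint problems on permutations rather than plain substitutions.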
UTP2: Higher-Order Equational Reasoning by Pointing
We describe a prototype theorem prover, UTP2, developed to match the style of
hand-written proof work in the Unifying Theories of Programming semantic
framework. This is based on alphabetised predicates in a second-order logic, with
a strong emphasis on equational reasoning. We present here an overview of the
user-interface of this prover, which was developed from the outset using a
point-and-click approach. We contrast this with the command-line paradigm that
continues to dominate mainstream theorem provers, raising the question: can we
have the best of both worlds?
Comment: In Proceedings UITP 2014, arXiv:1410.785
Canonized Rewriting and Ground AC Completion Modulo Shostak Theories : Design and Implementation
AC-completion efficiently handles equality modulo associative and commutative
function symbols. When the input is ground, the procedure terminates and
provides a decision algorithm for the word problem. In this paper, we present a
modular extension of ground AC-completion for deciding formulas in the
combination of the theory of equality with user-defined AC symbols,
uninterpreted symbols and an arbitrary signature disjoint Shostak theory X. Our
algorithm, called AC(X), is obtained by augmenting in a modular way ground
AC-completion with the canonizer and solver present for the theory X. This
integration rests on canonized rewriting, a new relation reminiscent of
normalized rewriting, which integrates canonizers in rewriting steps. AC(X) is
proved sound, complete and terminating, and is implemented to extend the core
of the Alt-Ergo theorem prover.
Comment: 30 pages, full version of the TACAS'11 paper "Canonized Rewriting and Ground AC-Completion Modulo Shostak Theories", accepted for publication by LMCS (Logical Methods in Computer Science)
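The canonizer ingredient can be sketched independently of the completion procedure. Assuming a ground-term encoding of our own choosing (nested tuples, with `+` and `*` as the AC symbols), a canonizer flattens nested applications of an AC symbol and sorts the arguments, so that AC-equal terms receive syntactically equal representatives:

```python
def canon(t, ac=('+', '*')):
    # Canonizer for ground terms with AC symbols: flatten nested
    # applications of the same AC symbol and sort the argument list,
    # giving each AC-equivalence class a unique representative.
    if isinstance(t, str):
        return t
    op, args = t[0], [canon(a, ac) for a in t[1:]]
    if op in ac:
        flat = []
        for a in args:
            if isinstance(a, tuple) and a[0] == op:
                flat.extend(a[1:])   # associativity: flatten
            else:
                flat.append(a)
        args = sorted(flat, key=repr)  # commutativity: sort
    return (op,) + tuple(args)
```

This is only the AC part of the story: AC(X) additionally threads the canonizer and solver of the Shostak theory X through the rewriting steps of the completion loop.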
Probability functions in the context of signed involutive meadows
The Kolmogorov axioms for probability functions are placed in the context of
signed meadows. A completeness theorem is stated and proven for the resulting
equational theory of probability calculus. Elementary definitions of
probability theory are restated in this framework.
Comment: 20 pages, 6 tables, some minor errors are corrected
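A defining feature of meadows is that inverse is total, with 0 inverted to 0, so identities can be stated purely equationally, without definedness side conditions. A small Python sketch over the rationals (our own encoding, not from the paper):

```python
from fractions import Fraction

def minv(x):
    # Meadow inverse: total on the whole carrier, with minv(0) == 0.
    x = Fraction(x)
    return Fraction(0) if x == 0 else 1 / x

def sign(x):
    # Sign operation of a signed meadow: -1, 0, or 1.
    return Fraction((x > 0) - (x < 0))

# The restricted inverse law x * x * minv(x) == x holds for every x,
# including x == 0, so conditional division never needs to appear.
ril = all(x * x * minv(x) == x for x in map(Fraction, range(-3, 4)))
```

In this style a probability function is axiomatized by equations over the meadow operations, which is what makes a purely equational completeness theorem possible.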
Superposition as a logical glue
Typical mathematical language systematically exploits notational and
logical abuses whose resolution requires not just knowledge of
domain-specific notation and conventions, but non-trivial skills in the given
mathematical discipline. A large part of this background knowledge is expressed
in the form of equalities and isomorphisms, allowing mathematicians to freely move
between different incarnations of the same entity without even mentioning the
transformation. Providing ITP-systems with similar capabilities seems to be a
major way to improve their intelligence, and to ease the communication between
the user and the machine. The present paper discusses our experience of
integration of a superposition calculus within the Matita interactive prover,
providing in particular a very flexible, "smart" application tactic, and a
simple, innovative approach to automation.
Comment: In Proceedings TYPES 2009, arXiv:1103.311
Global semantic typing for inductive and coinductive computing
Inductive and coinductive types are commonly construed as ontological
(Church-style) types, denoting canonical data-sets such as natural numbers,
lists, and streams. For various purposes, notably the study of programs in the
context of global semantics, it is preferable to think of types as semantical
properties (Curry-style). Intrinsic theories were introduced in the late 1990s
to provide a purely logical framework for reasoning about programs and their
semantic types. We extend them here to data given by any combination of
inductive and coinductive definitions. This approach is of interest because it
fits tightly with syntactic, semantic, and proof theoretic fundamentals of
formal logic, with potential applications in implicit computational complexity
as well as extraction of programs from proofs. We prove a Canonicity Theorem,
showing that the global definition of program typing, via the usual (Tarskian)
semantics of first-order logic, agrees with their operational semantics in the
intended model. Finally, we show that every intrinsic theory is interpretable
in a conservative extension of first-order arithmetic. This means that
quantification over infinite data objects does not lead, on its own, to
proof-theoretic strength beyond that of Peano Arithmetic. Intrinsic theories
are perfectly amenable to formulas-as-types Curry-Howard morphisms, and were
used to characterize major computational complexity classes. Their extensions
described here have similar potential, part of which has already been applied.
New results on rewrite-based satisfiability procedures
Program analysis and verification require decision procedures to reason on
theories of data structures. Many problems can be reduced to the satisfiability
of sets of ground literals in theory T. If a sound and complete inference
system for first-order logic is guaranteed to terminate on T-satisfiability
problems, any theorem-proving strategy with that system and a fair search plan
is a T-satisfiability procedure. We prove termination of a rewrite-based
first-order engine on the theories of records, integer offsets, integer offsets
modulo and lists. We give a modularity theorem stating sufficient conditions
for termination on a combination of theories, given termination on each. The
above theories, as well as others, satisfy these conditions. We introduce
several sets of benchmarks on these theories and their combinations, including
both parametric synthetic benchmarks to test scalability, and real-world
problems to test performance on huge sets of literals. We compare the
rewrite-based theorem prover E with the validity checkers CVC and CVC Lite.
Contrary to the folklore that a general-purpose prover cannot compete with
reasoners with built-in theories, the experiments are overall favorable to the
theorem prover, showing that the rewriting approach is not only elegant and
conceptually simple, but also has important practical implications.
Comment: To appear in the ACM Transactions on Computational Logic, 49 pages
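To make the flavor of such theories concrete, here is a conventional (not rewrite-based) decision sketch in Python for conjunctions of ground equalities in the integer-offsets theory, one of the theories treated above; the class and method names are ours. A union-find structure with offset weights maintains literals of the form x = y + k and detects inconsistency:

```python
class OffsetUF:
    # Union-find with integer offsets: each node stores its offset from
    # its parent, so val(x) == val(root) + offset(x). Asserting x = y + k
    # either merges classes or exposes a contradiction.
    def __init__(self):
        self.parent, self.off = {}, {}

    def find(self, x):
        # Return (root, total offset of x from root), with path compression.
        if x not in self.parent:
            self.parent[x], self.off[x] = x, 0
            return x, 0
        if self.parent[x] == x:
            return x, 0
        r, d = self.find(self.parent[x])
        self.parent[x], self.off[x] = r, self.off[x] + d
        return r, self.off[x]

    def assert_eq(self, x, y, k):
        # Assert x = y + k; return False if it contradicts earlier facts.
        rx, dx = self.find(x)
        ry, dy = self.find(y)
        if rx == ry:
            return dx == dy + k
        self.parent[rx] = ry
        self.off[rx] = dy + k - dx
        return True
```

For instance, asserting a = b + 2 and b = c + 3 entails a = c + 5, so a subsequent a = c + 4 is rejected. The paper's point is that a fair rewrite-based first-order prover terminates on, and thus decides, such satisfiability problems without this kind of hand-built structure.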
State space c-reductions for concurrent systems in rewriting logic
We present c-reductions, a state space reduction technique. The rough idea is
to exploit an equivalence relation on states (possibly capturing system
regularities) that preserves behavioral properties, and to explore the induced
quotient system. This is done by means of a canonizer function, which maps each
state to a (not necessarily unique) canonical representative of its equivalence
class. The approach exploits the expressiveness of rewriting logic and its
realization in Maude to enjoy several advantages over similar approaches:
flexibility and simplicity in the definition of the reductions (supporting not
only traditional symmetry reductions, but also name reuse and name
abstraction); reasoning support for checking and proving correctness of the
reductions; and automation of the reduction infrastructure via Maude's
meta-programming features. The approach has been validated on a set of
representative case studies, exhibiting results comparable to those of other
tools.
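The quotient exploration behind c-reductions can be sketched in a few lines of Python (illustrative only; the paper's infrastructure lives in Maude and rewriting logic). A breadth-first search stores only canonical representatives; here sorting the state tuple serves as a canonizer for the swap symmetry between two identical counters:

```python
from collections import deque

def reachable(init, step, canonize):
    # Explore the quotient state space: every generated state is mapped
    # to its canonical representative before being stored, so states in
    # the same equivalence class are visited only once.
    seen = {canonize(init)}
    frontier = deque(seen)
    while frontier:
        s = frontier.popleft()
        for t in step(s):
            c = canonize(t)
            if c not in seen:
                seen.add(c)
                frontier.append(c)
    return seen

# Toy system: two symmetric counters, each may be incremented up to 2.
def step(state):
    for i in range(2):
        if state[i] < 2:
            s = list(state)
            s[i] += 1
            yield tuple(s)

full = reachable((0, 0), step, lambda s: s)                    # identity canonizer
reduced = reachable((0, 0), step, lambda s: tuple(sorted(s)))  # symmetry canonizer
```

The identity canonizer visits all 9 states of the two-counter system, while sorting collapses swap-symmetric states to 6 representatives, preserving properties that are invariant under the symmetry.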