On Verifying Complex Properties using Symbolic Shape Analysis
One of the main challenges in the verification of software systems is the
analysis of unbounded data structures with dynamic memory allocation, such as
linked data structures and arrays. We describe Bohne, a new analysis for
verifying data structures. Bohne verifies data structure operations and shows
that 1) the operations preserve data structure invariants and 2) the operations
satisfy their specifications expressed in terms of changes to the set of
objects stored in the data structure. During the analysis, Bohne infers loop
invariants in the form of disjunctions of universally quantified Boolean
combinations of formulas. To synthesize loop invariants of this form, Bohne
uses a combination of decision procedures for Monadic Second-Order Logic over
trees, SMT-LIB decision procedures (currently CVC Lite), and an automated
reasoner within the Isabelle interactive theorem prover. This architecture
shows that synthesized loop invariants can serve as a useful communication
mechanism between different decision procedures. Using Bohne, we have verified
operations on data structures such as linked lists with iterators and back
pointers, trees with and without parent pointers, two-level skip lists, array
data structures, and sorted lists. We have deployed Bohne in the Hob and Jahob
data structure analysis systems, enabling us to combine Bohne with analyses of
data structure clients and apply it in the context of larger programs. This
report describes the Bohne algorithm as well as techniques that Bohne uses to
reduce the amount of annotation required and the running time of the analysis.
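The loop-invariant shape described above can be written schematically as follows; the notation is an illustrative reconstruction, not taken verbatim from the report:

```latex
% Schematic shape only (notation assumed): a loop invariant is a finite
% disjunction of universally quantified Boolean combinations B_i of
% atomic formulas \phi_{ij} over heap objects x.
I \;\equiv\; \bigvee_{i=1}^{n} \; \forall x.\; B_i\bigl(\phi_{i1}(x), \ldots, \phi_{ik}(x)\bigr)
```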
Toward a general logicist methodology for engineering ethically correct robots
It is hard to deny that robots will become increasingly capable, and that humans will increasingly exploit this capability by deploying them in ethically sensitive environments; i.e., in environments (e.g., hospitals) where ethically incorrect behavior on the part of robots could have dire effects on humans. But then how will we ensure that the robots in question always behave in an ethically correct manner? How can we know ahead of time, via rationales expressed in clear English (and/or other so-called natural languages), that they will so behave? How can we know in advance that their behavior will be constrained specifically by the ethical codes selected by human overseers? In general, it seems clear that one reply worth considering, put in encapsulated form, is this one: "By insisting that our robots only perform actions that can be proved ethically permissible in a human-selected deontic logic." (A deontic logic is simply a logic that formalizes an ethical code.) This approach ought to be explored for a number of reasons. One is that ethicists themselves work by rendering ethical theories and dilemmas in declarative form, and by reasoning over this declarative information using informal and/or formal logic. Other reasons in favor of pursuing the logicist solution are presented in the paper itself. To illustrate the feasibility of our methodology, we describe it in general terms free of any commitment to particular systems, and show it solving a challenge regarding robot behavior in an intensive care unit.
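The "only perform actions provable permissible" gate can be sketched as a toy program. Everything here (the rule format, the predicate names, the ICU example) is an illustrative assumption, not the paper's formalism; real deontic logics are far richer than this Horn-rule fragment.

```python
# Toy sketch (assumptions, not the paper's system): a robot action gate that
# only permits actions derivable as permissible from a human-selected rule set.

def permissible(action, facts, rules):
    """Forward-chain over simple Horn-style rules ((premises, conclusion) pairs)
    until fixpoint, then check whether ('Permissible', action) was derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if all(p in derived for p in premises) and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return ('Permissible', action) in derived

# Hypothetical ICU-style code fragment: prescribed medication is obligatory,
# and obligatory actions are permissible.
rules = [
    ([('Prescribed', 'administer_med')], ('Obligatory', 'administer_med')),
    ([('Obligatory', 'administer_med')], ('Permissible', 'administer_med')),
]
facts = [('Prescribed', 'administer_med')]

assert permissible('administer_med', facts, rules)
assert not permissible('withhold_med', facts, rules)
```

The point of the gate is that an unproved action is simply refused; the burden of proof sits with the rule set chosen by the human overseer.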
An Efficient Subsumption Test Pipeline for {BS(LRA)} Clauses
The importance of subsumption testing for redundancy elimination in first-order automated reasoning is well known. Although the problem is already NP-complete for first-order clauses, the test pipelines developed in the meantime decide subsumption efficiently in almost all practical cases. We consider subsumption between first-order clauses of the Bernays-Schönfinkel fragment over linear real arithmetic constraints: BS(LRA). The bottleneck in this setup is deciding implication between the LRA constraints of two clauses. Our new sample point heuristic preempts expensive implication decisions in about 94% of all cases in benchmarks. Combined with filtering techniques for the first-order BS part of clauses, it again results in an efficient subsumption test pipeline for BS(LRA) clauses.
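The idea behind a sample-point check can be sketched as follows. This is a hedged reconstruction of the general principle, not the paper's exact pipeline: to test whether constraint A implies constraint B, first evaluate B at a cached point satisfying A; if the point violates B, the implication cannot hold and the expensive solver call is skipped.

```python
from fractions import Fraction

# Sketch under assumptions: a constraint is a conjunction of linear
# inequalities sum(c_i * x_i) <= b, encoded as (coefficient-dict, bound) pairs.

def satisfies(point, inequalities):
    """Evaluate each inequality at the given point."""
    return all(
        sum(c * point[v] for v, c in coeffs.items()) <= b
        for coeffs, b in inequalities
    )

def implies(a_sample, a_ineqs, b_ineqs, expensive_check):
    """Decide A => B, trying the cheap sample-point refutation first."""
    # Fast path: a point satisfying A but violating B refutes the implication.
    if not satisfies(a_sample, b_ineqs):
        return False
    # Slow path: fall back to a full LRA implication decision (solver call).
    return expensive_check(a_ineqs, b_ineqs)

# A: x <= 1.  Sample point x = 0 satisfies A.
A = [({'x': Fraction(1)}, Fraction(1))]
B = [({'x': Fraction(1)}, Fraction(5))]       # x <= 5: sample passes, solver runs
B_bad = [({'x': Fraction(1)}, Fraction(-1))]  # x <= -1: sample refutes A => B
sample = {'x': Fraction(0)}

assert implies(sample, A, B_bad, lambda *_: True) is False  # solver skipped
assert implies(sample, A, B, lambda *_: True) is True       # solver consulted
```

The heuristic is sound for refutation only: a sample point that happens to satisfy B proves nothing, which is why the slow path remains.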
Adaptive Non-Linear Pattern Matching Automata
Efficient pattern matching is fundamental for practical term rewrite engines. By preprocessing the given patterns into a finite deterministic automaton, the matching patterns can be decided in a single traversal of the relevant parts of the input term. Most automaton-based techniques are restricted to linear patterns, where each variable occurs at most once, and require an additional post-processing step to check so-called variable consistency. However, we show that interleaving the variable consistency and pattern matching phases can reduce the number of steps required to find all matches. Therefore, we take the adaptive pattern matching automata introduced by Sekar et al. and extend them with consistency checks. We prove that the resulting deterministic pattern matching automaton is correct, and show that its evaluation depth can be shorter than that of two-phase approaches.
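The interleaving idea can be illustrated with a minimal recursive matcher. The term representation and names below are assumptions for the sketch (the paper builds a deterministic automaton, not a recursive procedure): consistency for a repeated variable such as `?x` in `f(?x, ?x)` is checked the moment its second occurrence is reached, so a mismatch fails the match immediately instead of after a separate post-processing phase.

```python
# Minimal sketch (assumed representation): a term is either a variable name
# string starting with '?' or a tuple (symbol, arg1, ..., argn).

def match(pattern, term, bindings=None):
    """Match a possibly non-linear pattern against a ground term,
    checking variable consistency during the single traversal."""
    if bindings is None:
        bindings = {}
    if isinstance(pattern, str) and pattern.startswith('?'):
        if pattern in bindings:               # repeated variable: consistency check
            return bindings if bindings[pattern] == term else None
        bindings[pattern] = term              # first occurrence: bind
        return bindings
    if not isinstance(term, tuple) or pattern[0] != term[0] \
            or len(pattern) != len(term):
        return None
    for p, t in zip(pattern[1:], term[1:]):
        if match(p, t, bindings) is None:     # fail as early as possible
            return None
    return bindings

# f(?x, ?x) matches f(a, a) but not f(a, b):
assert match(('f', '?x', '?x'), ('f', ('a',), ('a',))) == {'?x': ('a',)}
assert match(('f', '?x', '?x'), ('f', ('a',), ('b',))) is None
```

In the second call the mismatch on `?x` is detected at the second argument position, which mirrors how the extended automaton can cut its evaluation depth relative to a two-phase approach.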
On Verifying a File System Implementation
[No abstract available]
Redbook: 1997
Advice compiled by Boston University School of Medicine students for incoming first year students and third or fourth year students preparing for clinical rotations
- …