The AutoProof Verifier: Usability by Non-Experts and on Standard Code
Formal verification tools are often developed by experts for experts; as a
result, their usability by programmers with little formal methods experience
may be severely limited. In this paper, we discuss this general phenomenon with
reference to AutoProof: a tool that can verify the full functional correctness
of object-oriented software. In particular, we present our experiences of using
AutoProof in two contrasting contexts representative of non-expert usage.
First, we discuss its usability by students in a graduate course on software
verification, who were tasked with verifying implementations of various sorting
algorithms. Second, we evaluate its usability in verifying code developed for
programming assignments of an undergraduate course. The first scenario
represents usability by serious non-experts; the second represents usability on
"standard code", developed without full functional verification in mind. We
report our experiences and lessons learnt, from which we derive some general
suggestions for furthering the development of verification tools with respect
to improving their usability.
Comment: In Proceedings F-IDE 2015, arXiv:1508.0338
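Full functional correctness of a sorting routine, as targeted in the coursework above, is usually captured by two postconditions: the result is sorted, and it is a permutation of the input. A minimal Python sketch with runtime contract checks (AutoProof discharges such contracts statically, on contract-equipped Eiffel code; the assertions here are only illustrative):

```python
from collections import Counter

def insertion_sort(a):
    """Sort a copy of `a`, checking the sorting contract at runtime."""
    original = list(a)   # snapshot, needed for the permutation postcondition
    result = list(a)
    for i in range(1, len(result)):
        key = result[i]
        j = i - 1
        while j >= 0 and result[j] > key:
            result[j + 1] = result[j]
            j -= 1
        result[j + 1] = key
    # Postcondition 1: the output is sorted
    assert all(result[k] <= result[k + 1] for k in range(len(result) - 1))
    # Postcondition 2: the output is a permutation of the input
    assert Counter(result) == Counter(original)
    return result
```

A static verifier proves these two assertions for all inputs, rather than checking them on each run.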
Formal Reasoning Using an Iterative Approach with an Integrated Web IDE
This paper summarizes our experience in communicating the elements of
reasoning about correctness, and the central role of formal specifications in
reasoning about modular, component-based software using a language and an
integrated Web IDE designed for the purpose. Our experience in using such an
IDE, supported by a 'push-button' verifying compiler in a classroom setting,
reveals the highly iterative process learners use to arrive at suitably
specified, automatically provable code. We explain how the IDE facilitates
reasoning at each step of this process by providing human readable verification
conditions (VCs) and feedback from an integrated prover that clearly indicates
unprovable VCs to help identify obstacles to completing proofs. The paper
discusses the IDE's usage in verified software development using several
examples drawn from actual classroom lectures and student assignments to
illustrate principles of design-by-contract and the iterative process of
creating and subsequently refining assertions, such as loop invariants in
object-based code.
Comment: In Proceedings F-IDE 2015, arXiv:1508.0338
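The iterative workflow described above, in which a learner strengthens a loop invariant until all verification conditions discharge, can be illustrated on a small example. The assertions below play the role of the annotations refined in the IDE (a Python stand-in; the actual tool uses a dedicated specification language and proves the invariant statically):

```python
def binary_search(a, target):
    """Return an index of `target` in sorted list `a`, or -1.

    The asserted loop invariant is the kind of annotation a
    verifying compiler asks the programmer to supply and refine.
    """
    lo, hi = 0, len(a)
    while lo < hi:
        # Loop invariant: target can only occur within a[lo:hi]
        assert all(a[k] != target for k in range(0, lo))
        assert all(a[k] != target for k in range(hi, len(a)))
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return -1
```

A too-weak invariant (for example, only `0 <= lo <= hi <= len(a)`) leaves the postcondition unprovable; the prover's feedback on the unprovable VC is what drives the next refinement step.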
Specifying Reusable Components
Reusable software components need expressive specifications. This paper
outlines a rigorous foundation to model-based contracts, a method to equip
classes with strong contracts that support accurate design, implementation, and
formal verification of reusable components. Model-based contracts
conservatively extend the classic Design by Contract with a notion of model,
which underpins the precise definitions of such concepts as abstract
equivalence and specification completeness. Experiments applying model-based
contracts to libraries of data structures suggest that the method enables
accurate specification of practical software.
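As an illustration of the underlying idea (not the paper's notation), a model-based contract specifies a container against a mathematical model, such as a sequence, rather than against its concrete representation. A hedged Python sketch, where the `model` property and the runtime assertions stand in for statically verified specifications:

```python
class Stack:
    """Bounded stack specified against an abstract sequence model."""

    def __init__(self, capacity):
        self._items = []          # concrete representation
        self.capacity = capacity

    @property
    def model(self):
        # Abstraction function: concrete representation -> model sequence
        return tuple(self._items)

    def push(self, x):
        assert len(self.model) < self.capacity   # precondition on the model
        old = self.model
        self._items.append(x)
        assert self.model == old + (x,)          # postcondition on the model

    def pop(self):
        assert len(self.model) > 0               # precondition on the model
        old = self.model
        x = self._items.pop()
        assert self.model == old[:-1] and x == old[-1]
        return x
```

Because clients reason only about the model, the same contract also serves an alternative implementation (say, a linked list), which is what makes such specifications suitable for reusable components.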
Flexible Invariants Through Semantic Collaboration
Modular reasoning about class invariants is challenging in the presence of
dependencies among collaborating objects that need to maintain global
consistency. This paper presents semantic collaboration: a novel methodology to
specify and reason about class invariants of sequential object-oriented
programs, which models dependencies between collaborating objects by semantic
means. Combined with a simple ownership mechanism and useful default schemes,
semantic collaboration achieves the flexibility necessary to reason about
complicated inter-object dependencies but requires limited annotation burden
when applied to standard specification patterns. The methodology is implemented
in AutoProof, our program verifier for the Eiffel programming language (but it
is applicable to any language supporting some form of representation
invariants). An evaluation on several challenge problems proposed in the
literature demonstrates that it can handle a variety of idiomatic collaboration
patterns, and is more widely applicable than the existing invariant
methodologies.
Comment: 22 pages
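A classic instance of the inter-object dependencies in question is the observer pattern, where an observer's invariant mentions another object's state and every update to that object must restore the invariant. A simplified Python sketch (runtime checks only; the class and method names are illustrative, not the paper's):

```python
class Subject:
    def __init__(self, value):
        self.value = value
        self.observers = []        # objects whose invariants depend on us

    def set_value(self, v):
        self.value = v
        for o in self.observers:   # restore each collaborator's invariant
            o.notify()

class Observer:
    def __init__(self, subject):
        self.subject = subject
        self.cache = subject.value
        subject.observers.append(self)

    def notify(self):
        self.cache = self.subject.value

    def invariant(self):
        # Depends on another object's state: precisely the situation
        # that defeats purely ownership-based invariant methodologies.
        return self.cache == self.subject.value
```

Semantic collaboration gives this dependency a modular specification, so that `set_value` can be verified to re-establish every registered observer's invariant without global reasoning.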
A Failed Proof Can Yield a Useful Test
A successful automated program proof is, in software verification, the
ultimate triumph. In practice, however, the road to such success is paved with
many failed proof attempts. Unlike a failed test, which provides concrete
evidence of an actual bug in the program, a failed proof leaves the programmer
in the dark. Can we instead learn something useful from it?
The work reported here takes advantage of the rich internal information that
some automatic provers collect about the program when attempting a proof. If
the proof fails, the Proof2Test tool presented in this article uses the
counterexample generated by the prover (specifically, the SMT solver underlying
the proof environment Boogie, used in the AutoProof system to perform
correctness proofs of contract-equipped Eiffel programs) to produce a failed
test, which provides the programmer with immediately exploitable information to
correct the program. The discussion presents the Proof2Test tool and
demonstrates the application of the ideas and tool to a collection of
representative examples.
Inferring Loop Invariants using Postconditions
One of the obstacles in automatic program proving is to obtain suitable loop
invariants.
The invariant of a loop is a weakened form of its postcondition (the loop's
goal, also known as its contract); the present work takes advantage of this
observation by using the postcondition as the basis for invariant inference,
using various heuristics such as "uncoupling" which prove useful in many
important algorithms.
Thanks to these heuristics, the technique is able to infer invariants for a
large variety of loop examples.
We present the theory behind the technique, its implementation (freely
available for download and currently relying on Microsoft Research's Boogie
tool), and the results obtained.
Comment: Slightly revised version
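The observation that an invariant is a weakened postcondition can be shown on a textbook loop. In the sketch below, the postcondition `r == max(a[0:len(a)])` is weakened by replacing the constant `len(a)` with the loop variable `i` (one simple mutation in the spirit of the paper's heuristics; the runtime assertions stand in for a static check of the candidate invariant):

```python
def array_max(a):
    assert len(a) > 0                # precondition: non-empty input
    r = a[0]
    i = 1
    while i < len(a):
        assert r == max(a[0:i])      # candidate invariant, obtained by
                                     # weakening the postcondition
        if a[i] > r:
            r = a[i]
        i += 1
    assert r == max(a)               # postcondition: r == max(a[0:len(a)])
    return r
```

On exit, the invariant together with the negated loop condition (`i == len(a)`) implies the postcondition, which is exactly why postcondition-derived candidates are such useful starting points for inference.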