61 research outputs found
Verification of Graph Programs
This thesis is concerned with verifying the correctness of programs written in GP 2 (for Graph Programs), an experimental, nondeterministic graph manipulation language, in which program states are graphs, and computational steps are applications of graph transformation rules. GP 2 allows for visual programming at a high level of abstraction, with the programmer freed from manipulating low-level data structures and instead solving graph-based problems in a direct, declarative, and rule-based way. To verify that a graph program meets some specification, however, has been -- prior to the work described in this thesis -- an ad hoc task, detracting from the appeal of using GP 2 to reason about graph algorithms, high-level system specifications, pointer structures, and the many other practical problems in software engineering and programming languages that can be modelled as graph problems. This thesis describes some contributions towards the challenge of verifying graph programs, in particular, Hoare logics with which correctness specifications can be proven in a syntax-directed and compositional manner.
We contribute calculi of proof rules for GP 2 that allow for rigorous reasoning about both partial correctness and termination of graph programs. These are given in an extensional style, i.e. independent of fixed assertion languages. This approach allows for the re-use of proof rules with different assertion languages for graphs, and moreover, allows for properties of the calculi to be inherited: soundness, completeness for termination, and relative completeness (for sufficiently expressive assertion languages).
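To make the extensional style concrete, the following is a schematic rendering (with simplified notation; not the thesis's exact formulation) of a proof rule for GP 2's "as long as possible" iteration R!, which preserves an invariant c across repeated rule applications:

    % Schematic Hoare-logic rule for iteration: c is any graph property,
    % R a set of rules, App(R) reads "some rule of R is applicable".
    \[
      \frac{\{c\}\ \mathcal{R}\ \{c\}}
           {\{c\}\ \mathcal{R}!\ \{c \wedge \neg\mathrm{App}(\mathcal{R})\}}
    \]

Read: if every application of a rule from R preserves c, then whenever R! terminates, c still holds and no rule of R is applicable. Extensionality means c ranges over arbitrary graph properties; instantiating the calculus fixes a concrete assertion language for expressing them.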
We propose E-conditions as a graphical, intuitive assertion language for expressing properties of graphs -- both about their structure and labelling -- generalising the nested conditions of Habel, Pennemann, and Rensink. We instantiate our calculi with this language, explore the relationship between the decidability of the model checking problem and the existence of effective constructions for the extensional assertions, and fix a subclass of graph programs for which we have both. The calculi are then demonstrated by verifying a number of data- and structure-manipulating programs.
We explore the relationship between E-conditions and classical logic, defining translations between the former and a many-sorted predicate logic over graphs; the logic being a potential front end to an implementation of our work in a proof assistant.
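For illustration (the sorts and symbols here are assumed, not necessarily the thesis's exact signature), such a many-sorted logic over graphs, with vertex sort V, edge sort E, and a source function s, might express "every node has an outgoing edge" as:

    \[
      \forall v : \mathrm{V}.\ \exists e : \mathrm{E}.\ \mathrm{s}(e) = v
    \]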
Finally, we speculate on several avenues of interesting future work; in particular, a possible extension of E-conditions with transitive closure, for proving specifications involving properties about arbitrary-length paths.
Incorrectness Logic for Graph Programs
Program logics typically reason about an over-approximation of program behaviour to prove the absence of bugs. Recently, program logics have been proposed that instead prove the presence of bugs by means of under-approximate reasoning, which has the promise of better scalability. In this paper, we present an under-approximate program logic for a nondeterministic graph programming language, and show how it can be used to reason deductively about program incorrectness, whether defined by the presence of forbidden graph structure or by finitely failing executions. We prove this incorrectness logic to be sound and complete, and speculate on some possible future applications of it.
Comment: Accepted by the 14th International Conference on Graph Transformation (ICGT 2021)
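Under-approximate reasoning reverses the direction of the classical Hoare triple. In O'Hearn-style incorrectness logic, on which program logics of this kind build, a triple [p] C [q] is valid when every state satisfying the result q is reachable by executing C from some state satisfying the presumption p; in standard notation (not specific to the graph-program setting):

    \[
      \models [p]\ C\ [q] \iff
      \forall \sigma'.\ \sigma' \models q \implies
        \exists \sigma.\ \sigma \models p \wedge (\sigma,\sigma') \in \llbracket C \rrbracket
    \]

Because q may describe only reachable states, a provable triple can never report a false positive, which is what makes such logics attractive for bug-finding.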
The AutoProof Verifier: Usability by Non-Experts and on Standard Code
Formal verification tools are often developed by experts for experts; as a result, their usability by programmers with little formal methods experience may be severely limited. In this paper, we discuss this general phenomenon with reference to AutoProof: a tool that can verify the full functional correctness of object-oriented software. In particular, we present our experiences of using AutoProof in two contrasting contexts representative of non-expert usage. First, we discuss its usability by students in a graduate course on software verification, who were tasked with verifying implementations of various sorting algorithms. Second, we evaluate its usability in verifying code developed for programming assignments of an undergraduate course. The first scenario represents usability by serious non-experts; the second represents usability on "standard code", developed without full functional verification in mind. We report our experiences and lessons learnt, from which we derive some general suggestions for furthering the development of verification tools with respect to improving their usability.
Comment: In Proceedings F-IDE 2015, arXiv:1508.0338
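As a flavour of what full functional correctness means for the sorting tasks described above, here is a minimal sketch in Python (for illustration only; AutoProof itself verifies Eiffel code, with contracts proven statically, whereas the asserts below merely check at runtime):

    # Insertion sort with its functional specification stated explicitly:
    # the result must be sorted AND a permutation of the input.
    from collections import Counter
    from typing import List

    def insertion_sort(a: List[int]) -> List[int]:
        result = list(a)
        for i in range(1, len(result)):
            key = result[i]
            j = i - 1
            # Invariant a verifier must establish: after each iteration,
            # result[0:i+1] is a sorted permutation of the original prefix.
            while j >= 0 and result[j] > key:
                result[j + 1] = result[j]
                j -= 1
            result[j + 1] = key
        # Postconditions (the contract a tool like AutoProof proves statically):
        assert all(result[k] <= result[k + 1] for k in range(len(result) - 1))
        assert Counter(result) == Counter(a)  # permutation of the input
        return result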
Contract-Based General-Purpose GPU Programming
Using GPUs as general-purpose processors has revolutionized parallel computing by offering, for a large and growing set of algorithms, massive data-parallelization on desktop machines. An obstacle to widespread adoption, however, is the difficulty of programming them and the low-level control of the hardware required to achieve good performance. This paper suggests a programming library, SafeGPU, that aims at striking a balance between programmer productivity and performance, by making GPU data-parallel operations accessible from within a classical object-oriented programming language. The solution is integrated with the design-by-contract approach, which increases confidence in functional program correctness by embedding executable program specifications into the program text. We show that our library leads to modular and maintainable code that is accessible to GPGPU non-experts, while providing performance that is comparable with hand-written CUDA code. Furthermore, runtime contract checking turns out to be feasible, as the contracts can be executed on the GPU.
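The following hypothetical Python rendering sketches the library's core idea of contract-guarded, collection-style data-parallel operations (SafeGPU itself is an Eiffel library, and every name below is invented for illustration; real operations would compile to GPU kernels rather than Python loops):

    from typing import Callable, List

    class GpuVector:
        """Stand-in for a device-resident vector."""
        def __init__(self, data: List[float]):
            self.data = list(data)

        def zip_map(self, other: "GpuVector",
                    f: Callable[[float, float], float]) -> "GpuVector":
            # Precondition (design-by-contract): operands of equal length.
            assert len(self.data) == len(other.data), "zip_map: length mismatch"
            return GpuVector([f(a, b) for a, b in zip(self.data, other.data)])

        def sum(self) -> float:
            return sum(self.data)

    def dot(u: GpuVector, v: GpuVector) -> float:
        # Operations compose into a pipeline; contracts are executable and,
        # per the paper, cheap enough to check at runtime on the device.
        return u.zip_map(v, lambda a, b: a * b).sum()

    print(dot(GpuVector([1.0, 2.0]), GpuVector([3.0, 4.0])))  # 11.0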
Towards Practical Graph-Based Verification for an Object-Oriented Concurrency Model
To harness the power of multi-core and distributed platforms, and to make the development of concurrent software more accessible to software engineers, different object-oriented concurrency models such as SCOOP have been proposed. Despite the practical importance of analysing SCOOP programs, there are currently no general verification approaches that operate directly on program code without additional annotations. One reason for this is the multitude of partially conflicting semantic formalisations for SCOOP (either in theory or by-implementation). Here, we propose a simple graph transformation system (GTS) based run-time semantics for SCOOP that captures the most common features of all known semantics of the language. This run-time model is implemented in the state-of-the-art GTS tool GROOVE, which allows us to simulate, analyse, and verify a subset of SCOOP programs with respect to deadlocks and other behavioural properties. Besides proposing the first approach to verify SCOOP programs by automatic translation to GTS, we also highlight our experiences of applying GTS (and especially GROOVE) for specifying semantics in the form of a run-time model, which should be transferable to GTS models for other concurrent languages and libraries.
Comment: In Proceedings GaM 2015, arXiv:1504.0244
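To fix intuitions about what a GTS-based run-time model means, here is a deliberately minimal Python sketch (the encoding and the lock-request example are illustrative; GROOVE's actual rules are graphs with genuine subgraph matching and variable binding, not the fixed-shape match below):

    from typing import Optional, Set, Tuple

    Edge = Tuple[str, str, str]   # (source node, label, target node)
    State = Set[Edge]             # a program state as a graph

    def apply_rule(state: State, lhs: State, rhs: State) -> Optional[State]:
        """One computational step: if the pattern `lhs` occurs in the
        state, delete it and add `rhs`; None means 'rule not applicable'."""
        if lhs <= state:
            return (state - lhs) | rhs
        return None

    # Example step: processor 'p' acquires the idle handler 'h'.
    s0 = {("p", "requests", "h"), ("h", "status", "idle")}
    s1 = apply_rule(s0,
                    lhs={("p", "requests", "h"), ("h", "status", "idle")},
                    rhs={("h", "locked_by", "p")})
    print(s1)  # {('h', 'locked_by', 'p')}

Verifying a SCOOP program then amounts to exploring all states reachable by such rule applications and searching for, e.g., deadlocked ones.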
Verifying total correctness of graph programs
GP 2 is an experimental nondeterministic programming language based on graph transformation rules, allowing for visual programming and the solving of graph problems at a high level of abstraction. In previous work we demonstrated how to verify graph programs using a Hoare-style proof calculus, but only partial correctness was considered. In this paper, we add new proof rules and termination functions, which allow for proofs to additionally guarantee that program executions always terminate (weak total correctness), or that programs always terminate and do so without failure (total correctness). We show that the new proof rules are sound with respect to the operational semantics of GP 2, complete for termination, and demonstrate their use on some example programs.
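Schematically (again with simplified notation, not the paper's exact rules), the iteration rule gains a termination premise: a termination function # maps graphs to natural numbers and must strictly decrease with each rule application from a c-satisfying graph, ruling out infinite runs of R!:

    \[
      \frac{\{c\}\ \mathcal{R}\ \{c\}
            \qquad
            G \models c \ \wedge\ G \Rightarrow_{\mathcal{R}} H
            \ \text{ implies }\ \#H < \#G}
           {\{c\}\ \mathcal{R}!\ \{c \wedge \neg\mathrm{App}(\mathcal{R})\}}
    \]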
Learning from mutants: Using code mutation to learn and monitor invariants of a cyber-physical system
Cyber-physical systems (CPS) consist of sensors, actuators, and controllers all communicating over a network; if any subset becomes compromised, an attacker could cause significant damage. With access to data logs and a model of the CPS, the physical effects of an attack could potentially be detected before any damage is done. Manually building a model that is accurate enough in practice, however, is extremely difficult. In this paper, we propose a novel approach for constructing models of CPS automatically, by applying supervised machine learning to data traces obtained after systematically seeding their software components with faults ("mutants"). We demonstrate the efficacy of this approach on the simulator of a real-world water purification plant, presenting a framework that automatically generates mutants, collects data traces, and learns an SVM-based model. Using cross-validation and statistical model checking, we show that the learnt model characterises an invariant physical property of the system. Furthermore, we demonstrate the usefulness of the invariant by subjecting the system to 55 network and code-modification attacks, and showing that it can detect 85% of them from the data logs generated at runtime.
Comment: Accepted by IEEE S&P 2018
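A condensed sketch of the learning step, assuming traces have already been collected from the unmodified system (label 0) and from its mutants (label 1); the feature layout, file names, and SVM configuration below are illustrative rather than the paper's exact pipeline:

    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    # Each row: a window of sensor/actuator readings flattened to a vector.
    X_normal = np.load("traces_normal.npy")   # shape (n_normal, n_features)
    X_mutant = np.load("traces_mutant.npy")   # shape (n_mutant, n_features)

    X = np.vstack([X_normal, X_mutant])
    y = np.concatenate([np.zeros(len(X_normal)), np.ones(len(X_mutant))])

    # The SVM's decision boundary acts as a learnt invariant of the
    # physical process: mutant-like behaviour falls on the wrong side.
    clf = SVC(kernel="rbf")
    print(cross_val_score(clf, X, y, cv=5).mean())  # cross-validated accuracy
    clf.fit(X, y)

    def is_anomalous(window: np.ndarray) -> bool:
        # Runtime monitoring: flag windows classified as mutant-like.
        return bool(clf.predict(window.reshape(1, -1))[0] == 1)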
Code Integrity Attestation for PLCs using Black Box Neural Network Predictions
Cyber-physical systems (CPSs) are widespread in critical domains, and significant damage can be caused if an attacker is able to modify the code of their programmable logic controllers (PLCs). Unfortunately, traditional techniques for attesting code integrity (i.e. verifying that it has not been modified) rely on firmware access or roots-of-trust, neither of which proprietary or legacy PLCs are likely to provide. In this paper, we propose a practical code integrity checking solution based on privacy-preserving black box models that instead attest the input/output behaviour of PLC programs. Using faithful offline copies of the PLC programs, we identify their most important inputs through an information flow analysis, execute them on multiple input combinations to collect data, then train neural networks able to predict PLC outputs (i.e. actuator commands) from their inputs. By exploiting the black box nature of the model, our solution maintains the privacy of the original PLC code and does not assume that attackers are unaware of its presence. The trust instead comes from the fact that it is extremely hard to attack the PLC code and neural networks at the same time and with consistent outcomes. We evaluated our approach on a modern six-stage water treatment plant testbed, finding that it could predict actuator states from PLC inputs with near-100% accuracy, and thus could detect all 120 effective code mutations that we subjected the PLCs to. Finally, we found that it is not practically possible to simultaneously modify the PLC code and apply discreet adversarial noise to our attesters in a way that leads to consistent (mis-)predictions.
Comment: Accepted by the 29th ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2021)
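A simplified sketch of the attestation loop, assuming a model has already been trained offline on input/output pairs from a faithful copy of the PLC program (the model class and interface below are illustrative, not the paper's exact pipeline):

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    def train_attester(X_inputs: np.ndarray,
                       y_commands: np.ndarray) -> MLPClassifier:
        # Offline: learn to predict actuator commands from the PLC's most
        # important inputs (as identified by information flow analysis).
        model = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500)
        model.fit(X_inputs, y_commands)
        return model

    def attest(model: MLPClassifier, inputs: np.ndarray,
               observed_command: int) -> bool:
        # Online: compare the prediction with the live PLC's actual output;
        # persistent disagreement suggests the code has been modified.
        predicted = model.predict(inputs.reshape(1, -1))[0]
        return predicted == observed_command  # False -> raise an alarm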
How Generalizable are Deepfake Detectors? An Empirical Study
Deepfake videos and images are becoming increasingly credible, posing a significant threat given their potential to facilitate fraud or bypass access control systems. This has motivated the development of deepfake detection methods, in which deep learning models are trained to distinguish between real and synthesized footage. Unfortunately, existing detection models struggle to generalize to deepfakes from datasets they were not trained on, but little work has been done to examine why or how this limitation can be addressed. In this paper, we present the first empirical study on the generalizability of deepfake detectors, an essential goal for detectors to stay one step ahead of attackers. Our study utilizes six deepfake datasets, five deepfake detection methods, and two model augmentation approaches, confirming that detectors do not generalize in zero-shot settings. Additionally, we find that detectors are learning unwanted properties specific to synthesis methods and struggling to extract discriminative features, limiting their ability to generalize. Finally, we find that there are neurons universally contributing to detection across seen and unseen datasets, illuminating a possible path forward to zero-shot generalizability.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
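The zero-shot protocol referred to above reduces to a cross-dataset evaluation matrix: train a detector on one dataset and evaluate it unchanged on the others, with off-diagonal scores measuring generalization. A schematic in Python, assuming sklearn-style detectors and placeholder dataset loaders (the six datasets and five methods of the study are not reproduced here):

    from itertools import permutations

    def zero_shot_matrix(datasets: dict, make_detector):
        """datasets: name -> ((X_train, y_train), (X_test, y_test)).
        Returns {(src, dst): accuracy} for all ordered dataset pairs."""
        scores = {}
        for src, dst in permutations(datasets, 2):
            detector = make_detector()          # fresh model per source
            detector.fit(*datasets[src][0])     # train on the source only
            scores[(src, dst)] = detector.score(*datasets[dst][1])
        return scores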