Deductive formal verification of embedded systems
We combine static analysis techniques with model-based deductive verification using SMT solvers to provide a framework that, given an analysis aspect of the source code, automatically generates an analyzer capable of inferring information about that aspect.
The analyzer is generated by translating the collecting semantics of a program to a formula in first order logic over multiple underlying theories. We import the semantics of the API invocations as first order logic assertions. These assertions constitute the models used by the analyzer. Logical specification of the desired program behavior is incorporated as a first order logic formula. An SMT-LIB solver treats the combined formula as a constraint and solves it. The solved form can be used to identify logical and security errors in embedded programs. We have used this framework to analyze Android applications and MATLAB code.
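The workflow described above (encoding a path condition and imported API assertions as a single constraint, then asking a solver for a satisfying assignment) can be sketched as follows. The path condition, API precondition, and integer domain are invented for illustration, and a brute-force search stands in for a real SMT-LIB solver such as Z3:

```python
# Minimal sketch: treating a safety check as constraint solving.
# A real analyzer would emit SMT-LIB and invoke an SMT solver;
# here a brute-force search over a small domain plays that role.

def path_condition(x, y):
    # program path: taken when x > 0 and y == x + 1
    return x > 0 and y == x + 1

def api_model(y):
    # imported API semantics as an assertion: the call requires y < 8
    return y < 8

def find_violation(domain):
    """Return a witness (x, y) that reaches the path but violates
    the API precondition, or None if no such witness exists."""
    for x in domain:
        for y in domain:
            if path_condition(x, y) and not api_model(y):
                return (x, y)
    return None

witness = find_violation(range(-10, 11))
```

A satisfying assignment of the combined formula is exactly an error witness: concrete inputs that drive execution down the path and break the API's assumed semantics.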
We also report the formal verification of the conformance of the open-source Netgear WNR3500L wireless router firmware implementation to RFC 2131. Formal verification of a software system is essential for its deployment in mission-critical environments. The specifications for the development of routers are provided by RFCs, which are described only informally in English. It is prudent to ensure that a router firmware conforms to its corresponding RFC before it is deployed to manage mission-critical networks. The formal verification process demonstrates the usefulness of inductive types and higher-order logic in software certification.
Unifying Static And Runtime Analysis In Declarative Distributed Systems
Today’s distributed systems are becoming increasingly complex, due to the ever-growing number of network devices and their variety. The complexity makes it hard for system administrators to correctly configure distributed systems. This motivates the need for effective analytic tools that can help ensure correctness of distributed systems.
One challenge in ensuring correctness is that no single solution works for all properties. Some properties, such as security properties, are so critical that they demand pre-deployment verification (i.e., static analysis), which, though time-consuming, explores the whole execution space. However, due to the potential for state explosion, static verification of all properties is neither practical nor necessary. Violation of non-critical properties, such as correct routing with shortest paths, is tolerable during execution and can be diagnosed after errors occur (i.e., runtime analysis), a more lightweight approach than verification.
This dissertation presents STRANDS, a declarative framework that enables users to perform both pre-deployment verification and post-deployment diagnostics on top of a declarative specification of a distributed system. STRANDS uses Network Datalog (NDlog), a distributed variant of the Datalog query language, to specify network protocols and services. STRANDS has two components: a system verifier and a system debugger. The verifier allows the user to rigorously prove safety properties of network protocols and services, using either the program logic or the symbolic execution we develop for NDlog programs. The debugger, on the other hand, facilitates diagnosis of system errors by allowing querying of the structured history of network execution (i.e., network provenance), which is maintained in a storage-efficient manner.
We show the effectiveness of STRANDS by evaluating both the verifier and the debugger. Using the verifier, we prove path authenticity of secure routing protocols and verify a number of safety properties in software-defined networking (SDN). We also demonstrate that our provenance maintenance algorithm achieves significant storage reduction while incurring negligible network overhead.
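As a minimal illustration of the declarative style that NDlog builds on, the following sketch evaluates the classic Datalog reachability program by naive bottom-up fixpoint iteration. Real NDlog evaluation is distributed and incremental; the link facts here are invented:

```python
# Naive bottom-up evaluation of the reachability program that
# Datalog-style languages express as:
#   reachable(X,Y) :- link(X,Y).
#   reachable(X,Z) :- link(X,Y), reachable(Y,Z).
# A centralized fixpoint loop, shown only for the declarative semantics.

def reachable(links):
    facts = set(links)            # base case: every link is reachable
    changed = True
    while changed:                # iterate until no new fact is derived
        changed = False
        for (x, y) in links:
            for (a, z) in list(facts):
                if a == y and (x, z) not in facts:
                    facts.add((x, z))   # recursive rule fires
                    changed = True
    return facts

links = {("a", "b"), ("b", "c"), ("c", "d")}
paths = reachable(links)
```

The fixpoint semantics is what makes both static proof and provenance-based diagnosis possible: every derived fact has a derivation tree that can be recorded and queried.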
An approach to enacting business process models in support of the life cycle of integrated manufacturing systems
The complexity of enterprise engineering processes requires the application of reference architectures as a means of guiding the achievement of an adequate level of business integration. This research aims to address important aspects of this requirement by associating the formalism of reference architectures with various life-cycle phases of integrated manufacturing systems (IMS) and enabling their use in addressing contemporary systems engineering issues.

In pursuit of this aim, the following research activities were carried out: (1) devising a framework which supports key phases of the IMS life cycle and (2) populating part of this framework with an initial combination of architectures which can be encapsulated into a computer-aided systems engineering environment. This has led to the creation of a workbench capable of providing support for modelling, analysis, simulation, rapid prototyping, configuration and run-time operation of an IMS, based on a consistent set of models associated with the engineering processes involved. The research effort concentrated on selecting and investigating the use of appropriate formalisms which underpin a selection of architectures and tools (i.e. CIM-OSA, Petri nets, object-oriented methods and CIM-BIOSYS), by designing, implementing, applying and testing the workbench.

The main contribution of this research is to demonstrate that it is possible to retain an adequate level of formalism, via computational structures and models, which extends through the IMS life cycle from a conceptual description of the system through to the actions the system performs when operating. The underlying methodology which supported this contribution is based on enacting models of system behaviour which encode important coordination aspects of manufacturing systems. The strategy for demonstrating the incorporation of formalism into the IMS life cycle was to enable the aggregation into a workbench of knowledge of 'what' the system is expected to achieve (i.e. the 'problems' to be addressed) and 'how' the system can achieve it (i.e. the possible 'solutions'). Within the workbench, such knowledge is represented through an amalgamation of business process modelling and object-oriented modelling approaches which, when adequately manipulated, can lead to business integration.
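To illustrate the kind of enactable formalism mentioned above, here is a minimal Petri-net interpreter in Python. The place and transition names are invented, and this is a sketch of the general technique, not the workbench's actual implementation:

```python
# A minimal Petri-net step: a transition is enabled when its input
# places hold enough tokens; firing it consumes those tokens and
# produces tokens in its output places. Names are illustrative only.

def enabled(marking, pre):
    """True when every input place holds at least the required tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume pre-set tokens, produce post-set tokens."""
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical 'start_job' transition: a token moves from 'queued'
# to 'machining', coordinating a manufacturing step.
marking = {"queued": 1, "machining": 0}
pre, post = {"queued": 1}, {"machining": 1}
if enabled(marking, pre):
    marking = fire(marking, pre, post)
```

Enacting such a model at run time means repeatedly selecting and firing enabled transitions, which is how a behavioural model can drive coordination rather than merely document it.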
Modelling a Distributed Data Acquisition System
This thesis discusses the formal modelling and verification of certain non-real-time aspects of correctness of a mission-critical distributed software system known as the ALICE Data Point Service (ADAPOS). The domain of this distributed system is data acquisition from a particle detector control system in experimental high-energy particle physics research. ADAPOS is part of the upgrade effort of A Large Ion Collider Experiment (ALICE) at the European Organisation for Nuclear Research (CERN), near Geneva in France/Switzerland, for the third run of the Large Hadron Collider (LHC). ADAPOS is based on the publicly available ALICE Data Point Processing (ADAPRO) C++14 framework and works within the free and open-source GNU/Linux ecosystem.

The model checker Spin was chosen for modelling and verifying ADAPOS. The model focuses on the general specification of ADAPOS. It includes ADAPOS processes, a load-generator process, and rudimentary interpretations of the network protocols used between the processes. To experiment with different interpretations of the underlying network protocols, and to cope with the state-space explosion problem, eight variants of the model were developed and studied. Nine Linear Temporal Logic (LTL) properties were defined for all of those variants.

Large numbers of states were covered during model checking, even though the model turned out to have a reachable state space too large to exhaust fully. No counter-examples to the safety properties were found, yielding a significant amount of evidence that ADAPOS is safe. Liveness properties and implementation-level verification, among other possible research directions, remain open.
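The style of safety checking that Spin performs can be sketched as an explicit-state breadth-first search for an invariant violation. The toy transition system below is invented and bears no relation to the actual ADAPOS model:

```python
# Toy explicit-state safety check in the style of a model checker:
# explore reachable states breadth-first and report the first state
# violating the invariant (a safety property), or None if it holds.
from collections import deque

def check_safety(initial, successors, invariant):
    """Return a counter-example state, or None if the invariant
    holds on every reachable state."""
    seen, frontier = {initial}, deque([initial])
    while frontier:
        state = frontier.popleft()
        if not invariant(state):
            return state                  # counter-example found
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return None                           # invariant holds everywhere

# Invented example: a bounded counter that must never exceed 3.
succ = lambda s: [s + 1] if s < 3 else []
cex = check_safety(0, succ, lambda s: s <= 3)
```

Spin additionally handles process interleavings, LTL properties, and partial-order reductions; the state-space explosion mentioned above arises because the set `seen` grows with the product of all process states.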
First-Order Models for Configuration Analysis
Our world teems with networked devices. Their configuration exerts an ever-expanding influence on our daily lives. Yet correctly configuring systems, networks, and access-control policies is notoriously difficult, even for trained professionals. Automated static analysis techniques provide a way to both verify a configuration's correctness and explore its implications. One such approach is scenario-finding: showing concrete scenarios that illustrate potential (mis-)behavior. Scenarios even have a benefit to users without technical expertise, as concrete examples can both trigger and improve users' intuition about their system.

This thesis describes a concerted research effort toward improving scenario-finding tools for configuration analysis. We developed Margrave, a scenario-finding tool with special features designed for security policies and configurations. Margrave is not tied to any one specific policy language; rather, it provides an intermediate input language as expressive as first-order logic. This flexibility allows Margrave to reason about many different types of policy. We show Margrave in action on Cisco IOS, a common language for configuring firewalls, demonstrating that scenario-finding with Margrave is useful for debugging and validating real-world configurations.

This thesis also presents a theorem showing that, for a restricted subclass of first-order logic, if a sentence is satisfiable then there must exist a satisfying scenario no larger than a computable bound. For such sentences scenario-finding is complete: one can be certain that no scenarios are missed by the analysis, provided that one checks up to the computed bound. We demonstrate that many common configurations fall into this subclass and give algorithmic tests for both sentence membership and counting. We have implemented both in Margrave. Aluminum is a tool that eliminates superfluous information in scenarios and allows users' goals to guide which scenarios are displayed.
We quantitatively show that our methods of scenario reduction and exploration are effective and quite efficient in practice. Our work on Aluminum is making its way into other scenario-finding tools. Finally, we describe FlowLog, a language for network programming that we created with analysis in mind. We show that FlowLog can express many common network programs, yet demonstrate that automated analysis and bug-finding for FlowLog are both feasible and complete.
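The bounded scenario-finding idea can be sketched as exhaustive enumeration of finite structures up to a size bound. The policy predicates below are invented for illustration; this is not Margrave's actual engine:

```python
# Toy scenario finder: enumerate all interpretations of two unary
# predicates over domains of size 1..bound, returning the first
# "scenario" (finite model) satisfying the query. Completeness up to
# the bound is exactly what the small-model theorem above guarantees
# for the restricted sentence class.
from itertools import product

def find_scenario(bound, query):
    for n in range(1, bound + 1):
        domain = range(n)
        # each unary predicate is a truth assignment over the domain
        for admin in product([False, True], repeat=n):
            for allowed in product([False, True], repeat=n):
                if query(domain, admin, allowed):
                    return {"size": n, "admin": admin, "allowed": allowed}
    return None

# Invented query: some element is an admin yet is denied by the policy.
denied_admin = lambda dom, adm, alw: any(adm[e] and not alw[e] for e in dom)
scenario = find_scenario(3, denied_admin)
```

A returned scenario is a concrete counter-example a user can inspect; `None` up to the bound, for sentences in the restricted subclass, means no scenario exists at all.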
Advanced information processing system: The Army fault tolerant architecture conceptual study. Volume 2: Army fault tolerant architecture design and analysis
Described here are the Army Fault Tolerant Architecture (AFTA) hardware architecture and components and the operating system. The architectural and operational theory of the AFTA Fault Tolerant Data Bus is discussed. The test and maintenance strategy developed for use in fielded AFTA installations is presented. An approach to be used in reducing the probability of AFTA failure due to common-mode faults is described. Analytical models for AFTA performance, reliability, availability, life-cycle cost, weight, power, and volume are developed. An approach is presented for using VHSIC Hardware Description Language (VHDL) to describe and design AFTA's developmental hardware. A plan is described for verifying and validating key AFTA concepts during the Dem/Val phase. Analytical models and partial mission requirements are used to generate AFTA configurations for the TF/TA/NOE and Ground Vehicle missions.
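As an example of the analytical building blocks such reliability models typically use, the sketch below computes the standard reliability of a 2-of-3 (triple-modular-redundancy) voter arrangement under an exponential channel-failure model. The failure rate and mission time are invented, and this is not AFTA's actual model:

```python
# Standard TMR reliability: a 2-of-3 arrangement works when at least
# two of three independent channels work. With per-channel
# reliability r, that probability is 3r^2 - 2r^3.
import math

def tmr_reliability(r):
    """Probability that at least 2 of 3 independent channels work."""
    return 3 * r**2 - 2 * r**3

def channel_reliability(lam, t):
    """Exponential failure model: R(t) = exp(-lambda * t)."""
    return math.exp(-lam * t)

# Invented numbers: failure rate 1e-4 per hour over a 10-hour mission.
r = channel_reliability(1e-4, 10.0)
system_r = tmr_reliability(r)
```

For highly reliable channels the voted system is more reliable than a single channel, which is the quantitative argument redundancy-based architectures rest on.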
Doctor of Philosophy dissertation
In computer science, functional software testing is a method of ensuring that software gives the expected output on specific inputs. Software testing is conducted to ensure desired levels of quality in light of the uncertainty resulting from the complexity of software. Most of today's software is written by people, and software development is a creative activity. However, due to the complexity of computer systems and software development processes, this activity leads to a mismatch between the expected software functionality and the implemented one. If not addressed in a timely and proper manner, this mismatch can cause serious consequences to users of the software, such as security and privacy breaches, financial loss, and adverse effects on human health. Because it relies on manual effort, software testing is costly. Software testing performed without human intervention, known as automatic software testing, is one way of addressing this issue. In this work, we build upon and extend several techniques for automatic software testing. The techniques do not require any guidance from the user. The goals achieved with these techniques are checking for as-yet-unknown errors, automatically testing object-oriented software, and detecting malicious software. To meet these goals, we explored several techniques and related challenges: automatic test-case generation, runtime verification, dynamic symbolic execution, and the type and size of test inputs for efficient detection of malicious software via machine learning. Our work targets software written in the Java programming language, though the techniques are general and applicable to other languages. We performed an extensive evaluation on freely available Java software projects, a flight collision avoidance system, and thousands of applications for the Android operating system.
Evaluation results show to what extent dynamic symbolic execution is applicable to testing object-oriented software, demonstrate the correctness of the flight system on millions of automatically customized and generated test cases, and show that simple and relatively small inputs in random testing can lead to effective malicious-software detection.
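A minimal sketch of fully automatic (random) test generation follows; the unit under test and the checked properties are invented for illustration:

```python
# Random property-based testing sketch: generate many inputs with a
# seeded RNG, run the unit under test, and check properties that any
# correct output must satisfy. No human-written test cases required.
import random

def unit_under_test(xs):
    # intended behaviour: return the list sorted in ascending order
    return sorted(xs)

def random_tests(trials, seed=0):
    rng = random.Random(seed)     # seeded for reproducible failures
    failures = []
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 10))]
        out = unit_under_test(xs)
        # properties: output is a permutation of the input, and ordered
        ok = sorted(xs) == out and all(a <= b for a, b in zip(out, out[1:]))
        if not ok:
            failures.append(xs)
    return failures

failures = random_tests(500)
```

Dynamic symbolic execution improves on this by deriving inputs from path constraints rather than drawing them at random, but the oracle problem (what property to check) is shared by both approaches.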
CryptoBap: A Binary Analysis Platform for Cryptographic Protocols
We introduce CryptoBap, a platform to verify weak secrecy and authentication for the (ARMv8 and RISC-V) machine code of cryptographic protocols. We achieve this by first transpiling the binary of a protocol into an intermediate representation and then performing a crypto-aware symbolic execution to automatically extract a model of the protocol that represents all its execution paths. Our symbolic execution resolves indirect jumps and supports bounded loops using a loop-summarization technique, which we fully automate. The extracted model is then translated into models amenable to automated verification via ProVerif and CryptoVerif using a third-party toolchain. We prove the soundness of the proposed approach and use CryptoBap to verify multiple case studies, ranging from toy examples to real-world protocols: TinySSH, an implementation of SSH, and WireGuard, a modern VPN protocol.
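Loosely in the spirit of the bounded-loop symbolic execution described above, the toy sketch below unrolls a single symbolic loop up to a bound and records the symbolic value of an accumulator on each exit path. This is purely illustrative and unrelated to CryptoBap's actual machinery:

```python
# Toy bounded symbolic execution: symbolic values are represented as
# expression strings. Unrolling `acc = acc + x` up to `bound` times
# yields one symbolic path expression per possible loop exit, so the
# extracted "model" covers every execution path up to the bound.

def sym_exec(bound):
    """Unroll `acc = acc + x` up to `bound` iterations, collecting
    the symbolic expression for acc on each exit path."""
    paths = []
    acc = "0"                       # symbolic initial value
    for i in range(bound):
        acc = f"({acc} + x)"        # symbolic effect of one iteration
        paths.append(acc)           # path exiting after i + 1 iterations
    return paths

paths = sym_exec(3)
```

Loop summarization replaces this explicit unrolling with a closed-form characterization of the loop's effect, which is what makes whole-protocol extraction tractable.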