    How to Win First-Order Safety Games

    First-order (FO) transition systems have recently attracted attention for the verification of parametric systems such as network protocols, software-defined networks, or multi-agent workflows like conference management systems. Functional correctness and noninterference of these systems have conveniently been formulated as safety and hypersafety properties, respectively. In this article, we take the step from verification to synthesis---tackling the question whether it is possible to automatically synthesize predicates to enforce safety or hypersafety properties like noninterference. For that, we generalize FO transition systems to FO safety games. For FO games with monadic predicates only, we provide a complete classification into decidable and undecidable cases. For games with non-monadic predicates, we concentrate on universal first-order invariants, since these are sufficient to express a large class of properties---for example noninterference. We identify a non-trivial sub-class where invariants can be proven inductive and FO winning strategies effectively constructed. We also show how the extraction of weakest FO winning strategies can be reduced to SO quantifier elimination itself. We demonstrate the usefulness of our approach by automatically synthesizing nontrivial FO specifications of messages in a leader election protocol as well as for paper assignment in a conference management system to exclude unappreciated disclosure of reports.

    Composition and Declassification in Possibilistic Information Flow Security

    Formal methods for security can rule out whole classes of security vulnerabilities, but applying them in practice remains challenging. This thesis develops formal verification techniques for information flow security that combine the expressivity and scalability strengths of existing frameworks. It builds upon Bounded Deducibility (BD) Security, which allows specifying and verifying fine-grained policies about what information may flow when to whom. Our main technical result is a compositionality theorem for BD Security, providing scalability by allowing us to verify security properties of a large system by verifying smaller components. Its practical utility is illustrated by a case study of verifying confidentiality properties of a distributed social media platform. Moreover, we discuss its use for the modular development of secure workflow systems, and for the security-preserving enforcement of safety and security properties other than information flow control.

    Seventh Biennial Report: June 2003 - March 2005


    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers presented together with 3 invited papers and 2 tutorials were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, and industrial applications.

    Tools and techniques for analysing the impact of information security

    PhD Thesis. The discipline of information security is employed by organisations to protect the confidentiality, integrity and availability of information, often communicated in the form of information security policies. A policy expresses rules, constraints and procedures to guard against adversarial threats and reduce risk by instigating desired and secure behaviour of those people interacting with information legitimately. To keep aligned with a dynamic threat landscape, evolving business requirements, regulation updates, and new technologies, a policy must undergo periodic review and change. Chief Information Security Officers (CISOs) are the main decision makers on information security policies within an organisation. Making informed policy modifications involves analysing, and therefore predicting, the impact of those changes on the success rate of business processes often expressed as workflows. Security brings an added burden to completing a workflow. Adding a new security constraint may reduce success rate or even eliminate it if a workflow is always forced to terminate early. This can increase the chances of employees bypassing or violating a security policy. Removing an existing security constraint may increase success rate but may also increase the risk to security. A lack of suitably aimed impact analysis tools and methodologies for CISOs means impact analysis is currently a somewhat manual and ambiguous procedure. Analysis can be overwhelming, time consuming, error prone, and yield unclear results, especially when workflows are complex, have a large workforce, and diverse security requirements. This thesis considers the provision of tools and more formal techniques specific to CISOs to help them analyse the impact modifying a security policy has on the success rate of a workflow.
More precisely, these tools and techniques have been designed to efficiently compare the impact between two versions of a security policy applied to the same workflow, one before, the other after a policy modification. This work focuses on two specific types of security impact analysis. The first is quantitative in nature, providing a measure of success rate for a security constrained workflow which must be executed by employees who may be absent at runtime. This work considers quantifying workflow resiliency, which indicates a workflow's expected success rate assuming the availability of employees to be probabilistic. New aspects of quantitative resiliency are introduced in the form of workflow metrics, and risk management techniques to manage workflows that must work with a resiliency below acceptable levels. Defining these risk management techniques has led to exploring the reduction of resiliency computation time and analysing resiliency in workflows with choice. The second area of focus is more qualitative, in terms of facilitating analysis of how people are likely to behave in response to security and how that behaviour can impact the success rate of a workflow at a task level. Large amounts of information from disparate sources exists on human behavioural factors in a security setting, which can be aligned with security standards and structured within a single ontology to form a knowledge base. Consultations with two CISOs have been conducted, whose responses have driven the implementation of two new tools, one graphical, the other Web-oriented, allowing CISOs and human factors experts to record and incorporate their knowledge directly within an ontology. The ontology can be used by CISOs to assess the potential impact of changes made to a security policy and help devise behavioural controls to manage that impact. The two consulted CISOs have also carried out an evaluation of the Web-oriented tool.
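The notion of quantitative resiliency described above can be illustrated with a deliberately simplified sketch. The thesis itself develops richer models (metrics, choice, risk management); the code below assumes only that a workflow succeeds when every task has at least one available, authorised employee, and that each employee's availability is an independent probability. All function names and the example figures are illustrative, not taken from the thesis.

```python
# Simplified sketch of quantitative workflow resiliency:
# a workflow succeeds if every task can be executed, i.e. at least one
# authorised employee assigned to that task is available at runtime.
# Availabilities are modelled as independent probabilities.

def task_success(avail_probs):
    """Probability that at least one assigned employee is available."""
    p_all_absent = 1.0
    for p in avail_probs:
        p_all_absent *= (1.0 - p)
    return 1.0 - p_all_absent

def workflow_resiliency(tasks):
    """Expected success rate of a sequential workflow.

    tasks: one list of availability probabilities per task,
    covering the employees authorised for that task.
    """
    resiliency = 1.0
    for avail_probs in tasks:
        resiliency *= task_success(avail_probs)
    return resiliency

# Hypothetical three-task workflow: two staff on task 1, one on task 2,
# three on task 3, each with an independent chance of being present.
tasks = [[0.9, 0.8], [0.95], [0.7, 0.7, 0.7]]
print(round(workflow_resiliency(tasks), 4))  # prints 0.9059
```

Under this model one can already see the trade-off the abstract describes: tightening a policy (removing an authorised employee from a task's list) can only lower the computed resiliency, while relaxing it can raise resiliency at the cost of security.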

    Resilience-Building Technologies: State of Knowledge -- ReSIST NoE Deliverable D12

    This document is the first product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence.

    MDD4SOA: Model-Driven Development for Service-Oriented Architectures
