6 research outputs found

    Security properties of self-similar uniformly parameterised systems of cooperations

    Uniform parameterisations of cooperations are defined in terms of formal language theory, such that each pair of partners cooperates in the same manner and the mechanism (schedule) determining how one partner may be involved in several cooperations is the same for each partner. Generalising the idea of each pair of partners cooperating in the same manner, a kind of self-similarity is formalised for such systems of cooperations. From an abstracting point of view, where only the actions of some selected partners are considered, the complex system of all partners behaves like the smaller subsystem of the selected partners. For verification purposes, so-called uniformly parameterised safety properties are defined. Such properties can be used to express privacy policies as well as security and dependability requirements. It is shown how self-similarity reduces the parameterised problem of verifying such a property to a finite-state problem.
    Keywords: cooperations as prefix-closed languages; abstractions of system behaviour; self-similarity in systems of cooperations; privacy policies; uniformly parameterised safety properties
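    A minimal, hypothetical Python sketch of the language-theoretic setting may help: behaviours are prefix-closed sets of action sequences, and abstraction keeps only the actions of selected partners. The (partner, label) encoding of actions and the example traces are illustrative assumptions, not the paper's formal definitions.

    def prefix_closure(words):
        """Return the prefix closure of a set of action sequences."""
        closed = set()
        for w in words:
            for i in range(len(w) + 1):
                closed.add(w[:i])
        return closed

    def project(language, partners):
        """Abstract a behaviour by keeping only actions of the selected partners."""
        return {tuple(a for a in w if a[0] in partners) for w in language}

    # Two partners p1, p2 each cooperating with partner q in the same manner.
    full = prefix_closure({
        (("p1", "req"), ("q", "ack"), ("p2", "req"), ("q", "ack")),
    })

    # Restricted to the actions of p1 and q, the larger system behaves like the
    # two-partner subsystem -- the self-similarity exploited for verification.
    abstract_view = project(full, {"p1", "q"})
    print(sorted(abstract_view, key=len))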

    A framework for automatic security controller generation

    This paper concerns the study, development, and synthesis of mechanisms for guaranteeing the security of complex systems, i.e., systems composed of several interacting components. A complex system under analysis is described as an open system in which a certain component has an unspecified behavior (not fixed in advance). Regardless of that unspecified behavior, the system should work properly, e.g., satisfy a certain property. Within this formal approach, we propose techniques to enforce properties and to synthesize controller programs that guarantee that, for all possible behaviors of the unspecified component, the overall system is secure. For this task, we use techniques that provide necessary and sufficient conditions on the behavior of the unspecified component for the whole system to be secure. We then automatically synthesize the appropriate controller programs by exploiting satisfiability results for temporal logic. We contribute to the area of enforcement of security properties by proposing a flexible and automated framework that goes beyond defining how a system should behave to work properly: while the majority of related work focuses on the definition of monitoring mechanisms, we address the synthesis of enforcement mechanisms. Moreover, we present a tool for the synthesis of secure systems that generates controller programs directly executable on real devices such as smartphones.
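    As an illustration of the enforcement idea (not the paper's synthesis algorithm, which derives controllers from temporal-logic satisfiability), the following hypothetical Python sketch shows a controller mediating the actions of an unspecified component so that every observable trace satisfies a safety property; the Controller/step names and the example property are assumptions for the sketch.

    from typing import Callable, List

    class Controller:
        def __init__(self, is_safe: Callable[[List[str]], bool]):
            self.is_safe = is_safe      # safety predicate on finite traces
            self.trace: List[str] = []  # actions allowed so far

        def step(self, action: str) -> bool:
            """Allow the action only if the extended trace remains safe."""
            if self.is_safe(self.trace + [action]):
                self.trace.append(action)
                return True
            return False  # suppress the unsafe action of the unspecified component

    # Example safety property: a "write" may only occur after some "auth".
    def no_write_before_auth(trace: List[str]) -> bool:
        return "auth" in trace[:trace.index("write")] if "write" in trace else True

    ctrl = Controller(no_write_before_auth)
    for action in ["read", "write", "auth", "write"]:  # arbitrary component behaviour
        print(action, "allowed" if ctrl.step(action) else "suppressed")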

    Considerations in Assuring Safety of Increasingly Autonomous Systems

    Recent technological advances have accelerated the development and application of increasingly autonomous (IA) systems in civil and military aviation. IA systems can provide automation of complex mission tasks, ranging across reduced crew operations, air-traffic management, and unmanned autonomous aircraft, with most applications calling for collaboration and teaming among humans and IA agents. IA systems are expected to provide benefits in terms of safety, reliability, efficiency, affordability, and previously unattainable mission capability, and they have the potential to improve safety by removing human error. There are, however, several challenges in the safety assurance of these systems due to their highly adaptive and non-deterministic behavior, and vulnerabilities due to potential divergence of airplane state awareness between the IA system and humans. These systems must deal with external sensors and actuators, and they must respond in a time commensurate with the activities of the system in its environment. One of the main challenges is that safety assurance, which currently relies on authority transfer from an autonomous function to a human to mitigate safety concerns, will need to address mitigation by automation in a collaborative, dynamic context. These challenges have a fundamental, multidimensional impact on the safety assurance methods, system architecture, and V&V capabilities to be employed. The goal of this report is to identify the relevant issues to be addressed in these areas, the potential gaps in current safety assurance techniques, and the critical questions that would need to be answered to assure the safety of IA systems. We focus on a reduced crew operations scenario in which an IA system reduces, changes, or eliminates a human's role in the transition from two-pilot operations.

    Compositional Analysis for Verification of Parameterized Systems

    Many safety-critical systems considered by the verification community are parameterized by the number of concurrent components in the system, and hence describe an infinite family of systems. Traditional model checking techniques can only verify specific instances of this family. In this paper, we present a technique based on compositional model checking and program analysis for the automatic verification of infinite families of systems. The technique views a parameterized system as an expression in a process algebra (CCS) and interprets this expression over a domain of formulas (modal mu-calculus), treating a process as a property transformer. The transformers are constructed using partial model checking techniques. At its core, our technique solves the verification problem by finding the limit of a chain of formulas. We present a widening operation to find such a limit for properties expressible in a subset of the modal mu-calculus. We describe the verification of a number of parameterized systems using our technique to demonstrate its utility.
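    The scheme can be pictured with a small, hypothetical Python sketch: a property transformer stands in for composing one more component (as partial model checking would), the chain of properties is iterated until it stabilises, and a widening step forces convergence. The set-of-obligations representation, the toy transformer, and the widening are illustrative assumptions, not the paper's mu-calculus construction.

    from typing import Callable, FrozenSet

    Obligations = FrozenSet[str]

    def chain_limit(phi0: Obligations,
                    transform: Callable[[Obligations], Obligations],
                    widen: Callable[[Obligations, Obligations], Obligations],
                    bound: int = 10) -> Obligations:
        """Iterate the property transformer until the chain of formulas stabilises."""
        phi = phi0
        for i in range(bound):
            nxt = transform(phi)
            if nxt == phi:          # the chain has reached its limit
                return phi
            if i >= bound // 2:     # accelerate with widening if convergence is slow
                nxt = widen(phi, nxt)
            phi = nxt
        return phi

    # Toy transformer: each additional component adds one obligation, up to a cap.
    def transform(phi: Obligations) -> Obligations:
        return phi | frozenset({f"fair_{len(phi)}"} if len(phi) < 8 else set())

    # Toy widening: over-approximate the remaining obligations in one step.
    def widen(old: Obligations, new: Obligations) -> Obligations:
        return old | new | frozenset({"fair_any"})

    print(sorted(chain_limit(frozenset({"mutex"}), transform, widen)))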