    Formal verification of safety-critical user interfaces: a space system case study

    Safe operation of safety-critical systems depends on appropriate interactions between the human operator and the computer system. Specification of such systems is fundamental to enabling exhaustive and automated analysis of operator-system interaction. In this paper we present a structured, comprehensive, and computer-aided approach to formally specifying and verifying user interfaces based on model checking techniques. J.C. Campos is funded by project ref. NORTE-07-0124-FEDER-000062, co-financed by the North Portugal Regional Operational Programme (ON.2 – O Novo Norte) under the National Strategic Reference Framework (NSRF) through the European Regional Development Fund (ERDF), and by national funds through the Portuguese Foundation for Science and Technology (FCT).
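
    A minimal Python sketch of the kind of exhaustive, automated analysis the abstract describes: breadth-first exploration of every reachable state of a toy operator-interface model, checking a confirmation-before-execution safety invariant. The states, transitions, and invariant are invented for illustration and are not the paper's space-system model.

```python
# Toy operator-interface model: state = (screen, command_armed, confirmed).
from collections import deque

INITIAL = ("main", False, False)

def successors(state):
    screen, armed, confirmed = state
    if screen == "main":
        yield ("arm", True, False)           # operator selects a critical command
    elif screen == "arm":
        yield ("confirm", armed, True)       # operator confirms
        yield ("main", False, False)         # operator cancels
    elif screen == "confirm":
        yield ("execute", armed, confirmed)  # system executes the command
    elif screen == "execute":
        yield ("main", False, False)         # interface returns to the main screen

def safe(state):
    # Safety invariant: execution only ever happens after explicit confirmation.
    screen, armed, confirmed = state
    return screen != "execute" or (armed and confirmed)

def check():
    seen, frontier = {INITIAL}, deque([INITIAL])
    while frontier:                          # exhaustive breadth-first search
        state = frontier.popleft()
        if not safe(state):
            return f"violation in state {state}"
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return f"invariant holds over {len(seen)} reachable states"

print(check())
```

    Model checkers such as those used in the paper perform this exploration symbolically and at far larger scale, but the underlying question, whether any reachable interaction state violates the property, is the same.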

    Self-Adaptive Role-Based Access Control for Business Processes

    © 2017 IEEE. We present an approach for dynamically reconfiguring the role-based access control (RBAC) of information systems running business processes, to protect them against insider threats. The new approach uses business process execution traces and stochastic model checking to establish confidence intervals for key measurable attributes of user behaviour, and thus to identify and adaptively demote users who misuse their access permissions maliciously or accidentally. We implemented and evaluated the approach and its policy specification formalism for a real IT support business process, showing their ability to express and apply a broad range of self-adaptive RBAC policies.
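
    To make the trace-and-confidence-interval idea concrete, here is a small Python sketch: it estimates a confidence interval for a user's rate of policy-violating actions from an execution trace and demotes only when the interval's lower bound exceeds a tolerance. The Wilson interval, the 5% tolerance, and the trace format are assumptions for illustration, not the paper's calibration or formalism.

```python
import math

def wilson_interval(violations, total, z=1.96):
    """95% Wilson score confidence interval for a violation rate."""
    if total == 0:
        return (0.0, 1.0)
    p = violations / total
    denom = 1 + z * z / total
    centre = p + z * z / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z * z / (4 * total * total))
    return ((centre - margin) / denom, (centre + margin) / denom)

def review_user(trace, tolerance=0.05):
    """trace: list of (action, violated_policy) events for one user."""
    violations = sum(1 for _, bad in trace if bad)
    low, _high = wilson_interval(violations, len(trace))
    # Demote only when we are confident the true rate exceeds the tolerance,
    # so a user is not punished for a handful of accidental violations.
    return "demote" if low > tolerance else "keep"

trace = [("approve_ticket", False)] * 40 + [("escalate_self", True)] * 10
print(review_user(trace))  # "demote": the interval's lower bound (~0.11) exceeds 0.05
```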

    Verification of Policies in Human Cyber-Physical Systems: The Role and Importance of Resilience

    Cyber-physical systems (CPS) are characterised by interactions of physical and computational components. A CPS also interacts with its operational environment, and thus with other entities including humans. Humans are an important aspect of human CPS (HCPS), since they are responsible for using (e.g., administering) these types of system. Such interactions are usually expressed through access control policies, which in many cases (e.g., when performing critical operations) are required to support the property of resilience in order to cope with challenges to the normal operation of the HCPS. In this paper, we pinpoint the importance of resilience as a property of access control policies and describe a mechanism for its formal verification. Finally, we identify potential future directions in the verification of access control properties complementary to resilience.
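
    One common reading of resilience for access control is that critical operations must remain performable after the loss of any single authorised user. The Python sketch below brute-forces that check over a toy policy; the policy encoding and this particular resilience notion are assumptions for illustration, not the paper's formal mechanism.

```python
ROLE_MEMBERS = {"operator": {"ana", "bo"}, "supervisor": {"cy"}}
# A critical operation needs one distinct user for each listed role.
CRITICAL_OP_ROLES = ["operator", "supervisor"]

def performable(available_users):
    def assign(roles, free):
        if not roles:
            return True
        candidates = ROLE_MEMBERS[roles[0]] & free
        return any(assign(roles[1:], free - {u}) for u in candidates)
    return assign(CRITICAL_OP_ROLES, set(available_users))

def single_loss_failures():
    everyone = set().union(*ROLE_MEMBERS.values())
    # Users whose unavailability would block the critical operation.
    return [u for u in sorted(everyone) if not performable(everyone - {u})]

print(single_loss_failures())  # ['cy']: losing the only supervisor breaks the policy
```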

    Simplifying the Formal Verification of Safety Requirements in Zone Controllers through Problem Frames and Constraints-based Projection

    Formal methods have been widely applied to verifying the safety requirements of Communication-Based Train Control (CBTC) systems, but usually only to much-simplified problem situations. In industrial practice, however, CBTC systems exhibit huge complexity, which renders those methods nearly impossible to apply. In this paper, we aim to reduce the state space of formal verification problems in the Zone Controller, a sub-system of a typical CBTC system. We achieve this simplification by reducing the total number of device variables. To do this, two projection methods are proposed, based on Problem Frames and on constraints, respectively. The Problem Frames-based method decomposes the system into sub-properties through functional decomposition, whilst the constraints-based projection method removes redundant variables. Our industrial case study demonstrates feasibility through an evaluation, confirming that the two methods are effective in reducing the state spaces of complex verification problems in this application domain.
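
    The constraints-based projection resembles what model checkers call a cone-of-influence reduction: starting from the variables a safety property mentions, keep only the variables they transitively depend on and drop the rest. A minimal Python sketch follows; the variable names and dependency map are invented, not taken from the Zone Controller models.

```python
DEPENDS_ON = {                         # variable -> variables read when updating it
    "train_position": {"speed", "track_segment"},
    "speed": {"brake_cmd"},
    "brake_cmd": {"movement_authority"},
    "movement_authority": set(),
    "platform_lights": {"timetable"},  # irrelevant to the property below
    "timetable": set(),
}

def cone_of_influence(property_vars):
    keep, frontier = set(), list(property_vars)
    while frontier:
        v = frontier.pop()
        if v not in keep:
            keep.add(v)
            frontier.extend(DEPENDS_ON.get(v, ()))
    return keep

# Property: "the train never overruns its movement authority."
kept = cone_of_influence({"train_position", "movement_authority"})
print(sorted(kept))                    # the variables the property depends on
print(sorted(set(DEPENDS_ON) - kept))  # ['platform_lights', 'timetable'] projected away
```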

    From HCI to software engineering and back

    Methods to assess and ensure system usability are becoming increasingly important as market edge becomes less dependent on function and more dependent on ease of use, and as recognition grows that a user's failure to understand how an automated system works may jeopardise its safety. While ultimately only deployment of a system will prove its usability, a number of approaches to early analysis have been proposed that provide some ability to predict the usability and human-error proneness of the fielded system. The majority of these approaches are designed to be used by human factors specialists, require expertise that falls outside the domain of software engineering, and sit outside standard software development life cycles. However, among them, some rigorous mathematical methods have been proposed as solutions to the more general problem of ensuring the quality of system designs, with limited success. This paper discusses their limitations, both in terms of the broader software engineering agenda and in terms of their effectiveness for usability analysis, examines the opportunities they offer, and considers what might be done to make them more acceptable and effective. The paper positions those methods that have been effective against less formal usability analysis methods.

    Extraction of Insider Attack Scenarios from a Formal Information System Modeling

    The early detection of potential threats during the modelling phase of a Secure Information System is required because it favours the design of a robust access control policy and the prevention of malicious behaviours during system execution. This paper deals with internal attacks, which can be carried out by people inside the organization. Such attacks are difficult to find because insiders have authorized system access and may also be familiar with system policies and procedures. We are interested in finding attacks which conform to the access control policy but lead to unwanted states. These attacks are favoured by policies involving authorization constraints, which grant or deny access depending on the evolution of the functional Information System state. In this context, we propose to model functional requirements and their Role-Based Access Control (RBAC) policies using B machines and then to formally reason about both models. To extract insider attack scenarios from these B specifications, our approach first investigates symbolic behaviours. The use of a model-checking tool then makes it possible to exhibit, from a symbolic behaviour, an observable concrete sequence of operations that can be followed by an attacker. In this paper, we show how this combination of symbolic execution and model checking can uncover such insider attack scenarios.
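
    The following Python sketch reduces the paper's combination of symbolic execution and model checking to a plain explicit-state search, purely to show the shape of the problem: explore only operations the access control policy permits, and report any sequence that reaches an unwanted functional state. The states, operations, and separation-of-duty scenario are invented stand-ins for the B specifications.

```python
from collections import deque

# State: (role assignments, invoice creator, invoice approver).
INIT = (frozenset({("alice", "clerk"), ("bob", "manager")}), None, None)

def permitted_steps(state):
    roles, creator, approver = state
    users = {u for u, _ in roles}
    for u in users:
        if (u, "clerk") in roles and creator is None:
            yield (f"create:{u}", (roles, u, approver))
        if (u, "approver") in roles and creator is not None and approver is None:
            yield (f"approve:{u}", (roles, creator, u))
        if (u, "manager") in roles:
            for v in users:                  # managers may grant the approver role
                if (v, "approver") not in roles:
                    yield (f"grant:{u}->{v}", (roles | {(v, "approver")}, creator, approver))

def unwanted(state):
    _, creator, approver = state
    return creator is not None and creator == approver  # self-approved invoice

def find_attack():
    seen, queue = {INIT}, deque([(INIT, [])])
    while queue:
        state, path = queue.popleft()
        if unwanted(state):
            return path                      # a policy-conforming attack scenario
        for op, nxt in permitted_steps(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [op]))

print(find_attack())  # e.g. ['create:alice', 'grant:bob->alice', 'approve:alice']
```

    Every step in the returned sequence is allowed by the policy, yet the end state (a self-approved invoice) is unwanted, which is exactly the class of insider attack the paper targets.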

    Modal Reactors

    Complex software systems often feature distinct modes of operation, each designed to handle a particular scenario that may require the system to respond in a certain way. Breaking down system behavior into mutually exclusive modes and discrete transitions between modes is a commonly used strategy to reduce implementation complexity and promote code readability. However, such capabilities often come in the form of self-contained domain-specific languages or language-specific frameworks. The work in this paper aims to bring the advantages of modal models to mainstream programming languages by following the polyglot coordination approach of Lingua Franca (LF), in which verbatim target code (e.g., C, C++, Python, TypeScript, or Rust) is encapsulated in composable reactive components called reactors. Reactors can form a dataflow network, are triggered by timed as well as sporadic events, execute concurrently, and can be distributed across nodes on a network. With modal models in LF, we introduce a lean extension to the concept of reactors that enables the coordination of reactive tasks based on modes of operation. The implementation of modal reactors outlined in this paper generalizes to any LF-supported language with only modest modifications to the generic runtime system.
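
    The following Python sketch mirrors the modal-model concept only: a reactive component whose reactions are grouped under mutually exclusive modes, with discrete transitions between them. It is not Lingua Franca syntax or its runtime; LF expresses this with reactors and target-language code blocks.

```python
class ModalReactor:
    """A reactive component whose behaviour depends on its current mode."""

    def __init__(self):
        self.mode = "Normal"
        # Reactions are registered per mode; only the active mode's reaction
        # fires, which is what keeps the behaviours mutually exclusive.
        self.reactions = {"Normal": self.react_normal, "Fault": self.react_fault}

    def react(self, event):
        self.reactions[self.mode](event)

    def react_normal(self, event):
        if event == "sensor_error":
            self.mode = "Fault"          # discrete mode transition
        else:
            print(f"[Normal] processing {event}")

    def react_fault(self, event):
        if event == "reset":
            self.mode = "Normal"         # recover to normal operation
        else:
            print(f"[Fault] ignoring {event}; running degraded behaviour")

reactor = ModalReactor()
for e in ["tick", "sensor_error", "tick", "reset", "tick"]:
    reactor.react(e)
```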

    A Formal Component-Based Software Engineering Approach For Developing Trustworthy Systems

    Software systems are increasingly becoming ubiquitous, affecting the way we experience the world. Embedded software systems, especially those used in smart devices, have become an essential constituent of the technological infrastructure of modern societies. Such systems, in order to be trusted in society, must be proved to be trustworthy. Trustworthiness is a composite non-functional property that implies safety, timeliness, security, availability, and reliability. This thesis is a contribution to the rigorous development of systems in which the trustworthiness property can be specified and formally verified. Developing trustworthy software systems that are complex and used by a large heterogeneous population of users is a challenging task. The component-based software engineering (CBSE) paradigm can provide an effective solution to address these challenges. However, none of the current component-based approaches can be used as is, because all of them lack the essential requirements for constructing trustworthy systems. The three contributions made in this thesis are intended to add to the expressive power needed to raise CBSE practices to a rigorous level for constructing formally verifiable trustworthy systems. The first contribution of the thesis is a formal definition of the trustworthy component model. The trustworthiness quality attributes are introduced as first-class structural elements. The behavior of a component is automatically generated as an extended timed automaton. A model checking technique is used to verify the properties of trustworthiness. A composition theory that preserves the properties of trustworthiness under composition is presented. Conventional software engineering development processes are suitable neither for developing component-based systems nor for developing trustworthy systems. In order to develop a component-based trustworthy system, the development process must be reuse-oriented and component-oriented, and must integrate formal languages and rigorous methods in all phases of the system life cycle. The second contribution of the thesis is a software engineering process model that consists of several parallel tracks of activities, including component development, component assessment, component reuse, and component-based system development. The central concern in all activities of this process is ensuring trustworthiness. The third and final contribution of the thesis is a development framework with a comprehensive set of tools supporting the spectrum of formal development activity from modeling to deployment. The proposed approach has been applied to several case studies in the domains of component-based development and safety-critical systems. The experience from the case studies confirms that the approach is suitable for developing large and complex trustworthy systems.
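
    As a rough illustration of trustworthiness attributes as first-class interface elements with a property-preserving composition rule, consider the Python sketch below. The attribute set, the conservative combination rules (conjunction of safety/security, multiplied availability, added latency), and the requirement check are assumptions, not the thesis's formal component model or its timed-automata semantics.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Trustworthiness:
    safe: bool            # verified safety property
    secure: bool          # verified security property
    availability: float   # e.g. 0.999
    max_latency_ms: int   # timeliness bound

@dataclass(frozen=True)
class Component:
    name: str
    tw: Trustworthiness

def compose(a, b):
    """Conservative sequential composition: safety and security must hold
    in both parts, availabilities multiply, latency bounds add."""
    return Component(
        name=f"{a.name};{b.name}",
        tw=Trustworthiness(
            safe=a.tw.safe and b.tw.safe,
            secure=a.tw.secure and b.tw.secure,
            availability=a.tw.availability * b.tw.availability,
            max_latency_ms=a.tw.max_latency_ms + b.tw.max_latency_ms,
        ),
    )

def meets(c, *, availability, max_latency_ms):
    t = c.tw
    return (t.safe and t.secure and t.availability >= availability
            and t.max_latency_ms <= max_latency_ms)

sensor = Component("sensor", Trustworthiness(True, True, 0.999, 5))
control = Component("controller", Trustworthiness(True, True, 0.9999, 10))
print(meets(compose(sensor, control), availability=0.998, max_latency_ms=20))  # True
```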

    Runtime Verification of Deontic and Trust Models in Multiagent Interactions

    In distributed open systems, such as multiagent systems, new interactions are constantly appearing and new agents are continuously joining or leaving. It is unrealistic to expect agents to automatically trust new interactions. It is also unrealistic to expect agents to refer to their users for help every time a new interaction is encountered. An agent should decide for itself whether a specific interaction with a given group of agents is suitable or not. This thesis presents a runtime verification mechanism for addressing this problem. Verifying multiagent systems has its challenges. It is hard to predict the reliability of interactions in systems that are heavily influenced by autonomous agents without having access to the agent specifications. Available verification mechanisms may roughly be divided into two categories: (1) those that verify interaction models independently of specific agents, and (2) those that verify agent models whose constraints shape the interactions. Interaction models are not sufficient when verifying dynamic properties that depend on the agents engaged in an interaction. On the other hand, verifying agent specifications, such as BDI models, is extremely inefficient. Specifications are usually not explicit enough, resulting in the verification of a massive number of permissible interactions. Furthermore, in open systems, an agent’s internal specification is usually not accessible, for many reasons including security and privacy. This thesis proposes a model checker that verifies a combination of a global interaction model and local deontic models. The deontic model may be viewed as a list of agent constraints that are deemed necessary to share and verify, such as the inability of the buyer to pay by credit card. The result is a lightweight, efficient, and powerful model checker that is capable of verifying rich properties of multiagent systems without the need to access agents’ internal specifications. Although the proposed model checker has potential for addressing a variety of problems, the trust domain receives special attention due to the criticality of the trust issue in distributed open systems and the lack of reliable trust solutions. The thesis illustrates how a dynamic model checker, using deontic/trust models, can help agents decide whether the scenarios they wish to join are trustworthy or not. In summary, the main contribution of this research is in introducing interaction-time verification for checking deontic and trust models in multiagent interactions. When faced with new, unexplored interactions, agents can verify whether joining a given interaction with a given set of collaborating agents would violate any of their constraints.
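
    The buyer-cannot-pay-by-credit-card example suggests a simple sketch of the combined check: enumerate the interaction model's paths and keep only those that would not force any participating agent to violate its declared deontic constraints. The protocol, role binding, and constraint encoding below are invented for illustration; the thesis's model checker is far richer than this.

```python
# Interaction model: alternative paths, each a list of (role, action) steps.
PROTOCOL_PATHS = [
    [("buyer", "offer"), ("seller", "accept"), ("buyer", "pay_card")],
    [("buyer", "offer"), ("seller", "accept"), ("buyer", "pay_transfer")],
]

# Local deontic models: actions an agent declares it cannot or must not do.
PROHIBITIONS = {"ann": {"pay_card"}, "bob": set()}

def viable_paths(binding):
    """binding: role -> agent. Paths forcing no agent to violate a prohibition."""
    return [
        path for path in PROTOCOL_PATHS
        if all(action not in PROHIBITIONS[binding[role]] for role, action in path)
    ]

# Ann (who cannot pay by card) as buyer: only the bank-transfer path survives,
# so the interaction is still viable for her to join.
print(viable_paths({"buyer": "ann", "seller": "bob"}))
```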

    SIFT: Building an Internet of safe Things

    As the number of connected devices explodes, the usage scenarios for these devices and their data have multiplied. Many of these scenarios, e.g., home automation, require tools beyond data visualizations to express user intents and to ensure interactions do not cause undesired effects in the physical world. We present SIFT, a safety-centric programming platform for connected devices in IoT environments. First, to simplify programming, users express high-level intents in declarative IoT apps. The system then decides which sensor data and operations should be combined to satisfy the user requirements. Second, to ensure safety and compliance, the system verifies whether conflicts or policy violations can occur within or between apps. Through an office deployment, user studies, and trace analysis using a large-scale dataset from a commercial IoT app authoring platform, we demonstrate the power of SIFT and highlight how it leads to more robust and reliable IoT apps.
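
    A minimal Python sketch of the cross-app conflict check described in the abstract: two declarative rules conflict when some sensor state satisfies both of their conditions while they demand opposite commands on the same actuator. The rule format and the brute-force enumeration of a tiny sensor space are assumptions for illustration, not SIFT's actual analysis.

```python
from itertools import product

# Rules: name -> (condition over sensor state, (actuator, command)).
RULES = {
    "comfort": (lambda s: s["temp"] > 25, ("window", "open")),
    "security": (lambda s: s["away"], ("window", "close")),
}

SENSOR_SPACE = {"temp": [20, 30], "away": [False, True]}

def conflicts():
    found, names = [], list(SENSOR_SPACE)
    for values in product(*SENSOR_SPACE.values()):
        state = dict(zip(names, values))
        fired = [(n, act) for n, (cond, act) in RULES.items() if cond(state)]
        for (n1, (dev1, cmd1)), (n2, (dev2, cmd2)) in product(fired, repeat=2):
            if n1 < n2 and dev1 == dev2 and cmd1 != cmd2:
                found.append((state, n1, n2))
    return found

for state, a, b in conflicts():
    print(f"{a!r} vs {b!r} disagree about the window when {state}")
# comfort vs security conflict when temp=30 and away=True
```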