
    Safety monitoring for autonomous systems: interactive elicitation of safety rules

    An active safety monitor is an independent mechanism responsible for keeping the system in a safe state should a hazardous situation occur. It has observations (sensors) and interventions (actuators). Safety rules are synthesized from the results of a hazard analysis, using the SMOF (Safety MOnitoring Framework) tool, in order to identify which interventions to apply when an observation reaches a dangerous value. The safety rules enforce a safety property (the system remains in a safe state) as well as permissiveness properties, which ensure that the system can still perform its tasks. This work focuses on solving cases where the synthesis fails to return a set of safe and permissive rules. To assist the user in these cases, three new features are introduced and developed. The first addresses the diagnosis of why the rules fail to fulfill a permissiveness requirement. The second suggests candidate safety interventions to inject into the synthesis process. The third allows the permissiveness requirements to be tuned to a set of essential functionalities that must be maintained. The use of these features is discussed and illustrated on two industrial case studies, a manufacturing robot from KUKA and a maintenance robot from Sterela.
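
    A minimal sketch (not SMOF's actual rule format) of the kind of monitor rule described above: each rule maps a monitored observation crossing a warning threshold to an intervention, and the monitor stays passive otherwise so the system remains permissive. The observation name, threshold, and intervention are invented for illustration.

```python
# Illustrative safety-monitor rule: trigger a hypothetical intervention when a
# hypothetical observation crosses its warning threshold; otherwise do nothing,
# so the system keeps operating (permissiveness).
from dataclasses import dataclass

@dataclass
class SafetyRule:
    observation: str    # name of the monitored variable (sensor reading)
    threshold: float    # warning value beyond which the rule triggers
    intervention: str   # actuator command applied when the rule triggers

def monitor_step(observations: dict, rules: list) -> list:
    """Return the interventions to apply for the current observation vector."""
    return [r.intervention
            for r in rules
            if observations.get(r.observation, 0.0) >= r.threshold]

# Hypothetical rule: engage the brake when the measured arm speed reaches the warning zone.
rules = [SafetyRule(observation="arm_speed", threshold=0.25, intervention="engage_brake")]
print(monitor_step({"arm_speed": 0.30}, rules))   # ['engage_brake']
print(monitor_step({"arm_speed": 0.10}, rules))   # []  (no intervention, task continues)
```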

    Synthesis of Covert Actuator Attackers for Free

    In this paper, we formulate and address the problem of covert actuator attacker synthesis for cyber-physical systems that are modelled by discrete-event systems. We assume that the actuator attacker partially observes the execution of the closed-loop system and is able to modify each control command issued by the supervisor on a specified attackable subset of controllable events. We provide straightforward but, in general, exponential-time reductions, due to the use of the subset construction procedure, from the covert actuator attacker synthesis problems to the Ramadge-Wonham supervisor synthesis problems. It then follows that the many techniques and tools already developed for solving the supervisor synthesis problem can be used to solve the covert actuator attacker synthesis problem for free. In particular, we show that if the attacker cannot attack events that are unobservable to the supervisor, then the reductions can be carried out in polynomial time. We also briefly discuss some other conditions under which the exponential blowup in state size can be avoided. Finally, we show how the reduction-based synthesis procedure can be extended to synthesize successful covert actuator attackers that also eavesdrop on the control commands issued by the supervisor. Comment: the paper has been accepted by the journal Discrete Event Dynamic Systems.
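
    The exponential blowup mentioned above comes from the subset construction used to handle partial observation. The sketch below, using an invented toy automaton encoding, builds the observer whose states are subsets of plant states; it illustrates the standard construction only, not the paper's reduction.

```python
# Observer (subset construction) for a partially observed automaton: the states of
# the observer are subsets of plant states, which is where the worst-case
# exponential blowup comes from. The automaton encoding below is an assumption of
# this sketch, not the paper's notation.

def unobservable_reach(states, delta, unobs):
    """Close a set of states under transitions labelled by unobservable events."""
    closure, frontier = set(states), list(states)
    while frontier:
        s = frontier.pop()
        for (p, e), q in delta.items():
            if p == s and e in unobs and q not in closure:
                closure.add(q)
                frontier.append(q)
    return frozenset(closure)

def observer(initial, delta, observable, unobs):
    """Build the observer's transition map; observer states are frozensets of plant states."""
    start = unobservable_reach({initial}, delta, unobs)
    obs_delta, todo, seen = {}, [start], {start}
    while todo:
        S = todo.pop()
        for e in observable:
            targets = {q for (p, ev), q in delta.items() if p in S and ev == e}
            if targets:
                T = unobservable_reach(targets, delta, unobs)
                obs_delta[(S, e)] = T
                if T not in seen:
                    seen.add(T)
                    todo.append(T)
    return start, obs_delta

# Tiny plant: event 'u' is unobservable, event 'a' is observable.
delta = {(0, 'u'): 1, (0, 'a'): 2, (1, 'a'): 3}
print(observer(0, delta, observable={'a'}, unobs={'u'}))
```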

    Supervisory Control and Analysis of Partially-observed Discrete Event Systems

    Nowadays, a variety of real-world systems fall into the class of discrete event systems (DES). In practical scenarios, due to factors such as limited sensor technology, sensor failures, unstable networks, and even the intrusion of malicious agents, it may happen that some events are unobservable, that multiple events are indistinguishable in the observations, and that the observations of some events are nondeterministic. Motivated by these practical scenarios, increasing attention in the DES community has been paid to partially-observed DES, which in this thesis refers broadly to DES with partial and/or unreliable observations. This thesis focuses on two topics in partially-observed DES, namely supervisory control and analysis.
    The first topic includes two research directions in terms of system models. One is the supervisory control of DES with both unobservable and uncontrollable events, focusing on the forbidden state problem; the other is the supervisory control of DES vulnerable to sensor-reading disguising attacks (SD-attacks), which can also be interpreted as DES with nondeterministic observations, addressing both the forbidden state problem and the liveness-enforcing problem. Petri nets (PN) are used as the reference formalism for this topic. First, we study the forbidden state problem in the framework of PN with both unobservable and uncontrollable transitions, assuming that unobservable transitions are uncontrollable. For ordinary PN subject to an admissible Generalized Mutual Exclusion Constraint (GMEC), an optimal on-line control policy with polynomial complexity is proposed, provided that a particular subnet, called the observation subnet, satisfies certain structural conditions. We then discuss how to obtain an optimal on-line control policy for PN subject to an arbitrary GMEC. Next, we consider the forbidden state problem in PN vulnerable to SD-attacks. Assuming a control specification given as a GMEC, we propose three methods to derive on-line control policies. The first two lead to an optimal policy but are computationally inefficient for large systems, while the third computes a policy with a timely response even for large systems, but at the expense of optimality. Finally, we investigate the liveness-enforcing problem, again assuming that the system is vulnerable to SD-attacks. Here the plant is modelled as a bounded PN, which allows us to compute a supervisor off-line, starting from the construction of the reachability graph of the PN. Then, based on repeatedly computing a more restrictive liveness-enforcing supervisor under no attack and constructing a basic supervisor, we propose an off-line method that synthesizes a liveness-enforcing supervisor tolerant to an SD-attack.
    The second topic concerns the verification of properties related to system security. Two properties are considered, namely fault-predictability and event-based opacity. The former is a property from the literature characterizing the situation in which the occurrence of any fault in the system is predictable, while the latter is a new property proposed in this thesis, describing the fact that the secret events of a system cannot be revealed to an external observer within their critical horizons. For fault-predictability, DES are modelled by labeled PN. A necessary and sufficient condition for fault-predictability is derived by characterizing the structure of the Predictor Graph. Furthermore, two rules are proposed to reduce the size of a PN, which allow us to analyze the fault-predictability of the original net by verifying that of the reduced net. For event-based opacity, we use deterministic finite-state automata as the reference formalism. Considering different scenarios, we propose four notions, namely K-observation event-opacity, infinite-observation event-opacity, event-opacity, and combinational event-opacity. Moreover, verifiers are proposed to analyze these properties.
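
    As an illustration of the kind of control specification used in the first topic, the sketch below checks a Generalized Mutual Exclusion Constraint w·m ≤ k on a toy Petri net and naively disables any transition whose firing would violate it. The net, weights, and bound are invented, and the sketch assumes every transition is controllable and observable, unlike the more involved settings treated in the thesis.

```python
# Toy check of a GMEC  w . m <= k  on a Petri net marking, with a naive controller
# that disables any transition whose firing would violate the constraint.
import numpy as np

pre  = np.array([[1, 0], [0, 1], [0, 0]])   # tokens consumed by t0, t1 (rows = places)
post = np.array([[0, 0], [0, 0], [1, 1]])   # tokens produced by t0, t1

w = np.array([0, 0, 1])   # GMEC weights: count the tokens in place p3
k = 1                     # GMEC bound: at most one token in p3

def enabled(m, t):
    return bool(np.all(m >= pre[:, t]))

def fire(m, t):
    return m - pre[:, t] + post[:, t]

def allowed(m, t):
    """Permit t only if it is enabled and its firing keeps the marking within the GMEC."""
    return enabled(m, t) and int(w @ fire(m, t)) <= k

m0 = np.array([1, 1, 0])
print([t for t in range(2) if allowed(m0, t)])            # [0, 1]: each is individually safe at m0
print([t for t in range(2) if allowed(fire(m0, 0), t)])   # []: after t0, firing t1 would violate the GMEC
```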

    Synthesis of opaque systems with static and dynamic masks

    Opacity is a security property formalizing the absence of secret information leakage, and in this paper we address the problem of synthesizing opaque systems. A secret predicate S over the runs of a system G is opaque to an external user having partial observability over G if s/he can never infer from the observation of a run of G that the run belongs to S. We choose to control the observability of events by adding a device, called a mask, between the system G and the users. We first investigate the case of static partial observability, where the set of events the user can observe is fixed a priori by a static mask. In this context, we show that checking whether a system is opaque is PSPACE-complete, which implies that computing an optimal static mask ensuring opacity is also a PSPACE-complete problem. Next, we introduce dynamic partial observability, where the set of events the user can observe changes over time and is chosen by a dynamic mask. We show how to check that a system is opaque w.r.t. a dynamic mask and also address the corresponding synthesis problem: given a system G and secret states S, compute the set of dynamic masks under which S is opaque. Our main result is that the set of such masks can be finitely represented and computed in EXPTIME, and that this is also a lower bound. Finally, we also address the problem of computing an optimal mask.
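
    A brute-force illustration of opacity under a static mask, assuming a toy system and a bounded run length (the paper works on the automaton symbolically rather than by enumeration): the secret is opaque if every observation produced by a run ending in a secret state is also produced by some run ending in a non-secret state.

```python
# Brute-force opacity check under a static mask: enumerate runs up to a bounded
# length, project each onto the observable alphabet (the mask), and verify that
# every observation produced by a run ending in a secret state is also produced
# by some run ending in a non-secret state.

def runs(delta, init, length):
    """All runs of at most `length` steps, as (state sequence, event sequence) pairs."""
    frontier = [((init,), ())]
    all_runs = list(frontier)
    for _ in range(length):
        frontier = [(states + (q,), events + (e,))
                    for states, events in frontier
                    for (p, e), q in delta.items()
                    if p == states[-1]]
        all_runs += frontier
    return all_runs

def opaque(delta, init, secret_states, mask, length):
    secret_obs, public_obs = set(), set()
    for states, events in runs(delta, init, length):
        obs = tuple(e for e in events if e in mask)   # projection defined by the static mask
        (secret_obs if states[-1] in secret_states else public_obs).add(obs)
    return secret_obs <= public_obs

# 'u' is hidden by the mask; the secret state 3 stays indistinguishable from state 1.
delta = {(0, 'a'): 1, (0, 'b'): 2, (1, 'u'): 3, (2, 'u'): 4}
print(opaque(delta, 0, secret_states={3}, mask={'a', 'b'}, length=2))   # True
```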

    Enhancing Social Protection in the Apparel and Footwear Industry

    This document is part of a digital collection provided by the Martin P. Catherwood Library, ILR School, Cornell University, pertaining to the effects of globalization on the workplace worldwide. Special emphasis is placed on labor rights, working conditions, labor market changes, and union organizing.

    The Growing Threat of Agroterrorism and Strategies for Agricultural Defense

    Due to the dynamic nature of human conflict, non-traditional terror tactics have evolved to undermine the socioeconomic stability of targeted societies. Given the landscape in which terrorists operate, emphasis on more subversive methods of biological terror has grown in recent decades. Agroterrorism, or the use of plant pathogens to infect a nation’s cultivated crops, is an emerging topic due to its threat to global food security and economic stability. Although emergency preparedness objectives have been enacted at national, state, and even local levels, preemptive measures can no longer remain the sole responsibility of intelligence and law enforcement agencies. The agricultural and scientific communities are responsible for collaborating to improve security and pioneer new methods of disease resistance in susceptible crops. Plant immunology is an expanding field that explores the molecular defense mechanisms innately present within the plant kingdom and provides insight into novel methods of boosting the immunity of susceptible crops to existing and emerging pathogenic agents. This thesis serves to define the threat of agroterrorism from national security and scientific perspectives, identify notable plant pathogens, provide a brief survey of plant immunology, and discuss topics that can aid scientists, policymakers, and growers in efforts to secure the global food supply from those who would cause harm.

    Through Modeling to Synthesis of Security Automata

    We define a set of process algebra operators, which we call controller operators, able to mimic the behavior of the security automata introduced by Schneider in [Schneider, F. B., Enforceable security policies, ACM Transactions on Information and System Security 3 (2000), pp. 30–50] and by Ligatti et al. in [Bauer, L., J. Ligatti and D. Walker, More enforceable security policies, in: I. Cervesato, editor, Foundations of Computer Security: proceedings of the FLoC'02 workshop on Foundations of Computer Security (2002), pp. 95–104]. Security automata are mechanisms for enforcing security policies that specify acceptable executions of programs. Here we give the semantics of four controllers that act by monitoring the possibly untrusted components of a system in order to enforce certain security policies. Moreover, exploiting satisfiability results for temporal logic, we show how to automatically build these controllers for a given security policy.
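
    A minimal sketch of a Schneider-style security automaton acting as a truncation monitor: it halts the target just before an action that would violate the policy "no send after a file read". The action names and the policy are invented for illustration; the paper models such monitors as process-algebra controller operators, not as the loop below.

```python
# Sketch of a Schneider-style security automaton used as a truncation monitor:
# it tracks whether a file has been read and halts the target just before a
# forbidden network send.

def enforce_no_send_after_read(actions):
    """Replay actions, truncating the execution at the first policy violation."""
    state = "clean"                       # "clean": no file read yet; "tainted": a file was read
    accepted = []
    for a in actions:
        if state == "tainted" and a == "send":
            break                         # the automaton has no transition here: halt the target
        if a == "read_file":
            state = "tainted"
        accepted.append(a)
    return accepted

print(enforce_no_send_after_read(["recv", "read_file", "compute", "send", "recv"]))
# ['recv', 'read_file', 'compute']  -- truncated before the forbidden send
```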