12 research outputs found

    Security and Privacy of Protocols and Software with Formal Methods

    The protection of users' data conforming to best practice and legislation is one of the main challenges in computer science. Very often, large-scale data leaks remind us that the state of the art in data privacy and anonymity is severely lacking. The complexity of modern systems makes it impossible for software architects to create secure software that correctly implements privacy policies without the help of automated tools. The academic community needs to invest more effort in the formal modeling of security and anonymity properties, providing a deeper understanding of the underlying concepts and challenges and allowing the creation of automated tools to help software architects and developers. This track provides numerous contributions to the formal modeling of security and anonymity properties and the creation of tools to verify them on large-scale software projects.

    Information Flow for Security in Control Systems

    This paper considers the development of information flow analyses to support resilient design and active detection of adversaries in cyber-physical systems (CPS). The area of CPS security, though well studied, suffers from fragmentation. In this paper, we consider control systems as an abstraction of CPS. Here, we extend the notion of information flow analysis, a well-established set of methods developed in software security, to obtain a unified framework that captures and extends system-theoretic results in control system security. In particular, we propose the Kullback-Leibler (KL) divergence as a causal measure of information flow, which quantifies the effect of adversarial inputs on sensor outputs. We show that the proposed measure characterizes the resilience of control systems to specific attack strategies by relating the KL divergence to optimal detection techniques. We then relate information flows to stealthy attack scenarios in which an adversary can bypass detection. Finally, this article examines active detection mechanisms, in which a defender intelligently manipulates control inputs or the system itself in order to elicit information flows from an attacker's malicious behavior. In all these cases, we demonstrate an ability to investigate and extend existing results by utilizing the proposed information flow analyses.
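The KL-divergence measure of information flow described above can be illustrated with a minimal sketch (the distributions below are made up for illustration, not taken from the paper): a nonzero divergence between the nominal and attacked sensor-output distributions means the attack leaves a statistical trace that a detector can exploit, while a perfectly stealthy attack induces zero divergence.

```python
import math

def kl_divergence(p, q):
    """D_KL(p || q) for discrete distributions given as dicts mapping
    outcome -> probability. Assumes q[x] > 0 wherever p[x] > 0."""
    return sum(px * math.log(px / q[x]) for x, px in p.items() if px > 0)

# Nominal sensor-output distribution vs. the distribution under attack
# (illustrative numbers only).
nominal  = {"low": 0.7, "high": 0.3}
attacked = {"low": 0.4, "high": 0.6}

info_flow = kl_divergence(attacked, nominal)
stealthy  = kl_divergence(nominal, nominal)   # identical distributions

print(round(info_flow, 4))  # positive: the attack is statistically detectable
print(stealthy)             # a perfectly stealthy attack induces zero flow
```

Relating this quantity to optimal detection (as the paper does) is exactly what makes it a resilience measure: the smaller the divergence an attacker can achieve while still damaging the system, the weaker any detector can be.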

    When to Move to Transfer Nets: On the Limits of Petri Nets as Models for Process Calculi

    Pierpaolo Degano has been an influential pioneer in the investigation of Petri nets as models for concurrent process calculi (see e.g. the well-known seminal work by Degano–De Nicola–Montanari, also known as DDM88). In this paper, we address the limits of classical Petri nets by discussing when it is necessary to move to so-called Transfer nets, in which transitions can also move to a target place all the tokens currently present in a source place. More precisely, we consider a simple calculus of processes that interact by generating/consuming messages into/from a shared repository. For this calculus, classical Petri nets can faithfully model the process behavior. We then present a simple extension with a primitive allowing processes to atomically rename all the data of a given kind. We show that with the addition of such a primitive it is necessary to move to Transfer nets to obtain a faithful modeling.
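The difference between the two transition kinds can be sketched on markings directly (a minimal illustration, not the paper's formal construction): an ordinary Petri-net transition moves a fixed number of tokens, while a transfer arc moves *all* tokens of a source place at once, which is what the atomic "rename all data of a given kind" primitive requires.

```python
# Markings are dicts mapping place names to token counts.

def fire(marking, consume, produce):
    """Ordinary transition: consume/produce fixed token counts per place."""
    m = dict(marking)
    for place, n in consume.items():
        if m.get(place, 0) < n:
            raise ValueError("transition not enabled")
        m[place] -= n
    for place, n in produce.items():
        m[place] = m.get(place, 0) + n
    return m

def transfer(marking, source, target):
    """Transfer arc: move ALL tokens currently in `source` to `target`."""
    m = dict(marking)
    m[target] = m.get(target, 0) + m.get(source, 0)
    m[source] = 0
    return m

m0 = {"msg_a": 3, "msg_b": 1}
m1 = fire(m0, consume={"msg_a": 1}, produce={"msg_b": 1})  # consume one message
m2 = transfer(m1, "msg_a", "msg_b")                        # atomic renaming a -> b
print(m1)  # {'msg_a': 2, 'msg_b': 2}
print(m2)  # {'msg_a': 0, 'msg_b': 4}
```

No fixed-arity family of ordinary transitions can emulate `transfer` in a single atomic step, since the number of tokens moved depends on the current marking; this is the informal reason the extended calculus outgrows classical Petri nets.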

    Categorical models of Linear Logic with fixed points of formulas

    We develop a denotational semantics of muLL, a version of propositional Linear Logic with least and greatest fixed points extending David Baelde's propositional muMALL with exponentials. Our general categorical setting is based on the notion of Seely category and on strong functors acting on them. We exhibit two simple instances of this setting. In the first one, which is based on the category of sets and relations, least and greatest fixed points are interpreted in the same way. In the second one, based on a category of sets equipped with a notion of totality (non-uniform totality spaces) and relations preserving it, least and greatest fixed points have distinct interpretations. This latter model shows that muLL enjoys a denotational form of normalization of proofs.
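The contrast between the two instances can be summarized in a rough set-theoretic sketch (notation assumed for illustration; the paper's constructions are categorical): in the relational model a formula with a free variable induces a monotone map $F$ on sets of points, and, assuming the iteration converges in $\omega$ steps, both fixed-point formulas collapse to the Kleene iterate

```latex
\llbracket \mu X.\, F \rrbracket \;=\; \llbracket \nu X.\, F \rrbracket
  \;=\; \bigcup_{n \in \mathbb{N}} F^{\,n}(\emptyset),
```

whereas in non-uniform totality spaces the underlying relation is accompanied by a totality component, and it is this extra component that separates the interpretation of $\mu X.\, F$ from that of $\nu X.\, F$.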

    Cost Automata, Safe Schemes, and Downward Closures

    Higher-order recursion schemes are an expressive formalism used to define languages of possibly infinite ranked trees. They extend regular and context-free grammars, and are equivalent to the simply typed λY-calculus and to collapsible pushdown automata. In this work we prove, under a syntactic constraint called safety, decidability of the model-checking problem for recursion schemes against properties defined by alternating B-automata, an extension of alternating parity automata for infinite trees with a boundedness acceptance condition. We then exploit this result to show how to compute downward closures of languages of finite trees recognized by safe recursion schemes. Comment: accepted at ICALP'2
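The notion of downward closure is easiest to see for words rather than trees (the paper's tree setting generalizes this): the downward closure of a language is the set of all scattered subwords of its members, and for a finite language it can simply be enumerated. This sketch is only meant to illustrate the definition; the paper's contribution is computing such closures for the infinite languages defined by safe recursion schemes.

```python
from itertools import combinations

def subwords(w):
    """All scattered subwords of w, i.e. words obtained by deleting letters."""
    return {"".join(c) for k in range(len(w) + 1) for c in combinations(w, k)}

def downward_closure(language):
    """Downward (subword) closure of a finite set of words."""
    closure = set()
    for w in language:
        closure |= subwords(w)
    return closure

print(sorted(downward_closure({"ab"})))  # ['', 'a', 'ab', 'b']
```

By Higman's lemma the downward closure of *any* word language is regular; the difficulty addressed in the paper is computing a representation of it effectively for a given recursion scheme.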

    Preservation of Opacity under Refinement of Systems Specified by Interval Discrete-Time Markov Chains

    Formal methods make it possible to model and design critical computer systems, notably in domains with high human risk such as passenger transport or power plants. One design method is that of successive refinements: steps during which the specifications of the system are adjusted so that the final product conforms as closely as possible to the initial requirements. The principle of refinement is that it must not be destructive: the refined model must satisfy at least the requirements already validated by the previous model, for example the absence of deadlock, or the termination of the program in an accepting state. Among these requirements, the system must sometimes satisfy non-functional requirements such as security properties. In particular, we focus on the property of liberal opacity. To model computer systems together with such non-functional requirements, quantitative methods are needed. We therefore choose as our theoretical framework the model of Interval Discrete-Time Markov Chains (IDTMC). The interest of this model lies in its non-deterministic aspect. It is in fact an extension of the Probabilistic Transition System (PTS) model: an IDTMC represents a specification, which can be implemented by a PTS. PTSs themselves are probabilistic models, which allow the measurement of quantitative properties. The second advantage of this type of model is the existence of three types of refinement: strong, weak, and thorough. The main problem related to the refinement of secure systems is the following: the fact that a specification satisfies a given security property is not sufficient to guarantee that its refinement also satisfies it. The goal is therefore to find, within our theoretical framework, a notion of refinement that preserves the security property under study.
    Opacity is a security property introduced for the LTS model and then extended to PTSs: it expresses the ability of an external observer to deduce the status of a secret predicate by observing only the public part of the program's executions. Its first definition is binary; when extending the notion to PTSs, a probabilistic aspect is introduced by defining liberal opacity, which measures the non-opacity of the system, and restrictive opacity, which measures its effective opacity. These notions can then be extended again to IDTMCs: it suffices to compute the worst-case opacity over the set of implementations of the IDTMC. We prove the following results. First, we prove that liberal opacity in a non-modal IDTMC, that is, one which is completely defined, can be computed in finite, doubly exponential time, and we provide an algorithm to compute it. Furthermore, we prove that liberal opacity in an IDTMC can be approximated in the general case, also in doubly exponential time. As an original contribution, we propose an extension of the algorithm for the non-modal case and prove its correctness. Finally, we prove that liberal opacity in a specification is preserved under weak refinement, which generalizes a similar result that only considered strong refinement. Altogether, we provide a proof of concept intended to be reproduced for other models and similar security properties, such as the Rational Information Flow Properties (RIFP) from which opacity originates.
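On a single, fully resolved probabilistic system, liberal opacity can be illustrated with a toy computation (definitions paraphrased for illustration; the thesis works with IDTMC specifications and takes the worst case over their implementations): each run has a probability, an observable trace, and a flag saying whether it satisfies the secret predicate, and the liberal opacity is the probability mass of runs whose observation class contains only secret runs, i.e. whose observation unambiguously reveals the secret.

```python
from collections import defaultdict

runs = [
    # (probability, observable trace, run satisfies the secret?)
    (0.3, "ab", True),
    (0.2, "ab", False),   # same observation as a secret run -> ambiguous
    (0.5, "ac", True),    # only secret runs produce "ac" -> revealing
]

# Group runs by their observable trace.
by_obs = defaultdict(list)
for prob, obs, secret in runs:
    by_obs[obs].append((prob, secret))

# Mass of runs whose whole observation class satisfies the secret.
liberal_opacity = sum(
    prob
    for group in by_obs.values()
    if all(secret for _, secret in group)  # this observation leaks the secret
    for prob, _ in group
)

print(liberal_opacity)  # an observer seeing "ac" knows the secret holds
```

The value 0 corresponds to a perfectly opaque system; extending this measure to a specification means evaluating it in the worst implementing PTS, which is where the doubly exponential complexity arises.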

    Synthesizing stream control

    For the management of reactive systems, controllers must coordinate time, data streams, and data transformations, all joined by the high-level perspective of their control flow. This control flow is required to drive the system correctly and continuously, which turns the development into a challenge. The process is error-prone, time-consuming, unintuitive, and costly. An attractive alternative is to synthesize the system instead, where the developer only needs to specify the desired behavior. The synthesis engine then automatically takes care of all the technical details. However, while current algorithms for the synthesis of reactive systems are well suited to handle control, they fail on complex data transformations due to the complexity of the comparably large data space. Thus, to overcome the challenge of explicitly handling the data, we must separate data and control. We introduce Temporal Stream Logic (TSL), a logic which exclusively argues about the control of the controller, while treating data and functional transformations as interchangeable black boxes. In TSL it is possible to specify control flow properties independently of the complexity of the handled data. Furthermore, with TSL at hand, a synthesis engine can check for realizability even without a concrete implementation of the data transformations. We present a modular development framework that first uses synthesis to identify the high-level control flow of a program. If successful, the created control flow is then extended with concrete data transformations in order to be compiled into a final executable. Our results also show that current synthesis approaches cannot immediately replace existing manual development workflows. During the development of a reactive system, the developer may still start from incomplete or faulty specifications, which need to be refined after a subsequent inspection.
In the worst case, constraints are contradictory or miss important assumptions, which leads to unrealizable specifications. In both scenarios, the developer needs additional feedback from the synthesis engine to debug errors and finally improve the system specification. To this end, we explore two further possible improvements. On the one hand, we consider output-sensitive synthesis metrics, which make it possible to synthesize simple and well-structured solutions that help the developer to understand and verify the underlying behavior quickly. On the other hand, we consider the extension with delay, whose requirement is a frequent reason for unrealizability. With both methods at hand, we resolve the aforementioned problems and thereby help the developer, during the development phase, to effectively create a safe and correct reactive system.
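The control/data separation at the heart of TSL can be sketched as follows (hypothetical names, not the paper's tool chain): the synthesized part only decides *when* to apply which transformation, while the transformations themselves are opaque functions plugged in afterwards, so the same control skeleton can be reused with arbitrarily complex data.

```python
def control_loop(inputs, predicate, transform, init):
    """Control skeleton: state handling is fixed, data operations are black boxes."""
    state = init()                        # initial state from a black box
    trace = []
    for x in inputs:
        if predicate(x):                  # control decision on an opaque term
            state = transform(state, x)   # black-box data transformation
        trace.append(state)
    return trace

# Instantiating the black boxes turns the skeleton into a concrete program:
trace = control_loop(
    inputs=[1, -2, 3],
    predicate=lambda x: x > 0,            # "is the sample relevant?"
    transform=lambda s, x: s + x,         # "accumulate"
    init=lambda: 0,                       # "start from zero"
)
print(trace)  # [1, 1, 4]
```

Realizability of the control skeleton can be checked without committing to any particular `transform`, which is exactly the property the abstract claims for TSL-based synthesis.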