12 research outputs found

    Through Modeling to Synthesis of Security Automata

    We define a set of process algebra operators, which we call controller operators, able to mimic the behavior of the security automata introduced by Schneider in [Schneider, F. B., Enforceable security policies, ACM Transactions on Information and System Security 3 (2000), pp. 30–50] and by Ligatti et al. in [Bauer, L., J. Ligatti and D. Walker, More enforceable security policies, in: I. Cervesato, editor, Foundations of Computer Security: proceedings of the FLoC'02 workshop on Foundations of Computer Security (2002), pp. 95–104]. Security automata are mechanisms for enforcing security policies that specify acceptable executions of programs. Here we give the semantics of four controllers that act by monitoring possibly untrusted components of a system in order to enforce certain security policies. Moreover, exploiting satisfiability results for temporal logic, we show how to automatically build these controllers for a given security policy.
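    A Schneider-style security automaton observes each security-relevant action and halts the target before a violating step. The minimal sketch below (all names and the example policy "no send after read" are hypothetical, not taken from the paper) illustrates the truncation behavior:

    ```python
    # Hypothetical sketch of a Schneider-style security automaton that
    # enforces "no 'send' after 'read'" by truncating (halting) the run
    # just before a violating action would execute.

    class SecurityAutomaton:
        def __init__(self):
            self.read_seen = False  # automaton state

        def step(self, action):
            """Return True if the action is allowed; False means halt."""
            if action == "read":
                self.read_seen = True
                return True
            if action == "send" and self.read_seen:
                return False  # violation: truncate the run here
            return True

    def monitor(trace):
        automaton = SecurityAutomaton()
        accepted = []
        for action in trace:
            if not automaton.step(action):
                break  # truncation: stop the target program
            accepted.append(action)
        return accepted
    ```

    For instance, the trace `["open", "read", "send"]` is cut back to `["open", "read"]`, while `["send", "read"]` passes unchanged because the `send` precedes any `read`.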

    Model and Synthesize Security Automata

    We define a set of process algebra operators (controllers) that mimic the security automata introduced by Schneider in [18] and by Ligatti et al. in [4], respectively. We also show how to automatically build these controllers for given security policies.

    Spot the Difference: Secure Multi-Execution and Multiple Facets

    We propose a rigorous comparison of two widely known dynamic information flow mechanisms: Secure Multi-Execution (SME) and Multiple Facets (MF). Informally, it is believed that MF simulates SME while providing better performance. Formally, it is well known that SME has stronger soundness guarantees than MF. Surprisingly, we discover that even if we adapt them to enforce the same soundness guarantees, they are still different. By modeling them in the same language, we are able to precisely identify the features of the semantics that lead to their differences. In the process of comparing them, we also discovered four new mechanisms that share features of MF and SME. We prove that one of them simulates SME, which was falsely believed to be true for MF.
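    The core idea of SME can be shown in a few lines: run the program once per security level, scrub high inputs in the low run, and release each output only from the run at its own level. The sketch below is a simplified illustration of that idea, not the paper's formalization; the program representation, the `DEFAULT` value, and the two-level lattice are all assumptions made for the example.

    ```python
    # Hypothetical two-level SME sketch. A "program" is a function from an
    # input dict to a list of (channel, value) outputs.

    DEFAULT = 0  # default value substituted for the high input in the low run

    def sme(program, inputs):
        low_inputs = dict(inputs, high=DEFAULT)  # scrub the high input
        low_run = program(low_inputs)
        high_run = program(inputs)
        # Low outputs are released only from the low run; high outputs
        # only from the high run.
        outputs = [(ch, v) for ch, v in low_run if ch == "low"]
        outputs += [(ch, v) for ch, v in high_run if ch == "high"]
        return outputs

    # A leaky program: it writes the high input directly to the low channel.
    def leaky(inp):
        return [("low", inp["high"]), ("high", inp["high"])]
    ```

    Running `sme(leaky, {"high": 42})` yields `[("low", 0), ("high", 42)]`: the low observer sees only the default value, so the explicit leak is neutralized.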

    Protection against overflow attacks

    A buffer overflow happens when a running process loads more data into a buffer than its design capacity. Bad programming style and lack of security awareness cause overflow vulnerabilities in almost all applications on all platforms.

    A buffer overflow attack can target any data in the stack or heap. Current solutions ignore overflow targets other than the return address. A function pointer, for example, is a possible target: by overflowing a function pointer in the stack or heap, the attacker can redirect the program's control flow when the pointer is dereferenced to make a function call. To address this problem, we implemented protection against overflow attacks targeting function pointers. During the compilation phase, our patch collects the set of variables that might change the value of function pointers at runtime. At runtime, this set is protected by encrypting each value before it is saved to memory and decrypting it before it is used. This function pointer protection covers all overflow attacks targeting function pointers.

    To further extend the protection to cover all possible overflow targets, we implemented anomaly detection that checks the program's runtime behavior against control-flow-checking automata. These automata are derived from the application's source code. A trust value is introduced to indicate how well the runtime program matches the automata; attacks that modify the program's behavior within the source code can thus be detected.

    Both the function pointer protection and the control flow checking are compiler patches that require access to source code. To cover buffer overflow attacks and enforce security policies regardless of source code availability, we implemented a runtime monitor based on stream automata. Stream automata extend the concepts of security automata and edit automata. The monitor works on the interactions between two virtual entities: system and program. The security policies are expressed as stream automata that perform Truncation, Suppression, Insertion, Metamorphosis, Forcing, and Two-Way Forcing on the interactions. We implemented a program/operating-system monitor to detect overflow attacks and a local-network/Internet monitor to enforce honeywall policies.
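    Two of the editing operations named above, Suppression and Insertion, can be illustrated with a small edit-automaton-style monitor. The policy, action names, and repair strategy below are hypothetical examples, not taken from the thesis: a redundant `open` is suppressed, and a missing `close` is inserted at the end of the stream.

    ```python
    # Hypothetical sketch of a monitor with suppression and insertion,
    # in the spirit of edit/stream automata. Policy: at most one file may
    # be open at a time, and every "open" must be matched by a "close".

    def edit_monitor(stream):
        output, open_pending = [], False
        for action in stream:
            if action == "open":
                if open_pending:
                    continue          # suppression: drop the redundant open
                open_pending = True
            elif action == "close":
                open_pending = False
            output.append(action)
        if open_pending:
            output.append("close")    # insertion: repair the missing close
        return output
    ```

    For example, `edit_monitor(["open", "open", "write"])` produces `["open", "write", "close"]`: the second `open` is suppressed and the missing `close` is inserted.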

    Various Notions of Opacity Verified and Enforced at Runtime

    In this paper, we are interested in the validation of opacity, where opacity means the impossibility for an attacker to retrieve the value of a secret in a system of interest. Roughly speaking, ensuring opacity provides confidentiality of a secret on the system: the secret must not leak to an attacker. More specifically, we study how several levels of opacity can be verified and enforced at system runtime. Besides previously considered notions of opacity, we also introduce a new one that provides a stronger level of confidentiality.
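    One basic notion of opacity can be checked directly on a finite set of runs: a secret is opaque if every secret run is observationally indistinguishable from some non-secret run. The sketch below is a generic illustration of that definition (the run encoding, the predicate, and the observation function are assumptions for the example, not the paper's formal model).

    ```python
    # Hypothetical opacity check over a finite set of runs.
    # is_secret marks the runs that reveal the secret; observe maps a run
    # to what the attacker can see.

    def is_opaque(runs, is_secret, observe):
        non_secret_obs = {observe(r) for r in runs if not is_secret(r)}
        return all(observe(r) in non_secret_obs
                   for r in runs if is_secret(r))

    # Example: internal actions (prefixed with "_") are invisible to the
    # attacker, and the secret is whether "_check" occurred.
    runs = [("login", "_check", "ok"), ("login", "ok")]
    secret = lambda r: "_check" in r
    observe = lambda r: tuple(a for a in r if not a.startswith("_"))
    ```

    Here `is_opaque(runs, secret, observe)` holds, because the secret run projects to the same observation `("login", "ok")` as the non-secret one; removing the second run would break opacity.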

    Towards the automation of vulnerability detection in source code

    Software vulnerability detection, which involves security property specification and verification, is essential in assuring software security. However, the process of vulnerability detection is labor-intensive, time-consuming, and error-prone if done manually. In this thesis, we present a hybrid approach that exploits the power of both static and dynamic analysis to perform vulnerability detection in a systematic way. The key contributions of this thesis are threefold. First, a vulnerability detection framework supporting security property specification, potential vulnerability detection, and dynamic verification is proposed. Second, an investigation of test data generation for dynamic verification is conducted. Third, the concept of reducing security property verification to reachability is introduced.

    Enforcing security policies with runtime monitors

    Execution monitoring is an approach that seeks to allow untrusted code to run safely by observing its execution and reacting, if need be, to prevent a potential violation of a user-supplied security policy. This method has many promising applications, particularly with respect to the safe execution of mobile code. Academic research on monitoring has generally focused on two questions. The first relates to the set of policies that can be enforced by monitors operating under various constraints, and the conditions under which this set can be extended. The second deals with how to inline a monitor into an untrusted or potentially malicious program so as to produce a new instrumented program that provably respects the desired security policy. Yet although a wide range of monitors has been studied in the literature, work on monitor inlining has been limited to a particular class of monitors that are among the simplest and the most restricted in the policies they can enforce. This thesis extends both strands of research and brings new insights to these questions. It seeks, in the first place, to increase the scope of monitorable properties by developing a new approach to monitor inlining. By drawing on an a priori model of the program's possible behavior, we show that the monitor acquires the ability to enforce a strictly larger set of security properties. Furthermore, prior research has shown that a monitor that is allowed to transform the execution it observes is more powerful than one lacking this ability. Naturally, this ability must be constrained for the enforcement to be meaningful. Otherwise, if the monitor is given too broad a leeway to transform valid and invalid sequences, any property can be enforced, but not in a way that is useful or desirable. In this study, we propose two new enforcement paradigms that capture reasonable restrictions on a monitor's ability to alter its input. We study the set of properties enforceable under these paradigms and give examples of real-life security policies that can be enforced using our approach.

    Enforcing non-safety security policies with program monitors

    We consider the enforcement powers of program monitors, which intercept security-sensitive actions of a target application at run time and take remedial steps whenever the target attempts to execute a potentially dangerous action. A common belief in the security community is that program monitors, regardless of the remedial steps available to them when detecting violations, can only enforce safety properties. We formally analyze the properties enforceable by various program monitors and find that although this belief is correct when considering monitors with simple remedial options, it is incorrect for more powerful monitors that can be modeled by edit automata. We define an interesting set of properties called infinite renewal properties and demonstrate how, given any reasonable infinite renewal property, to construct an edit automaton that provably enforces that property. We analyze the set of infinite renewal properties and show that it includes every safety property, some liveness properties, and some properties that are neither safety nor liveness.
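    The intuition behind enforcing a renewal property with an edit automaton is suppress-and-reinsert: the monitor buffers input actions and releases the buffered suffix only once the output extended by the buffer is again a valid prefix. The sketch below illustrates this on a hypothetical property, "every request is eventually acknowledged"; the `valid` predicate and the action names are assumptions for the example, not the paper's construction verbatim.

    ```python
    # Hypothetical suppress-and-reinsert sketch for a renewal property:
    # a trace is valid when no "req" is left unanswered by an "ack".

    def valid(trace):
        pending = 0
        for a in trace:
            if a == "req":
                pending += 1
            elif a == "ack" and pending:
                pending -= 1
        return pending == 0

    def renewal_monitor(stream):
        emitted, buffer = [], []
        for action in stream:
            buffer.append(action)
            if valid(emitted + buffer):
                emitted += buffer   # release the buffered suffix
                buffer = []
        return emitted              # an unreleased suffix stays suppressed
    ```

    On `["req", "log", "ack", "req"]` the monitor emits `["req", "log", "ack"]`: the first three actions are released together once the `ack` restores validity, while the trailing unmatched `req` is held back and ultimately suppressed.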