524 research outputs found

    A decidable policy language for history-based transaction monitoring

    Online trading invariably involves dealings between strangers, so it is important for one party to be able to judge objectively the trustworthiness of the other. In such a setting, the decision to trust a user may sensibly be based on that user's past behaviour. We introduce a specification language based on linear temporal logic for expressing a policy for categorising the behaviour patterns of a user depending on their transaction history. We also present an algorithm for checking whether the transaction history obeys the stated policy. To be useful in a real setting, such a language should allow one to express realistic policies, which may involve parameter quantification and quantitative or statistical patterns. We introduce several extensions of linear temporal logic to cater for such needs: a restricted form of universal and existential quantification; arbitrary computable functions and relations in the term language; and a "counting" quantifier for counting how many times a formula holds in the past. We then show that model checking a transaction history against a policy, which we call the history-based transaction monitoring problem, is PSPACE-complete in the size of the policy formula and the length of the history. The problem becomes solvable in polynomial time when the policy is fixed. We also consider the problem of transaction monitoring in the case where not all the parameters of actions are observable. We formulate two such "partial observability" monitoring problems, and show their decidability under certain restrictions.
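
    For intuition, here is a minimal, hypothetical sketch (not the paper's specification language or algorithm) of the kind of check the abstract describes: a policy with a "counting" flavour is evaluated over a finite transaction history. The transaction fields, the policy, and the thresholds are illustrative assumptions.

```python
# Hypothetical sketch of history-based policy checking; the Transaction
# fields, the example policy, and the thresholds are assumptions, not the
# paper's language.
from dataclasses import dataclass

@dataclass
class Transaction:
    user: str
    action: str    # e.g. "pay", "deliver", "cancel"
    amount: float

def count_past(history, predicate):
    """A crude 'counting quantifier': how many past transactions satisfy predicate."""
    return sum(1 for t in history if predicate(t))

def trust_policy(history, max_cancels=1, min_paid=100.0):
    """Example policy: at most max_cancels cancelled transactions in the past,
    and a total paid amount of at least min_paid."""
    cancels = count_past(history, lambda t: t.action == "cancel")
    paid = sum(t.amount for t in history if t.action == "pay")
    return cancels <= max_cancels and paid >= min_paid

history = [
    Transaction("alice", "pay", 80.0),
    Transaction("alice", "cancel", 0.0),
    Transaction("alice", "pay", 40.0),
]
print(trust_policy(history))  # True: one cancellation, 120.0 paid in total
```

    A fixed policy of this shape is checked in a single pass over the history, which matches the abstract's remark that the monitoring problem becomes polynomial-time once the policy is fixed.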

    Enforcing security policies with runtime monitors

    Execution monitoring is an approach to securing code that allows potentially malicious code to run safely by observing its execution and intervening, if need be, to prevent a violation of a user-supplied security policy. This method has many promising applications, particularly with respect to the safe execution of mobile code. Academic research on monitoring has generally focused on two questions. The first relates to the set of policies that can be enforced by monitors operating under various constraints, and the conditions under which this set can be extended. The second deals with how to inline a monitor into an untrusted or potentially malicious program in order to produce a new instrumented program that provably respects the desired security policy. Yet although a wide range of monitors has been studied in the literature, work on monitor inlining has been limited to a particular class of monitors that are among the simplest and the most restricted in the range of policies they can enforce. This thesis extends both strands of research mentioned above and brings new insight to these questions. It seeks, first, to widen the scope of monitorable properties by developing a new approach to monitor inlining. By giving the monitor access to an a priori model of the program's possible behaviour, the study shows that the monitor can enforce a strictly larger set of security properties. Furthermore, prior research has shown that a monitor that is allowed to transform the execution it observes is more powerful than one lacking this ability. Naturally, this ability must be constrained for the enforcement to be meaningful: if the monitor is given too broad a leeway to transform valid and invalid sequences, any property can be enforced, but not in a way that is useful or desirable. In this study, we propose two new enforcement paradigms which capture reasonable restrictions on a monitor's ability to alter its input. We study the set of properties enforceable under these paradigms and give examples of real-life security policies that can be enforced using our approach.
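
    As a rough illustration of the basic idea (simple suppression-style monitoring, not the thesis's two new enforcement paradigms), the sketch below shows a monitor that observes the actions a program attempts and suppresses any action that would violate a simple policy. The action names and the policy are hypothetical.

```python
# Hypothetical sketch of an execution monitor in the security-automaton style:
# it watches the actions a program attempts and intervenes (here, by
# suppressing the action) before a policy violation occurs.

class NoSendAfterReadMonitor:
    """Example policy: once a secret file has been read, no network send
    may be allowed (the classic 'no send after read' property)."""

    def __init__(self):
        self.read_secret = False

    def allow(self, action):
        if action == "read_secret":
            self.read_secret = True
            return True
        if action == "send" and self.read_secret:
            return False          # intervene: suppress the violating action
        return True

def run(program_actions, monitor):
    executed = []
    for action in program_actions:
        if monitor.allow(action):
            executed.append(action)   # action passes through unchanged
        # else: the action is suppressed, one possible countermeasure
    return executed

print(run(["send", "read_secret", "send", "compute"], NoSendAfterReadMonitor()))
# ['send', 'read_secret', 'compute']  -- the send after the read is suppressed
```

    Giving such a monitor a model of the program's possible behaviour, or letting it transform (rather than merely truncate or suppress) the execution, is what enlarges the set of enforceable policies in the work described above.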

    Computer Aided Verification

    The open access two-volume set LNCS 11561 and 11562 constitutes the refereed proceedings of the 31st International Conference on Computer Aided Verification, CAV 2019, held in New York City, USA, in July 2019. The 52 full papers, presented together with 13 tool papers and 2 case studies, were carefully reviewed and selected from 258 submissions. The papers were organized in the following topical sections: Part I: automata and timed systems; security and hyperproperties; synthesis; model checking; cyber-physical systems and machine learning; probabilistic systems, runtime techniques; dynamical, hybrid, and reactive systems; Part II: logics, decision procedures, and solvers; numerical programs; verification; distributed systems and networks; verification and invariants; and concurrency.

    A Logical Framework for Reputation Systems

    Reputation systems are meta-systems that record, aggregate and distribute information about the past behaviour of principals in an application. Typically, these applications are large-scale open distributed systems where principals are virtually anonymous and (a priori) have no knowledge about the trustworthiness of each other. Reputation systems serve two primary purposes: helping principals decide whom to trust, and providing an incentive for principals to behave well. A logical policy-based framework for reputation systems is presented. In the framework, principals specify policies which state precise requirements on the past behaviour of other principals that must be fulfilled in order for interaction to take place. The framework consists of a formal model of behaviour, based on event structures; a declarative logical language for specifying properties of past behaviour; and efficient dynamic algorithms for checking whether a particular behaviour satisfies a property from the language. It is shown how the framework can be extended in several ways, most notably to encompass parameterized events and quantification over parameters. In an extended application, it is illustrated how the framework can be applied to dynamic history-based access control for the safe execution of unknown and untrusted programs.
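
    As a toy illustration (not the framework's event-structure model or its logic), the sketch below keeps a running record of a principal's observed events and re-checks a declarative policy, written here as a plain predicate over event counts, each time a new event arrives. The event names and the policy are assumptions for the example.

```python
# Hypothetical sketch of dynamic policy checking over a principal's history;
# event names and the example policy are illustrative only.
from collections import Counter

class ReputationRecord:
    def __init__(self):
        self.counts = Counter()

    def observe(self, event):
        """Record one observed event, e.g. 'paid', 'defaulted', 'on_time'."""
        self.counts[event] += 1

    def satisfies(self, policy):
        """Check the principal's history against a policy given as a
        predicate over the event counts."""
        return policy(self.counts)

# Example policy: interact only with principals that have never defaulted
# and have completed at least two paid transactions.
policy = lambda c: c["defaulted"] == 0 and c["paid"] >= 2

record = ReputationRecord()
for e in ["paid", "on_time", "paid"]:
    record.observe(e)
    print(e, "->", record.satisfies(policy))
# paid -> False, on_time -> False, paid -> True
```

    Re-checking incrementally as events arrive mirrors the role of the dynamic algorithms mentioned in the abstract, although the framework itself evaluates a richer logical language over event structures.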

    Computer Aided Verification

    This open access two-volume set LNCS 10980 and 10981 constitutes the refereed proceedings of the 30th International Conference on Computer Aided Verification, CAV 2018, held in Oxford, UK, in July 2018. The 52 full and 13 tool papers, presented together with 3 invited papers and 2 tutorials, were carefully reviewed and selected from 215 submissions. The papers cover a wide range of topics and techniques, from algorithmic and logical foundations of verification to practical applications in distributed, networked, cyber-physical, and autonomous systems. They are organized in topical sections on model checking, program analysis using polyhedra, synthesis, learning, runtime verification, hybrid and timed systems, tools, probabilistic systems, static analysis, theory and security, SAT, SMT and decision procedures, concurrency, and CPS, hardware, industrial applications.

    The hidden regulation of carbon markets

    "This article tracks the creation and maintenance of markets for emission rights and the role that law-creation plays within this process. From a recent example of a market creation -the European Emissions Trading Scheme (EU ETS)-, insights will be gained about the intrinsic and fundamental connections between market creation and bureaucratization. This process unfolds in a paradoxical way: The free-market hypothesis is, in fact, creating a demand for regulation, administration, and control. Law creation that is informed by the free-market hy-pothesis (the Law and Economics School in general, the EU Directive as a specific case), separates the "inside of the market" from the "outside of the market". This, firstly, causes a need for extra-administration at the "outside of the market" in order to resolve the uncertainty that emanates from the self-imposed requirement of leaving "the market itself" unregulated. And it, secondly, exposes the "rational actor" to an open and uncertain situation, which then leads to private regulative and administrative attempts at the "inside of the market". (author's abstract