Verifying the Interplay of Authorization Policies and Workflow in Service-Oriented Architectures (Full version)
A widespread design approach in distributed applications based on the
service-oriented paradigm, such as web-services, consists of clearly separating
the enforcement of authorization policies and the workflow of the applications,
so that the interplay between the policy level and the workflow level is
abstracted away. While such an approach is attractive because it is quite
simple and permits one to reason about crucial properties of the policies under
consideration, it does not provide the right level of abstraction to specify
and reason about the way the workflow may interfere with the policies, and vice
versa. For example, the creation of a certificate as a side effect of a
workflow operation may enable a policy rule to fire and grant access to a
certain resource; without executing the operation, the policy rule should
remain inactive. Similarly, policy queries may be used as guards for workflow
transitions.
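The certificate example above can be sketched in a few lines of Python. This is an illustrative assumption, not the paper's formal framework: the class names, the certificate encoding, and the single access rule are all made up for the sketch.

```python
# Hypothetical sketch of the policy/workflow interplay described above.
# PolicyEngine, the certificate tuple, and the access rule are illustrative.

class PolicyEngine:
    """Evaluates access rules over a set of facts (e.g. certificates)."""
    def __init__(self):
        self.facts = set()

    def add_certificate(self, cert):
        self.facts.add(cert)

    def can_access(self, user, resource):
        # The rule fires only if the workflow has issued the right certificate.
        return ("cert", user, resource) in self.facts


def workflow_operation(policy, user, resource):
    """A workflow step whose side effect is the creation of a certificate."""
    policy.add_certificate(("cert", user, resource))


policy = PolicyEngine()
assert not policy.can_access("alice", "report")  # rule inactive before the step
workflow_operation(policy, "alice", "report")    # side effect: certificate created
assert policy.can_access("alice", "report")      # the policy rule can now fire
```

The reverse direction, policy queries guarding workflow transitions, would correspond to a workflow step calling `can_access` before executing.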
In this paper, we present a two-level formal verification framework to
overcome these problems and formally reason about the interplay of
authorization policies and workflow in service-oriented architectures. This
allows us to define and investigate some verification problems for service-oriented applications and give sufficient conditions for their decidability.
Comment: 16 pages, 4 figures, full version of paper at Symposium on Secure Computing (SecureCom09)
Modularity for Security-Sensitive Workflows
An established trend in software engineering insists on using components
(sometimes also called services or packages) to encapsulate a set of related
functionalities or data. By defining interfaces specifying what functionalities
they provide or use, components can be combined with others to form more
complex components. In this way, IT systems can be designed by mostly re-using
existing components and developing new ones to provide new functionalities. In
this paper, we introduce a notion of component and a combination mechanism for
an important class of software artifacts, called security-sensitive workflows.
These are business processes in which execution constraints on the tasks are
complemented with authorization constraints (e.g., Separation of Duty) and
authorization policies (constraining which users can execute which tasks). We
show how well-known workflow execution patterns can be simulated by our
combination mechanism and how authorization constraints can also be imposed
across components. Then, we demonstrate the usefulness of our notion of
component by showing (i) the scalability of a technique for the synthesis of
run-time monitors for security-sensitive workflows and (ii) the design of a
plug-in for the re-use of workflows and related run-time monitors inside an
editor for security-sensitive workflows.
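The combination mechanism and cross-component constraints described above can be illustrated with a minimal Python sketch. The class names, the sequential composition, and the Separation-of-Duty encoding here are assumptions made for illustration, not the paper's formalism.

```python
# Illustrative sketch of security-sensitive workflow components; the
# combination mechanism and SoD encoding are assumptions, not the paper's.

class Component:
    def __init__(self, name, tasks):
        self.name = name
        self.tasks = list(tasks)   # tasks the component provides (its interface)
        self.sod = []              # Separation-of-Duty pairs (task, task)

def combine(c1, c2, cross_sod=()):
    """Compose two components, optionally imposing Separation-of-Duty
    constraints across tasks belonging to different components."""
    combined = Component(c1.name + "+" + c2.name, c1.tasks + c2.tasks)
    combined.sod = c1.sod + c2.sod + list(cross_sod)
    return combined

def satisfies_sod(component, assignment):
    """Check that no SoD pair is executed by the same user."""
    return all(assignment[t1] != assignment[t2] for t1, t2 in component.sod)

request = Component("request", ["prepare", "submit"])
approval = Component("approval", ["review", "sign"])
# Cross-component constraint: the submitter may not also sign.
proc = combine(request, approval, cross_sod=[("submit", "sign")])

ok = satisfies_sod(proc, {"prepare": "alice", "submit": "alice",
                          "review": "bob", "sign": "bob"})
bad = satisfies_sod(proc, {"prepare": "alice", "submit": "alice",
                           "review": "bob", "sign": "alice"})
```

Here `ok` holds because submit and sign go to different users, while `bad` violates the cross-component SoD pair.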
Constraint Expressions and Workflow Satisfiability
A workflow specification defines a set of steps and the order in which those
steps must be executed. Security requirements and business rules may impose
constraints on which users are permitted to perform those steps. A workflow
specification is said to be satisfiable if there exists an assignment of
authorized users to workflow steps that satisfies all the constraints. An
algorithm for determining whether such an assignment exists is important, both
as a static analysis tool for workflow specifications, and for the construction
of run-time reference monitors for workflow management systems. We develop new
methods for determining workflow satisfiability based on the concept of
constraint expressions, which were introduced recently by Khan and Fong. These
methods are surprisingly versatile, enabling us to develop algorithms for, and
determine the complexity of, a number of different problems related to workflow
satisfiability.
Comment: arXiv admin note: text overlap with arXiv:1205.0852; to appear in Proceedings of SACMAT 201
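The satisfiability notion defined above can be illustrated with a naive brute-force check. This enumeration is purely illustrative, it is not the constraint-expression method of Khan and Fong nor the algorithms developed in the paper, and the step names, authorization lists, and constraint are made up.

```python
# Brute-force workflow satisfiability: try every assignment of users to
# steps and keep one that respects both authorizations and constraints.
# Exponential in the number of steps; for illustration only.
from itertools import product

def satisfiable(steps, users, authorized, constraints):
    """Return an assignment of authorized users to steps that meets every
    constraint, or None if the workflow specification is unsatisfiable.
    authorized[s] is the set of users permitted to perform step s; each
    constraint is a predicate over the complete assignment."""
    for choice in product(users, repeat=len(steps)):
        plan = dict(zip(steps, choice))
        if all(plan[s] in authorized[s] for s in steps) and \
           all(c(plan) for c in constraints):
            return plan
    return None

steps = ["s1", "s2", "s3"]
users = ["alice", "bob"]
authorized = {"s1": {"alice"}, "s2": {"alice", "bob"}, "s3": {"bob"}}
# Separation of Duty: s1 and s2 must be performed by different users.
constraints = [lambda p: p["s1"] != p["s2"]]
plan = satisfiable(steps, users, authorized, constraints)
```

A run-time reference monitor would answer the same question incrementally, checking after each completed step whether the remaining steps can still be assigned.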
Business Policy Modeling and Enforcement in Relational Database Systems
Database systems maintain integrity of the stored information by ensuring that modifications to the database comply with constraints designed by the administrators. As the number of users and applications sharing a common database increases, so does the complexity of the set of constraints that originate from higher level business processes. The lack of a systematic mechanism for integrating and reasoning about a diverse set of evolving and potentially interfering policies manifested as database level constraints makes corporate policy management within relational systems a chaotic process.
In this thesis we present a systematic method of mapping a broad set of process-centric business policies onto database level constraints. We exploit the observation that the state of a database represents the union of all the states of every ongoing business process and thus establish a bijective relationship between progression in individual business processes and changes in the database state space. We propose graphical notations that are equivalent to integrity constraints specified in linear temporal logic of the past. Furthermore we demonstrate how this notation can accommodate a wide array of workflow patterns, can allow for multiple policy makers to implement their own process-centric constraints independently using their own logical policy models, and can model check these constraints within the database system to detect potential conflicting constraints across several different business processes.
A major contribution of this thesis is that it bridges several different areas of research including database systems, temporal logics, model checking, and business workflow/policy management to propose an accessible method of integrating, enforcing, and reasoning about the consequences of process-centric constraints embedded in database systems. As a result, the task of ensuring that a database continuously complies with evolving business rules governed by hundreds of processes, which is traditionally handled by an army of database programmers regularly updating triggers and batch procedures, is made easier, more manageable, and more predictable.
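A past-time temporal constraint over a sequence of database states can be sketched as follows. The rule, the state encoding, and the helper name are assumptions made for illustration; the thesis works with graphical notations equivalent to past-LTL, not with this Python encoding.

```python
# Minimal illustration of a past-LTL-style integrity constraint evaluated
# over a history of database states; encoding and rule are illustrative.

def whenever_then_previously(states, trigger, required):
    """In every state where `trigger` holds, `required` must have held
    in some earlier (or the same) state of the history."""
    seen_required = False
    for state in states:
        if required(state):
            seen_required = True
        if trigger(state) and not seen_required:
            return False
    return True

# Hypothetical business rule: an order may only be shipped if it was
# approved at some earlier point of the process.
history = [
    {"order": 1, "status": "created"},
    {"order": 1, "status": "approved"},
    {"order": 1, "status": "shipped"},
]
ok = whenever_then_previously(
    history,
    trigger=lambda s: s["status"] == "shipped",
    required=lambda s: s["status"] == "approved",
)
```

In the thesis's setting, each database modification advances such a history, so the constraint can be checked incrementally as the business process progresses.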
Tools and techniques for analysing the impact of information security
PhD Thesis
The discipline of information security is employed by organisations to protect the confidentiality,
integrity and availability of information, often communicated in the form of
information security policies. A policy expresses rules, constraints and procedures to guard
against adversarial threats and reduce risk by instigating desired and secure behaviour of
those people interacting with information legitimately. To keep aligned with a dynamic threat
landscape, evolving business requirements, regulation updates, and new technologies, a policy
must undergo periodic review and change. Chief Information Security Officers (CISOs) are
the main decision makers on information security policies within an organisation. Making
informed policy modifications involves analysing and therefore predicting the impact of those
changes on the success rate of business processes often expressed as workflows. Security
brings an added burden to completing a workflow. Adding a new security constraint may
reduce success rate or even eliminate it if a workflow is always forced to terminate early. This
can increase the chances of employees bypassing or violating a security policy. Removing an
existing security constraint may increase success rate but may also increase the risk to
security. A lack of suitably aimed impact analysis tools and methodologies for CISOs means
impact analysis is currently a somewhat manual and ambiguous procedure. Analysis can
be overwhelming, time consuming, error prone, and yield unclear results, especially when
workflows are complex, have a large workforce, and diverse security requirements. This
thesis considers the provision of tools and more formal techniques specific to CISOs to help
them analyse the impact modifying a security policy has on the success rate of a workflow.
More precisely, these tools and techniques have been designed to efficiently compare the
impact between two versions of a security policy applied to the same workflow, one before,
the other after a policy modification.
This work focuses on two specific types of security impact analysis. The first is quantitative
in nature, providing a measure of success rate for a security constrained workflow
which must be executed by employees who may be absent at runtime. This work considers
quantifying workflow resiliency which indicates a workflow’s expected success rate assuming
the availability of employees to be probabilistic. New aspects of quantitative resiliency are introduced in the form of workflow metrics, and risk management techniques to manage
workflows that must work with a resiliency below acceptable levels. Defining these risk
management techniques has led to exploring the reduction of resiliency computation time and
analysing resiliency in workflows with choice. The second area of focus is more qualitative,
in terms of facilitating analysis of how people are likely to behave in response to security
and how that behaviour can impact the success rate of a workflow at a task level. Large
amounts of information from disparate sources exists on human behavioural factors in a
security setting which can be aligned with security standards and structured within a single
ontology to form a knowledge base. Consultations have been conducted with two CISOs,
whose responses have driven the implementation of two new tools, one graphical, the other
Web-oriented, allowing CISOs and human factors experts to record and incorporate their
knowledge directly within an ontology. The ontology can be used by CISOs to assess the
potential impact of changes made to a security policy and help devise behavioural controls
to manage that impact. The two consulted CISOs have also carried out an evaluation of the
Web-oriented tool.
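The quantitative notion above, resiliency as expected success rate under probabilistic employee availability, can be sketched with a small enumeration. The numbers, the one-user-per-task model, and the independence assumption are all illustrative simplifications, not the thesis's actual computation.

```python
# Sketch of quantitative workflow resiliency: the expected probability that
# every assigned employee is available at runtime, assuming independent
# availabilities and one fixed user per task. Numbers are made up.
from itertools import product

def resiliency(tasks, availability):
    """tasks maps task -> assigned user; availability maps user -> P(present).
    Enumerate all presence/absence scenarios, weight each by its probability,
    and sum the weight of scenarios where every assigned user is present."""
    users = sorted(set(tasks.values()))
    total = 0.0
    for present in product([True, False], repeat=len(users)):
        status = dict(zip(users, present))
        p = 1.0
        for u in users:
            p *= availability[u] if status[u] else 1 - availability[u]
        if all(status[tasks[t]] for t in tasks):
            total += p
    return total

tasks = {"draft": "alice", "approve": "bob"}
availability = {"alice": 0.9, "bob": 0.8}
r = resiliency(tasks, availability)
```

Under independence this reduces to the product of the assigned users' availabilities (here 0.72); a CISO-facing comparison would compute this measure before and after a policy modification and report the difference.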