Formal approaches for modeling and verifying access control and temporal constraints in information systems
ABSTRACT
Our research is part of a broader effort to develop formal approaches that help design information systems with a good level of safety and security. Specifically, these approaches must verify that a system works correctly and that it implements a security policy meeting its specific needs in terms of data confidentiality, integrity and availability. Our research is thus built around the aim to develop, enhance and expand the use of Petri nets as a modeling tool and model checking as a verification technique. Our main objective is to express the temporal dimension quantitatively in order to check temporal properties such as data availability, task execution duration, deadlines, etc.
First, we propose an extension of the TSCPN (Timed Secure Colored Petri Net) model, originally presented in my master's thesis. This model allows representing and reasoning about access rights expressed via a mandatory access control policy, i.e. the Bell-LaPadula model. Second, we investigate the idea of using colored Petri nets to represent Role-Based Access Control (RBAC) policies. Our goal is to provide precise guidelines to assist in the specification of a coherent and complete RBAC policy, supported by colored Petri nets and CPN Tools. Finally, we propose to enrich the class of time Petri nets with a new extension that can express more than one kind of time constraint, named TAWSPN (Timed-Arc Petri net with Weak and Strong semantics). Our goal is to provide great flexibility in modeling complex timed systems without complicating the conventional analysis methods. Indeed, the TAWSPN model offers a model-checking technique based on the construction of zone graphs (Gardey et al., 2003), comparable to those of other timed extensions of Petri nets.
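The Bell-LaPadula policy mentioned above reduces to two level-comparison rules: "no read up" (simple security) and "no write down" (star property). As a hedged illustration, the level names and helper functions below are assumptions for this sketch, not part of the TSCPN model:

```python
# Illustrative Bell-LaPadula checks; level lattice is an assumption.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    """Simple security property: read only at or below the subject's level."""
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    """Star property: write only at or above the subject's level."""
    return LEVELS[subject_level] <= LEVELS[object_level]

print(can_read("secret", "confidential"))   # True: reading down is allowed
print(can_write("secret", "confidential"))  # False: writing down is forbidden
```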
Optimizing performance of workflow executions under authorization control
Business processes or workflows are often used to model enterprise or scientific applications. Automating workflow executions on computing resources has received considerable attention. However, many workflow scenarios still involve human activities and consist of a mixture of human tasks and computing tasks. Human involvement introduces security and authorization concerns, requiring restrictions on who is allowed to perform which tasks at what time. Role-Based Access Control (RBAC) is a popular authorization mechanism. In RBAC, authorization concepts such as roles and permissions are defined, and various authorization constraints are supported, including separation of duty, temporal constraints, etc. Under RBAC, users are assigned to certain roles, while the roles are associated with prescribed permissions.
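The user-role and role-permission assignments described above can be sketched as a pair of lookups; the concrete users, roles, and permissions below are illustrative assumptions:

```python
# Minimal RBAC sketch: a user holds a permission iff one of their roles grants it.
user_roles = {"alice": {"clerk", "auditor"}, "bob": {"clerk"}}
role_permissions = {"clerk": {"submit_order"}, "auditor": {"approve_order"}}

def is_authorized(user: str, permission: str) -> bool:
    return any(permission in role_permissions.get(r, set())
               for r in user_roles.get(user, set()))

print(is_authorized("alice", "approve_order"))  # True, via the auditor role
print(is_authorized("bob", "approve_order"))    # False
```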
When we assess resource capacities, or evaluate the performance of workflow executions on supporting platforms, it is often assumed that when a task is allocated to a resource, the resource will accept the task and start the execution once a processor becomes available. However, when authorization policies are taken into account, this assumption may not hold and the situation becomes more complex. For example, when a task arrives, a valid and activated role has to be assigned to it before it can start execution. The deployed authorization constraints may delay the workflow execution because of the roles' availability or other restrictions on role assignments, which consequently has a negative impact on application performance. When authorization constraints restrict workflow executions, new research issues arise that have not yet been studied in conventional workflow management. This thesis investigates these new research issues.
First, it is important to know whether a feasible authorization solution can be found to enable the execution of all tasks in a workflow, i.e., to check the feasibility of the deployed authorization constraints. This thesis studies this feasibility-checking issue and models it as a constraint satisfaction problem.
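A minimal sketch of feasibility checking as a constraint satisfaction problem, assuming illustrative authorized-role sets and a single separation-of-duty constraint (the thesis's actual formulation is richer):

```python
from itertools import product

# Each task needs an authorized role; tasks in a separation-of-duty pair
# must receive different roles. All data below is an illustrative assumption.
authorized = {"t1": {"r1", "r2"}, "t2": {"r1"}, "t3": {"r2"}}
sod_pairs = [("t1", "t2")]  # t1 and t2 must get different roles

def feasible(tasks):
    """Return one feasible task-role assignment, or None if none exists."""
    domains = [sorted(authorized[t]) for t in tasks]
    for assignment in product(*domains):   # brute-force search over domains
        chosen = dict(zip(tasks, assignment))
        if all(chosen[a] != chosen[b] for a, b in sod_pairs):
            return chosen
    return None

print(feasible(["t1", "t2", "t3"]))  # {'t1': 'r2', 't2': 'r1', 't3': 'r2'}
```

Exhaustive enumeration is exponential; it only serves to make the constraint structure concrete.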
Second, it is useful to know when the performance of workflow executions will not be affected by the given authorization constraints. This thesis proposes methods to determine the time durations during which the given authorization constraints have no impact.
Third, when the authorization constraints do have a performance impact, how can we quantitatively analyse and determine it? When there are multiple choices for assigning roles to tasks, do different choices lead to different performance impacts? If so, can we find an optimal way to conduct the task-role assignments so that the performance impact is minimized? This thesis proposes a method to analyze the delay caused by the authorization constraints when the workflow arrives outside the no-impact time durations calculated above. Through this analysis, we find that the authorization method, i.e., the method used to select the roles assigned to the tasks, affects the length of the delay caused by the authorization constraints. Based on this finding, we propose an optimal authorization method, called the Global Authorization Aware (GAA) method.
Fourth, a key reason why authorization constraints may affect performance is that the authorization control directs tasks to particular roles. How, then, can we determine the level of workload directed to each role given a set of authorization constraints? This thesis conducts a theoretical analysis of how the authorization constraints direct the workload to the roles, and proposes methods to calculate the arrival rate of the requests directed to each role under role, temporal and cardinality constraints.
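As a rough illustration of how authorization directs workload to roles, the sketch below naively splits each task's arrival rate evenly among its authorized roles. The rates and role sets are assumptions, and the thesis's derivation under role, temporal and cardinality constraints is not captured by this toy bookkeeping:

```python
# Assumed request rates (per hour) and authorized-role sets; uniform split.
task_rate = {"t1": 4.0, "t2": 2.0}
authorized = {"t1": {"r1", "r2"}, "t2": {"r1"}}

role_rate = {}
for task, rate in task_rate.items():
    roles = authorized[task]
    for role in roles:
        # Each role receives an equal share of the task's arrival rate.
        role_rate[role] = role_rate.get(role, 0.0) + rate / len(roles)

print(role_rate)  # {'r1': 4.0, 'r2': 2.0} (dict order may vary)
```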
Finally, the amount of resources allocated to support each individual role may affect the execution performance of the workflows. It is therefore desirable to develop strategies to determine the adequate amount of resources when authorization control is present in the system. This thesis presents methods to allocate the appropriate quantity of resources, including both human resources and computing resources, taking their different features into account. For human resources, the objective is to maximize performance subject to the budget for hiring them, while for computing resources, the strategy aims to allocate an adequate amount to meet the QoS requirements.
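The human-resource side can be caricatured as a budgeted allocation problem. The greedy sketch below hires whichever role yields the best service rate per unit cost while the budget lasts; all costs, rates, and the ratio heuristic are assumptions, not the thesis's method (which would also have to guarantee coverage of every role):

```python
# role: (cost per person, service rate contributed per person) -- assumed.
roles = {
    "clerk":   (10.0, 3.0),
    "auditor": (25.0, 5.0),
}

def allocate(budget: float) -> dict:
    staff = {r: 0 for r in roles}
    while True:
        # Pick the affordable role with the best rate/cost ratio.
        best = max((r for r, (c, _) in roles.items() if c <= budget),
                   key=lambda r: roles[r][1] / roles[r][0], default=None)
        if best is None:
            return staff
        staff[best] += 1
        budget -= roles[best][0]

print(allocate(55.0))  # {'clerk': 5, 'auditor': 0}
```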
Evaluating Resilience of Cyber-Physical-Social Systems
Nowadays, protecting the network is not the only security concern. In cyber security, websites and servers are becoming more popular targets because they are easier to reach than communication networks. Another threat in cyber-physical-social systems with human interactions is that they can be attacked and manipulated not only by technical hacking through networks, but also by manipulating people and stealing users' credentials. Therefore, systems should be evaluated beyond cyber security, which means measuring their resilience as evidence that a system works properly under cyber-attacks or incidents. In this light, cyber resilience is increasingly discussed and described as the capacity of a system to maintain state awareness for detecting cyber-attacks. All the tasks for making a system resilient should proactively maintain a safe level of operational normalcy, through rapid system reconfiguration, to detect attacks that would impact system performance. In this work, we broadly studied the new paradigm of cyber-physical-social systems and provided a uniform definition of it. To overcome the complexity of evaluating cyber resilience, especially in such inhomogeneous systems, we proposed a framework that applies Attack Tree refinements and Hierarchical Timed Coloured Petri Nets to model intruder and defender behaviors and to evaluate the impact of each action on the behavior and performance of the system.
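Attack trees in general evaluate a goal bottom-up over AND/OR nodes; the refinement and Petri-net machinery of the proposed framework is far richer, but the basic evaluation can be sketched as follows (the tree shape and the leaf success flags are assumptions):

```python
# Generic AND/OR attack-tree evaluation: an AND node needs all children to
# succeed, an OR node needs any child; leaves carry an assumed outcome.
def evaluate(node) -> bool:
    kind = node.get("type", "leaf")
    if kind == "leaf":
        return node["success"]
    results = [evaluate(c) for c in node["children"]]
    return all(results) if kind == "AND" else any(results)

steal_credentials = {
    "type": "OR",
    "children": [
        {"type": "leaf", "success": False},      # phishing attempt fails
        {"type": "AND", "children": [            # network intrusion path
            {"type": "leaf", "success": True},   # exploit server
            {"type": "leaf", "success": True},   # escalate privileges
        ]},
    ],
}
print(evaluate(steal_credentials))  # True: the AND branch succeeds
```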
SYSTEMATIC POLICY ANALYSIS AND MANAGEMENT
Determining whether a given policy meets a site's high-level security goals has been a challenging task, due to the low-level nature and complexity of the policy language, the various security requirements, and the multiple policy violation patterns. In this dissertation, we outline a systematic policy analysis and management approach that enables system administrators to easily identify and resolve various policy violations. Our approach incorporates a domain-based isolation model to address the security requirements, and visualization mechanisms to give the policy administrator an intuitive understanding of the policy analysis and policy violations. Based on the domain-based isolation model and the policy visualization mechanisms, we develop a visualization-based policy analysis and management framework. We also describe our implementation of a visualization-based policy analysis and management tool that provides the functionalities discussed in our framework. In addition, a user study is performed and its result is included as part of our evaluation of the prototype system.
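Domain-based isolation can be pictured as a reachability question over permitted information flows: a violation exists when an untrusted domain can reach a protected one. The sketch below is a generic illustration (domains and flows are assumptions), not the dissertation's model:

```python
from collections import deque

# Permitted direct flows between domains (illustrative).
flows = {"untrusted": {"user"}, "user": {"system"}, "system": set()}
protected, untrusted = {"system"}, {"untrusted"}

def violations():
    """BFS from each untrusted domain; report reachable protected domains."""
    found = []
    for src in untrusted:
        seen, queue = {src}, deque([src])
        while queue:
            d = queue.popleft()
            if d in protected and d != src:
                found.append((src, d))
            for nxt in flows.get(d, set()) - seen:
                seen.add(nxt)
                queue.append(nxt)
    return found

print(violations())  # [('untrusted', 'system')]: an indirect flow exists
```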
One important application of our policy analysis and management approach is to support remote attestation. Remote attestation is an important mechanism for proving the trustworthiness of a computing system by verifying its integrity. In our work, we propose a remote attestation framework, called Dynamic Remote Attestation Framework and Tactics (DR@FT), for efficiently attesting a target system based on our extended visualization-based policy analysis and management approach. In addition, we adopt the proposed visualization-based policy violation expression to represent integrity violations with a ranked violation graph, which supports intuitive reasoning about attestation results. We also describe our experiments and performance evaluation.
Sixth Workshop and Tutorial on Practical Use of Coloured Petri Nets and the CPN Tools Aarhus, Denmark, October 24-26, 2005
This booklet contains the proceedings of the Sixth Workshop on Practical Use of Coloured Petri Nets and the CPN Tools, October 24-26, 2005. The workshop is organised by the CPN group at the Department of Computer Science, University of Aarhus, Denmark. The papers are also available in electronic form via the web pages: http://www.daimi.au.dk/CPnets/workshop0
Workflow Behavior Auditing for Mission Centric Collaboration
Successful mission-centric collaboration depends on situational awareness in an increasingly complex mission environment. To support timely and reliable high level mission decisions, auditing tools need real-time data for effective assessment and optimization of mission behaviors. In the context of a battle rhythm, mission health can be measured from workflow generated activities. Though battle rhythm collaboration is dynamic and global, a potential enabling technology for workflow behavior auditing exists in process mining.
However, process mining alone is not adequate to provide mission situational awareness in the battle rhythm environment, since event logs may contain dynamic mission states, noise, and timestamp inaccuracy. Therefore, we address a few key near-term issues. In sequences of activities parsed from network traffic streams, we identify mission state changes with the workflow shift detection algorithm. In segments of unstructured event logs that contain both noise and relevant workflow data, we extract and rank workflow instances for the process analyst. When confronted with timestamp inaccuracy in event logs from semi-automated, distributed workflows, we develop the flower chain network and a discovery algorithm to improve behavioral conformance. For long-term adoption of process mining in mission-centric collaboration, we develop and demonstrate an experimental framework for testing under logging uncertainty. We show that it is highly feasible to employ process mining techniques in environments with dynamic mission states and logging uncertainty.
Future workflow behavior auditing technology will benefit from continued algorithmic development, new data sources, and system prototypes to propel next-generation mission situational awareness, giving commanders new tools to assess and optimize workflows, computer systems, and missions in the battle-space environment.
Obstructions in Security-Aware Business Processes
This Open Access book explores the dilemma-like stalemate between security and regulatory compliance in business processes on the one hand and business continuity and governance on the other. The growing number of regulations, e.g., on information security, data protection, or privacy, implemented in increasingly digitized businesses can have an obstructive effect on the automated execution of business processes. Such security-related obstructions can particularly occur when an access control-based implementation of regulations blocks the execution of business processes. By handling obstructions, security in business processes is supposed to be improved. For this, the book presents a framework that allows the comprehensive analysis, detection, and handling of obstructions in a security-sensitive way. To this end, methods based on common organizational security policies, process models, and logs are proposed. The Petri net-based modeling and related semantic and language-based research, as well as the analysis of event data and machine learning methods, finally lead to the development of algorithms and experiments that can detect and resolve obstructions and are reproducible with the provided software.
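One form of obstruction described above, an access-control rule leaving no eligible user for a process step, can be sketched as follows. The policy, the four-eyes constraint, and the greedy user choice are illustrative assumptions, much simpler than the book's framework:

```python
# Who may execute each activity, plus a four-eyes (separation-of-duty) pair.
policy = {"create_invoice": {"bob"}, "approve_invoice": {"bob"}}
four_eyes = [("create_invoice", "approve_invoice")]  # must be distinct users

def first_obstruction(trace, executed=None):
    """Replay the trace; return the first activity no one may execute."""
    executed = dict(executed or {})
    for activity in trace:
        allowed = set(policy.get(activity, set()))
        for a, b in four_eyes:
            if activity == b and a in executed:
                allowed.discard(executed[a])  # prior executor is excluded
        if not allowed:
            return activity  # no eligible user: the process is obstructed
        executed[activity] = sorted(allowed)[0]  # greedy choice
    return None

print(first_obstruction(["create_invoice", "approve_invoice"]))
# 'approve_invoice': bob created the invoice, so nobody may approve it
```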
A formal data-flow-oriented model for network security configuration analysis
The implementation of a network security policy requires the configuration of heterogeneous and complex security mechanisms in a given network environment (IPsec gateways, ACLs on routers, stateful firewalls, proxies, etc.). The complexity of this task lies in the number, the nature, and the interdependence of these mechanisms. Although several researchers have proposed analysis tools, achieving this task still requires experienced and proficient security administrators who can handle all these parameters. In this thesis, we propose a solution to facilitate the work of network administrators. Indeed, many inconsistencies come from the incompatibility of policy rules and/or incompatible mechanisms implemented in the devices through which packets travel. A generic formal theory that allows reasoning about network data flows and security mechanisms is missing. With this end in mind, we develop three results:
• A formal data-flow-oriented model to analyze and detect network security conflicts between different mechanisms playing a role at various ISO levels. We model a flow of information as a triplet containing the list of communication protocols (i.e., encapsulation), the list of authenticated attributes, and the list of encrypted attributes.
• A generic, technology-independent, attribute-based model for representing and configuring network security mechanisms. We formally specify the capacity and configuration of security mechanisms by constructing an abstraction of the physical flows of data blocks. The proposed solution can satisfy security requirements and can help analyze conflicts in the deployment of technologies installed on different devices.
• To evaluate both the expressiveness and the analysis power of the modeling language, we use CPN Tools [Jensen et Kristensen 2009] to formally specify our language.
The goal of our research is to propose a modeling language for describing and validating architectural solutions that meet network security requirements. Simulations applied to specific scenarios, such as IPsec, NA(P)T and Netfilter/iptables, validate our approach. Nevertheless, the analysis of security conflicts is currently done by simulation and in a non-exhaustive manner. Our future work will aim to assist/automate the analysis by allowing properties to be defined, for instance in temporal logic, and checked automatically.
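The flow triplet described in the first result can be sketched as a small data structure, with a security mechanism modeled as a function on triplets. The ESP-tunnel behavior below is a simplified assumption, not the thesis's formal semantics:

```python
# A flow is (protocol stack, authenticated attributes, encrypted attributes).
def make_flow(protocols, authenticated=(), encrypted=()):
    return {"protocols": list(protocols),
            "auth": set(authenticated),
            "conf": set(encrypted)}

def ipsec_esp_tunnel(flow):
    # Encapsulation pushes ESP onto the stack; the inner payload gains both
    # authentication and confidentiality for its attributes (simplified).
    inner = set(flow["protocols"])
    return {"protocols": ["ESP"] + flow["protocols"],
            "auth": flow["auth"] | inner,
            "conf": flow["conf"] | inner}

plain = make_flow(["TCP", "HTTP"])
secured = ipsec_esp_tunnel(plain)
print(secured["protocols"])     # ['ESP', 'TCP', 'HTTP']
print(sorted(secured["conf"]))  # ['HTTP', 'TCP']
```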
Declarative techniques for modeling and mining business processes.
Organizations today face an apparent contradiction. Although they have invested heavily in information systems that automate their business processes, this seems to leave them less able to gain good insight into how those processes actually run. Poor insight into business processes threatens their flexibility and compliance. Flexibility matters because continuously changing market conditions force organizations to adapt their business processes quickly and smoothly. In addition, organizations must be able to guarantee that their operations comply with the laws, guidelines, and standards imposed on them. Scandals such as the recently revealed fraud at the French bank Société Générale show the importance of compliance and flexibility. By producing false supporting documents and circumventing fixed control points, a single trader was able to turn a riskless arbitrage trade on price differences in futures into a risky, speculative trade in these financial derivatives. The uncovered, unauthorized positions remained hidden for a long time because of deficient internal control and shortcomings in IT security and access control. To prevent such fraud in the future, it is first of all necessary to gain insight into the bank's operational processes and the related control processes. In this text we discuss two approaches that can be used to increase insight into business processes: process modeling and process mining. The research aimed to develop declarative techniques for both.
Process modeling is the manual construction of a formal model that describes a relevant aspect of a business process, based on information largely acquired from interviews. Process models must provide adequate information about the business processes to be usefully applied in their design, implementation, execution, and analysis. The challenge is to develop new process modeling languages that provide adequate information to realize this objective. Declarative process languages make the information about business concerns explicit. We characterize and motivate declarative process languages and examine a number of existing techniques. Furthermore, we introduce a generalizing framework for declarative process modeling within which existing process languages can be positioned. This framework is called the EM-BrA2CE framework, for 'Enterprise Modeling using Business Rules, Agents, Activities, Concepts and Events'. It consists of a formal ontology and a formal execution model, and lays the ontological foundation for the languages and techniques developed later in the dissertation.
Process mining is the automatic construction of a process model from the so-called event logs of information systems. Today, many processes are recorded by information systems in event logs, which record in chronological order who performed which activity and when. Analyzing event logs can yield an accurate picture of what actually happens within an organization. To be useful, the mined process models must satisfy criteria such as accuracy, comprehensibility, and justifiability. Existing process mining techniques focus mainly on the first criterion: accuracy. Declarative process mining techniques also address the comprehensibility and justifiability of the mined models. They are more comprehensible because they aim to represent process models using declarative representations, and they increase the justifiability of the mined models because they allow the prior knowledge, inductive bias, and language bias of a learning algorithm to be configured. Inductive logic programming (ILP) is a learning technique that is inherently declarative. In the text we show how process mining can be represented as an ILP classification problem that learns the logical conditions under which an event takes place (a positive event) or does not take place (a negative event). Many event logs naturally contain no negative events indicating that a particular activity could not take place. To address this problem, we describe a technique for generating artificial negative events, called AGNEs (process discovery by Artificially Generated Negative Events). The generation of artificial negative events amounts to a configurable inductive bias. The AGNEs technique is implemented as a mining plugin in the ProM framework. By representing process discovery as a first-order classification problem on event logs with artificial negative events, the traditional metrics for quantifying precision and recall can be applied to quantify the precision and recall of a process model with respect to an event log. In the text we propose two new metrics. These new metrics, in combination with existing ones, were used for an extensive evaluation of the AGNEs process discovery technique in both an experimental and a practical setting.
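The idea behind artificial negative events can be sketched on a toy log: at each position of a trace, any activity never observed after the same prefix anywhere in the log is recorded as a negative event. The log is an assumption, and this ignores AGNEs's configurable bias and windowing:

```python
# Toy event log: each trace is a sequence of activity labels.
log = [["a", "b", "c"], ["a", "c"], ["a", "b", "c"]]
activities = {e for trace in log for e in trace}

def negative_events(trace):
    """For each position, list activities never seen after the same prefix."""
    negatives = []
    for i in range(len(trace)):
        prefix = tuple(trace[:i])
        observed = {t[i] for t in log if tuple(t[:i]) == prefix and len(t) > i}
        negatives.append((i, sorted(activities - observed)))
    return negatives

print(negative_events(["a", "b", "c"]))
# position 0: only 'a' ever starts a trace, so 'b' and 'c' are negatives
```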
Proceedings of the 3rd International Workshop on Formal Aspects in Security and Trust (FAST2005)
The present report contains the pre-proceedings of the third international Workshop on Formal Aspects in Security and Trust (FAST2005), held in Newcastle upon Tyne, 18-19 July 2005. FAST is an event affiliated with the Formal Methods 2005 Congress (FM05). The third international Workshop on Formal Aspects in Security and Trust (FAST2005) aims at continuing the successful effort of the previous two FAST workshop editions in fostering cooperation among researchers in the areas of security and trust. The new challenges offered by the so-called ambient intelligence space, as a future paradigm in the information society, demand a coherent and rigorous framework of concepts, tools and methodologies to provide users' trust and confidence in the underlying communication/interaction infrastructure. It is necessary to address issues relating both to guaranteeing the security of the infrastructure and to the perception of the infrastructure being secure. In addition, user confidence in what is happening must be enhanced by developing trust models that are effective but also easily comprehensible and manageable by users.