240 research outputs found
Optimizing performance of workflow executions under authorization control
Business processes or workflows are often used to
model enterprise or scientific applications, and
automating workflow executions on computing resources
has received considerable attention. However, many
workflow scenarios still involve human activities and
consist of a mixture of human tasks and computing
tasks.
Human involvement introduces security and
authorization concerns, requiring restrictions on who
is allowed to perform which tasks at what time. Role-
Based Access Control (RBAC) is a popular authorization
mechanism. RBAC defines authorization concepts such as
roles and permissions, and supports various
authorization constraints, including separation of
duty, temporal constraints, etc. Under RBAC, users are
assigned to certain roles, while the roles are
associated with prescribed permissions.
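As a toy illustration (not the thesis's implementation; the roles, users and permissions below are invented for the example), the user-role and role-permission assignments of RBAC can be sketched as:

```python
# Minimal RBAC sketch: roles carry permissions, users are assigned roles.
# All names here are hypothetical examples, not from the thesis.
role_permissions = {
    "clerk": {"submit_task"},
    "manager": {"submit_task", "approve_task"},
}
user_roles = {
    "alice": {"manager"},
    "bob": {"clerk"},
}

def is_authorized(user, permission):
    """A user is authorized if any of their assigned roles grants the permission."""
    return any(permission in role_permissions[r] for r in user_roles.get(user, ()))

print(is_authorized("alice", "approve_task"))  # True: alice holds the manager role
print(is_authorized("bob", "approve_task"))    # False: clerk lacks this permission
```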
When we assess resource capacities, or evaluate the
performance of workflow executions on supporting
platforms, it is often assumed that when a task is
allocated to a resource, the resource will accept the
task and start the execution once a processor becomes
available. However, when authorization policies are
taken into account, this assumption may not hold and
the situation becomes more complex. For example, when
a task arrives, a valid and activated role has to be
assigned to the task before it can start execution.
The deployed authorization constraints may delay the
workflow execution due to the roles' availability, or
other restrictions on the role assignments, which
consequently has a negative impact on application
performance.
The presence of authorization constraints restricting
workflow executions raises new research issues that
have not been studied in conventional workflow
management. This thesis aims to investigate these new
research issues.
First, it is important to know whether a feasible
authorization solution can be found that enables the
execution of all tasks in a workflow, i.e., to check
the feasibility of the deployed authorization
constraints. This thesis studies the issue of
feasibility checking and models it as a constraint
satisfaction problem.
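To illustrate the flavour of such a feasibility check (a brute-force toy, not the thesis's formulation; the tasks, roles and separation-of-duty pairs are invented), one can search for a task-role assignment satisfying the constraints:

```python
from itertools import product

# Hypothetical toy instance: each task lists its authorized roles, and
# separation-of-duty pairs must be assigned different roles.
tasks = ["t1", "t2", "t3"]
authorized = {"t1": {"r1", "r2"}, "t2": {"r2"}, "t3": {"r1", "r2"}}
sod_pairs = [("t1", "t2"), ("t2", "t3")]

def feasible():
    """Return one feasible task->role assignment, or None if none exists."""
    for combo in product(*(sorted(authorized[t]) for t in tasks)):
        assign = dict(zip(tasks, combo))
        if all(assign[a] != assign[b] for a, b in sod_pairs):
            return assign
    return None

print(feasible())  # a feasible assignment exists for this instance
```

Real constraint-satisfaction solvers replace this exhaustive enumeration with propagation and backtracking, but the feasibility question is the same.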
Second, it is useful to know when the performance of
workflow executions will not be affected by the given
authorization constraints. This thesis proposes
methods to determine the time durations during which
the given authorization constraints have no impact.
Third, when the authorization constraints do have a
performance impact, how can we quantitatively analyse
and determine it? When there are multiple choices for
assigning roles to tasks, do different choices lead to
different performance impacts? If so, can we find an
optimal way to conduct the task-role assignments so
that the performance impact is minimized? This thesis
proposes a method to analyse the delay caused by the
authorization constraints if the workflow arrives
outside the non-impact time duration calculated above.
Through the analysis of the delay, we observe that the
authorization method, i.e., the method of selecting
the roles to assign to the tasks, affects the length
of the delay caused by the authorization constraints.
Based on this finding, we propose an optimal
authorization method, called the Global Authorization
Aware (GAA) method.
Fourth, a key reason why authorization constraints may
affect performance is that the authorization control
directs the tasks to particular roles. How, then, can
we determine the level of workload directed to each
role, given a set of authorization constraints? This
thesis conducts a theoretical analysis of how the
authorization constraints direct the workload to the
roles, and proposes methods to calculate the arrival
rate of the requests directed to each role under role,
temporal and cardinality constraints.
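As a rough sketch of workload splitting (assuming, purely for illustration, a uniform random choice among each task's authorized roles; the thesis derives rates under role, temporal and cardinality constraints, which this toy model does not capture):

```python
# Split a workflow arrival rate across roles, assuming each of a task's
# authorized roles receives an equal share. Rates and role sets are invented.
workflow_rate = 10.0  # workflows per hour (assumed figure)
task_roles = {"t1": ["r1", "r2"], "t2": ["r2"], "t3": ["r1", "r2", "r3"]}

def role_arrival_rates(rate, task_roles):
    """Aggregate, per role, the request rate directed to it across all tasks."""
    rates = {}
    for roles in task_roles.values():
        share = rate / len(roles)  # equal split among authorized roles
        for r in roles:
            rates[r] = rates.get(r, 0.0) + share
    return rates

print(role_arrival_rates(workflow_rate, task_roles))
```

Note that r2, being the sole authorized role for t2, absorbs that task's entire rate, which is the kind of concentration effect that makes per-role rates worth analysing.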
Finally, the amount of resources allocated to support
each individual role may affect the execution
performance of the workflows. Therefore, it is
desirable to develop strategies for determining an
adequate amount of resources when authorization
control is present in the system. This thesis presents
methods to allocate the appropriate quantity of
resources, including both human resources and
computing resources, taking their different features
into account. For human resources, the objective is to
maximize performance subject to the budget for hiring
them, while for computing resources, the strategy aims
to allocate an adequate amount to meet the QoS
requirements.
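A simple greedy heuristic conveys the budget-constrained staffing idea (costs, loads and the heuristic itself are invented for illustration; the thesis develops its own allocation strategies):

```python
# Toy budget-constrained staffing sketch: repeatedly hire for the role whose
# current members each carry the most load, while the budget allows it.
# All figures are hypothetical.
cost_per_hire = {"clerk": 3.0, "manager": 5.0}
load = {"clerk": 12.0, "manager": 6.0}  # request rate directed to each role

def staff(budget):
    """Return the number of hires per role under the given budget."""
    hires = {r: 1 for r in load}  # start with one person per role
    budget -= sum(cost_per_hire[r] for r in hires)
    while True:
        # role whose members currently carry the most load each
        r = max(load, key=lambda role: load[role] / hires[role])
        if cost_per_hire[r] > budget:
            return hires  # cannot afford the most-loaded role; stop
        hires[r] += 1
        budget -= cost_per_hire[r]

print(staff(20.0))
```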
Generic business process modelling framework for quantitative evaluation
PhD Thesis. Business processes are the backbone of organisations, used to
automate and increase the efficiency and effectiveness of their services
and products. The rapid growth of the Internet and other Web-based
technologies has sparked competition between organisations in attempting
to provide a faster, cheaper and smarter environment for customers. In
response to these requirements, organisations are examining how their
business processes may be evaluated so as to improve business performance.
This thesis proposes a generic framework to expand the applicability
of various quantitative evaluation techniques to a large class of business
processes. The framework introduces a novel engineering methodology that
defines a modelling formalism to represent business processes that can be
solved for a set of performance and optimisation algorithms. The
methodology allows various types of algorithms used in model-based
business process improvement and optimisation to be plugged into a single
modelling formalism. As part of the framework, a generic modelling
formalism (MWF-wR) is developed to represent business processes so as to
allow quantitative evaluation and to select the parameters for the
associated performance evaluation and optimisation.
The generic framework is designed and implemented as software support
tools written in Java, an object-oriented programming language, combining
three main modules: (i) a business process specification module to define
the components of the business process model, (ii) a stochastic Petri net
module to map the business process model to a stochastic Petri net, and
(iii) an algorithms module to solve the models for various performance
optimisation objectives. Furthermore, a literature survey of different
aspects of business processes, including modelling and analysis
techniques, provides an overview of the current state of research and
highlights gaps in business process modelling and performance analysis.
Finally, experiments are introduced to investigate the validity of the
presented approach.
Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets
A critical system must accomplish its mission despite the presence of security problems. Such systems are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attacks. Systems in general have to be redesigned after a security incident occurs, which can lead to serious consequences, such as the enormous cost of reimplementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as a distinct requirement on what the system must do (that is, a non-functional requirement of the system). Hence, when designing critical systems, it is essential to study the attacks that may occur and to plan how to react to them, in order to keep meeting the system's functional and non-functional requirements. Even when security problems are considered, it is also necessary to take into account the costs incurred in guaranteeing a given level of security in critical systems. In fact, security costs can be a very relevant factor, as they can span several dimensions, such as budget, performance, and dependability. Many of these critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources which can become compromised (that is, they can fail) through the activation of faults and/or errors caused by possible attacks. These systems can be modelled as discrete event systems where resources are shared, also called resource allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PN).
These systems are generally so large that the exact computation of their performance becomes a very complex computational task, owing to the state space explosion problem. As a result, any task that requires an exhaustive exploration of the state space is not computable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, it offers several models, using the Unified Modelling Language (UML) and Petri nets, that help to bring security and fault-tolerance issues to the foreground during the system design phase, thus enabling, for example, the analysis of the trade-off between security and performance. Second, it provides several algorithms to compute performance (also under failure conditions) by computing upper performance bounds, thereby avoiding the state space explosion problem. Finally, it provides algorithms to compute how to compensate for the performance degradation that arises in an unexpected situation in a fault-tolerant system.
A MULTI-FUNCTIONAL PROVENANCE ARCHITECTURE: CHALLENGES AND SOLUTIONS
In service-oriented environments, services are put together in the form of a workflow with the aim of distributed problem solving.
Capturing the execution details of the services' transformations is a significant advantage of using workflows. These execution details, referred to as provenance information, are usually traced automatically and stored in provenance stores. Provenance data contains the data recorded by a workflow engine during a workflow execution. It identifies what data is passed between services, which services are involved, and how results are eventually generated for particular sets of input values.
Provenance information is of great importance and has found its way into areas of computer science such as bioinformatics, databases, and social and sensor networks.
Current exploitation and application of provenance data is very limited, as provenance systems were initially developed for specific applications. Applying learning and knowledge discovery methods to provenance data can therefore provide rich and useful information on workflows and services.
Therefore, in this work, the challenges with workflows and services are studied to discover the possibilities and benefits of providing solutions by using provenance data.
A multifunctional architecture is presented which addresses the workflow and service issues by exploiting provenance data. These challenges include workflow composition, abstract workflow selection, refinement, evaluation, and graph model extraction. The specific contribution of the proposed architecture is its novelty in providing a basis for taking advantage of the previous execution details of services and workflows along with artificial intelligence and knowledge management techniques to resolve the major challenges regarding workflows. The presented architecture is application-independent and could be deployed in any area.
The requirements for such an architecture along with its building components are discussed. Furthermore, the responsibility of the components, related works and the implementation details of the architecture along with each component are presented
(I) A Declarative Framework for ERP Systems; (II) Reactors: A Data-Driven Programming Model for Distributed Applications
To those who can be swayed by argument and those who know they do not have all the answers.
This dissertation is a collection of six adapted research papers pertaining to two areas of research. (I) A Declarative Framework for ERP Systems:
• POETS: Process-Oriented Event-driven Transaction Systems. The paper describes an ontological analysis of a small segment of the enterprise domain, namely the general ledger and accounts receivable. The result is an event-based approach to designing ERP systems and an abstract-level sketch of the architecture.
• Compositional Specification of Commercial Contracts. The paper describes the design, multiple semantics, and use of a domain-specific language (DSL) for modeling commercial contracts.
• SMAWL: A SMAll Workflow Language Based on CCS. The paper show
Intelligent monitoring of business processes using case-based reasoning
The work in this thesis presents an approach towards the effective monitoring of business processes using Case-Based Reasoning (CBR). The rationale behind this research is that business processes constitute a fundamental concept of the modern world, and there is a constantly emerging need for their efficient control. They can be efficiently represented, but not necessarily monitored and diagnosed effectively, via an appropriate platform.
Motivated by this observation, this research investigated the extent to which workflows can be efficiently monitored, diagnosed and explained. It examined how workflows can be effectively represented in terms of CBR, and how similarity measures among them could be established appropriately. It also considered how monitoring results should be explained to users, and what would be an appropriate software architecture to allow monitoring of workflow executions.
Throughout the progress of this research, several sets of experiments were conducted using existing enterprise systems that are coordinated via a predefined workflow business process. Past data produced over several years were used for the conducted experiments. Based on those, the necessary knowledge repositories were built and then used to evaluate the suggested approach towards the effective monitoring and diagnosis of business processes.
The produced results show the extent to which a business process can be monitored and diagnosed effectively. The results also provide hints on possible changes that would maximize the accuracy of the actual monitoring, diagnosis and explanation. Moreover, the presented approach can be generalised and expanded further to enterprise systems whose common characteristics are a possible workflow representation and the presence of uncertainty.
Further work motivated by this thesis could investigate how knowledge acquisition can be transferred across workflow systems and benefit large-scale multidimensional enterprises. Additionally, temporal uncertainty could be investigated further, in an attempt to address it while reasoning. Finally, the provenance of cases and their solutions could be explored further, identifying correlations with the process of reasoning.
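A minimal sketch conveys the CBR retrieval idea underlying such monitoring (the features, weights and matching rule below are hypothetical; the thesis's similarity measures over workflow cases are richer):

```python
# Toy weighted similarity between workflow cases represented as feature
# dictionaries. All feature names and weights are invented for illustration.
def similarity(case_a, case_b, weights):
    """Weighted fraction of features on which the two cases agree, in [0, 1]."""
    total = sum(weights.values())
    score = sum(w for f, w in weights.items() if case_a.get(f) == case_b.get(f))
    return score / total

weights = {"process_type": 2.0, "department": 1.0, "outcome": 1.0}
past = {"process_type": "invoice", "department": "sales", "outcome": "delayed"}
new = {"process_type": "invoice", "department": "hr"}
print(similarity(past, new, weights))  # 0.5: only process_type matches
```

In a CBR monitor, the past case most similar to a running workflow would be retrieved and its recorded outcome reused to diagnose or explain the new execution.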
Efficient Verification of Counter Systems Using Relaxations
Abstract: Counter systems are popular models used to reason about systems in various fields, such as the analysis of concurrent or distributed programs and the discovery and verification of business processes. We study well-established problems on various classes of counter systems. This thesis focusses on three particular systems: Petri nets, which are a type of model for discrete systems with concurrent and sequential events; workflow nets, which form a subclass of Petri nets that is suited for modelling and reasoning about business processes; and continuous one-counter automata, a novel model that combines continuous semantics with one-counter automata. For Petri nets, we focus on reachability and coverability properties. We utilize directed search algorithms, using relaxations of Petri nets as heuristics, to obtain novel semi-decision algorithms for reachability and coverability, and positively evaluate a prototype implementation. For workflow nets, we focus on the problem of soundness, a well-established correctness notion for such nets. We precisely characterize the previously widely open complexity of three variants of soundness. Based on our insights, we develop techniques to verify soundness in practice, based on reachability relaxations of Petri nets. Lastly, we introduce the novel model of continuous one-counter automata. This model is a natural variant of one-counter automata, which allows reasoning in a hybrid manner combining continuous and discrete elements. We characterize the exact complexity of the reachability problem in several variants of the model.
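As a self-contained illustration of the reachability problem on Petri nets (a plain bounded breadth-first search, without the relaxation-based heuristics this thesis develops; the net itself is a made-up two-place example):

```python
from collections import deque

# A Petri net transition is a pair (consumed, produced) of token vectors
# over the places. This toy net has two places.
transitions = [
    ((1, 0), (0, 1)),  # move a token from place 0 to place 1
    ((0, 1), (2, 0)),  # consume one token in place 1, produce two in place 0
]

def reachable(start, target, limit=10_000):
    """BFS over markings; explores at most `limit` distinct markings."""
    seen, queue = {start}, deque([start])
    while queue:
        m = queue.popleft()
        if m == target:
            return True
        for consumed, produced in transitions:
            if all(a >= c for a, c in zip(m, consumed)):  # transition enabled?
                nxt = tuple(a - c + p for a, c, p in zip(m, consumed, produced))
                if nxt not in seen and len(seen) < limit:
                    seen.add(nxt)
                    queue.append(nxt)
    return False  # target not found within the explored bound

print(reachable((1, 0), (2, 0)))  # True: (1,0) -> (0,1) -> (2,0)
```

The `limit` bound is needed because this net's state space is infinite; directed search with relaxation heuristics, as studied in the thesis, aims to reach the target while exploring far fewer markings.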
- …