KWM: Knowledge-based Workflow Model for agile organization
The workflow management system (WFMS) in an agile organization should be highly adaptable to frequent organizational changes. To increase the adaptability of contemporary WFMSs, the mechanisms for managing changes in the organizational structure and in business rules need to be reinforced. In this paper, a knowledge-based approach to workflow modeling is proposed, in which a workflow is defined as a set of business rules. Knowledge of the organizational structure and of workflow specifics, such as role/actor mappings and complex routing rules, can be explicitly modeled in KWM (Knowledge-based Workflow Model).
Using a knowledge representation scheme and a dependency management facility, a change propagation mechanism is provided to adapt to frequent changes in the organizational structure, business rules, and procedures.
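The idea of separating role/actor mappings from routing rules can be sketched as follows. This is an illustrative sketch only, not the paper's actual model; all names (`ROLE_ACTORS`, `ROUTING_RULES`, `route_task`) are hypothetical.

```python
# Hypothetical sketch of a rule-based workflow model in the spirit of KWM.
# Role/actor mappings live in a separate table, so an organizational change
# only updates this mapping, never the routing rules themselves.
ROLE_ACTORS = {
    "reviewer": ["alice", "bob"],
    "approver": ["carol"],
}

# The workflow itself is a set of business rules: (condition, role) pairs,
# evaluated in order; the first matching rule decides the next step.
ROUTING_RULES = [
    (lambda task: task["amount"] > 1000, "approver"),
    (lambda task: True, "reviewer"),  # default rule
]

def route_task(task):
    """Return the actors responsible for the next step of `task`."""
    for condition, role in ROUTING_RULES:
        if condition(task):
            return ROLE_ACTORS[role]
    raise ValueError("no applicable routing rule")
```

Under this separation, hiring a new approver or changing a threshold touches exactly one table, which is the kind of change propagation the abstract describes.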
A Semantic Framework for Declarative and Procedural Knowledge
In any scientific domain, the full set of data and programs has reached an "-ome" status, i.e. it has grown massively. The original article on the Semantic Web describes the evolution of a Web of actionable information, i.e. information derived from data through a semantic theory for interpreting the symbols. In a Semantic Web, methodologies are studied for describing, managing and analyzing both resources (domain knowledge) and applications (operational knowledge), without any restriction on what and where they are respectively suitable and available in the Web, as well as for realizing automatic and semantic-driven workflows of Web applications elaborating Web resources.
This thesis attempts to provide a synthesis among Semantic Web technologies, ontology research, and knowledge and workflow management. Such a synthesis is represented by Resourceome, a Web-based framework consisting of two components that strictly interact with each other: an ontology-based and domain-independent knowledge management system (Resourceome KMS), relying on a knowledge model in which resource and operational knowledge are contextualized in any domain, and a semantic-driven workflow editor, manager and agent-based execution system (Resourceome WMS).
The Resourceome KMS and the Resourceome WMS are exploited to realize semantic-driven formulations of workflows, in which activities are semantically linked to every involved resource. On the whole, by combining domain ontologies and workflow techniques, Resourceome provides a flexible organization of domain and operational knowledge, a powerful engine for semantic-driven workflow composition, and a distributed, automatic and transparent environment for workflow execution.
Optimizing performance of workflow executions under authorization control
Business processes or workflows are often used to model enterprise or scientific applications. Automating workflow executions on computing resources has received considerable attention. However, many workflow scenarios still involve human activities and consist of a mixture of human tasks and computing tasks.
Human involvement introduces security and authorization concerns, requiring restrictions on who is allowed to perform which tasks at what time. Role-Based Access Control (RBAC) is a popular authorization mechanism. In RBAC, authorization concepts such as roles and permissions are defined, and various authorization constraints are supported, including separation of duty, temporal constraints, etc. Under RBAC, users are assigned to certain roles, while the roles are associated with prescribed permissions.
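The RBAC concepts described above (user-role assignment, role-permission association, and a separation-of-duty constraint) can be sketched in a few lines. This is a minimal illustration under assumed names (`USER_ROLES`, `ROLE_PERMS`, `may`, `violates_sod`), not the thesis's model.

```python
# Minimal RBAC sketch: users are assigned roles, roles carry permissions.
USER_ROLES = {"alice": {"clerk"}, "bob": {"clerk", "auditor"}}
ROLE_PERMS = {"clerk": {"submit_invoice"}, "auditor": {"approve_invoice"}}

# Static separation of duty: no user may hold both roles of a conflicting pair.
CONFLICTING = [("clerk", "auditor")]

def may(user, permission):
    """A user holds a permission iff one of their roles carries it."""
    return any(permission in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

def violates_sod(user):
    """Check whether a user's role set breaks a separation-of-duty pair."""
    roles = USER_ROLES.get(user, set())
    return any(a in roles and b in roles for a, b in CONFLICTING)
```

Here `bob` violates separation of duty by holding both `clerk` and `auditor`, which is exactly the kind of constraint the thesis says may delay task execution.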
When we assess resource capacities, or evaluate the performance of workflow executions on supporting platforms, it is often assumed that when a task is allocated to a resource, the resource will accept the task and start the execution once a processor becomes available. However, when the authorization policies are taken into account, this assumption may not hold and the situation becomes more complex. For example, when a task arrives, a valid and activated role has to be assigned to the task before the task can start execution. The deployed authorization constraints may delay the workflow execution due to the roles' availability, or other restrictions on the role assignments, which consequently has a negative impact on application performance.
When authorization constraints are present to restrict workflow executions, new research issues arise that have not yet been studied in conventional workflow management. This thesis aims to investigate these new research issues.
First, it is important to know whether a feasible authorization solution can be found to enable the execution of all tasks in a workflow, i.e., to check the feasibility of the deployed authorization constraints. This thesis studies the feasibility-checking issue and models the feasibility-checking problem as a constraint satisfaction problem.
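A constraint-satisfaction formulation of feasibility checking can be sketched as a backtracking search: find one permitted role per task such that every deployed constraint holds. This brute-force sketch is illustrative only, under assumed names (`feasible`, `allowed`); the thesis's actual formulation and solver are not reproduced here.

```python
# Feasibility checking as a constraint satisfaction problem (sketch):
# variables = tasks, domains = roles permitted for each task,
# constraints = predicates over a complete role assignment.
def feasible(tasks, allowed, constraints):
    """Return True iff some task->role assignment satisfies all constraints.

    allowed[t]  -- the roles authorized to perform task t
    constraints -- callables taking the assignment dict, returning bool
    """
    def backtrack(assignment, remaining):
        if not remaining:
            return all(c(assignment) for c in constraints)
        task, rest = remaining[0], remaining[1:]
        for role in allowed[task]:
            assignment[task] = role
            if backtrack(assignment, rest):
                return True
            del assignment[task]
        return False

    return backtrack({}, list(tasks))
```

For example, with a separation-of-duty constraint requiring two tasks to be performed by different roles, feasibility fails exactly when both tasks admit only the same single role.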
Second, it is useful to know when the performance of workflow executions will not be affected by the given authorization constraints. This thesis proposes methods to determine the time durations during which the given authorization constraints have no impact.
Third, when the authorization constraints do have a performance impact, how can we quantitatively analyse and determine it? When there are multiple choices for assigning roles to tasks, will different choices lead to different performance impacts? If so, can we find an optimal way to conduct the task-role assignments so that the performance impact is minimized? This thesis proposes a method to analyse the delay caused by the authorization constraints if the workflow arrives beyond the no-impact time duration calculated above. Through the analysis of the delay, we find that the authorization method, i.e., the method used to select the roles assigned to the tasks, affects the length of the delay caused by the authorization constraints. Based on this finding, we propose an optimal authorization method, called the Global Authorization Aware (GAA) method.
Fourth, a key reason why authorization constraints may impact performance is that the authorization control directs tasks to particular roles. How, then, can we determine the level of workload directed to each role, given a set of authorization constraints? This thesis conducts a theoretical analysis of how the authorization constraints direct the workload to the roles, and proposes methods to calculate the arrival rate of the requests directed to each role under role, temporal and cardinality constraints.
Finally, the amount of resources allocated to support each individual role may impact the execution performance of the workflows. It is therefore desirable to develop strategies for determining the adequate amount of resources when authorization control is present in the system. This thesis presents methods to allocate the appropriate quantity of resources, including both human resources and computing resources, taking the different features of each into account. For human resources, the objective is to maximize performance subject to the budget for hiring them, while for computing resources the strategy aims to allocate an adequate amount to meet the QoS requirements.
Performance Analysis and Resource Optimisation of Critical Systems Modelled by Petri Nets
A critical system must fulfil its mission despite the presence of security problems. Systems of this kind are usually deployed in heterogeneous environments, where they may be subject to intrusion attempts, theft of confidential information, or other kinds of attacks. Systems, in general, have to be redesigned after a security incident occurs, which can lead to severe consequences, such as the huge cost of reimplementing or reprogramming the whole system, as well as potential economic losses. Security must therefore be conceived as an integral part of system development and as a singular need of what the system must accomplish (that is, a non-functional requirement of the system). Thus, when designing critical systems, it is essential to study the attacks that may occur and to plan how to react to them, in order to keep fulfilling the system's functional and non-functional requirements. Even when security problems are considered, it is also necessary to account for the costs incurred in guaranteeing a given security level in critical systems. In fact, security costs can be a very relevant factor, since they may span several dimensions, such as budget, performance and reliability. Many of these critical systems that incorporate fault-tolerance techniques (FT systems) to deal with security issues are complex systems that use resources which can be compromised (i.e., can fail) through the activation of faults and/or errors caused by possible attacks. Such systems can be modelled as discrete event systems in which resources are shared, also called resource allocation systems. This thesis focuses on FT systems with shared resources modelled by Petri nets (PNs).
These systems are generally so large that the exact computation of their performance becomes a very complex computational task, due to the state-space explosion problem. As a consequence, any task requiring an exhaustive exploration of the state space is uncomputable (within a reasonable time) for large systems. The main contributions of this thesis are threefold. First, we provide different models, using the Unified Modelling Language (UML) and Petri nets, that help bring security and fault-tolerance issues to the foreground during the system design phase, thus enabling, for example, the analysis of the trade-off between security and performance. Second, we provide several algorithms to compute performance (also under failure conditions) by calculating upper performance bounds, thereby avoiding the state-space explosion problem. Finally, we provide algorithms to compute how to compensate for the performance degradation that arises when a fault-tolerant system faces an unexpected situation.
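The Petri net semantics underlying these models (tokens in places, transitions that consume and produce tokens, and places that encode shared resources) can be sketched with a minimal interpreter. This is a generic illustration of Petri net firing, not the thesis's models or bound-computation algorithms; all names are hypothetical.

```python
# Minimal Petri net firing sketch: a marking maps places to token counts;
# a transition has pre-arcs (tokens consumed) and post-arcs (tokens produced).
# A shared resource is modelled as a place holding the free-resource tokens.
def enabled(marking, pre):
    """A transition is enabled iff every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Return the new marking after firing a transition with arcs pre/post."""
    if not enabled(marking, pre):
        raise ValueError("transition not enabled")
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m
```

For instance, a task acquiring a single shared CPU consumes one token from a `cpu` place; while that token is gone, no other task's acquire transition is enabled, which is precisely the resource contention the thesis analyses.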
Workflow resource pattern modelling and visualization
Workflow patterns have been recognized as the theoretical basis for modelling recurring problems in workflow systems. One form of workflow patterns, known as resource patterns, characterises the behaviour of resources in workflow systems. Despite the fact that many resource patterns have been discovered, they are still excluded from many workflow system implementations. One of the reasons could be the obscurity of the behaviour of, and the interaction between, resources and a workflow management system. Thus, we provide a modelling and visualization approach for resource patterns, enabling a resource behaviour modeller to intuitively see the specific resource patterns involved in the lifecycle of a work item. We believe this research can be extended to benefit not only workflow modelling, but also other applications, such as model validation, human resource behaviour modelling, and workflow model visualization.
Fine-Grained Workflow Interoperability in Life Sciences
Recent decades have witnessed an exponential increase of available biological data due to advances in key technologies for life sciences. Specialized computing resources and scripting skills are now required to deliver results in a timely fashion: desktop computers or monolithic approaches can no longer keep pace with either the growth of available biological data or the complexity of analysis techniques.
Workflows offer an accessible way to counter this trend by facilitating parallelization and distribution of computations. Given their structured and repeatable nature, workflows also provide a transparent process to satisfy the strict reproducibility standards required by the scientific method.
One of the goals of our work is to assist researchers in accessing computing resources without the need for programming or scripting skills. To this effect, we created a toolset able to integrate any command line tool into workflow systems. Out of the box, our toolset supports two widely-used workflow systems, but our modular design allows for seamless additions in order to support further workflow engines.
Recognizing the importance of early and robust workflow design, we also extended a well-established, desktop-based analytics platform that contains more than two thousand tasks (each being a building block for a workflow), allows easy development of new tasks and is able to integrate external command line tools. We developed a converter plug-in that offers a user-friendly mechanism to execute workflows on distributed high-performance computing resources, an exercise that would otherwise require technical skills typically not associated with the average life scientist's profile.
Our converter extension generates virtually identical versions of the same workflows, which can then be executed on more capable computing resources. That is, not only did we leverage the capacity of distributed high-performance resources and the conveniences of a workflow engine designed for personal computers, but we also circumvented the computing limitations of personal computers and the steep learning curve associated with creating workflows for distributed environments. Our converter extension has immediate applications for researchers, and we showcase our results by means of three use cases relevant for life scientists: structural bioinformatics, immunoinformatics and metabolomics.
Scalable bioinformatics via workflow conversion
Background: Reproducibility is one of the tenets of the scientific method. Scientific experiments often comprise complex data flows, selection of adequate parameters, and analysis and visualization of intermediate and end results. Breaking down the complexity of such experiments into the joint collaboration of small, repeatable, well-defined tasks, each with well-defined inputs, parameters, and outputs, offers immediate benefits such as identifying bottlenecks and pinpointing sections that could benefit from parallelization. Workflows rest upon the notion of splitting complex work into the joint effort of several manageable tasks. There are several engines that give users the ability to design and execute workflows. Each engine was created to address certain problems of a specific community, therefore each one has its advantages and shortcomings. Furthermore, not all features of all workflow engines are royalty-free, an aspect that could potentially drive away members of the scientific community.
Results: We have developed a set of tools that enables the scientific community to benefit from workflow interoperability. We developed a platform-free structured representation of the parameters, inputs, and outputs of command-line tools in so-called Common Tool Descriptor documents.
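The idea of a platform-free tool descriptor, a structured record of a tool's parameters, inputs, and outputs from which a concrete command line can be rendered, can be sketched as follows. The real Common Tool Descriptor format is not reproduced here; the field names and rendering convention below are illustrative assumptions only.

```python
# Hypothetical sketch of a structured command-line tool descriptor.
# The real Common Tool Descriptor schema differs; this only illustrates
# the concept of separating a tool's interface from its invocation.
from dataclasses import dataclass, field

@dataclass
class Param:
    name: str
    type: str          # e.g. "int", "string", "input-file", "output-file"
    default: str = ""

@dataclass
class ToolDescriptor:
    name: str
    executable: str
    params: list = field(default_factory=list)

    def to_command(self, values):
        """Render a concrete command line from parameter values,
        falling back to each parameter's default (assumed convention)."""
        args = []
        for p in self.params:
            args += [f"--{p.name}", str(values.get(p.name, p.default))]
        return [self.executable] + args
```

Because the descriptor is declarative, each workflow engine can generate its own node or job wrapper from the same document, which is what makes the representation platform-free.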
We have also overcome the shortcomings of, and combined the features of, two royalty-free workflow engines with substantial user communities: the Konstanz Information Miner, an engine we see as a formidable workflow editor, and the Grid and User Support Environment, a web-based framework able to interact with several high-performance computing resources. We have thus created a free and highly accessible way to design workflows on a desktop computer and execute them on high-performance computing resources.
Conclusions: Our work will not only reduce the time spent on designing scientific workflows, but also make executing workflows on remote high-performance computing resources more accessible to technically inexperienced users. We strongly believe that our efforts not only decrease the turnaround time to obtain scientific results but also have a positive impact on reproducibility, thus elevating the quality of the obtained scientific results.