
    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building more and more complex applications to manage and process large data sets, and to execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Therefore, many efforts have been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by various projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences of state-of-the-art Grid workflow systems, but also identifies the areas that need further research. (Comment: 29 pages, 15 figures)
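
    The workflow composition described above is commonly expressed as a directed acyclic graph (DAG) of tasks linked by data and control dependencies. As a purely illustrative sketch, not taken from any of the surveyed systems and with hypothetical task names, a minimal dependency-driven execution order can be derived as follows:

        # Minimal sketch: a scientific workflow as a task DAG, executed in
        # dependency order. Task names and the trivial "scheduler" are
        # illustrative only; real Grid workflow systems map tasks onto
        # distributed resources.
        from graphlib import TopologicalSorter

        workflow = {                      # task -> tasks it depends on
            "stage_data": [],
            "preprocess": ["stage_data"],
            "simulate":   ["preprocess"],
            "analyse":    ["simulate"],
            "publish":    ["analyse"],
        }

        def run_task(name):
            print(f"running {name}")      # placeholder for remote job submission

        # TopologicalSorter expects {node: predecessors}, which matches `workflow`.
        for task in TopologicalSorter(workflow).static_order():
            run_task(task)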

    Modelling information flow for organisations delivering microsystems technology

    Motivated by the recent growth and applications of microsystems technology (MST), companies within the MST domain are beginning to explore avenues for understanding, maintaining and improving information flow, both within their organisations and to/from customers, with a view to enhancing delivery performance. Delivery, for organisations, is the flow of goods from sellers to buyers, and a classic approach to understanding information flow is the use of modelling techniques. Cont/d

    Distributed Saturation

    The Saturation algorithm for symbolic state-space generation has been a recent breakthrough in the exhaustive verification of complex systems, in particular globally-asynchronous/locally-synchronous systems. The algorithm uses a very compact Multiway Decision Diagram (MDD) encoding for states and the fastest symbolic exploration algorithm to date. The distributed version of Saturation uses the overall memory available on a network of workstations (NOW) to efficiently spread the memory load during the highly irregular exploration. A crucial factor in limiting memory consumption during symbolic state-space generation is the ability to perform garbage collection to free the memory occupied by dead nodes. However, garbage collection over a NOW incurs a nontrivial communication overhead. In addition, operation cache policies become critical when analyzing large-scale systems with the symbolic approach. In this technical report, we develop a garbage collection scheme and several operation cache policies to help solve extremely complex systems. Experiments show that our schemes improve the performance of the original distributed implementation, SmArTNow, in terms of time and memory efficiency.
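
    The dead-node garbage collection and operation caching mentioned above are standard decision-diagram machinery; the report's contribution lies in making them work efficiently across a NOW. As a purely single-machine, illustrative sketch (hypothetical structure, not the SmArTNow implementation), a hash-consed node table with reference counts and an operation cache might look like this:

        # Illustrative sketch only: a hash-consed node table for decision-diagram
        # nodes with reference counts, plus a memoizing operation cache, to show
        # why dead-node garbage collection and cache policies matter.

        class Node:
            __slots__ = ("children", "refcount")
            def __init__(self, children):
                self.children = children   # tuple of child ids or terminals
                self.refcount = 0

        nodes = {}          # node id -> Node
        unique_table = {}   # children tuple -> node id (hash-consing)
        op_cache = {}       # ("union", id_a, id_b) -> result id (memoization)

        def make_node(children):
            """Return the id of an existing node with these children, or create one."""
            if children in unique_table:
                return unique_table[children]
            nid = len(nodes)
            nodes[nid] = Node(children)
            unique_table[children] = nid
            return nid

        def collect_garbage():
            """Drop dead (unreferenced) nodes and any cache entries that mention them."""
            dead = {nid for nid, n in nodes.items() if n.refcount == 0}
            for nid in dead:
                del unique_table[nodes[nid].children]
                del nodes[nid]
            for key in [k for k, v in op_cache.items() if v in dead or set(k[1:]) & dead]:
                del op_cache[key]

        root = make_node(("T", "F"))       # hypothetical node with terminal children
        nodes[root].refcount += 1          # roots stay protected from collection
        collect_garbage()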

    Semantics and Verification of UML Activity Diagrams for Workflow Modelling

    This thesis defines a formal semantics for UML activity diagrams that is suitable for workflow modelling. The semantics allows verification of functional requirements using model checking. Since a workflow specification prescribes how a workflow system behaves, the semantics is defined and motivated in terms of workflow systems. As workflow systems are reactive and coordinate activities, the defined semantics reflects these aspects. In fact, two formal semantics are defined, which are completely different. Both semantics are defined directly in terms of activity diagrams and not by a mapping of activity diagrams to some existing formal notation. The requirements-level semantics, based on the Statemate semantics of statecharts, assumes that workflow systems are infinitely fast w.r.t. their environment and react immediately to input events (this assumption is called the perfect synchrony hypothesis). The implementation-level semantics, based on the UML semantics of statecharts, does not make this assumption. Due to the perfect synchrony hypothesis, the requirements-level semantics is unrealistic, but easy to use for verification. On the other hand, the implementation-level semantics is realistic, but difficult to use for verification. A class of activity diagrams and a class of functional requirements are identified for which the outcome of the verification does not depend upon the particular semantics being used, i.e., both semantics give the same result. For such activity diagrams and such functional requirements, the requirements-level semantics is as realistic as the implementation-level semantics, even though the requirements-level semantics makes the perfect synchrony hypothesis. The requirements-level semantics has been implemented in a verification tool. The tool interfaces with a model checker by translating an activity diagram into input for the model checker according to the requirements-level semantics. The model checker checks the desired functional requirement against the input model. If the model checker returns a counterexample, the tool translates this counterexample back into the activity diagram by highlighting a path corresponding to the counterexample. The tool supports verification of workflow models that have event-driven behaviour, data, real time, and loops. Only model checkers supporting strong fairness model checking turn out to be useful. The feasibility of the approach is demonstrated by using the tool to verify some real-life workflow models.
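
    The verification loop described above, translating the diagram into model-checker input, checking a requirement, and mapping a counterexample back onto the diagram, can be illustrated with a toy explicit-state check. The following sketch is not the thesis's tool or semantics; the activity names and the requirement are hypothetical:

        # Toy illustration of the workflow: explore the reachable paths of an
        # activity diagram and report a path to a forbidden activity as the
        # counterexample to be highlighted on the diagram. Hypothetical example.
        from collections import deque

        edges = {                          # activity diagram as a successor relation
            "start":         ["receive_order"],
            "receive_order": ["check_stock"],
            "check_stock":   ["ship", "reject"],    # decision node
            "ship":          ["invoice"],
            "reject":        ["end"],
            "invoice":       ["end"],
            "end":           [],
        }

        def find_violation(forbidden):
            """Breadth-first search for a path from 'start' to a forbidden activity."""
            queue = deque([["start"]])
            while queue:
                path = queue.popleft()
                if path[-1] == forbidden:
                    return path            # counterexample path
                queue.extend(path + [nxt] for nxt in edges[path[-1]] if nxt not in path)
            return None                    # requirement holds: activity unreachable

        # Hypothetical requirement: "ship" must be unreachable in this configuration.
        print(find_violation("ship"))      # ['start', 'receive_order', 'check_stock', 'ship']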

    Methodology for automated Petri Net model generation to support Reliability Modelling

    As the complexity of engineering systems and processes increases, determining their optimal performance also becomes increasingly complex. There are various reliability methods available to model performance, but generating the models can become a significant task that is cumbersome, error-prone and tedious. Hence, over the years, work has been undertaken on automatically generating reliability models in order to detect the most critical components and design errors at an early stage, supporting alternative designs. Earlier work lacks full automation, resulting in semi-automated methods: they require user intervention to import system information into the algorithm, focus on specific domains, and cannot accurately model systems or processes with control loops and dynamic features. This thesis develops a novel method that can generate reliability models for complex systems and processes, based on Petri Net models. The process has been fully automated with software that extracts the information required for the model from a topology diagram describing the system or process considered, and generates the corresponding mathematical and graphical representations of the Petri Net model. Such topology diagrams are used in industrial sectors ranging from aerospace and automotive engineering to finance, defence, government, entertainment and telecommunications. Complex real-life scenarios are studied to demonstrate the application of the proposed method, followed by the verification, validation and simulation of the developed Petri Net models. Thus, the proposed method is seen to be a powerful tool to automatically obtain the PN modelling formalism from a topology diagram, commonly used in industry, by:
    - Handling and efficiently modelling systems and processes with large numbers of components and activities respectively, as well as dependent events and control loops.
    - Providing generic domain applicability.
    - Providing software independence, by generating models readily understandable by the user without requiring further manipulation by any industrial software.
    Finally, the method documented in this thesis enables engineers to conduct reliability and performance analysis in a timely manner, ensuring that the results feed into the design process.
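
    The generated models follow the standard Petri Net formalism: places hold tokens, and a transition fires when all of its input places are marked, consuming and producing tokens. A minimal, generic sketch of that firing rule (illustrative only; the place and transition names are hypothetical and this is not the thesis's generation algorithm) is shown below:

        # Minimal Petri Net sketch: places hold tokens; a transition fires when all
        # of its input places are marked, consuming one token from each input place
        # and producing one in each output place. Names are hypothetical.

        marking = {"component_ok": 1, "component_failed": 0, "under_repair": 0}

        transitions = {
            # name: (input places, output places)
            "fail":    (["component_ok"], ["component_failed"]),
            "repair":  (["component_failed"], ["under_repair"]),
            "restore": (["under_repair"], ["component_ok"]),
        }

        def enabled(name):
            inputs, _ = transitions[name]
            return all(marking[p] >= 1 for p in inputs)

        def fire(name):
            inputs, outputs = transitions[name]
            assert enabled(name), f"{name} is not enabled"
            for p in inputs:
                marking[p] -= 1
            for p in outputs:
                marking[p] += 1

        fire("fail")       # component_ok -> component_failed
        fire("repair")     # component_failed -> under_repair
        print(marking)     # {'component_ok': 0, 'component_failed': 0, 'under_repair': 1}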

    A Design Process for Process-based Knowledge Management Systems

    In order to gain sustainable competitive advantage in today’s knowledge economy, organizations are looking beyond routine transactional workflow processes to support knowledge-intensive processes. Traditional business process management systems are effective in providing coordination support, but are not geared towards also providing relevant knowledge support. In addition, knowledge management systems are used in an ad hoc manner, without explicitly linking them to the underlying organizational processes. Process-based knowledge management (PKM) systems have emerged as a potential solution to support knowledge-intensive processes. However, design guidelines for developing PKM systems are minimal. This paper highlights this research problem, identifies kernel theories governing the design and development of PKM systems, and synthesizes various kernel theories to propose a comprehensive design theory for PKM systems. Feasibility and a comparative evaluation of the proposed design theory are also discussed.

    Performance requirements verification during software systems development

    Requirements verification refers to the assurance that the implemented system reflects the specified requirements. Requirements verification is a process that continues throughout the life cycle of the software system. When the software crisis hit in the 1960s, a great deal of attention was placed on the verification of functional requirements, which were considered to be of crucial importance. Over the last decade, researchers have addressed the importance of integrating non-functional requirements into the verification process. An important non-functional requirement for software is performance. Performance requirements verification is known as Software Performance Evaluation. This thesis looks at the performance evaluation of software systems. The performance evaluation of software systems is a hugely valuable task, especially in the early stages of a software project's development. Many methods for integrating performance analysis into the software development process have been proposed. These methodologies work by transforming the architectural models used in software engineering into performance models, which can be analysed to obtain the expected performance characteristics of the projected system. This thesis aims to bridge the knowledge gap between the performance and software engineering domains by introducing semi-automated transformation methodologies. These are designed to be generic so that they can be integrated into any software engineering development process. The goal of these methodologies is to provide performance-related design guidance during system development. This thesis introduces two model transformation methodologies: the improved state marking methodology and the UML-EQN methodology. It also introduces the UML-JMT tool, which was built to realise the UML-EQN methodology. With the help of the automatic design-model-to-performance-model algorithms introduced in the UML-EQN methodology, a software engineer with basic knowledge of the performance modelling paradigm can conduct a performance study on a software system design. This was demonstrated in a qualitative study in which the methodology and the tool deploying it were tested by software engineers with varying backgrounds and levels of experience, from different sectors of the software development industry. The study results showed acceptance of this methodology and the UML-JMT tool. As performance verification is part of any software engineering methodology, frameworks are needed that deploy performance requirements validation in the context of software engineering. The agile development paradigm was the result of changes in the overall environment of the IT and business worlds. These techniques are based on iterative development, where requirements, designs and developed programs evolve continually. At present, the majority of the literature discussing the role of requirements engineering in agile development processes seems to indicate that non-functional requirements verification is uncharted territory. CPASA (Continuous Performance Assessment of Software Architecture) was designed to work in software projects where performance can be affected by changes in the requirements, and it matches the main practices of agile modelling and development. The UML-JMT tool was designed to deploy the CPASA performance evaluation tests.
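
    The design-to-performance-model transformations discussed above ultimately target queueing models, from which standard formulas yield expected utilisation and response times. As an illustrative sketch only (a single M/M/1 queue with hypothetical numbers, not the UML-EQN methodology or the UML-JMT tool):

        # Illustrative M/M/1 calculation of the kind a design-to-performance-model
        # transformation ultimately feeds: given an arrival rate and a service
        # demand for one software resource, estimate utilisation, response time
        # and queue length. Hypothetical numbers; not the UML-EQN/UML-JMT tooling.

        arrival_rate = 8.0      # requests per second offered to the component
        service_time = 0.1      # seconds of service demand per request

        utilisation = arrival_rate * service_time            # U = lambda * S
        assert utilisation < 1, "unstable: demand exceeds capacity"
        response_time = service_time / (1 - utilisation)      # R = S / (1 - U)
        queue_length = arrival_rate * response_time           # N = lambda * R (Little's law)

        print(f"U = {utilisation:.2f}, R = {response_time:.3f} s, N = {queue_length:.2f}")

    A full extended queueing network composes many such stations; the sketch only shows the kind of per-station quantity such a performance model yields.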

    Privacy Meets Explainability: A Comprehensive Impact Benchmark

    Since the mid-2010s, the era of Deep Learning (DL) has continued to this day, bringing forth new superlatives and innovations each year. Nevertheless, the speed with which these innovations translate into real applications lags behind this fast pace. Safety-critical applications, in particular, are subject to strict regulatory and ethical requirements which need to be taken care of and are still active areas of debate. eXplainable AI (XAI) and privacy-preserving machine learning (PPML) are both crucial research fields, aiming at mitigating some of the drawbacks of prevailing data-hungry black-box models in DL. Despite brisk research activity in the respective fields, no attention has yet been paid to their interaction. This work is the first to investigate the impact of private learning techniques on generated explanations for DL-based models. In an extensive experimental analysis covering various image and time series datasets from multiple domains, as well as varying privacy techniques, XAI methods, and model architectures, the effects of private training on generated explanations are studied. The findings suggest non-negligible changes in explanations through the introduction of privacy. Apart from reporting individual effects of PPML on XAI, the paper gives clear recommendations for the choice of techniques in real applications. By unveiling the interdependencies of these pivotal technologies, this work is a first step towards overcoming the remaining hurdles for practically applicable AI in safety-critical domains. (Comment: Under submission)
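
    The study's core measurement is how much a model's explanations change once the model is trained privately. As a purely illustrative sketch (synthetic attribution values, not the paper's benchmark, models, or metrics), one simple way to quantify such a change is to compare attribution maps by rank correlation and top-k overlap:

        # Illustrative comparison of two attribution maps (e.g. from a non-private
        # model and a differentially private one) via rank correlation and top-k
        # agreement. The attribution values are synthetic, not the paper's data.
        import numpy as np
        from scipy.stats import spearmanr

        rng = np.random.default_rng(0)
        attr_baseline = rng.normal(size=784)                             # e.g. per-pixel saliency
        attr_private = attr_baseline + rng.normal(scale=0.5, size=784)   # perturbed explanation

        rho, _ = spearmanr(attr_baseline, attr_private)                  # rank agreement

        k = 50                                                           # most important features
        top_base = set(np.argsort(-np.abs(attr_baseline))[:k])
        top_priv = set(np.argsort(-np.abs(attr_private))[:k])
        topk_overlap = len(top_base & top_priv) / k

        print(f"Spearman rho = {rho:.3f}, top-{k} overlap = {topk_overlap:.2f}")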