    A Taxonomy of Workflow Management Systems for Grid Computing

    With the advent of Grid and application technologies, scientists and engineers are building increasingly complex applications to manage and process large data sets and execute scientific experiments on distributed resources. Such application scenarios require means for composing and executing complex workflows. Many efforts have therefore been made towards the development of workflow management systems for Grid computing. In this paper, we propose a taxonomy that characterizes and classifies various approaches for building and executing workflows on Grids. We also survey several representative Grid workflow systems developed by projects world-wide to demonstrate the comprehensiveness of the taxonomy. The taxonomy not only highlights the design and engineering similarities and differences among state-of-the-art Grid workflow systems, but also identifies areas that need further research. (Comment: 29 pages, 15 figures)
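
    Such a taxonomy lends itself to a machine-readable encoding for comparing systems. The Python sketch below illustrates the general idea with a few simplified dimensions (workflow structure, scheduling architecture, fault tolerance); the dimension names and the example systems are invented for illustration and are not the paper's actual categories.

        from dataclasses import dataclass
        from enum import Enum

        # Illustrative taxonomy dimensions; simplified, not the paper's exact scheme.
        class Structure(Enum):
            DAG = "directed acyclic graph"
            NON_DAG = "non-DAG (loops, conditionals)"

        class Scheduling(Enum):
            CENTRALIZED = "centralized"
            HIERARCHICAL = "hierarchical"
            DECENTRALIZED = "decentralized"

        @dataclass
        class WorkflowSystem:
            name: str
            structure: Structure
            scheduling: Scheduling
            fault_tolerant: bool

        # Invented example entries, standing in for surveyed systems.
        SYSTEMS = [
            WorkflowSystem("SystemA", Structure.DAG, Scheduling.CENTRALIZED, True),
            WorkflowSystem("SystemB", Structure.NON_DAG, Scheduling.DECENTRALIZED, False),
        ]

        def classify(systems, **criteria):
            """Return the systems matching every given taxonomy criterion."""
            return [s for s in systems
                    if all(getattr(s, key) == value for key, value in criteria.items())]

        print([s.name for s in classify(SYSTEMS, structure=Structure.DAG)])  # ['SystemA']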

    Plasma Edge Kinetic-MHD Modeling in Tokamaks Using Kepler Workflow for Code Coupling, Data Management and Visualization

    A new predictive computer simulation tool targeting the development of the H-mode pedestal at the plasma edge in tokamaks and the triggering and dynamics of edge localized modes (ELMs) is presented in this report. This tool brings together, in a coordinated and effective manner, several first-principles physics simulation codes, stability analysis packages, and data processing and visualization tools. A Kepler workflow is used to carry out an edge plasma simulation that loosely couples the kinetic code, XGC0, with an ideal MHD linear stability analysis code, ELITE, and an extended MHD initial value code such as M3D or NIMROD. XGC0 includes the neoclassical ion-electron-neutral dynamics needed to simulate pedestal growth near the separatrix. The Kepler workflow processes the XGC0 simulation results into simple images that can be selected and displayed via the Dashboard, a monitoring tool implemented in AJAX that allows the scientist to track computational resources, examine running and archived jobs, and view key physics data, all within a standard Web browser. The XGC0 simulation is monitored for the conditions needed to trigger an ELM crash by periodically assessing the edge plasma pressure and current density profiles with the ELITE code. If an ELM crash is triggered, the Kepler workflow launches the M3D code on a moderate-size Opteron cluster to simulate the nonlinear ELM crash and to compute the relaxation of plasma profiles after the crash. This process is monitored through periodic outputs of plasma fluid quantities that are automatically visualized with AVS/Express and may be displayed on the Dashboard. Finally, the Kepler workflow archives all data outputs and processed images using HPSS, along with provenance information about the software and hardware used to create the simulation. The complete process of preparing, executing, and monitoring a coupled-code simulation of the edge pressure pedestal buildup and the ELM cycle using the Kepler scientific workflow system is described in this paper.
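
    The control logic of this coupled-code cycle can be summarized in a short sketch. The Python below mirrors only the orchestration described above; every function is a hypothetical placeholder, not an actual Kepler actor or an interface of XGC0, ELITE, or M3D.

        import random
        from dataclasses import dataclass

        # Minimal sketch of the coupled-code control loop; every function here is a
        # hypothetical stand-in, not a real XGC0/ELITE/M3D or Kepler interface.

        @dataclass
        class EdgeProfiles:
            pressure: list
            current_density: list

        def run_xgc0_step(step: int) -> EdgeProfiles:
            # Stand-in for one XGC0 kinetic step of pedestal growth.
            return EdgeProfiles([0.1 * step], [0.05 * step])

        def elite_is_unstable(profiles: EdgeProfiles) -> bool:
            # Stand-in for ELITE's ideal-MHD linear stability assessment of the
            # edge pressure and current density profiles.
            return random.random() < 0.1

        def launch_m3d(profiles: EdgeProfiles) -> EdgeProfiles:
            # Stand-in for the nonlinear M3D crash run; relaxes the profiles.
            return EdgeProfiles([0.5 * p for p in profiles.pressure],
                                [0.5 * j for j in profiles.current_density])

        def run_elm_cycle(max_steps: int = 100, check_interval: int = 10) -> None:
            for step in range(1, max_steps + 1):
                profiles = run_xgc0_step(step)       # kinetic pedestal growth
                if step % check_interval == 0 and elite_is_unstable(profiles):
                    profiles = launch_m3d(profiles)  # simulate the ELM crash
                    # ...visualization (AVS/Express) and archiving (HPSS, with
                    # provenance) would be triggered here...

        run_elm_cycle()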

    Collaborative Systems – Finite State Machines

    In this paper, finite state machines are defined and formalized. Collaborative banking systems are presented, and their correspondence with finite state machines is established. The paper highlights the role of finite state machines in complexity analysis and models operations on very large virtual databases as finite state machines. It builds the state diagram and presents the command and document transitions between the states of the collaborative system. The paper analyzes data sets from the Collaborative Multicash Servicedesk application and performs a combined analysis to derive statistics, obtaining indicators such as the number of requests per category and the load degree of an agent in the collaborative system. Keywords: Collaborative System, Finite State Machine, Inputs, States, Outputs
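
    As a minimal illustration of the formalization, a finite state machine with inputs, states, and outputs (a Mealy-style machine) fits in a few lines of Python. The ticket states and transitions below are invented for illustration; they are not the actual state diagram of the Collaborative Multicash Servicedesk application.

        # Illustrative Mealy-style machine: (state, input) -> (next state, output).
        # States and transitions are invented, not the paper's servicedesk diagram.
        TRANSITIONS = {
            ("open",        "assign"):  ("in_progress", "ticket assigned to agent"),
            ("in_progress", "resolve"): ("resolved",    "resolution sent to client"),
            ("resolved",    "confirm"): ("closed",      "ticket archived"),
            ("resolved",    "reject"):  ("in_progress", "ticket reopened"),
        }

        def run(machine, start, inputs):
            """Feed the input sequence through the machine, collecting outputs."""
            state, outputs = start, []
            for symbol in inputs:
                state, out = machine[(state, symbol)]
                outputs.append(out)
            return state, outputs

        final, outs = run(TRANSITIONS, "open", ["assign", "resolve", "confirm"])
        print(final, outs)  # closed ['ticket assigned to agent', ...]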

    DSL-based Interoperability and Integration in the Smart Manufacturing Digital Thread

    In the Industry 4.0 ecosystem, a Digital Thread connects the data and processes for smarter manufacturing. It provides an end-to-end integration of the various digital entities, fostering interoperability, with the aim of designing and delivering complex, heterogeneous interconnected systems. We develop a service-oriented, domain-specific Digital Thread platform in a Smart Manufacturing research and prototyping context. We address the principles, architecture, and individual aspects of a growing Digital Thread platform. It conforms to the best practices of coordination languages and of the integration and interoperability of external services from various platforms, and provides orchestration in a formal-methods-based, low-code, graphical model-driven fashion. We chose the Cinco products DIME and Pyrus as the underlying IT platforms for our Digital Thread solution to serve the needs of the applications addressed: manufacturing analytics and predictive maintenance are core capabilities for the success of smart manufacturing operations. In this regard, we extend the capabilities of these two platforms in the vertical domains of data persistence, IoT connectivity, and analytics to support the basic operations of smart manufacturing. External native DSLs provide the data and capability integrations through families of SIBs (Service-Independent Building blocks). The small examples constitute blueprints for the methodology, addressing the knowledge, terminology, and concerns of domain stakeholders. Over time, we expect reuse to increase, reducing the new integration and development effort to a progressively smaller portion of the models and code needed, at least for the most standard applications.
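
    To give an idea of what such an integration looks like, the sketch below shows the general shape of a SIB-style wrapper: an external capability exposed behind a typed, reusable interface that a graphical model can invoke. This is an assumption-laden illustration in plain Python, not the actual SIB API of DIME or Pyrus, and the sensor-reading service is invented.

        from dataclasses import dataclass
        from typing import Any, Callable, Dict

        # Illustrative sketch of a SIB-style wrapper; not the real DIME/Pyrus API.

        @dataclass
        class SIB:
            """A typed building block: named inputs, an executable body, and
            named outgoing branches a graphical model can wire together."""
            name: str
            inputs: Dict[str, type]
            execute: Callable[..., Any]
            branches: tuple = ("success", "failure")

        def read_sensor(machine_id: str) -> float:
            # Stand-in for an IoT-connectivity call behind the native DSL.
            return 42.0

        read_sensor_sib = SIB(
            name="ReadSensor",
            inputs={"machine_id": str},
            execute=read_sensor,
        )

        # A model-driven runtime would chain SIB branches graphically;
        # here we simply invoke one block directly.
        value = read_sensor_sib.execute("press-07")
        print(f"{read_sensor_sib.name} -> {value}")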

    Multi-paradigm modelling for cyber–physical systems: a descriptive framework

    The complexity of cyber–physical systems (CPS) is commonly addressed through complex workflows, involving models in a plethora of different formalisms, each with its own methods, techniques, and tools. Some workflow patterns, combined with particular types of formalisms and operations on models in these formalisms, are used successfully in engineering practice. To identify and reuse them, we refer to these combinations of workflow and formalism patterns as modelling paradigms. This paper proposes a unifying (Descriptive) Framework to describe these paradigms, as well as their combinations. This work is set in the context of Multi-Paradigm Modelling (MPM), which is based on the principle of modelling every part and aspect of a system explicitly, at the most appropriate level(s) of abstraction, using the most appropriate modelling formalism(s) and workflows. The purpose of the Descriptive Framework presented in this paper is to serve as a basis for reasoning about these formalisms, workflows, and their combinations. One crucial part of the framework is the ability to capture the structural essence of a paradigm through the concept of a paradigmatic structure. This is illustrated informally by means of two example paradigms commonly used in CPS: Discrete Event Dynamic Systems and Synchronous Data Flow. The presented framework also identifies the need to establish whether a paradigm candidate follows, or qualifies as, a (given) paradigm. To illustrate the ability of the framework to support combining paradigms, the paper shows examples of both workflow and formalism combinations. The presented framework is intended as a basis for the characterisation and classification of paradigms, as a starting point for a rigorous formalisation of the framework (allowing formal analyses), and as a foundation for MPM tool development.
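
    Synchronous Data Flow, one of the two example paradigms, has a particularly crisp structural essence: each actor produces and consumes fixed token rates per firing, so a repetition vector and a static schedule follow from the balance equations. A minimal sketch for an invented two-actor graph:

        from math import gcd

        # Minimal SDF sketch: solve the balance equation for a two-actor graph
        # A --(produces p, consumes c)--> B, i.e. rA * p == rB * c.
        # The graph and its rates are invented for illustration.

        def repetition_vector(p: int, c: int) -> tuple:
            g = gcd(p, c)
            return c // g, p // g        # (rA, rB)

        p, c = 2, 3                      # A emits 2 tokens; B needs 3 per firing
        rA, rB = repetition_vector(p, c) # rA=3, rB=2: one periodic iteration

        schedule, buffer = [], 0
        fires_a, fires_b = rA, rB
        while fires_a or fires_b:
            if buffer >= c and fires_b:  # fire B once enough tokens are queued
                buffer -= c; fires_b -= 1; schedule.append("B")
            elif fires_a:
                buffer += p; fires_a -= 1; schedule.append("A")
        print(schedule)                  # ['A', 'A', 'B', 'A', 'B']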

    Possibilistic Information Flow Control for Workflow Management Systems

    In workflows and business processes, there are often security requirements on both the data, i.e., confidentiality and integrity, and the process, e.g., separation of duty. Graphical notations exist for specifying both workflows and the associated security requirements. We present an approach for formally verifying that a workflow satisfies such security requirements. For this purpose, we define the semantics of a workflow as a state-event system and formalise security properties in a trace-based way, i.e., on an abstract level without depending on details of enforcement mechanisms such as Role-Based Access Control (RBAC). This formal model then allows us to build upon well-known verification techniques for information flow control. We describe how a compositional verification methodology for possibilistic information flow can be adapted to verify that a specification of a distributed workflow management system satisfies security requirements on both data and processes. (Comment: In Proceedings GraMSec 2014, arXiv:1404.163)
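
    The flavour of a trace-based possibilistic property can be conveyed with a toy state-event system: roughly, a low observer must not be able to tell from the observable events whether confidential events occurred. The Python sketch below checks a naive noninterference condition over an invented trace set; the properties actually used in this line of work (e.g. MAKS-style basic security predicates) are considerably more refined.

        # Tiny illustration of a possibilistic, trace-based confidentiality
        # property. Events and traces are invented for illustration.

        HIGH = {"approve_salary"}            # confidential events
        LOW  = {"submit_form", "notify"}     # events a low observer can see

        def low_view(trace):
            return tuple(e for e in trace if e in LOW)

        def noninterferent(traces):
            """Every low view must also arise from some trace containing no
            high events, so a low observer cannot tell whether confidential
            events occurred."""
            high_free_views = {low_view(t) for t in traces
                               if not any(e in HIGH for e in t)}
            return all(low_view(t) in high_free_views for t in traces)

        TRACES = {
            ("submit_form", "notify"),
            ("submit_form", "approve_salary", "notify"),
        }
        print(noninterferent(TRACES))  # True: both traces look alike at low level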