
    On the construction of decentralised service-oriented orchestration systems

    Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. These workflows are commonly composed of services that perform computation over geographically distributed resources, and they involve the management of dataflows between those services. Orchestrating scientific workflows presents a significant research challenge: they are typically executed so that all data pass through a centralised server known as the engine, which causes unnecessary network traffic and creates a performance bottleneck. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. The system's architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and it uses computation placement analysis to determine the most appropriate engines to execute them. This permits the workflow logic to be distributed closer to the services providing the data, which reduces the overall data transfer in the workflow and improves its execution time. The thesis provides an evaluation of the presented system, which concludes that decentralised orchestration provides scalability benefits over centralised orchestration and improves the overall performance of executing a service-oriented workflow.
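
    The decomposition step lends itself to a small illustration. The sketch below (hypothetical task, service, and site names throughout) groups a workflow's tasks by the site hosting each task's service, so that each sub-workflow can be handed to an engine co-located with its data. It is a minimal sketch of the decomposition idea only, not the thesis's placement analysis.

```python
from collections import defaultdict

def partition_workflow(tasks, service_site):
    """Group tasks into sub-workflows keyed by the site hosting their
    service, so each sub-workflow can run on a nearby engine."""
    partitions = defaultdict(list)
    for task in tasks:
        partitions[service_site[task["service"]]].append(task)
    return dict(partitions)

# Hypothetical three-task workflow spanning two sites.
tasks = [
    {"name": "fetch",   "service": "svcA"},
    {"name": "filter",  "service": "svcB"},
    {"name": "analyse", "service": "svcC"},
]
service_site = {"svcA": "site-1", "svcB": "site-1", "svcC": "site-2"}

for site, sub in partition_workflow(tasks, service_site).items():
    print(site, [t["name"] for t in sub])
# site-1 ['fetch', 'filter']
# site-2 ['analyse']
```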

    Quality-aware model-driven service engineering

    Service engineering, with service-oriented architecture as an integration and platform technology, is a recent approach to software systems integration. Quality aspects, ranging from interoperability to maintainability to performance, are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures, and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is the process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
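
    As a rough illustration of reasoning about a quality attribute at the model level, the sketch below (all service names and latency figures are invented) annotates black-box services with a declared latency and predicts the latency of a sequential composition before any implementation exists. It is not the paper's evaluation framework, only the general idea of model-level quality prediction.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceModel:
    """Black-box service in an architecture model, annotated
    with a model-level quality attribute (latency)."""
    name: str
    latency_ms: float
    calls: list = field(default_factory=list)  # downstream services invoked in sequence

def predicted_latency(svc: ServiceModel) -> float:
    """Model-level estimate: a sequential composition adds downstream latencies."""
    return svc.latency_ms + sum(predicted_latency(c) for c in svc.calls)

# Hypothetical model: an order service calling billing and shipping in sequence.
shipping = ServiceModel("shipping", 40.0)
billing = ServiceModel("billing", 25.0)
order = ServiceModel("order", 10.0, [billing, shipping])
print(predicted_latency(order))  # 75.0, estimated before any code is written
```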

    A Dataflow Language for Decentralised Orchestration of Web Service Workflows

    Orchestrating centralised service-oriented workflows presents significant scalability challenges, including the consumption of network bandwidth, degradation of performance, and single points of failure. This paper presents a high-level dataflow specification language that addresses these challenges. The language provides simple abstractions for orchestrating large-scale web service workflows and separates the workflow logic from its execution. It is based on a data-driven model that permits parallelism to improve workflow performance. We provide a decentralised architecture that allows the computation logic to be moved "closer" to the services involved in the workflow. This is achieved by partitioning the workflow specification into smaller fragments that may be sent to remote orchestration services for execution. The orchestration services rely on proxies that exploit connectivity to services in the workflow. These proxies perform service invocations and compositions on behalf of the orchestration services, and carry out data collection, retrieval, and mediation tasks. The evaluation of our architecture implementation concludes that our decentralised approach reduces workflow execution time and scales well as data set sizes increase.
    Comment: To appear in Proceedings of the IEEE 2013 7th International Workshop on Scientific Workflows, in conjunction with IEEE SERVICES 2013
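
    The data-driven model can be sketched in a few lines: a task fires as soon as all of its inputs are available, so independent tasks may run in any order or in parallel. The example below is a minimal, hypothetical interpreter for one workflow fragment, assuming invented task and value names; it is not the paper's language or runtime.

```python
def dataflow_run(tasks, initial):
    """Fire each task as soon as all of its inputs are available (data-driven model)."""
    values = dict(initial)
    pending = list(tasks)
    while pending:
        ready = [t for t in pending if all(i in values for i in t["inputs"])]
        if not ready:
            raise RuntimeError("deadlock: some inputs can never be satisfied")
        for t in ready:
            values[t["output"]] = t["fn"](*(values[i] for i in t["inputs"]))
            pending.remove(t)
    return values

# Hypothetical fragment: two independent transforms feed a join,
# exposing the parallelism a decentralised engine could exploit.
tasks = [
    {"inputs": ["a", "b"], "output": "joined", "fn": lambda a, b: a + b},
    {"inputs": ["raw"],    "output": "a",      "fn": lambda r: r * 2},
    {"inputs": ["raw"],    "output": "b",      "fn": lambda r: r + 1},
]
print(dataflow_run(tasks, {"raw": 3})["joined"])  # (3*2) + (3+1) = 10
```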

    Adaptive service discovery on service-oriented and spontaneous sensor systems

    Keywords: Service-oriented architecture, Spontaneous networks, Self-organisation, Self-configuration, Sensor systems, Social patterns
    Natural and man-made disasters can significantly impact both people and environments. Response can be enhanced through the dynamic networking of people, systems, and procedures, and their seamless integration with service-oriented sensor systems to fulfil mission objectives. However, the benefits of integrating services will not be realised unless there is a dependable method to discover all required services in dynamic environments. In this paper, we propose an Adaptive and Efficient Peer-to-peer Search (AEPS) approach for dependable service integration on service-oriented architecture, based on a number of social behaviour patterns. In the AEPS network, nodes autonomously support and co-operate with each other in a peer-to-peer (P2P) manner to quickly discover and self-configure any services available in the disaster area, and they deliver real-time capability by self-organising into spontaneous groups, providing greater flexibility and adaptability for disaster monitoring and relief.
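
    For intuition only, the sketch below shows the plain hop-limited flooding lookup that P2P discovery schemes build on, with an invented four-node overlay. AEPS itself adds social behaviour patterns and self-organisation on top, which this simplified example does not model.

```python
from collections import deque

def discover(network, start, service, ttl=3):
    """Hop-limited flooding query over a peer-to-peer overlay (discovery sketch)."""
    seen, found = {start}, []
    queue = deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if service in network[node]["services"]:
            found.append(node)
        if hops < ttl:
            for peer in network[node]["peers"]:
                if peer not in seen:
                    seen.add(peer)
                    queue.append((peer, hops + 1))
    return found

# Hypothetical overlay: temperature sensing is offered two hops from n1.
network = {
    "n1": {"peers": ["n2"],             "services": []},
    "n2": {"peers": ["n1", "n3", "n4"], "services": ["gps"]},
    "n3": {"peers": ["n2"],             "services": ["temp"]},
    "n4": {"peers": ["n2"],             "services": []},
}
print(discover(network, "n1", "temp"))  # ['n3']
```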

    Autonomous resource-aware scheduling of large-scale media workflows

    The media processing and distribution industry generally requires considerable resources to execute the various tasks and workflows that constitute its business processes. These processes are often tied to critical constraints such as strict deadlines. A key issue is how to use the available computational, storage, and network resources efficiently to cope with the high workload. Optimizing resource usage is vital not only to scalability but also to the level of QoS (e.g. responsiveness or prioritization) that can be provided. We designed an autonomous platform for scheduling and workflow-to-resource assignment that takes the different requirements and constraints into account. This paper presents the workflow scheduling algorithms, which consider the state and characteristics of the resources (computational, network, and storage). The performance of these algorithms is presented in detail in the context of a European media processing and distribution use case.
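
    As a simplified sketch of resource-aware assignment, and not the paper's algorithms, the example below orders hypothetical media tasks by deadline and greedily maps each to the resource giving the earliest finish time, given assumed per-resource processing speeds.

```python
def schedule(tasks, resources):
    """Greedy resource-aware scheduler: earliest-deadline-first task order,
    each task placed on the resource with the earliest finish time."""
    avail = {r: 0.0 for r in resources}  # time at which each resource frees up
    plan = {}
    for task in sorted(tasks, key=lambda t: t["deadline"]):
        best = min(resources, key=lambda r: avail[r] + task["cost"] / resources[r])
        start = avail[best]
        avail[best] = start + task["cost"] / resources[best]
        plan[task["name"]] = (best, start, avail[best])
    return plan

# Hypothetical media tasks (cost in work units) and resources (speed in units/s).
tasks = [
    {"name": "transcode", "cost": 8.0, "deadline": 10.0},
    {"name": "package",   "cost": 2.0, "deadline": 6.0},
    {"name": "index",     "cost": 4.0, "deadline": 12.0},
]
resources = {"fast-node": 2.0, "slow-node": 1.0}
for name, (res, t0, t1) in schedule(tasks, resources).items():
    print(f"{name}: {res} [{t0:.1f}s, {t1:.1f}s]")
```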

    A Survey on Evaluation Factors for Business Process Management Technology

    Estimating the value of business process management (BPM) technology is a difficult task. Computerized business processes have a strong impact on an organization, and BPM projects have long-term cost amortization. To systematically analyze BPM technology from an economic-driven perspective, we are currently developing an evaluation framework in the EcoPOST project. In order to empirically validate the relevance of assumed evaluation factors (e.g., process knowledge, business process redesign, end user fears, and communication), we have conducted an online survey among 70 BPM experts from more than 50 industrial and academic organizations. This paper summarizes the results of this survey. Our results help both researchers and practitioners to better understand the evaluation factors that determine the value of BPM technology.