38 research outputs found

    Requirements and evaluation of protocols and tools for transaction management in service centric systems

    Get PDF

    As Service Centric (SC) Systems are increasingly adopted, new challenges and possibilities emerge. Business processes are now able to execute seamlessly across organizations and to coordinate the interaction of loosely coupled services. Often it is necessary to have transactionality for a set of business operations, but the loosely coupled nature of such systems calls for techniques and principles that go beyond traditional ACID transactions. By analyzing existing service composition languages, tools, and the needs of a classical example, we provide requirements for transactionality in Service Centric Systems and indications for developing transactionally capable SC systems.
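    The abstract notes that loose coupling rules out classic ACID transactions across services. A common alternative in this space is compensation-based (saga-style) coordination: each business operation is paired with a compensating action, and on failure the already-completed steps are undone in reverse order. The sketch below is illustrative only and is not the protocol evaluated in the paper; the step and compensation functions are hypothetical.

```python
# Minimal compensation-based (saga-style) coordinator: illustrative sketch,
# not the transaction protocol proposed or evaluated in the paper.

class SagaError(Exception):
    pass

def run_saga(steps):
    """steps: list of (do, undo) callables. Runs each 'do'; on failure,
    runs the 'undo' of every completed step in reverse order."""
    completed = []
    for do, undo in steps:
        try:
            do()
            completed.append(undo)
        except Exception as exc:
            # Compensate already-completed operations in reverse order.
            for compensate in reversed(completed):
                compensate()
            raise SagaError(f"saga aborted and compensated: {exc}") from exc

# Hypothetical business operations for a booking process.
run_saga([
    (lambda: print("reserve flight"), lambda: print("cancel flight")),
    (lambda: print("reserve hotel"),  lambda: print("cancel hotel")),
])
```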

    Supporting Quality of Service in Scientific Workflows

    Get PDF
    While workflow management systems have been utilized in enterprises to support businesses for almost two decades, the use of workflows in scientific environments was fairly uncommon until recently. Nowadays, scientists use workflow systems to conduct scientific experiments, simulations, and distributed computations. However, most scientific workflow management systems have not been built using existing workflow technology; rather, they have been designed and developed from scratch. Due to the lack of generality of early scientific workflow systems, many domain-specific workflow systems have been developed. Generally speaking, those domain-specific approaches lack common acceptance and tool support and offer lower robustness compared to business workflow systems. In this thesis, the use of the industry standard BPEL, a workflow language for modeling business processes, is proposed for the modeling and the execution of scientific workflows. Due to the widespread use of BPEL in enterprises, a number of stable and mature software products exist. The language is expressive (Turing-complete) and not restricted to specific applications. BPEL is well suited for the modeling of scientific workflows, but existing implementations of the standard lack important features that are necessary for the execution of scientific workflows. This work presents components that extend an existing implementation of the BPEL standard and eliminate the identified weaknesses. The components thus provide the technical basis for the use of BPEL in academia. The particular focus is on so-called non-functional (Quality of Service) requirements. These requirements include scalability, reliability (fault tolerance), data security, and cost (of executing a workflow). From a technical perspective, the workflow system must be able to interface with the middleware systems that are commonly used by the scientific workflow community to allow access to heterogeneous, distributed resources (especially Grid and Cloud resources). The major components cover exactly these requirements:
    Cloud Resource Provisioner: Scalability of the workflow system is achieved by automatically adding additional (Cloud) resources to the workflow system's resource pool when the workflow system is heavily loaded.
    Fault Tolerance Module: High reliability is achieved via continuous monitoring of workflow execution and corrective interventions, such as re-execution of a failed workflow step or replacement of the faulty resource.
    Cost-Aware, Data-Flow-Aware Scheduler: The majority of scientific workflow systems only take the performance and utilization of resources for the execution of workflow steps into account when making scheduling decisions. The presented workflow system goes beyond that. By defining preference values for the weighting of costs and the anticipated workflow execution time, workflow users may influence the resource selection process. The developed multi-objective scheduling algorithm respects the defined weighting and makes both efficient and advantageous decisions using a heuristic approach.
    Security Extensions: Because it supports various encryption, signature, and authentication mechanisms (e.g., Grid Security Infrastructure), the workflow system guarantees data security in the transfer of workflow data.
    Furthermore, this work identifies the need to equip workflow developers with workflow modeling tools that can be used intuitively. This dissertation presents two modeling tools that support users with different needs. The first tool, DAVO (Domain-Adaptable Visual BPEL Orchestrator), operates at a low level of abstraction and allows users with knowledge of BPEL to use the full extent of the language. DAVO offers extensibility and customizability for different application domains. These features are used in the implementation of the second tool, the SimpleBPEL Composer. SimpleBPEL is aimed at users with little or no background in computer science and allows for quick and intuitive development of BPEL workflows based on predefined components.
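    The scheduler described above weighs cost against anticipated execution time according to user preferences. A minimal sketch of that idea follows, assuming per-resource estimates of runtime and cost are available; the resource list and the weighting scheme are illustrative and are not the thesis's actual heuristic.

```python
# Illustrative multi-objective resource selection by weighted, normalized
# time and cost scores; not the scheduling heuristic developed in the thesis.

def pick_resource(candidates, w_time=0.7, w_cost=0.3):
    """candidates: list of dicts with estimated 'time' and 'cost' for one step.
    Returns the candidate with the lowest weighted normalized score."""
    max_time = max(c["time"] for c in candidates) or 1.0
    max_cost = max(c["cost"] for c in candidates) or 1.0
    def score(c):
        return w_time * (c["time"] / max_time) + w_cost * (c["cost"] / max_cost)
    return min(candidates, key=score)

# Hypothetical resources: a local cluster node vs. a rented cloud VM.
resources = [
    {"name": "cluster-node", "time": 120.0, "cost": 0.0},
    {"name": "cloud-vm",     "time": 45.0,  "cost": 0.9},
]
print(pick_resource(resources, w_time=0.8, w_cost=0.2)["name"])
```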

    Business process model customisation using domain-driven controlled variability management and rule generation

    Get PDF
    Business process models are abstract descriptions and as such should be applicable in different situations. In order for a single process model to be reused, we need support for configuration and customisation. Often, process objects and activities are domain-specific. We use this observation and allow domain models to drive the customisation. Process variability models, known from product line modelling and manufacturing, can control this customisation by taking into account the domain models. While activities and objects have already been studied, we investigate here the constraints that govern a process execution. In order to integrate these constraints into a process model, we use a rule-based constraint language for a workflow and process model. A modelling framework will be presented as a development approach for customised rules through a feature model. Our use case is content processing, represented by an abstract ontology-based domain model in the framework and implemented by a customisation engine. The key contribution is a conceptual definition of a domain-specific rule variability language.
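    To make the idea of domain-driven rule generation concrete, here is a small, hypothetical sketch: features selected in a feature model are mapped to constraint rules over domain-specific process activities. The feature names, activities, and rule templates are invented for illustration and do not come from the paper's rule variability language.

```python
# Hypothetical sketch of generating process constraint rules from a feature
# selection; the feature model and rule templates are invented, not the
# paper's domain-specific rule variability language.

RULE_TEMPLATES = {
    # feature name -> constraint rule template over process activities
    "quality_check":   "after(activity='convert_content') require(activity='validate_content')",
    "audit_logging":   "parallel_to(activity='*') require(activity='write_audit_record')",
    "gdpr_compliance": "before(activity='publish_content') require(activity='anonymise_metadata')",
}

def generate_rules(selected_features):
    """Return the constraint rules implied by the selected features."""
    unknown = set(selected_features) - RULE_TEMPLATES.keys()
    if unknown:
        raise ValueError(f"features without rule templates: {unknown}")
    return [RULE_TEMPLATES[f] for f in selected_features]

print(generate_rules(["quality_check", "gdpr_compliance"]))
```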

    Dynamic adaptation of service compositions with variability models

    Full text link
    Web services run in complex contexts where arising events may compromise the quality of the whole system. Thus, it is desirable to count on autonomic mechanisms to guide the self-adaptation of service compositions according to changes in the computing infrastructure. One way to achieve this goal is by implementing variability constructs at the language level. However, this approach may become tedious, difficult to manage, and error-prone. In this paper, we propose a solution based on a semantically rich variability model to support the dynamic adaptation of service compositions. When a problematic event arises in the context, this model is leveraged for decision-making. The activation and deactivation of features in the variability model result in changes in a composition model that abstracts the underlying service composition. These changes are reflected into the service composition by adding or removing fragments of Business Process Execution Language (WS-BPEL) code, which can be deployed at runtime. In order to reach optimum adaptations, the variability model and its possible configurations are verified at design time using Constraint Programming. An evaluation demonstrates several benefits of our approach, both at design time and at runtime.
    This work has been developed with the support of MICINN under the project everyWare TIN2010-18011 and co-financed with ERDF.
    Alférez Salinas, G.H.; Pelechano Ferragud, V.; Mazo, R.; Salinesi, C.; Díaz, D. (2014). Dynamic adaptation of service compositions with variability models. Journal of Systems and Software, 91:24-47. https://doi.org/10.1016/j.jss.2013.06.034
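    A hedged sketch of the adaptation loop described above: context events toggle features, and the difference between the old and new feature configurations determines which WS-BPEL fragments to deploy or undeploy. The event names, features, and fragment identifiers are invented, and the paper's variability model and Constraint Programming verification are not reproduced here.

```python
# Illustrative sketch of feature-driven adaptation of a service composition.
# Event names, features, and fragment IDs are invented for illustration.

FRAGMENTS_BY_FEATURE = {
    "local_cache":     ["bpel-fragment-cache-read", "bpel-fragment-cache-write"],
    "backup_provider": ["bpel-fragment-invoke-backup-service"],
}

EVENT_TO_FEATURE_CHANGES = {
    # context event -> (features to activate, features to deactivate)
    "primary_service_down": ({"backup_provider"}, set()),
    "network_congestion":   ({"local_cache"}, set()),
    "primary_service_up":   (set(), {"backup_provider"}),
}

def adapt(active_features, event):
    """Return (new_features, fragments_to_add, fragments_to_remove)."""
    activate, deactivate = EVENT_TO_FEATURE_CHANGES[event]
    new_features = (active_features | activate) - deactivate
    def fragments(features):
        return {f for feat in features for f in FRAGMENTS_BY_FEATURE.get(feat, [])}
    old, new = fragments(active_features), fragments(new_features)
    return new_features, sorted(new - old), sorted(old - new)

features, add, remove = adapt({"local_cache"}, "primary_service_down")
print(features, add, remove)
```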

    NEGOTIATION ON A NEW POLICY IN SERVICE

    Get PDF
    ABSTRACT During interactions between organizations in the fiel

    Transaction Management in Service-Oriented Systems: Requirements and a Proposal

    Full text link

    KPI-related monitoring, analysis, and adaptation of business processes

    Get PDF
    In today's companies, business processes are increasingly supported by IT systems. They can be implemented as service orchestrations, for example in WS-BPEL, running on Business Process Management (BPM) systems. A service orchestration implements a business process by orchestrating a set of services. These services can be arbitrary IT functionality, human tasks, or again service orchestrations. Often, these business processes are implemented as part of business-to-business collaborations spanning several participating organizations. Service choreographies focus on modeling how processes of different participants interact in such collaborations. An important aspect in BPM is performance management. Performance is measured in terms of Key Performance Indicators (KPIs), which reflect the achievement towards business goals. KPIs are based on domain-specific metrics typically reflecting the time, cost, and quality dimensions. Dealing with KPIs involves several phases, namely monitoring, analysis, and adaptation. In a first step, KPIs have to be monitored in order to evaluate the current process performance. In case monitoring shows negative results, there is a need for analyzing and understanding the reasons why KPI targets are not reached. Finally, after identifying the influential factors of KPIs, the processes have to be adapted in order to improve the performance. The goal thereby is to enable these phases in an automated manner. This thesis presents an approach to how KPIs can be monitored, analyzed, and used for the adaptation of processes. The concrete contributions of this thesis are: (i) an approach for monitoring of processes and their KPIs in service choreographies; (ii) a KPI dependency analysis approach based on classification learning which enables explaining how KPIs depend on a set of influential factors; (iii) a runtime adaptation approach which combines monitoring and KPI analysis in order to enable proactive adaptation of processes for improving the KPI performance; (iv) a prototypical implementation and experiment-based evaluation.
    Today, the execution of business processes is increasingly supported by IT systems and realized on the basis of a service-oriented architecture. The processes are frequently implemented as service orchestrations, e.g. in WS-BPEL. A service orchestration interacts with services that are executed automatically or by humans, and is run by a process execution environment. Moreover, business processes are often not executed in isolation but interact with further business processes, e.g. as part of business-to-business relationships. These process interactions are modeled in service choreographies. An important aspect of business process management is the optimization of processes with respect to their performance, which is measured using Key Performance Indicators (KPIs). KPIs are based on process metrics that typically cover the dimensions of time, cost, and quality, and evaluate them with respect to the achievement of business goals. Optimizing processes with respect to their KPIs comprises several phases. In a first step, KPIs have to be collected by monitoring the processes at runtime. If the KPI values are not satisfactory, the next step analyzes the factors that influence the KPI values. Finally, based on this analysis, the processes are adapted in order to improve the KPIs.
    This thesis presents an integrated approach for the monitoring, analysis, and automated adaptation of processes with the goal of optimizing them with respect to their KPIs. The contributions of this work are as follows: (i) an approach for monitoring KPIs across individual processes in service choreographies, (ii) an approach for analyzing the influential factors of KPIs based on decision trees, (iii) an approach for automated, proactive runtime adaptation of processes based on the monitoring and the KPI analysis, (iv) a prototypical implementation and experimental evaluation.
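    Contribution (ii) explains KPI outcomes via classification learning, named as decision trees in the second abstract. A minimal sketch of that idea, assuming per-instance process metrics and a binary label for whether the KPI target was met; the metric names and data are invented, and scikit-learn stands in for whatever learner the thesis actually uses.

```python
# Sketch of KPI dependency analysis via a decision tree; metric names and
# data are invented, and scikit-learn stands in for the thesis's tooling.
from sklearn.tree import DecisionTreeClassifier, export_text

# One row per process instance: [approval_duration_s, supplier_delay_s, retries]
X = [
    [120,  30, 0], [300, 400, 2], [ 90,  20, 0],
    [250, 350, 1], [110,  40, 0], [280, 500, 3],
]
# Label: 1 if the order-fulfillment KPI target was met, 0 if violated.
y = [1, 0, 1, 0, 1, 0]

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The learned tree doubles as an explanation of which factors drive the KPI.
print(export_text(tree, feature_names=[
    "approval_duration_s", "supplier_delay_s", "retries"]))
```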

    A middleware for service oriented computing in dynamic environments

    Get PDF
    Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Computer Engineering.
    Recent years have witnessed a convergence on the SOA paradigm by industrial process enterprises (such as logistics or manufacturing), using standards for data and communication. SOA promotes reusability, interoperability, and loose coupling of applications. The convergence towards SOA shows that we are heading towards an infrastructure composed of many heterogeneous devices, the "Internet of Things". In this infrastructure everything can be abstracted as a service, such as household appliances, mobile devices, or industrial machinery. It is expected that this trend will continue, and as these devices interoperate in service composition, new functionalities may be discovered. Existing approaches for service composition, namely in business processes, are too bound to BPEL. Several alternatives and extensions of BPEL have been developed, but they feel more like patches than solutions. In this context the SeDeUse [29] model has been proposed as an exercise to define new language constructs promoting a separation between service awareness and service use. The model also relies on a middleware layer to support the execution of the application in dynamic environments. The goal of this dissertation is to instantiate the SeDeUse model in a widely used programming language in order to provide a framework for its assessment and for its future development. The work consists of implementing a concrete syntax for the model, a compilation process, and a middleware layer. The syntax contains the new language constructs that are integrated into the hosting language. The compilation process is responsible for service definition and code generation. Finally, the middleware layer supports the requests of the running application (the generated code). We have seamlessly integrated SeDeUse into the Java programming language and developed a functional prototype. To assess the prototype's capability, three scenarios were developed in which we demonstrated that our implementation provides a new, and simpler, approach for abstracting resources as services.
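    The core idea of separating service awareness from service use can be illustrated with a small late-binding sketch: the application declares which capability it needs, and a middleware registry resolves a concrete provider only at call time. This is a conceptual sketch in Python for brevity (the SeDeUse prototype itself targets Java); the registry API, names, and sensor service are invented and are not SeDeUse constructs.

```python
# Conceptual sketch only: separating the declaration of a needed service
# capability from its use, with late binding through a toy middleware
# registry. Names and the registry API are invented, not SeDeUse syntax.

class Middleware:
    """Toy registry standing in for the middleware layer."""
    def __init__(self):
        self._providers = {}
    def publish(self, capability, provider):
        self._providers.setdefault(capability, []).append(provider)
    def resolve(self, capability):
        providers = self._providers.get(capability)
        if not providers:
            raise LookupError(f"no provider currently offers {capability!r}")
        return providers[0]  # a real middleware would rank or filter providers

class ServiceRef:
    """Late-bound reference: the concrete provider is chosen on each call."""
    def __init__(self, middleware, capability):
        self._mw, self._capability = middleware, capability
    def __call__(self, *args, **kwargs):
        return self._mw.resolve(self._capability)(*args, **kwargs)

mw = Middleware()
mw.publish("temperature", lambda room: 21.5)   # hypothetical sensor service
read_temp = ServiceRef(mw, "temperature")      # awareness: what is needed
print(read_temp("lab-1"))                      # use: provider resolved at call time
```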