
    A QoS-Aware BPEL Framework for Service Selection and Composition Using QoS Properties

    The promise of service-oriented computing, and the availability of web services in particular, promote the delivery of services and the creation of new services composed of existing ones: service components are assembled to achieve integrated computational goals. Business organizations strive to utilize these services and to provide new service solutions, and they need appropriate tools to achieve these goals. As web- and internet-based services grow into clouds, the interdependency of services and their complexity increase tremendously. The cloud ontology depicts service layers from high-level, such as Application and Software, to low-level, such as Infrastructure and Platform. Each component residing at one layer can be useful to others as a service, which hints at the complexity resulting from not only horizontal but also vertical integration in building and deploying a composite service. Our framework tackles the complexity of the selection and composition issues by adding qualitative information to service descriptions expressed with the Business Process Execution Language (BPEL). Engineers can use BPEL to explore design options and have the QoS properties analyzed for each design. The QoS properties of each service are annotated with our extension to the Web Service Description Language (WSDL). In this paper, we describe our framework and illustrate its application to one QoS property, performance. We translate BPEL orchestration and choreography into appropriate queuing networks and analyze the resulting model to obtain the performance properties of the composed service. Our framework is also designed to support other QoS extensions of WSDL, adaptable business-logic languages, and composition models for other QoS properties.
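
    The abstract mentions translating BPEL flows into queuing networks but does not show the analysis itself. As a minimal sketch, assuming each partner service behaves as an independent M/M/1 station, the mean end-to-end response time of a purely sequential composition is the sum of the per-station sojourn times W = 1/(μ − λ); the service rates below stand in for the QoS annotations an extended WSDL might carry. This illustrates the general technique, not the paper's actual tooling.

```python
# Toy performance estimate for a sequential service composition,
# modeling each invoked service as an independent M/M/1 queue.
# Assumptions (not from the paper): Poisson arrivals at rate lam,
# exponential service at rate mu per service, no forks or joins.

def mm1_response_time(lam: float, mu: float) -> float:
    """Mean sojourn time W = 1 / (mu - lam) of a stable M/M/1 station."""
    if lam >= mu:
        raise ValueError("unstable station: arrival rate >= service rate")
    return 1.0 / (mu - lam)

def sequential_response_time(lam: float, service_rates: list[float]) -> float:
    """Sum the per-service sojourn times of services invoked in sequence."""
    return sum(mm1_response_time(lam, mu) for mu in service_rates)

if __name__ == "__main__":
    # Hypothetical QoS annotations: service rates (requests/s) that an
    # extended WSDL might attach to three partner services.
    rates = [50.0, 80.0, 120.0]
    print(f"mean response time: {sequential_response_time(30.0, rates):.4f} s")
```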

    Integrated CHOReOS middleware - Enabling large-scale, QoS-aware adaptive choreographies

    This document describes the final implementation and the evaluation of the CHOReOS middleware. Evaluation was carried out both by applying the middleware to the CHOReOS use cases and through synthetic experiments and simulation. The conclusion was that the implementation of the CHOReOS middleware has reached a good level of maturity for an open-source project and is ready to be used in real-world, complex choreographies.

    Concepts for handling heterogeneous data transformation logic and their integration with TraDE middleware

    The concept of programming-in-the-large became a substantial part of modern computer-based scientific research with the advent of web services and orchestration languages. While the notions of workflows and service choreographies help to reduce complexity by providing means to support the communication between the participants involved, the process generally remains complex. The TraDE Middleware and its underlying concepts were introduced to perform the modeled data exchange across choreography participants in a transparent and automated fashion. However, to achieve both transparency and automation, the TraDE Middleware must be capable of transforming the data along its path. Transparent data transformation can be difficult to achieve due to various factors, including the diversity of required execution environments and complicated configuration processes, as well as the heterogeneity of data transformation software, which leads to tedious integration processes that often involve manually wrapping the software. A method for handling data transformation applications in a standardized manner can help to simplify the modeling and execution of scientific service choreographies with the TraDE concepts applied. In this master's thesis we analyze various aspects of this problem and conceptualize an extensible framework for handling data transformation applications. The resulting prototypical implementation of the presented framework provides means to address data transformation applications in a standardized manner.
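
    The thesis's framework itself is not reproduced here; the sketch below shows, under assumed names, one conventional way to put heterogeneous transformation applications behind a uniform interface so that middleware can invoke them without per-application glue code. `Transformer`, `register`, and `CsvToJson` are hypothetical and are not the TraDE framework's actual API.

```python
# Hedged sketch: a uniform contract for heterogeneous data
# transformations plus a registry keyed by (input, output) format,
# so a middleware can look up and invoke transformations generically.
import csv
import io
import json
from abc import ABC, abstractmethod

class Transformer(ABC):
    """Contract every wrapped transformation application must satisfy."""
    input_format: str
    output_format: str

    @abstractmethod
    def transform(self, data: bytes) -> bytes: ...

class CsvToJson(Transformer):
    input_format, output_format = "text/csv", "application/json"

    def transform(self, data: bytes) -> bytes:
        rows = list(csv.DictReader(io.StringIO(data.decode())))
        return json.dumps(rows).encode()

REGISTRY: dict[tuple[str, str], Transformer] = {}

def register(t: Transformer) -> None:
    REGISTRY[(t.input_format, t.output_format)] = t

register(CsvToJson())
print(REGISTRY[("text/csv", "application/json")].transform(b"a,b\n1,2\n"))
# b'[{"a": "1", "b": "2"}]'
```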

    Orchestration of reliable web services

    Service-oriented computing is a paradigm for building distributed applications over the Internet. Service-Oriented Architecture (SOA) is an architectural style that enables the development of such service-based applications. Over the last decade, web service orchestration has become a very active area of scientific and academic research. Although many challenges related to orchestration have been addressed, the reliability of orchestrations and its verification remain an open and important topic, a prerequisite given that these orchestrations now affect many everyday activities. This thesis focuses on the orchestration of reliable web services. In particular, it contributes a set of approaches, techniques, and tools to improve the selection and orchestration of reliable web services. First, it refines the phases of the web service orchestration life cycle to ensure continuous verification of reliability during the design and execution phases. It also proposes a conceptual architecture, based on an enhanced service registry, for implementing reliable orchestrations. Second, it presents an approach for measuring similarity between web services. The approach relies on comparing the WSDL interfaces of services and serves to identify similarity, substitutability, and composability relations between services. The WSSIM tool was developed to implement the proposed approach and, for validation, was tested with a large set of real web services. Third, the thesis contributes an approach for identifying substitutes for both simple and composite services. The approach uses similarity measurement techniques, service classification with Formal Concept Analysis (FCA), and reliability analysis to identify and select the best substitutes. A set of algorithms is proposed to describe the identification process. Fourth, to consider service reputation as another reliability criterion, the thesis introduces a framework and a mathematical model for managing web service reputation.
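
    WSSIM's actual metric is not given in the abstract. As an assumed illustration of WSDL interface comparison, the sketch below scores two interfaces by the token overlap (Jaccard index) of their operation names; both the metric and the data model are illustrative, not WSSIM's algorithm.

```python
# Hedged sketch of interface similarity scoring: split camelCase
# operation names into tokens, compare operations by Jaccard overlap,
# and average each operation's best match in the other interface.
import re

def tokens(identifier: str) -> set[str]:
    """Split a camelCase identifier into lowercase tokens."""
    return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])", identifier)}

def op_similarity(a: str, b: str) -> float:
    ta, tb = tokens(a), tokens(b)
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def interface_similarity(ops_a: list[str], ops_b: list[str]) -> float:
    """Average best-match similarity of operations in A against B."""
    return sum(max(op_similarity(a, b) for b in ops_b) for a in ops_a) / len(ops_a)

s1 = ["getWeatherForecast", "getCurrentTemperature"]
s2 = ["fetchForecast", "currentTemperature"]
print(f"similarity: {interface_similarity(s1, s2):.2f}")  # ~0.46
```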

    Data-driven conceptual modeling: how some knowledge drivers for the enterprise might be mined from enterprise data

    As organizations perform their business, they analyze, design and manage a variety of processes represented in models with different scopes and scales of complexity. Specifying these processes requires a certain level of modeling competence. However, this requirement is not always matched by the capability of the person(s) responsible for defining and modeling an organization's or enterprise's operation. On the other hand, an enterprise typically collects records of all events that occur during the operation of its processes. Records such as the start and end of tasks in a process instance, state transitions of objects affected by process execution, and the messages exchanged during process execution are maintained in enterprise repositories as various logs: event logs, process logs, effect logs, message logs, etc. Furthermore, the volume of data generated by enterprise process execution has grown manyfold in just a few years. On top of this, models are often considered the dashboard view of an enterprise. Models represent an abstraction of the underlying reality of an enterprise, and they also serve as knowledge drivers through which an enterprise can be managed. Data-driven extraction offers the capability to mine these knowledge drivers from enterprise data and to leverage the mined models to establish the set of enterprise data that conforms with the desired behaviour. This thesis aims to generate models, or knowledge drivers, from enterprise data to enable a kind of dashboard view of the enterprise and to provide support for analysts. The rationale for this starts from the requirement to improve an existing process or to create a new one; as noted above, models can also serve as a collection of effectors through which an organization or enterprise can be managed. The enterprise data referred to above are process logs, effect logs, message logs, and invocation logs. The approach in this thesis is to mine these logs to generate process, requirements, and enterprise architecture models, and to determine how goals get fulfilled based on collected operational data. The research question has been formulated as: is it possible to derive the knowledge drivers from the enterprise data that represent the running operation of the enterprise, or, in other words, is it possible to use the data available in the enterprise repository to generate the knowledge drivers? Chapter 2 reviews the literature that provides the background needed to explore this research question. Chapter 3 presents how process semantics can be mined. Chapter 4 suggests a way to extract a requirements model. Chapter 5 presents a way to discover the underlying enterprise architecture, and Chapter 6 presents a way to mine how goals get orchestrated. The overall findings are discussed in Chapter 7 to derive some conclusions.
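
    To make the idea of mining knowledge drivers from logs concrete: a common first step in process discovery is extracting the directly-follows relation from an event log, as in the sketch below. The toy log format and activity names are assumed for illustration, not taken from the thesis.

```python
# Hedged sketch: count how often one activity directly follows another
# across the traces of an event log; the resulting directly-follows
# graph is the usual starting point for process-model discovery.
from collections import Counter

# Each trace lists the activities of one process instance, in order.
event_log = [
    ["receive_order", "check_stock", "ship", "invoice"],
    ["receive_order", "check_stock", "reject"],
    ["receive_order", "check_stock", "ship", "invoice"],
]

def directly_follows(log: list[list[str]]) -> Counter:
    """Count pairs (a, b) where b immediately follows a in some trace."""
    df = Counter()
    for trace in log:
        for a, b in zip(trace, trace[1:]):
            df[(a, b)] += 1
    return df

for (a, b), n in sorted(directly_follows(event_log).items()):
    print(f"{a} -> {b}: {n}")
```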

    Architecture-based Evolution of Dependable Software-intensive Systems

    This cumulative habilitation thesis proposes concepts for (i) modelling and analysing dependability based on architectural models of software-intensive systems early in development, (ii) decomposing and composing modelling languages and analysis techniques to enable more flexibility in evolution, and (iii) bridging the divergent levels of abstraction between data of the operation phase, architectural models, and source code of the development phase.

    LifeWatch deliverable 5.1.3: Technical construction plan –Reference Model

    The LifeWatch Reference Model (LifeWatch-RM) provides a common conceptual framework for understanding the significant relations and key characteristics of the Information and Communications Technologies (ICT) elements of LifeWatch that should appear consistently across different implementations. Its intention is to represent a common view of the ICT dimension among all those involved in and contributing to the LifeWatch Research Infrastructure and to provide guidelines for the construction and management process. The LifeWatch-RM defines a number of components and architectural concepts as a basis for the future LifeWatch Architecture. It is neither a blueprint nor does it define a technological mapping, but it identifies some key aspects and components that should be present in the final implementation of the LifeWatch System.

    Investigations into the model driven design of distribution patterns for web service compositions

    Increasingly, distributed systems are being used to provide enterprise-level solutions with high scalability and fault tolerance. These solutions are often built using Web services that are composed to perform useful business functions. Acceptance of these composed systems is often constrained by a number of non-functional properties of the system, such as availability, scalability and performance. There are a number of distribution patterns that each exhibit different non-functional characteristics. These patterns are recurring distribution schemes that express how a system is to be assembled and subsequently deployed. Traditional approaches to the development of Web service compositions exhibit a number of issues. Firstly, Web service composition development is often ad hoc and requires considerable low-level coding effort for realisation. Such systems often exhibit fixed architectures, making maintenance difficult and error prone. Additionally, a number of the non-functional requirements cannot be easily assessed by examining low-level code. In this thesis we explicitly model the compositional aspects of Web service compositions using UML Activity diagrams. This approach uses a modelling and transformation framework, based on Model Driven Software Development (MDSD), going from high-level models to an executable system. The framework is guided by a methodological framework whose primary artifact is a distribution pattern model, chosen from the supplied catalog. Our modelling and transformation framework improves the development process of Web service compositions, with respect to a number of criteria, when compared to the traditional handcrafted approach. Specifically, we negate the coding effort traditionally associated with Web service composition development. Maintenance overheads of the solution are also significantly reduced, while improved mutability is achieved through a flexible architecture when compared with existing tools. We also improve the product output from the development process by exposing the non-functional runtime properties of Web service compositions using distribution patterns.
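
    The thesis's transformation framework is not reproduced here; the sketch below only illustrates the flavour of the approach: a small composition model plus a chosen distribution pattern is expanded into a deployment plan by template expansion. The model, the two patterns, and all names are illustrative assumptions, not the thesis's metamodel or pattern catalog.

```python
# Hedged sketch of a model-driven, pattern-guided transformation:
# the same composition model yields different deployment plans
# depending on the selected distribution pattern.
from dataclasses import dataclass

@dataclass
class Invocation:
    service: str
    endpoint: str

COMPOSITION = [
    Invocation("CreditCheck", "http://hostA/credit"),
    Invocation("Billing", "http://hostB/billing"),
]

PATTERNS = {
    # Centralised: a single orchestrator calls every service itself.
    "centralised": lambda steps: [
        ("orchestrator", s.service, s.endpoint) for s in steps
    ],
    # Peer-to-peer: each service forwards the flow to its successor.
    "peer-to-peer": lambda steps: [
        (steps[i - 1].service if i else "client", s.service, s.endpoint)
        for i, s in enumerate(steps)
    ],
}

def generate_plan(pattern: str) -> str:
    """Expand the composition model into caller -> callee wiring."""
    rows = PATTERNS[pattern](COMPOSITION)
    return "\n".join(f"{caller} -> {callee} @ {url}" for caller, callee, url in rows)

print(generate_plan("centralised"))
print(generate_plan("peer-to-peer"))
```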