    Genetic Programming for QoS-Aware Data-Intensive Web Service Composition and Execution

    Web service composition has become a promising technique to build powerful enterprise applications by making use of distributed services with different functions. In the age of big data, more and more web services are created to deal with large amounts of data; these are called data-intensive services. Due to the explosion in the volume of data, providing efficient approaches to composing data-intensive services will become more and more important in the field of service-oriented computing. Meanwhile, as numerous web services have been emerging to offer identical or similar functionality on the Internet, web service composition is usually performed with end-to-end Quality of Service (QoS) properties, which are adopted to describe the non-functional properties (e.g., response time, execution cost, reliability, etc.) of a web service. In addition, the executions of composite web services are typically coordinated by a centralized workflow engine. As a result, the centralized execution paradigm suffers from inefficient communication and a single point of failure. This is particularly problematic in the context of data-intensive processes. To that end, more decentralized and flexible execution paradigms are required for the execution of data-intensive applications. From a computational point of view, the problems of QoS-aware data-intensive web service composition and execution can be characterised as complex, large-scale, constrained and multi-objective optimization problems. Therefore, genetic programming (GP) based solutions are presented in this thesis to address these problems. A series of simulation experiments is provided to demonstrate the performance of the proposed approaches, and the empirical observations are also described in this thesis. Firstly, we propose a hybrid approach that integrates the local search procedure of tabu search into the global search process of GP to solve the problem of QoS-aware data-intensive web service composition. A mathematical model is developed to account for the mass data transmission across different component services in a data-intensive service composition. The experimental results show that our proposed approach can provide better performance than the standard GP approach and two traditional optimization methods. Next, a many-objective evolutionary approach is proposed for tackling the QoS-aware data-intensive service composition problem with more than three competing quality objectives. In this approach, the original search space of the problem is reduced before a recently developed many-objective optimization algorithm, NSGA-III, is adopted to solve the many-objective optimization problem. The experimental results demonstrate the effectiveness of our approach, as well as its superiority over existing single-objective and multi-objective approaches. Finally, a GP-based approach to partitioning a composite data-intensive service for decentralized execution is put forth in this thesis. As with the first problem, a mathematical model is developed for estimating the communication overhead within a partition and across partitions. The data and control dependencies in the original composite web service are properly preserved in the deployment topology generated by our approach. Compared with two existing heuristic algorithms, the proposed approach exhibits better scalability and is more suitable for large-scale partitioning problems.
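
    The QoS aggregation that such approaches optimize can be made concrete with a small sketch. The following is a minimal illustration, not the thesis's actual model: the candidate services, QoS values, and weights are invented, response time and cost are summed along a sequential composition, reliability is multiplied, and a short tabu-style local search stands in for the local-search phase of the GP/tabu hybrid.

        import random

        # Candidate services per abstract task: (response_time_ms, cost, reliability).
        # All values are invented for illustration.
        CANDIDATES = {
            "fetch":  [(120, 0.5, 0.99), (90, 0.8, 0.97), (150, 0.3, 0.995)],
            "filter": [(60, 0.2, 0.98), (40, 0.4, 0.96)],
            "store":  [(200, 1.0, 0.999), (140, 1.5, 0.99)],
        }
        WEIGHTS = (0.5, 0.3, 0.2)  # illustrative trade-off: time vs. cost vs. reliability

        def aggregate(selection):
            # Sequential composition: times and costs add up, reliabilities multiply.
            time = sum(CANDIDATES[t][i][0] for t, i in selection.items())
            cost = sum(CANDIDATES[t][i][1] for t, i in selection.items())
            rel = 1.0
            for t, i in selection.items():
                rel *= CANDIDATES[t][i][2]
            return time, cost, rel

        def fitness(selection):
            # Scalarized QoS: lower is better (time and cost penalize, reliability rewards).
            time, cost, rel = aggregate(selection)
            return WEIGHTS[0] * time / 1000 + WEIGHTS[1] * cost - WEIGHTS[2] * rel

        def tabu_search(selection, iters=200, tabu_len=5):
            # Greedy neighbourhood search with a short tabu list of recent moves.
            best, best_f = dict(selection), fitness(selection)
            tabu = []
            for _ in range(iters):
                task = random.choice(list(CANDIDATES))
                idx = random.randrange(len(CANDIDATES[task]))
                if (task, idx) in tabu:
                    continue
                trial = dict(best)
                trial[task] = idx
                trial_f = fitness(trial)
                if trial_f < best_f:
                    best, best_f = trial, trial_f
                tabu.append((task, idx))
                tabu = tabu[-tabu_len:]
            return best, best_f

        print(tabu_search({t: 0 for t in CANDIDATES}))

    In the hybrid approach described above, a local search of this kind would refine individual GP solutions between generations rather than run stand-alone.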

    Workflow scheduling for service oriented cloud computing

    Service Orientation (SO) and grid computing are two computing paradigms that, when put together using Internet technologies, promise to provide a scalable yet flexible computing platform for a diverse set of distributed computing applications. This practice gives rise to the notion of a computing cloud that addresses some previous limitations of interoperability, resource sharing and utilization within distributed computing. In such a Service Oriented Computing Cloud (SOCC), applications are formed by composing a set of services together. In addition, hierarchical service layers are also possible, where general purpose services at lower layers are composed to deliver more domain specific services at the higher layers. In general, an SOCC is a horizontally scalable computing platform that offers its resources as services in a standardized fashion. Workflow based applications are a suitable target for an SOCC, where workflow tasks are executed via service calls within the cloud. One or more workflows can be deployed over an SOCC, and their execution requires scheduling of services to workflow tasks as the tasks become ready following their interdependencies. In this thesis, heuristics-based scheduling policies are evaluated for scheduling workflows over a collection of services offered by the SOCC. Various execution scenarios and workflow characteristics are considered to understand the implications of heuristic-based workflow scheduling.
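
    The core scheduling loop such policies share is small: tasks become ready as their predecessors finish, and a heuristic picks a service for each ready task. Below is a minimal sketch under invented data (the DAG, runtimes, and service speeds are not from the thesis), using an earliest-finish-time rule as one representative heuristic.

        # An illustrative DAG: task -> set of predecessor tasks, plus per-task
        # runtimes on a reference service; the two services differ only in speed.
        DEPS = {"a": set(), "b": {"a"}, "c": {"a"}, "d": {"b", "c"}}
        RUNTIME = {"a": 4.0, "b": 3.0, "c": 2.0, "d": 5.0}
        SPEED = [1.0, 0.5]  # service 1 is half as fast

        def schedule():
            # Earliest-finish-time heuristic: dispatch each ready task to the
            # service that would complete it soonest, releasing successors as
            # their predecessors finish.
            free_at = [0.0] * len(SPEED)   # when each service next becomes free
            done = {}                      # task -> finish time
            ready = [t for t, preds in DEPS.items() if not preds]
            waiting = {t: set(p) for t, p in DEPS.items() if p}
            plan = []
            while ready:
                task = ready.pop(0)
                data_ready = max((done[p] for p in DEPS[task]), default=0.0)
                svc = min(range(len(SPEED)),
                          key=lambda s: max(free_at[s], data_ready)
                                        + RUNTIME[task] / SPEED[s])
                start = max(free_at[svc], data_ready)
                finish = start + RUNTIME[task] / SPEED[svc]
                free_at[svc] = finish
                done[task] = finish
                plan.append((task, svc, start, finish))
                for t in list(waiting):   # release fully satisfied successors
                    waiting[t].discard(task)
                    if not waiting[t]:
                        ready.append(t)
                        del waiting[t]
            return plan

        for step in schedule():
            print(step)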

    Novel optimization schemes for service composition in the cloud using learning automata-based matrix factorization

    A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. Service Oriented Computing (SOC) provides a framework for the realization of loosely coupled service-oriented applications (SOA). Web services are central to the concept of SOC. They possess several benefits which are useful to SOA, e.g. encapsulation, loose coupling and reusability. Using web services, an application can embed its functionalities within the business process of other applications. This is made possible through web service composition. Web services are composed to provide more complex functions for a service consumer in the form of a value-added composite service. Currently, research into how web services can be composed to yield QoS (Quality of Service) optimal composite services has gathered significant attention. However, the number of services has risen, thereby increasing the number of possible service combinations and also amplifying the impact of the network on composite service performance. QoS-based service composition in the cloud addresses two important sub-problems: prediction of network performance between web service nodes in the cloud, and QoS-based web service composition. We model the former as a prediction problem, while the latter is modelled as an NP-hard optimization problem due to its complex, constrained and multi-objective nature. This thesis contributes to the prediction problem by presenting a novel learning automata-based non-negative matrix factorization algorithm (LANMF) for estimating the end-to-end network latency of a composition in the cloud. LANMF encodes each web service node as an automaton, which allows it to estimate its network coordinate in such a way that prediction error is minimized. Experiments indicate that LANMF is more accurate than current approaches. The thesis also contributes to the QoS-based service composition problem by proposing four evolutionary algorithms: a network-aware genetic algorithm (INSGA), a K-means based genetic algorithm (KNSGA), a multi-population particle swarm optimization algorithm (NMPSO), and a non-dominated sort fruit fly algorithm (NFOA). The algorithms adopt different evolutionary strategies, coupled with the LANMF method, to search for low-latency and QoS-optimal solutions. They also employ a unique constraint handling method used to penalize solutions that violate user-specified QoS constraints. Experiments demonstrate the efficiency and scalability of the algorithms in a large-scale environment; the algorithms also outperform other evolutionary algorithms in terms of optimality and scalability. In addition, the thesis contributes to QoS-based web service composition in a dynamic environment. This is motivated by the ineffectiveness of the four proposed algorithms in a dynamically changing QoS environment, such as a real-world scenario. Hence, we propose a new cellular automata-based genetic algorithm (CellGA) to address the issue. Experimental results show the effectiveness of CellGA in solving QoS-based service composition in a dynamic QoS environment.
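
    The details of LANMF's learning-automata update are not given here, so the sketch below shows only the classical technique it builds on: multiplicative-update non-negative matrix factorization applied to a partially observed latency matrix, with missing entries imputed from the current estimate each round. The matrix values and latent dimension are invented.

        import numpy as np

        # Partially observed pairwise latency matrix (ms) between service nodes;
        # zeros mark unobserved pairs. All values are invented.
        L = np.array([[ 0., 40., 90.,  0.],
                      [35.,  0.,  0., 60.],
                      [95.,  0.,  0., 20.],
                      [ 0., 55., 25.,  0.]])
        observed = L > 0
        k = 2  # latent dimension of the factorization
        rng = np.random.default_rng(0)
        W = rng.random((L.shape[0], k)) + 0.1
        H = rng.random((k, L.shape[1])) + 0.1

        # Standard multiplicative updates for Frobenius-norm NMF; after
        # convergence, W @ H predicts the latency of every pair, observed or not.
        for _ in range(500):
            V = np.where(observed, L, W @ H)  # impute missing entries
            W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
            H *= (W.T @ V) / (W.T @ W @ H + 1e-9)

        print(np.round(W @ H, 1))  # estimated end-to-end latencies

    In LANMF, as described above, the factor estimation is driven by an automaton per service node rather than these fixed multiplicative updates.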

    Predictable execution of scientific workflows using advance resource reservations

    Scientific workflows are long-running and data intensive, and may encompass operations provided by multiple physically distributed service providers. The traditional approach to executing such workflows is to employ a single workflow engine which orchestrates the entire execution of a workflow instance, while being mostly agnostic about the state of the infrastructure it operates in (e.g., host or network load). Therefore, such centralized best-effort execution may use resources inefficiently -- for instance, repeatedly shipping large data volumes over slow network connections -- and cannot provide Quality of Service (QoS) guarantees. In particular, independent parallel executions might cause an overload of some resources, resulting in a performance degradation affecting all involved parties. In order to provide predictable behavior, we propose an approach where resources are managed proactively (i.e., reserved before being used), and where workflow execution is handled by multiple distributed and cooperating workflow engines. This makes it possible to use the existing resources efficiently (for instance, using the most suitable provider for operations, and considering network locality for large data transfers) without overloading them, while at the same time providing predictability -- in terms of resource usage, execution timing, and cost -- for both service providers and customers. The contributions of this thesis are as follows. First, we present a system model which defines the concepts and operations required to formally represent a system where service providers are aware of the resource requirements of the operations they make available, and where (planned) workflow executions are adapted to the state of the infrastructure. Second, we describe our prototypical implementation of such a system, where a workflow execution comprises two main phases. In the planning phase, the resources to reserve for an upcoming workflow execution must be determined; this is realized using a Genetic Algorithm. We present conceptual and implementation details of the chromosome layout, and the fitness functions employed to plan executions according to one or more user-defined optimization goals. During the execution phase, the system must ensure that the actual resource usage abides by the reservations made. We present details on how such enforcement can be performed for various resource types. Third, we describe how these parts work together, and how the entire prototype system is deployed on an infrastructure based on WSDL/SOAP Web Services, UDDI Registries, and Glassfish Application Servers. Finally, we discuss the results of various evaluations, encompassing both the planning and the runtime enforcement phases.
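
    A genetic algorithm for reservation planning of this kind needs a chromosome encoding and a fitness function. The sketch below is a toy stand-in, not the thesis's actual layout: the tasks, providers, slot model, weights, and clash penalty are all invented, and each gene simply assigns a task to a (provider, start slot) reservation.

        import random

        TASKS = ["ingest", "transform", "analyse"]  # hypothetical workflow operations
        PROVIDERS = {"p1": 1.0, "p2": 1.8}          # invented per-task cost factors
        SLOTS = range(8)                            # discrete reservation time slots

        def random_chromosome():
            # One gene per task: (provider, reserved start slot).
            return [(random.choice(list(PROVIDERS)), random.choice(list(SLOTS)))
                    for _ in TASKS]

        def fitness(ch):
            # Lower is better: weighted cost plus finish time, heavily penalizing
            # two tasks that reserve the same provider in the same slot.
            cost = sum(PROVIDERS[p] for p, _ in ch)
            finish = max(s for _, s in ch) + 1
            clashes = len(ch) - len({(p, s) for p, s in ch})
            return cost + 0.5 * finish + 10 * clashes

        def evolve(pop_size=30, gens=40):
            pop = [random_chromosome() for _ in range(pop_size)]
            for _ in range(gens):
                pop.sort(key=fitness)
                survivors = pop[:pop_size // 2]
                children = []
                while len(survivors) + len(children) < pop_size:
                    a, b = random.sample(survivors, 2)
                    cut = random.randrange(1, len(TASKS))
                    child = a[:cut] + b[cut:]              # one-point crossover
                    if random.random() < 0.2:              # mutation: redraw one gene
                        i = random.randrange(len(TASKS))
                        child[i] = (random.choice(list(PROVIDERS)),
                                    random.choice(list(SLOTS)))
                    children.append(child)
                pop = survivors + children
            return min(pop, key=fitness)

        best = evolve()
        print(best, round(fitness(best), 2))

    Multiple user-defined optimization goals, as mentioned above, would be reflected by swapping or combining terms of the fitness function.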

    On the construction of decentralised service-oriented orchestration systems

    Modern science relies on workflow technology to capture, process, and analyse data obtained from scientific instruments. Scientific workflows are precise descriptions of experiments in which multiple computational tasks are coordinated based on the dataflows between them. Orchestrating scientific workflows presents a significant research challenge: they are typically executed in a manner such that all data pass through a centralised computer server known as the engine, which causes unnecessary network traffic that leads to a performance bottleneck. These workflows are commonly composed of services that perform computation over geographically distributed resources, and involve the management of dataflows between them. Centralised orchestration is clearly not a scalable approach for coordinating services dispersed across distant geographical locations. This thesis presents a scalable decentralised service-oriented orchestration system that relies on a high-level data coordination language for the specification and execution of workflows. This system’s architecture consists of distributed engines, each of which is responsible for executing part of the overall workflow. It exploits parallelism in the workflow by decomposing it into smaller sub-workflows, and determines the most appropriate engines to execute them using computation placement analysis. This permits the workflow logic to be distributed closer to the services providing the data for execution, which reduces the overall data transfer in the workflow and improves its execution time. This thesis provides an evaluation of the presented system, which concludes that decentralised orchestration provides scalability benefits over centralised orchestration and improves the overall performance of executing a service-oriented workflow.
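
    The placement idea, moving sub-workflow logic toward the services that produce the data, can be reduced to a small rule. The sketch below is only illustrative (the sites, service locations, output sizes, and sub-workflow split are invented): each sub-workflow is placed on the engine that minimizes the data it must pull across sites.

        # Hypothetical sites, service locations, and output sizes (MB).
        ENGINES = ["site_a", "site_b"]
        SERVICE_SITE = {"s1": "site_a", "s2": "site_a", "s3": "site_b"}
        OUTPUT_MB = {"s1": 500, "s2": 20, "s3": 300}
        SUBWORKFLOWS = {"sub1": ["s1", "s2"], "sub2": ["s3"]}

        def place(sub):
            # Choose the engine that minimizes data shipped in from remote services.
            def remote_traffic(engine):
                return sum(OUTPUT_MB[s] for s in SUBWORKFLOWS[sub]
                           if SERVICE_SITE[s] != engine)
            return min(ENGINES, key=remote_traffic)

        for sub in SUBWORKFLOWS:
            print(sub, "->", place(sub))  # sub1 -> site_a, sub2 -> site_b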

    Automated Negotiation Among Web Services

    Software as a Service (SaaS) is a well-accepted software deployment and distribution model that has grown exponentially in the last few years. One of the biggest benefits of SaaS is the automated composition of these services into a composite system. It allows users to automatically find and bind these services so as to maximize the productivity of their composed systems, meeting both functional and non-functional requirements. In this paper we present a framework for modeling the dependency relationships among the different Quality of Service (QoS) parameters of a component service. Our proposed approach considers the different invocation patterns of component services in the system and models the dependency relationships for optimum values of these QoS parameters. We present a service composition framework that models the dependency relationships among component services and uses the global QoS for service selection.
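
    How invocation patterns shape global QoS can be illustrated with the textbook aggregation rules; note these are the standard rules, not necessarily this paper's dependency model, and the composite structure and latencies below are invented.

        # Aggregating one QoS parameter (response time) over common invocation
        # patterns: a sequence sums, parallel takes the slowest branch, and a
        # loop multiplies by its iteration count.
        def response_time(node):
            kind = node[0]
            if kind == "service":              # ("service", rt_ms)
                return node[1]
            if kind == "seq":                  # ("seq", [children])
                return sum(response_time(c) for c in node[1])
            if kind == "par":                  # ("par", [children])
                return max(response_time(c) for c in node[1])
            if kind == "loop":                 # ("loop", child, iterations)
                return node[2] * response_time(node[1])
            raise ValueError(kind)

        composite = ("seq", [("service", 40),
                             ("par", [("service", 80), ("service", 65)]),
                             ("loop", ("service", 10), 3)])
        print(response_time(composite))        # 40 + 80 + 3*10 = 150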

    Supporting Collaboration in Mobile Environments

    Continued rapid improvements in the hardware capabilities of mobile computing devices are driving a parallel need for a paradigm shift in software design for such devices, with the aim of ushering in new classes of software applications for devices of the future. One such class of software application is collaborative applications, which seek to reduce the burden and overhead of collaboration on human users by providing automated computational support for the more mundane and mechanical aspects of a cooperative effort. This dissertation addresses the research and software engineering questions associated with building a workflow-based collaboration system that can operate across mobile ad hoc networks, the most dynamic type of mobile network, which can function without dependence on any fixed external resources. While workflow management systems have been implemented for stable wired networks, the transition to a mobile network required the development of a knowledge management system for improving the predictability of the network topology, a mobility-aware specification language to specify workflows, and accompanying algorithms that help automate key pieces of the software. In addition to details of the formulation, design, and implementation of the various algorithms and software components, this dissertation also describes the construction of a custom mobile workflow simulator that can be used to conduct simulation experiments that verify the effectiveness of the approaches presented in this document and beyond. Also presented are empirical results obtained using this simulator that show the effectiveness of the described approaches.

    Improving Usability And Scalability Of Big Data Workflows In The Cloud

    Big data workflows have recently emerged as the next generation of data-centric workflow technologies to address the five “V” challenges of big data: volume, variety, velocity, veracity, and value. More formally, a big data workflow is the computerized modeling and automation of a process consisting of a set of computational tasks and their data interdependencies, used to process and analyze data of ever-increasing scale, complexity, and rate of acquisition. The convergence of big data and workflows creates new challenges in the workflow community. First, the variety of big data results in a need for integrating a large number of remote Web services and other heterogeneous task components that can consume and produce data in various formats and models into a uniform and interoperable workflow. Existing approaches address the so-called shimming problem only in an ad hoc manner and are unable to provide a generic solution. We automatically insert pieces of code called shims or adaptors in order to resolve data type mismatches. Second, the volume of big data results in a large number of datasets that need to be queried and analyzed in an effective and personalized manner. Further, there is also a strong need for sharing, reusing, and repurposing existing tasks and workflows across different users and institutes. To overcome such limitations, we propose a folksonomy-based social workflow recommendation system to improve workflow design productivity and efficient dataset querying and analysis. Third, the volume of big data results in the need to process and analyze data of ever-increasing scale, complexity, and rate of acquisition, but a scalable distributed data model that abstracts and automates data distribution, parallelism, and scalable processing is still missing. We propose a NoSQL collectional data model that addresses this limitation. Finally, the volume of big data, combined with the unbounded resource leasing capability foreseen in the cloud, enables data scientists to wring actionable insights from the data in a time- and cost-efficient manner. We propose the BARENTS scheduler, which supports high-performance workflow scheduling in a heterogeneous cloud-computing environment with a single objective: to minimize the workflow makespan under a user-provided budget constraint.
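
    The makespan-under-budget objective can be illustrated with a toy greedy sketch; this is not the BARENTS algorithm, whose internals are not described here, and the VM catalogue and task workloads are invented. The rule shown simply upgrades the heaviest tasks to faster, costlier VM types while the budget holds.

        # Invented VM catalogue: (name, relative speed, cost per unit runtime),
        # plus per-task work amounts.
        VM_TYPES = [("small", 1.0, 1.0), ("medium", 2.0, 2.5), ("large", 4.0, 6.0)]
        WORK = {"t1": 8.0, "t2": 4.0, "t3": 6.0}

        def cost(choice):
            # Cost of a task = runtime on its VM (work / speed) * that VM's rate.
            return sum(WORK[t] / VM_TYPES[c][1] * VM_TYPES[c][2]
                       for t, c in choice.items())

        def plan(budget):
            choice = {t: 0 for t in WORK}           # everyone starts on "small"
            upgraded = True
            while upgraded:
                upgraded = False
                for t in sorted(WORK, key=WORK.get, reverse=True):
                    if choice[t] + 1 < len(VM_TYPES):
                        choice[t] += 1
                        if cost(choice) > budget:
                            choice[t] -= 1          # undo: upgrade breaks the budget
                        else:
                            upgraded = True
            # Serial makespan for simplicity; a real scheduler would respect the
            # task DAG and overlap independent tasks.
            makespan = sum(WORK[t] / VM_TYPES[c][1] for t, c in choice.items())
            return choice, round(cost(choice), 2), round(makespan, 2)

        print(plan(budget=30.0))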