
    Semantics-aware planning methodology for automatic web service composition

    Service-Oriented Computing (SOC) has been a major research topic in recent years. It is based on the idea of composing distributed applications, even in heterogeneous environments, by discovering and invoking network-available Web Services to accomplish complex tasks when no single existing service can satisfy the user request. Service-Oriented Architecture (SOA) is a key design principle that facilitates building these autonomous, platform-independent Web Services. However, in distributed environments, using services without considering their underlying semantics, whether functional semantics or quality guarantees, can negatively affect a composition process by causing intermittent failures or slow performance. More recently, Artificial Intelligence (AI) planning technologies have been exploited to facilitate automated composition, but most AI-planning-based algorithms do not scale well as the number of Web Services increases, and there is no guarantee that a solution to a composition problem will be found even if one exists. AI Planning Graph addresses various limitations of traditional AI planning by providing a unique search space in a directed layered graph. However, the existing AI Planning Graph algorithm focuses only on finding complete solutions, without taking into account other services that do not contribute to the goals. This can cause graph construction to fail when many services are available, even though most of them are irrelevant to the goals. This dissertation puts forward the concept of a more intelligent planning mechanism that combines semantics-aware service selection with a goal-directed planning algorithm. Based on this concept, a new planning system called Semantics Enhanced web service Mining (SEwsMining) has been developed. Semantics-aware service selection is achieved by calculating on-demand multi-attribute semantic similarity based on semantic annotations (QWSMO-Lite). The planning algorithm is a substantial revision of the AI GraphPlan algorithm; to reduce the size of the planning graph, a bi-directional planning strategy has been developed.
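    As a rough illustration of the goal-directed, bi-directional idea described in this abstract (and not the dissertation's actual SEwsMining algorithm), the sketch below filters a toy service registry backwards from the goals and then performs a forward, GraphPlan-style layered expansion. The semantic-similarity matching of the thesis is reduced to exact concept matching, and all service names and concepts are invented.

```python
# Simplified sketch: bi-directional relevance filtering plus forward, GraphPlan-style
# layered expansion. Services are modelled as (name, inputs, outputs) over flat
# concept sets; semantic matching is reduced to exact matching for brevity.

def backward_relevant(services, goals):
    """Keep only services that can (transitively) contribute to the goals."""
    needed, relevant, changed = set(goals), set(), True
    while changed:
        changed = False
        for name, inputs, outputs in services:
            if name not in relevant and outputs & needed:
                relevant.add(name)
                needed |= inputs          # their inputs become sub-goals
                changed = True
    return [s for s in services if s[0] in relevant]

def forward_plan_layers(services, initial, goals):
    """Expand layers of applicable services until the goals become reachable."""
    known, layers = set(initial), []
    while not goals <= known:
        layer = [s for s in services if s[1] <= known and not s[2] <= known]
        if not layer:
            return None                   # goals unreachable with these services
        layers.append([s[0] for s in layer])
        for _, _, outputs in layer:
            known |= outputs
    return layers

# hypothetical toy registry: (service, required inputs, produced outputs)
registry = [
    ("GeoCode",   {"address"}, {"coords"}),
    ("Weather",   {"coords"},  {"forecast"}),
    ("Unrelated", {"isbn"},    {"book"}),
]
relevant = backward_relevant(registry, {"forecast"})     # "Unrelated" is dropped
print(forward_plan_layers(relevant, {"address"}, {"forecast"}))
# -> [['GeoCode'], ['Weather']]
```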

    A survey of QoS-aware web service composition techniques

    Web service composition can be briefly described as the process of aggregating services with disparate functionalities into a new composite service in order to meet increasingly complex user needs. The composition process has been effective at dealing with services that have disparate functionalities; however, over the years the number of web services that exhibit similar functionalities but varying Quality of Service (QoS) has increased significantly. As such, the problem becomes how to select appropriate web services such that the QoS of the resulting composite service is maximized or, in some cases, minimized. This selection problem is NP-hard and therefore computationally difficult to solve. In this paper, we discuss the concepts of web service composition and present a holistic review of the service composition techniques currently proposed in the literature. Our review spans several publications in the field and can serve as a road map for future research.
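    To make the selection problem concrete, here is a minimal sketch (not taken from the survey) of QoS aggregation and brute-force selection for an assumed sequential workflow: response times add, availabilities multiply, and exhaustive search over one candidate per abstract task illustrates why the general problem is NP-hard and motivates the heuristic techniques the survey covers. The candidate names and numbers are invented.

```python
# QoS-aware selection over a sequential workflow: pick one candidate per task so
# that availability is maximized subject to a response-time budget. Brute force is
# exponential in the number of tasks, hence the need for heuristics at scale.
from itertools import product

# hypothetical candidates per abstract task: (name, response_time_ms, availability)
candidates = {
    "payment":  [("PayA", 120, 0.99), ("PayB", 80, 0.95)],
    "shipping": [("ShipA", 200, 0.98), ("ShipB", 150, 0.90)],
}

def aggregate(combo):
    rt = sum(c[1] for c in combo)                 # sequential: times add
    avail = 1.0
    for c in combo:
        avail *= c[2]                             # sequential: availabilities multiply
    return rt, avail

def best(candidates, max_rt):
    feasible = []
    for combo in product(*candidates.values()):   # exhaustive enumeration
        rt, avail = aggregate(combo)
        if rt <= max_rt:
            feasible.append((avail, -rt, [c[0] for c in combo]))
    return max(feasible)[2] if feasible else None

print(best(candidates, max_rt=320))   # -> ['PayA', 'ShipA']
```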

    Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    We propose a Markov decision process (MDP) model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach with both artificial and real data. The experimental results show the reliability of the model and of the methods employed, with policy iteration being the best in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experiments show that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds on an Intel Core i5 computer with 6 GB of RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, SARSA and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
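    For readers unfamiliar with the dynamic-programming machinery mentioned here, the following is a generic value-iteration sketch over a tiny, invented WSC-style MDP (states represent composition progress, actions represent invoking candidate services); it does not reproduce the paper's model, data, or reward design.

```python
# Value iteration on a toy MDP. transitions[state][action] is a list of
# (probability, next_state, reward) triples; "done" is terminal.
transitions = {
    "start": {"svcA": [(0.9, "half", 5), (0.1, "start", -1)],
              "svcB": [(1.0, "half", 3)]},
    "half":  {"svcC": [(0.8, "done", 10), (0.2, "half", -1)]},
    "done":  {},
}

def value_iteration(transitions, gamma=0.95, eps=1e-6):
    V = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:
                continue                                  # terminal state
            best = max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                       for outs in actions.values())
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            return V

V = value_iteration(transitions)
policy = {}
for s, actions in transitions.items():
    if actions:
        # greedy policy with respect to the converged value function
        policy[s] = max(actions, key=lambda a: sum(p * (r + 0.95 * V[s2])
                                                   for p, s2, r in actions[a]))
print(V, policy)
```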

    Context Verification and Adaptation in Web Service Composition

    Automatic web service composition (AWSC) aims at automating the design of an appropriate combination of existing web services to achieve a global goal. Most proposed AWSC approaches consider only input/output parameters and quality features of services. However, most real-world web services have applicability conditions and require constraints to be considered according to the execution context of composite services. Constraint verification has a significant impact on the composition and execution of composite services. In particular, constraint violations detected only at runtime can cause the execution of composite services to fail, wasting computational resources and possibly incurring monetary costs. In addition, traditional adaptation approaches for web service composition consider recovery only when a service becomes unavailable; they do not take into account changes and limitations in the service execution environment, which can potentially affect the execution of a wide range of services. Externally defined constraints are also likely to be introduced, or to become or cease to be applicable, after the composite service has been deployed. In this thesis, we propose a novel approach to model and verify different types of constraints inside composite services. We consider not only input/output parameters but also the values that can be assigned to parameters during the design and execution of composite services. In addition, we provide novel failure recovery and adaptation approaches for different types of constraints according to the execution context of composite services. In our solution, we develop a new structure, including alternative composite services, to recover broken composite services and adapt to external constraints. Finally, we propose a brokerage architecture that incorporates all of the proposed approaches for constraint-aware service composition and adaptation.
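    A minimal sketch of the constraint-verification idea (not the thesis's actual model): externally defined value constraints are checked against the execution context of a composite service before each constituent service is invoked, so violations surface before a costly runtime failure. The service, parameters, and limits below are hypothetical.

```python
# Checking value constraints on a composite-service step against the current
# execution context before invocation.
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Constraint:
    parameter: str
    check: Callable[[object], bool]
    message: str

@dataclass
class ServiceStep:
    name: str
    constraints: List[Constraint] = field(default_factory=list)

def verify_step(step: ServiceStep, context: Dict[str, object]) -> List[str]:
    """Return the constraint violations for one step given the current context."""
    violations = []
    for c in step.constraints:
        if c.parameter not in context or not c.check(context[c.parameter]):
            violations.append(f"{step.name}: {c.message}")
    return violations

# hypothetical composite-service step and execution context
shipping = ServiceStep("ShippingService", [
    Constraint("weight_kg", lambda w: w <= 30, "parcel exceeds 30 kg limit"),
    Constraint("country",   lambda c: c in {"CA", "US"}, "destination not served"),
])
print(verify_step(shipping, {"weight_kg": 42, "country": "CA"}))
# -> ['ShippingService: parcel exceeds 30 kg limit']
```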

    Full Solution Indexing and Efficient Compressed Graph Representation for Web Service Composition

    Service-oriented computing enhances business scalability and flexibility; providers hoping to benefit from it have driven explosive growth in the number of web services. Searching for an optimal composition solution that meets both functional and non-functional requirements is a computationally demanding problem: the time and space requirements may be infeasible due to the high number of available services. In this thesis, we study QoS-aware service composition problems that satisfy functional as well as non-functional requirements, and we use optimization techniques to enhance the accuracy of our search algorithms. In the first approach, we propose a database-based approach to searching for a service composition solution. Current in-memory methods are limited by expensive and volatile physical memory; to deal with this problem, we use the large space available in a relational database on persistent disk. In our database-based approach, all possible service combinations are generated beforehand and stored in a relational database. When a user request arrives, SQL queries are composed to search the database and the K best solutions are returned. We test the performance of the proposed approach with a service challenge data set; the experimental results demonstrate that this approach can always successfully find the top-K valid solutions. We offer three main contributions in this approach. First, we overcome the disadvantages of in-memory composition algorithms, such as volatility and cost, and provide a solution suitable for cloud environments. Second, we fetch the top-K solutions so that backup solutions are available to the user when the optimal solution is not. Third, compared with other pre-computing composition methods, we use a single SQL query: there is no need to eliminate spurious services iteratively. Next, we propose the application of a skyline operator to reduce the search space and improve scalability. Skyline analysis returns all of the elements that are not dominated by another element. We use skyline analysis to find a set of candidate services referred to as "skyline services", thereby discarding less competitive services. This allows us to find a solution for a large composition problem with less storage and at higher speed. In reality, different users may issue the same requests, which motivates us to pick some popular requests and pre-generate paths for fast delivery. These paths are stored in a separate table of the relational database. When a user request arrives, we first search for a nearly ready-made solution; only as a last resort do we search the table holding whole paths. Finally, to deal with the problem that the search space may explode, we apply a compressed data structure to represent the service composition graph. The goal is to allow algorithms to run in memory over larger graphs. In this approach, we use compact K2-trees to represent the service composition graph. When a user request arrives, we search the K2-tree for a satisfactory solution. We use an array to store the values in the last level of the compact tree, which represents the relationships between services and concepts. In our algorithms, we find services' inputs (resp. outputs) by locating elements in this array directly, so decompressing the graph is unnecessary. To the best of our knowledge, our work is the first attempt to use a compact structure in solving web service composition problems. Experimental results demonstrate that this approach takes less space and scales well when handling a large number of web services. We thus provide different approaches for searching for a solution: a user who wants an optimal solution with fewer services may use the database-based approach, while a user who needs a solution in a short time may choose the in-memory approach.
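    The skyline step described above can be illustrated with a short sketch: a functionally equivalent candidate is kept only if no other candidate is at least as good on every QoS attribute and strictly better on at least one. The sketch assumes lower values are better for every attribute and uses invented candidates; it is not the thesis's implementation.

```python
# Skyline filtering over functionally-equivalent candidate services.
def dominates(a, b):
    """True if QoS vector a dominates b (all attributes <=, at least one <)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def skyline(candidates):
    """candidates: {name: (response_time, cost, 1 - availability)}, lower is better."""
    return {name: qos for name, qos in candidates.items()
            if not any(dominates(other, qos)
                       for o, other in candidates.items() if o != name)}

# hypothetical functionally-equivalent candidates
print(skyline({
    "S1": (120, 0.05, 0.01),
    "S2": (150, 0.05, 0.01),   # dominated by S1, so discarded
    "S3": (90,  0.08, 0.02),   # faster but worse elsewhere, so kept
}))
```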

    Intelligent maintenance management in a reconfigurable manufacturing environment using multi-agent systems

    Thesis (M. Tech.) -- Central University of Technology, Free State, 2010.
    Traditional corrective maintenance is both costly and ineffective. In some situations it is more cost effective to replace a device than to maintain it; however, it is far more likely that the cost of the device far outweighs the cost of performing routine maintenance. These device-related costs, coupled with the profit loss due to reduced production levels, make this reactive maintenance approach unacceptably inefficient in many situations. Blind predictive maintenance, performed without considering the actual physical state of the hardware, is an improvement but is still far from ideal: simply maintaining devices on a schedule without taking into account their operational hours and workload can be a costly mistake. The inefficiencies associated with these approaches have contributed to the development of proactive maintenance strategies, which take the device health state into account and are therefore inherently more efficient than the aforementioned traditional approaches. Predicting the health degradation of devices makes it easier to anticipate the required maintenance resources and costs, and maintenance can be scheduled to accommodate production needs. This work presents the design and simulation of an intelligent maintenance management system that combines device health prognosis with maintenance schedule generation. The simulation scenario provided prognostic data used to schedule devices for maintenance. A production rule engine was provided with a feasible starting schedule, which was then improved by adhering to a set of criteria. Benchmarks were conducted to show the benefit of optimising the starting schedule, and the results are presented as evidence. Improving on existing maintenance approaches yields several benefits for an organisation: eliminating the need to address unexpected failures or perform maintenance prematurely ensures that the relevant resources are available when they are required, which in turn reduces the expenditure related to wasted maintenance resources without compromising the health of devices or systems in the organisation.
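    Purely as an illustration of improving a feasible starting schedule against a cost criterion, the sketch below applies simple hill climbing to invented remaining-useful-life prognoses; the thesis itself uses a production rule engine within a multi-agent simulation, which is not reproduced here.

```python
# Local-search improvement of a maintenance schedule given prognostic data.
import itertools

# hypothetical remaining-useful-life prognosis (in days) per device
rul = {"pump": 4, "conveyor": 9, "robot_arm": 2}

def cost(schedule):
    """Penalise maintaining after predicted failure, mildly penalise wasted life."""
    total = 0.0
    for device, day in schedule.items():
        slack = rul[device] - day
        total += 100 if slack < 0 else slack   # late is very costly, early wastes life
    return total

def improve(schedule, days=range(1, 11)):
    """Greedy hill climbing: move one device to a better day until no move helps."""
    best = dict(schedule)
    improved = True
    while improved:
        improved = False
        for device, day in itertools.product(best, days):
            if day in best.values() and best.get(device) != day:
                continue                       # at most one maintenance job per day
            trial = {**best, device: day}
            if cost(trial) < cost(best):
                best, improved = trial, True
    return best

start = {"pump": 7, "conveyor": 1, "robot_arm": 9}   # feasible but poor
better = improve(start)
print(better, cost(better))
```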

    Autonomic Business Processes

    Business processes in large organisations are typically poorly understood and complex in structure. Adapting such a business process to changing internal and external conditions requires costly and time-consuming investigative work and change management. In contrast, autonomic systems are able to adapt to changing environments and continue to function without external intervention. Enabling business processes to adapt to changing conditions in the same way would be extremely valuable. This work investigates the potential to self-heal individual business process executions in generic business processes. Classical and immune-inspired classification algorithms are tested for their predictive utility, with Decision Trees augmented with MetaCost and Immunos 99 exhibiting the best performance, respectively. An approach to deriving recovery strategies from historical process data in the absence of a process model is presented and tested for suitability. Also presented is an approach to selecting the best of the determined recovery strategies for application to a business process execution, which is then tested to determine the impact of its parameters on the quality of the selected recoveries.
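    The cost-sensitive flavour of the classification mentioned above (MetaCost-style) boils down to predicting the class with the lowest expected misclassification cost rather than the highest probability. The sketch below shows only that decision rule, with an invented cost matrix and probabilities; it is not the thesis's experimental setup.

```python
# Cost-sensitive decision rule: choose the class minimising expected cost.
# cost[(i, j)] = cost of predicting class i when the true class is j
COST = {
    ("ok", "ok"): 0.0,   ("ok", "fail"): 10.0,    # missing a failing execution is expensive
    ("fail", "ok"): 1.0, ("fail", "fail"): 0.0,   # a false alarm is cheap
}

def min_cost_class(probabilities):
    """probabilities: {true_class: P(true_class | instance)} from any base classifier."""
    def expected_cost(predicted):
        return sum(COST[(predicted, true)] * p for true, p in probabilities.items())
    return min(("ok", "fail"), key=expected_cost)

# An execution that is probably fine is still flagged, because the asymmetric
# costs make even a small failure risk too expensive to ignore.
print(min_cost_class({"ok": 0.8, "fail": 0.2}))   # -> 'fail'
```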

    Development of a Response Planner Using the UCT Algorithm for Cyber Defense

    The need to respond quickly to cyber attacks is a prevalent problem for computer network operators today: there is only a small window in which to respond to an attack before it causes significant damage to a computer network. Automated response planners offer one solution to this issue. This work presents the Network Defense Planner System (NDPS), a planner that depends on effective detection of the cyber attack. This research first explores making the classification of network attacks faster for real-time detection, the basic function an Intrusion Detection System (IDS) provides. After identifying the type of attack, learning the rewards to use in NDPS is the second focus of this research. For NDPS to assemble the optimal plan, learning the rewards for the resulting network states is critical and often depends on the preferences of the network operator. Using neural networks, this part of the research demonstrates that capturing those preferences through samples is feasible; after training, the neural network provides a model from which reward estimates can be obtained. The work in these two areas complements the final portion of the research, which is assembling the optimal plan using the Upper Confidence bounds applied to Trees (UCT) algorithm. NDPS uses UCT to formulate plans quickly by searching through predicted network states based on the available network actions. UCT can create a plan quickly and is guaranteed to find the optimal plan, according to the rewards used, if enough time is allotted. NDPS is tested against eight random attack scenarios; for each, the plan is polled at specific time intervals to test how quickly the optimal plan can be formulated. The results demonstrate the feasibility of using NDPS in real-world scenarios, since the optimal plans for each attack type can be formulated in real time, allowing a rapid system response.
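    As a hedged illustration of the UCT algorithm referred to here (UCB1 selection, random rollout, backpropagation), the sketch below runs on a toy, invented "remove the threats" state space; NDPS's actual network states, response actions, and neural-network reward model are not reproduced.

```python
# Generic UCT on a toy cyber-response state space.
import math, random

ACTIONS = ["block_ip", "patch_host", "isolate_subnet"]

def step(state, action):
    """Toy deterministic transition: each action removes the matching threat."""
    return frozenset(t for t in state if t != action)

def reward(state):
    return 1.0 - len(state) / 3.0          # fewer remaining threats = higher reward

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children = {}                  # action -> Node
        self.visits, self.value = 0, 0.0

def uct_search(root_state, iterations=2000, horizon=3, c=1.4):
    root = Node(frozenset(root_state))
    for _ in range(iterations):
        node, depth = root, 0
        # selection / expansion
        while depth < horizon:
            untried = [a for a in ACTIONS if a not in node.children]
            if untried:
                a = random.choice(untried)
                node.children[a] = Node(step(node.state, a), node)
                node, depth = node.children[a], depth + 1
                break
            a = max(node.children,
                    key=lambda a: node.children[a].value / node.children[a].visits
                    + c * math.sqrt(math.log(node.visits) / node.children[a].visits))
            node, depth = node.children[a], depth + 1
        # random rollout to the horizon
        state = node.state
        for _ in range(horizon - depth):
            state = step(state, random.choice(ACTIONS))
        r = reward(state)
        # backpropagation
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    return max(root.children, key=lambda a: root.children[a].visits)

print(uct_search({"block_ip", "patch_host", "isolate_subnet"}))
```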

    Repairing web service compositions based on planning graph

    With the increasing acceptance of service-oriented computing, a growing area of study is how to reuse loosely coupled Web services, distributed throughout the Internet, to fulfill business goals in an automated fashion. When the goals cannot be satisfied by a single Web service, a chain of Web services can work together as a "composition" to satisfy the need. The problem of finding composition plans that satisfy given requests is referred to as the Web service composition problem. In recent years, many studies have been carried out in this area and various approaches have been proposed. However, most existing proposals adopt a static viewpoint on Web service composition, while in the real world change is the rule rather than the exception: Web services may appear and disappear at any time in an unpredictable way, so valid composition plans may suddenly become invalid due to changes in the business environment. In this thesis, techniques are proposed to support repairing an existing plan in reaction to such environment changes. Repair approaches are compared to re-planning approaches, with particular attention to the time and quality of both. It is argued that the approach advocated in this thesis is a viable solution for improving the adaptation of automated Web service composition processes in the real world.
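    A small sketch of the repair-versus-replan idea, under assumptions not taken from the thesis: when a service in a composition disappears, first try to splice in a substitute whose inputs are already produced and whose outputs cover what the broken service contributed, and fall back to full re-planning only if no substitute exists. The registry and service names are hypothetical.

```python
# Local repair of a sequential composition plan when one service disappears.
# Services are (name, inputs, outputs) over flat concept sets.
REGISTRY = [
    ("GeoCodeA", {"address"}, {"coords"}),
    ("GeoCodeB", {"address"}, {"coords", "timezone"}),
    ("Weather",  {"coords"},  {"forecast"}),
]

def repair(plan, broken, registry, initial_inputs):
    """Return a repaired plan, or None to signal that re-planning is required."""
    available = set(initial_inputs)
    repaired = []
    for name, inputs, outputs in plan:
        if name == broken:
            substitute = next((s for s in registry
                               if s[0] != broken and s[1] <= available and outputs <= s[2]),
                              None)
            if substitute is None:
                return None                     # no local fix: re-plan from scratch
            name, inputs, outputs = substitute
        if not inputs <= available:
            return None                         # data flow no longer holds
        repaired.append((name, inputs, outputs))
        available |= outputs
    return repaired

plan = [("GeoCodeA", {"address"}, {"coords"}), ("Weather", {"coords"}, {"forecast"})]
print([s[0] for s in repair(plan, "GeoCodeA", REGISTRY, {"address"})])
# -> ['GeoCodeB', 'Weather']
```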

    Hybrid Mission Planning with Coalition Formation
