    Full Solution Indexing and Efficient Compressed Graph Representation for Web Service Composition

    Service-oriented computing enhances business scalability and flexibility, and the providers hoping to benefit from it are driving explosive growth in the number of web services. Searching for an optimal composition solution that satisfies both functional and non-functional requirements is computationally demanding: the time and space requirements may become infeasible given the large number of available services. In this thesis, we study QoS-aware service composition problems that satisfy functional as well as non-functional requirements, and we use optimization techniques to improve the accuracy of our search algorithms. In the first approach, we propose a database-based method for finding a service composition solution. Current in-memory methods are limited by expensive and volatile physical memory; to address this, we exploit the large space available in a relational database on persistent disk. All possible service combinations are generated beforehand and stored in the database. When a user request arrives, SQL queries search the database and the K best solutions are returned. We evaluate the approach on a service challenge data set; the experimental results demonstrate that it consistently finds the top-K valid solutions. This approach offers three main contributions. First, it avoids the disadvantages of in-memory composition algorithms, such as volatility and cost, and provides a solution suitable for cloud environments. Second, it returns the top-K solutions, giving the user backup solutions when the optimal one is unavailable. Third, compared with other pre-computing composition methods, it uses a single SQL query, so there is no need to eliminate spurious services iteratively. Next, we apply a skyline operator to reduce the search space and improve scalability. Skyline analysis returns all elements that are not dominated by any other element; we use it to find a set of candidate services referred to as "skyline services", so that less competitive services are pruned. This allows us to solve large composition problems with less storage and greater speed. In practice, different users may issue the same requests, which motivates us to pick popular requests and pre-generate paths for fast delivery. These paths are stored in a separate table of the relational database. When a user request arrives, we first search for a nearly ready-made solution, and only as a last resort do we search the table containing whole paths. Finally, to cope with the possible explosion of the search space, we apply a compressed data structure to represent the service composition graph, allowing the algorithms to run in memory over larger graphs. In this approach, we use compact K2-trees to represent the service composition graph; when a user request arrives, we search the K2-tree for a satisfactory solution. An array stores the values at the last level of the compact tree, which represent the relationships between services and concepts. Our algorithms find a service's inputs (resp. outputs) by locating elements in this array directly, so decompressing the graph is unnecessary. To the best of our knowledge, our work is the first attempt to apply a compact data structure to web service composition. The experimental results demonstrate that this approach takes less space and scales well to large numbers of web services. We thus provide different ways to search for a solution: a user who wants an optimal solution with fewer services may use the database-based approach, while a user who wants a solution quickly may choose the in-memory approach.
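    As a rough illustration of the skyline step described above, the sketch below prunes services that are dominated on every QoS attribute. The attribute names (response time, cost, availability) and the example values are assumptions for illustration, not taken from the thesis.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Service:
    name: str
    response_time: float  # lower is better
    cost: float           # lower is better
    availability: float   # higher is better

def dominates(a: Service, b: Service) -> bool:
    """True if a is at least as good as b on every attribute and strictly better on one."""
    at_least_as_good = (a.response_time <= b.response_time and
                        a.cost <= b.cost and
                        a.availability >= b.availability)
    strictly_better = (a.response_time < b.response_time or
                       a.cost < b.cost or
                       a.availability > b.availability)
    return at_least_as_good and strictly_better

def skyline(services: List[Service]) -> List[Service]:
    """Keep only the services not dominated by any other service."""
    return [s for s in services
            if not any(dominates(other, s) for other in services if other is not s)]

# Illustrative candidates only.
candidates = [
    Service("s1", 120.0, 0.05, 0.99),
    Service("s2", 200.0, 0.08, 0.95),  # dominated by s1
    Service("s3", 90.0, 0.10, 0.97),
]
print([s.name for s in skyline(candidates)])  # ['s1', 's3']
```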

    Composition of Web Services Using Markov Decision Processes and Dynamic Programming

    We propose a Markov decision process model for solving the Web service composition (WSC) problem. Iterative policy evaluation, value iteration, and policy iteration algorithms are used to experimentally validate our approach with artificial and real data. The experimental results show the reliability of the model and the methods employed, with policy iteration being the best in terms of the minimum number of iterations needed to estimate an optimal policy with the highest Quality of Service attributes. Our experiments show that a WSC problem involving a set of 100,000 individual Web services, where a valid composition requires the selection of 1,000 services from the available set, can be solved in the worst case in less than 200 seconds using an Intel Core i5 computer with 6 GB RAM. Moreover, a real WSC problem involving only 7 individual Web services requires less than 0.08 seconds using the same computational power. Finally, a comparison with two popular reinforcement learning algorithms, Sarsa and Q-learning, shows that these algorithms require one to two orders of magnitude more time than policy iteration, iterative policy evaluation, and value iteration to handle WSC problems of the same complexity.
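    The following is a minimal value-iteration sketch over a toy MDP, included only to illustrate the dynamic-programming idea the paper builds on; the states, services, transition probabilities and rewards are invented and are not the paper's model or data.

```python
# Toy MDP: states are composition stages, actions are candidate services,
# rewards stand in for aggregated QoS. All names and numbers are illustrative.

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "start": {
        "svc_a": [(1.0, "mid", 0.7)],
        "svc_b": [(1.0, "mid", 0.4)],
    },
    "mid": {
        "svc_c": [(0.9, "done", 1.0), (0.1, "start", 0.0)],
        "svc_d": [(1.0, "done", 0.6)],
    },
    "done": {},  # terminal: composition complete
}

def value_iteration(transitions, gamma=0.95, tol=1e-6):
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for state, actions in transitions.items():
            if not actions:          # terminal state keeps value 0
                continue
            best = max(
                sum(p * (r + gamma * values[nxt]) for p, nxt, r in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[state]))
            values[state] = best
        if delta < tol:
            return values

def greedy_policy(transitions, values, gamma=0.95):
    policy = {}
    for state, actions in transitions.items():
        if actions:
            policy[state] = max(
                actions,
                key=lambda a: sum(p * (r + gamma * values[n]) for p, n, r in actions[a]),
            )
    return policy

values = value_iteration(transitions)
print(greedy_policy(transitions, values))  # {'start': 'svc_a', 'mid': 'svc_c'}
```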

    Context Verification and Adaptation in Web Service Composition

    Automatic web-service composition (AWSC) aims to automate the design of an appropriate combination of existing web services that achieves a global goal. Most proposed AWSC approaches consider only the input/output parameters and quality features of services. However, most real-world web services have applicability conditions and require constraints to be considered according to the execution context of composite services. Constraint verification has a significant impact on the composition and execution of composite services; in particular, runtime verification of service constraints can cause the execution of a composite service to fail, wasting computational resources and possibly incurring monetary costs. In addition, traditional adaptation approaches for web service composition consider recovery only when a service becomes unavailable. They do not take into account changes and limitations in the service execution environment, which can potentially affect the execution of a wide range of services. Externally defined constraints are also likely to be introduced, or to become or cease to be applicable, after the composite service has been deployed. In this thesis, we propose a novel approach to model and verify different types of constraints inside composite services. We consider not only input/output parameters but also the values that can be assigned to parameters during the design and execution of composite services. In addition, we provide novel failure recovery and adaptation approaches for different types of constraints according to the execution context of composite services. In our solution, we develop a new structure of alternative composite services to recover broken composite services and adapt to external constraints. Finally, we propose a brokerage architecture that incorporates all of the proposed approaches for constraint-aware service composition and adaptation.
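    As a loose illustration of design-time constraint checking, the sketch below validates parameter-value constraints attached to the services of a composition against an execution context before the composition runs. The constraint representation, service names and values are hypothetical, not the thesis's actual model.

```python
# Illustrative check of parameter-value constraints attached to services in a
# composition; the constraint format and values are invented for this example.

composite_service = [
    {"name": "CurrencyConverter", "constraints": {"amount": lambda v: v > 0}},
    {"name": "PaymentGateway",    "constraints": {"currency": lambda v: v in {"USD", "EUR"},
                                                  "amount":   lambda v: v <= 10_000}},
]

def verify(composition, context):
    """Return (service, parameter) pairs whose constraints fail for the given
    execution context, so failures are caught before runtime."""
    violations = []
    for step in composition:
        for param, check in step["constraints"].items():
            if param in context and not check(context[param]):
                violations.append((step["name"], param))
    return violations

print(verify(composite_service, {"amount": 25_000, "currency": "USD"}))
# [('PaymentGateway', 'amount')]
```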

    Towards personalised and adaptive QoS assessments via context awareness

    Quality of Service (QoS) properties play an important role in distinguishing between functionally-equivalent services and accommodating the different expectations of users. However, the subjective nature of some properties and the dynamic and unreliable nature of service environments may result in cases where the quality values advertised by the service provider are either missing or untrustworthy. To tackle this, a number of QoS estimation approaches have been proposed, utilising the observation history available on a service to predict its performance. Although the context underlying such previous observations (and corresponding to both user and service related factors) could provide an important source of information for the QoS estimation process, it has only been utilised to a limited extent by existing approaches. In response, we propose a context-aware quality learning model, realised via a learning-enabled service agent, exploiting the contextual characteristics of the domain in order to provide more personalised, accurate and relevant quality estimations for the situation at hand. The experiments conducted demonstrate the effectiveness of the proposed approach, showing promising results (in terms of prediction accuracy) in different types of changing service environments.
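    A toy sketch of the general idea of context-aware QoS estimation follows: past observations are weighted by how similar their context is to the current one. The context fields, similarity measure and numbers are assumptions for illustration, not the learning model proposed in the paper.

```python
# Toy context-weighted QoS estimate: observations made under contexts similar
# to the current one count more. Fields and similarity measure are illustrative.

observations = [
    {"context": {"region": "EU", "hour": 9},  "response_ms": 120},
    {"context": {"region": "EU", "hour": 22}, "response_ms": 310},
    {"context": {"region": "US", "hour": 9},  "response_ms": 480},
]

def similarity(ctx_a, ctx_b):
    """Fraction of matching context attributes (deliberately crude)."""
    keys = ctx_a.keys() & ctx_b.keys()
    if not keys:
        return 0.0
    return sum(ctx_a[k] == ctx_b[k] for k in keys) / len(keys)

def estimate_qos(observations, current_context):
    weights = [similarity(o["context"], current_context) for o in observations]
    if sum(weights) == 0:
        return sum(o["response_ms"] for o in observations) / len(observations)
    return sum(w * o["response_ms"] for w, o in zip(weights, observations)) / sum(weights)

print(estimate_qos(observations, {"region": "EU", "hour": 9}))
```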

    A service composition platform in cloud computing using mobile devices for smart shopping

    The development of Next Generation Networks (NGN) such as LTE, WiMAX and 5G has resulted in the development of more diverse mobile services. Many voice and video services (e.g. Viber, Skype and WhatsApp) and social networking sites (e.g. Facebook, Instagram and Twitter) have been developed. Users of these services increasingly expect and demand more complex services with more capabilities that can improve their day-to-day business; they want services that are reliable, fast and easy to use. To design and implement services effectively, Service Oriented Architecture (SOA) principles are useful. Some of the advantages of designing services using SOA principles are:
    • improved interoperability;
    • cross-platform and cross-application integration;
    • reusability;
    • service composition.
    Service composition has the advantage that customized services with more features can be developed by combining two or more basic services. In this research, SOA principles are used to design a cloud-based Mobile Smart Shopping Service Platform. Canal Walk Shopping Mall, located in Cape Town, South Africa, is used as a case study. Various mobile services are composed in order to solve the problem of obtaining information about the services provided by the shopping mall and to show the available parking bays, which has become a major concern due to the rapid growth of the surrounding residential and business areas. Performance measurements of the Smart Shopping service are then conducted to test its power consumption, memory usage, bandwidth usage and application timeline. Conclusions are drawn and recommendations for possible future development are provided.

    Semantic search and composition in unstructured peer-to-peer networks

    This dissertation focuses on several research questions in the area of semantic search and composition in unstructured peer-to-peer (P2P) networks. Going beyond the state of the art, the proposed semantic-based search strategy S2P2P offers a novel path-suggestion based query routing mechanism, providing a reasonable tradeoff between search performance and network traffic overhead. In addition, the first semantic-based data replication scheme, DSDR, is proposed. It enables peers to use semantic information to select replica numbers and target peers to address predicted future demands. With DSDR, k-random search achieves better precision and recall than it does with a near-optimal non-semantic replication strategy. Further, this thesis introduces a functional automatic semantic service composition method, SPSC. Distinctively, it enables peers to jointly compose complex workflows with high cumulative recall but low network traffic overhead, using heuristic-based bidirectional chaining and service memorization mechanisms. Its query branching method helps to handle dead ends in a pruned search space. SPSC is proved to be sound, and a lower bound on its completeness is given. Finally, this thesis presents iRep3D for semantic-index based 3D scene selection in P2P search. Its efficient retrieval scales to answer hybrid queries involving conceptual, functional and geometric aspects, and it outperforms previous representative efforts in terms of search precision and efficiency.
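    As a simplified, centralised illustration of the chaining idea behind SPSC, the sketch below forward-chains services from the available input concepts toward the goal outputs; the service descriptions are invented, and the real method is bidirectional, heuristic and distributed over peers, which this toy omits.

```python
# Simplified, centralised sketch of chaining services from available inputs
# toward goal outputs. Service descriptions are invented for illustration.

services = {
    "geocode":   ({"address"}, {"coordinates"}),
    "weather":   ({"coordinates"}, {"forecast"}),
    "translate": ({"forecast", "language"}, {"localized_forecast"}),
}

def forward_chain(available, goal, services):
    """Greedily add any service whose inputs are already available,
    until the goal concepts are covered or no progress is possible."""
    plan, known = [], set(available)
    progress = True
    while progress and not goal <= known:
        progress = False
        for name, (inputs, outputs) in services.items():
            if name not in plan and inputs <= known:
                plan.append(name)
                known |= outputs
                progress = True
    return plan if goal <= known else None

print(forward_chain({"address", "language"}, {"localized_forecast"}, services))
# ['geocode', 'weather', 'translate']
```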

    Global Sensor Web Coordination and Control Using Multi-agent Systems
