474 research outputs found

    Generating mock skeletons for lightweight Web service testing: a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Manawatū, New Zealand

    Modern application development allows applications to be composed from lightweight HTTP services. Testing such an application requires the availability of the services it makes requests to. However, continued access to dependent services during testing may be restricted, making adequate testing a significant and non-trivial engineering challenge. The concept of service virtualisation is gaining popularity for testing such applications in isolation. It is the practice of simulating the behaviour of dependent services by synthesising responses using semantic models inferred from recorded traffic. Replacing services with their respective mocks is therefore useful for addressing their absence and proceeding with application testing. In reality, however, it is unlikely that fully automated service virtualisation solutions can produce highly accurate proxies. We therefore recommend using service virtualisation to infer some attributes of HTTP service responses, and we acknowledge that engineers often want to fine-tune the inferred models. This requires algorithms that produce readily interpretable and customisable results. We assume that if service virtualisation is based on simple logical rules, engineers will be able to understand and customise those rules. In this regard, symbolic machine learning approaches are worth investigating because of the clear provenance of their results. Accordingly, this thesis examines the suitability of symbolic machine learning algorithms for automatically synthesising mock skeletons of HTTP services from network traffic recordings. We consider four commonly used symbolic techniques: the C4.5 decision tree algorithm, the RIPPER and PART rule learners, and the OCEL description logic learning algorithm. The experiments employ network traffic datasets extracted from several successful, large-scale HTTP services, and the experimental design further focuses on generating reproducible results. The chosen algorithms prove suitable for training highly accurate, human-readable semantic models that predict key aspects of HTTP service responses, such as the status code and response headers. Human-readable logic makes the response properties easier to interpret. The resulting mock skeletons can then be easily customised to create mocks that generate service responses suitable for testing.
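    The thesis itself evaluates Weka-style symbolic learners (C4.5, RIPPER, PART) and the OCEL description logic learner; as a rough illustration of the general idea only, the hedged sketch below uses scikit-learn's DecisionTreeClassifier as a stand-in for C4.5 to learn human-readable rules that predict a response status code from features of recorded requests. The feature names and toy traffic records are invented for illustration.

```python
# Minimal sketch: learn a rule-like model that predicts an HTTP response
# attribute (here, the status code) from features of recorded requests.
# DecisionTreeClassifier stands in for C4.5; features/records are invented.
from sklearn.feature_extraction import DictVectorizer
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical recorded traffic: request features paired with observed status.
recorded_traffic = [
    ({"method": "GET",    "path_template": "/users/{id}", "has_auth": 1}, 200),
    ({"method": "GET",    "path_template": "/users/{id}", "has_auth": 0}, 401),
    ({"method": "POST",   "path_template": "/users",      "has_auth": 1}, 201),
    ({"method": "DELETE", "path_template": "/users/{id}", "has_auth": 1}, 204),
    ({"method": "DELETE", "path_template": "/users/{id}", "has_auth": 0}, 401),
]

features, statuses = zip(*recorded_traffic)
vectorizer = DictVectorizer(sparse=False)
X = vectorizer.fit_transform(features)

# A shallow tree keeps the learned model human-readable, which is the point
# of using symbolic learners for mock skeletons.
model = DecisionTreeClassifier(max_depth=3).fit(X, statuses)
print(export_text(model, feature_names=vectorizer.get_feature_names_out().tolist()))

# The printed rules can be inspected and hand-tuned before being wired into a mock.
new_request = vectorizer.transform([{"method": "GET", "path_template": "/users/{id}", "has_auth": 0}])
print("predicted status:", model.predict(new_request)[0])
```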

    Towards automated composition of convergent services: a survey

    A convergent service is a service that exploits the convergence of communication networks while taking advantage of features of the Web. Building a convergent service is currently not trivial: although there are significant approaches that aim to automate service composition at different levels in the Web and Telecom domains, selecting the most appropriate approach for a specific case study is complex because of the large amount of information involved and the lack of technical guidance. In this paper, we therefore identify the relevant phases of convergent service composition and explore the existing approaches, and their associated technologies, for automating each phase. For each technology, we analyse its maturity and results, as well as the elements that must be considered before applying it in real scenarios. Furthermore, we provide research directions related to the convergent service composition phases.

    Semantic Web Enabled Software Engineering

    Ontologies allow domain knowledge to be captured and shared by formalizing information and making it machine-understandable. As part of an information system, ontologies can capture and carry the reasoning knowledge needed to fulfill different application goals. Although many ontologies have been developed in recent years, few include such reasoning information. As a result, many ontologies are not used in real-life applications, are not reused, or act only as a taxonomy of a domain. This work investigates the practical use of ontologies as a driving factor in application development and the incorporation of Knowledge Engineering as a meaningful activity in modern agile software development. The thesis contributes a novel methodology that supports incremental requirement analysis and iterative formalization of ontology design through the use of ontology reasoning patterns. It also provides an application model for ontology-driven applications that can deal with non-ontological data sources. A set of case studies with various application-specific goals helps to elucidate whether ontologies are in fact suitable for more than simple knowledge formalization and sharing, and can act as the underlying structure for developing large-scale information systems. Tasks from the areas of bug-tracker quality mining and clone detection are evaluated for this purpose.
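    As a rough illustration of how an ontology-driven application might absorb a non-ontological data source, the sketch below (using the owlready2 library) lifts a plain bug-tracker record into ontology individuals that a reasoner could then classify. The ontology IRI, class and property names, and the record format are invented; this is not the application model proposed in the thesis.

```python
# Minimal sketch: map a non-ontological record (a bug-tracker row) into
# ontology individuals so application code can query the knowledge base
# uniformly. Names and IRIs are illustrative only.
from owlready2 import get_ontology, Thing, DataProperty, FunctionalProperty

onto = get_ontology("http://example.org/bugtracker.owl")

with onto:
    class Issue(Thing): pass
    class has_priority(DataProperty, FunctionalProperty):
        domain = [Issue]
        range = [str]

# A non-ontological source, e.g. a row fetched from a bug tracker's REST API.
raw_record = {"id": "BUG-42", "priority": "high"}

# Lift the record into the ontology as an individual.
issue = Issue(raw_record["id"].replace("-", "_"))
issue.has_priority = raw_record["priority"]

# Application code now queries the ontology regardless of the data's origin.
for ind in Issue.instances():
    print(ind.name, ind.has_priority)
# (A DL reasoner, e.g. owlready2's sync_reasoner(), could further classify
#  individuals against defined classes such as a hypothetical HighPriorityIssue.)
```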

    Engineering adaptive web applications

    [no abstract]

    Web Engineering for Workflow-based Applications: Models, Systems and Methodologies

    This dissertation presents novel solutions for the construction of workflow-based Web applications: the Web Engineering DSL Framework, a stakeholder-oriented Web Engineering methodology based on Domain-Specific Languages; the Workflow DSL for the efficient engineering of Web-based workflows with strong stakeholder involvement; the Dialog DSL for the usability-oriented development of advanced Web-based dialogs; and the Web Engineering Reuse Sphere, enabling holistic, stakeholder-oriented reuse.

    Thinking outside the TBox: multiparty service matchmaking as information retrieval

    Service-oriented computing is crucial to a large and growing number of computational undertakings. Central to its approach are open, network-accessible services provided by many different organisations, which in turn enable the easy creation of composite workflows. This leads to an environment containing many thousands of services, in which a programmer or automated composition system must discover and select services appropriate for the task at hand. This discovery and selection process is known as matchmaking. Prior work in the field has framed the problem as one of sufficiently describing individual services using formal, symbolic knowledge representation languages. We review this prior work and argue that it is optimistic to assume this approach will be adequate by itself. With these issues in mind, we examine how reformulating the task and giving the matchmaker a record of prior service performance can alleviate some of the problems. Using two formalisms, the incidence calculus and the lightweight coordination calculus, along with algorithms inspired by information retrieval techniques, we evolve a series of simple matchmaking agents that learn from experience to select services that performed well in the past, while making minimal demands on service users. We extend this mechanism to the overlooked case of matchmaking in workflows that use multiple services, selecting groups of services known to inter-operate well. We examine the performance of such matchmakers in possible future service environments and discuss issues in applying these techniques in large-scale deployments.
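    To make the experience-based idea concrete, here is a minimal, hedged sketch of a matchmaker that ranks candidate services by a smoothed record of past success for a given task. The thesis builds its matchmakers on the incidence calculus and the lightweight coordination calculus; the simple Laplace-smoothed scoring and the service names below are stand-ins invented for illustration.

```python
# Minimal sketch of experience-based matchmaking: rank candidate services for a
# task by their smoothed historical success rate, rather than by symbolic
# service descriptions alone. Scoring scheme and names are illustrative only.
from collections import defaultdict

class Matchmaker:
    def __init__(self):
        # (task, service) -> [successes, attempts]
        self.history = defaultdict(lambda: [0, 0])

    def record(self, task, service, success):
        stats = self.history[(task, service)]
        stats[0] += int(success)
        stats[1] += 1

    def score(self, task, service):
        successes, attempts = self.history[(task, service)]
        # Laplace smoothing so unseen services are not ruled out entirely.
        return (successes + 1) / (attempts + 2)

    def select(self, task, candidates):
        # Pick the candidate with the best smoothed track record for this task.
        return max(candidates, key=lambda s: self.score(task, s))

mm = Matchmaker()
for outcome in (True, True, False):
    mm.record("geocode", "serviceA", outcome)
mm.record("geocode", "serviceB", True)

print(mm.select("geocode", ["serviceA", "serviceB", "serviceC"]))
```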

    Logic-based Technologies for Multi-agent Systems: A Systematic Literature Review

    Precisely when the success of sub-symbolic artificial intelligence (AI) techniques leads many non-computer scientists and non-technical media to identify them with AI as a whole, symbolic approaches are attracting renewed attention as those that could make AI amenable to human understanding. Given the recurring cycles in AI history, we expect a revival of technologies often tagged as “classical AI”, in particular logic-based ones, in the next few years. At the same time, agents and multi-agent systems (MAS) have been at the core of the design of intelligent systems since their very beginning, and their long-standing connection with logic-based technologies, which characterised their early days, might open new ways to engineer explainable intelligent systems. Understanding the current status of logic-based technologies for MAS is therefore of paramount importance. Accordingly, this paper aims to provide a comprehensive view of those technologies by making them the subject of a systematic literature review (SLR). The resulting technologies are discussed and evaluated from two different perspectives: the MAS perspective and the logic-based one.

    Organization based multiagent architecture for distributed environments

    [EN] Distributed environments represent a complex field in which solutions must be flexible and highly adaptable. These environments typically involve problems where multiple users and devices interact, and where simple, local solutions may produce good results but are less effective in terms of usability and interaction. Many techniques can be employed to tackle this kind of problem, from CORBA to multi-agent systems, including web services and SOA, among others. All of these approaches have advantages and disadvantages, which are analyzed in this document before the new architecture is presented as a solution for problems arising in distributed environments. The new architecture presented here is called OBaMADE: Organization Based Multiagent Architecture for Distributed Environments. It is a multiagent architecture based on the organizations-of-agents paradigm, in which the agents are structured into organizations to improve their organizational capabilities. The reasoning power of the architecture is based on the Case-Based Reasoning methodology, implemented in an internal organization whose agents provide services that answer external requests made by users. The OBaMADE architecture has been successfully applied to two different case studies in which its prediction capabilities were validated. Those case studies showed promising results and, being complex systems, demonstrated the abstraction and generalization capabilities of the architecture. Nevertheless, OBaMADE is intended to solve many other kinds of problems in distributed-environment scenarios, and it should be applied to further situations and other knowledge fields to fully develop its potential.
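    As a rough sketch of the retrieve-reuse-retain cycle behind case-based reasoning, the methodology OBaMADE's internal organization relies on, the toy example below finds the most similar past case and reuses its solution. The case features, distance measure, and solutions are invented for illustration and do not reflect the architecture's actual agents or services.

```python
# Minimal sketch of the case-based reasoning (CBR) cycle: retrieve the most
# similar past case, reuse its solution, and retain the new case for later.
# Case features and solutions are invented for illustration.
import math

case_base = [
    {"features": (0.2, 0.8), "solution": "plan_A"},
    {"features": (0.9, 0.1), "solution": "plan_B"},
]

def distance(a, b):
    return math.dist(a, b)

def retrieve(query):
    # Retrieve: nearest neighbour over the case base.
    return min(case_base, key=lambda c: distance(c["features"], query))

def solve(query):
    # Reuse: adopt the retrieved solution (a real system would adapt it).
    return retrieve(query)["solution"]

def retain(query, solution):
    # Retain: store the revised case so the system keeps learning.
    case_base.append({"features": query, "solution": solution})

query = (0.25, 0.7)
print(solve(query))   # -> plan_A
retain(query, "plan_A")
```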