104 research outputs found

    ServeNet: A Deep Neural Network for Web Services Classification

    Automated service classification plays a crucial role in service discovery, selection, and composition. Machine learning has been widely used for service classification in recent years. However, the performance of conventional machine learning methods depends heavily on the quality of manual feature engineering. In this paper, we present a novel deep neural network that automatically abstracts low-level representations of both the service name and the service description into high-level merged features, without feature engineering or input-length limitations, and then predicts the service classification across 50 service categories. To demonstrate the effectiveness of our approach, we conduct a comprehensive experimental study comparing 10 machine learning methods on 10,000 real-world web services. The results show that the proposed deep neural network achieves higher classification accuracy and is more robust than the other machine learning methods. Comment: Accepted by ICWS'2
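    A minimal PyTorch sketch of the two-branch idea described above: one branch encodes the service name, another the description, and the merged features feed a 50-way classifier. Layer types, embedding sizes, and pooling choices are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class TwoBranchServiceClassifier(nn.Module):
    """Sketch: merge name and description features, classify into 50 categories."""
    def __init__(self, vocab_size=30000, embed_dim=128, hidden=256, num_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Name branch: names are short, so a mean-pooled embedding suffices here.
        self.name_proj = nn.Linear(embed_dim, hidden)
        # Description branch: a BiLSTM handles variable-length descriptions
        # without a fixed length limit (assumed choice, not the paper's).
        self.desc_rnn = nn.LSTM(embed_dim, hidden // 2, batch_first=True,
                                bidirectional=True)
        self.classifier = nn.Linear(hidden * 2, num_classes)

    def forward(self, name_ids, desc_ids):
        name_feat = torch.relu(self.name_proj(self.embed(name_ids).mean(dim=1)))
        desc_out, _ = self.desc_rnn(self.embed(desc_ids))
        desc_feat = desc_out.mean(dim=1)                     # pool over time steps
        merged = torch.cat([name_feat, desc_feat], dim=-1)   # high-level merged features
        return self.classifier(merged)                       # logits over 50 categories

# Example: a batch of 4 services, name length 8, description length 120.
logits = TwoBranchServiceClassifier()(torch.randint(1, 30000, (4, 8)),
                                      torch.randint(1, 30000, (4, 120)))
```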

    Size Matters: Microservices Research and Applications

    In this chapter we offer an overview of microservices, providing the introductory information a reader should know before continuing with this book. We introduce the idea of microservices and discuss some of the current research challenges and real-life software applications where the microservice paradigm plays a key role. We have identified a set of areas where both researchers and developers can propose new ideas and technical solutions. Comment: arXiv admin note: text overlap with arXiv:1706.0735

    DATESSO: Self-Adapting Service Composition with Debt-Aware Two Levels Constraint Reasoning

    The rapidly changing workload of service-based systems can easily cause under- or over-utilization of the component services, which can in turn affect the overall Quality of Service (QoS), such as latency. Self-adaptive service composition rectifies this problem, but poses several challenges: (i) the effectiveness of adaptation can deteriorate due to over-optimistic assumptions on the latency and utilization constraints, at both local and global levels; and (ii) the benefits brought by each composition plan are often short-term and not designed for long-term benefit -- a natural prerequisite for sustaining the system. To tackle these issues, we propose a two-level constraint reasoning framework for sustainable self-adaptive service composition, called DATESSO. In particular, DATESSO consists of a refined formulation that differentiates the "strictness" of latency/utilization constraints at two levels. To strive for long-term benefits, DATESSO leverages the concept of technical debt and time-series prediction to model the utility contribution of the component services in the composition. The approach embeds a debt-aware two-level constraint reasoning algorithm in DATESSO to improve the efficiency, effectiveness, and sustainability of self-adaptive service composition. We evaluate DATESSO on a service-based system with the real-world WS-DREAM dataset and compare it with other state-of-the-art approaches. The results demonstrate the superiority of DATESSO over the others on utilization, latency, and running time, while likely being more sustainable. Comment: Accepted to SEAMS '20. Please use the following citation: Satish Kumar, Tao Chen, Rami Bahsoon, and Rajkumar Buyya. DATESSO: Self-Adapting Service Composition with Debt-Aware Two Levels Constraint Reasoning. In IEEE/ACM 15th International Symposium on Software Engineering for Adaptive and Self-Managing Systems, Oct 7-8, 2020, Seoul, Korea
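    A hedged sketch of the debt-aware intuition: a composition plan's utility counts not just its immediate latency gain but the "debt" it may accrue if predicted future workload leaves a component over-utilized. The function names, growth model, and thresholds below are invented for illustration and are not DATESSO's actual formulation.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    latency_ms: float      # measured latency of the component service
    utilization: float     # current utilization, 0..1

def predicted_utilization(c: Candidate, horizon: int) -> float:
    """Stand-in for the time-series prediction the paper relies on (toy growth model)."""
    return min(1.0, c.utilization * 1.05 ** horizon)

def debt_aware_utility(c: Candidate, latency_budget_ms: float, horizon: int = 5) -> float:
    immediate = latency_budget_ms - c.latency_ms        # short-term benefit
    future_util = predicted_utilization(c, horizon)
    # Debt: penalize plans expected to breach a soft utilization constraint later.
    debt = max(0.0, future_util - 0.8) * latency_budget_ms
    return immediate - debt

candidates = [Candidate("s1", 120, 0.55), Candidate("s2", 90, 0.78)]
best = max(candidates, key=lambda c: debt_aware_utility(c, latency_budget_ms=200))
print(best.name)  # here "s1": s2 is faster now but accrues more projected debt
```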

    Probabilistic analysis of QoS-aware service composition with Explicit Environment Models

    Service composition is one of the primary ways to provide value-added services on the Internet. Quality of Service (QoS) is a crucial indicator for the adoption of the underlying composition policy, but it is highly influenced by various environmental factors. Existing composition strategies rarely take the influence of the environment into consideration explicitly, which may lead to sub-optimal composition policies in a dynamic environment. In this paper, a model-based service composition approach is proposed. Given a user request, we first find a set of matching abstract web services (AWSs), and then pull relevant concrete web services (CWSs) based on the AWSs. The set of CWSs can be modelled as a Markov decision process (MDP). In addition, we model the environment as a fully probabilistic system, capturing changes of the environment probabilistically. The environment model can then be composed with the MDP from the service models, yielding a monolithic MDP whose policy corresponds to the selection of concrete services. We demonstrate how probabilistic verification techniques can be used to find the optimal service selection strategy with respect to QoS and environment changes. A distinguishing feature of our approach is that the QoS of services, as well as the dynamics of environment change, are made parametric, so that the formal analysis adapts to the environment, which is of paramount importance for autonomous and self-adaptive systems. Examples and experiments confirm the feasibility of our approach.
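    A toy sketch of the composition-as-MDP view: states pair an abstract task with an environment mode, actions pick a concrete service, and solving the MDP recovers the QoS-optimal selection policy. The paper uses probabilistic (parametric) verification rather than this hand-rolled value iteration, and all probabilities and rewards below are made-up placeholders.

```python
# P[state][action] = list of (probability, next_state, QoS_reward);
# the environment mode ("stable"/"bursty") changes probabilistically.
P = {
    ("search", "stable"): {"cws1": [(0.9, ("pay", "stable"), 8), (0.1, ("pay", "bursty"), 8)],
                           "cws2": [(0.7, ("pay", "stable"), 10), (0.3, ("pay", "bursty"), 10)]},
    ("search", "bursty"): {"cws1": [(1.0, ("pay", "bursty"), 5)],
                           "cws2": [(1.0, ("pay", "bursty"), 3)]},
    ("pay", "stable"):    {"cws3": [(1.0, ("done", "stable"), 9)]},
    ("pay", "bursty"):    {"cws3": [(1.0, ("done", "bursty"), 4)],
                           "cws4": [(1.0, ("done", "bursty"), 6)]},
}
V = {s: 0.0 for s in P}
V[("done", "stable")] = V[("done", "bursty")] = 0.0   # terminal states

for _ in range(50):  # value iteration; the toy MDP is acyclic, so this converges
    for s, acts in P.items():
        V[s] = max(sum(p * (r + V[ns]) for p, ns, r in outs) for outs in acts.values())

# Optimal concrete-service choice per (task, environment) state.
policy = {s: max(acts, key=lambda a: sum(p * (r + V[ns]) for p, ns, r in acts[a]))
          for s, acts in P.items()}
print(policy)
```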

    Why reinvent the wheel: Let's build question answering systems together

    Modern question answering (QA) systems need to flexibly integrate a number of components, each specialised to fulfil a specific task in a QA pipeline. Key QA tasks include Named Entity Recognition and Disambiguation, Relation Extraction, and Query Building. Since a number of different software components implement different strategies for each of these tasks, it is a major challenge to select and combine the most suitable components into a QA system, given the characteristics of a question. We study this optimisation problem and train classifiers that take the features of a question as input and optimise the selection of QA components based on those features. We then devise a greedy algorithm to identify the pipelines that include the suitable components and can effectively answer the given question. We implement this model within Frankenstein, a QA framework able to select QA components and compose QA pipelines. We evaluate the effectiveness of the pipelines generated by Frankenstein using the QALD and LC-QuAD benchmarks. The results suggest not only that Frankenstein precisely solves the QA optimisation problem but also that it enables the automatic composition of optimised QA pipelines, which outperform the static Baseline QA pipeline. Thanks to this flexible and fully automated pipeline-generation process, new QA components can easily be included in Frankenstein, improving the performance of the generated pipelines.
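    A hedged sketch of the greedy composition idea: per-task classifiers score each registered component on the question's features, and the pipeline is assembled by picking the top-scoring component for every task in order. The task list matches the abstract; the component registry and scoring heuristic are invented stand-ins for Frankenstein's trained classifiers.

```python
TASKS = ["NER", "RelationExtraction", "QueryBuilding"]
COMPONENTS = {
    "NER": ["ComponentA", "ComponentB"],              # hypothetical registry
    "RelationExtraction": ["ComponentC", "ComponentD"],
    "QueryBuilding": ["ComponentE", "ComponentF"],
}

def score(component: str, features: dict) -> float:
    """Stand-in for the trained per-component classifiers, which map
    question features to an expected-performance score."""
    # Toy heuristic: longer questions favour later-listed components.
    return len(component) * (1.2 if features["num_words"] > 8 else 0.8)

def greedy_pipeline(features: dict) -> list[tuple[str, str]]:
    # Greedily take the best-scoring component for each task in turn.
    return [(task, max(COMPONENTS[task], key=lambda c: score(c, features)))
            for task in TASKS]

features = {"num_words": 11, "has_superlative": False}
print(greedy_pipeline(features))
```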

    IoT-Based Smart Management of Healthcare Services in Hospital Buildings during COVID-19 and Future Pandemics

    The paper aims to design and develop an innovative solution in the Smart Building context that increases guests' hospitality level during COVID-19 and future pandemics in locations such as hotels, conference venues, campuses, and hospitals. The solution supports features intended to control the number of occupants through online appointments, smart navigation, and queue management in the building via mobile phones, as well as navigation to a desired location by highlighting points of interest and facilities. Moreover, monitoring space occupancy and automatically adjusting environmental features are capabilities that can be added to the proposed design in future development. The proposed solution addresses the mentioned smart-building issues by integrating and utilizing various data sources collected by Internet of Things (IoT) sensors, storing and processing the collected data on servers, and finally sending the desired information to end users. Consequently, through the integration of multiple IoT technologies, a unique platform with minimal hardware usage and maximum adaptability for the smart management of general and healthcare services in hospital buildings will be created.
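    A minimal sketch of the occupancy-control idea: an online appointment is confirmed only while sensor-reported occupancy plus existing bookings stays under a room's capacity, otherwise the visitor is sent to the queue. The data model, capacities, and function names are illustrative assumptions, not the paper's actual platform.

```python
from collections import defaultdict

CAPACITY = {"lobby": 30, "ward_a": 12, "conference": 20}   # assumed limits
occupancy = defaultdict(int)      # updated from IoT people-counter sensors
appointments = defaultdict(list)

def sensor_update(room: str, delta: int) -> None:
    """Called by the ingestion layer whenever an entry/exit sensor fires."""
    occupancy[room] = max(0, occupancy[room] + delta)

def book(room: str, visitor: str, slot: str) -> bool:
    """Confirm an appointment only if sensed occupancy + bookings stay under capacity."""
    booked = sum(1 for _, s in appointments[room] if s == slot)
    if occupancy[room] + booked >= CAPACITY[room]:
        return False              # direct the visitor to the queue instead
    appointments[room].append((visitor, slot))
    return True

sensor_update("ward_a", +9)
print(book("ward_a", "guest-17", "10:00"))   # True: 9 present + 0 booked < 12
```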

    Loghub: A Large Collection of System Log Datasets towards Automated Log Analytics

    Logs have been widely adopted in software system development and maintenance because of the rich system runtime information they contain. In recent years, the increasing size and complexity of software have led to rapid growth in the volume of logs. To handle these large volumes of logs efficiently and effectively, a line of research focuses on intelligent log analytics powered by AI (artificial intelligence) techniques. However, only a small fraction of these techniques have reached successful deployment in industry, owing to the lack of public log datasets and the benchmarking they make possible. To fill this significant gap between academia and industry, and to facilitate more research on AI-powered log analytics, we have collected and organized loghub, a large collection of log datasets. In particular, loghub provides 17 real-world log datasets collected from a wide range of systems, including distributed systems, supercomputers, operating systems, mobile systems, server applications, and standalone software. In this paper, we summarize the statistics of these datasets, introduce some practical log usage scenarios, and present a case study on anomaly detection to demonstrate how loghub facilitates research and practice in this field. At the time of writing, loghub datasets have been downloaded over 15,000 times by more than 380 organizations from both industry and academia. Comment: Dataset available at https://zenodo.org/record/322717
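    A hedged sketch of the kind of study loghub enables: turn a raw log into per-window event-count vectors after crude template extraction, then flag windows whose counts deviate strongly from the mean. Real pipelines use proper log parsers and learned detectors; the masking regex, window size, and threshold below are illustrative choices only.

```python
import re
from collections import Counter

def template(line: str) -> str:
    """Crude log parsing: mask numeric/hex tokens to approximate an event template."""
    return re.sub(r"0x[0-9a-fA-F]+|\d+", "<*>", line).strip()

def window_counts(lines: list[str], size: int = 100) -> list[Counter]:
    # Fixed-size windows of event-template counts.
    return [Counter(template(l) for l in lines[i:i + size])
            for i in range(0, len(lines), size)]

def flag_anomalies(windows: list[Counter], threshold: float = 3.0) -> list[int]:
    events = sorted({e for w in windows for e in w})
    means = {e: sum(w[e] for w in windows) / len(windows) for e in events}
    def dev(w):  # total absolute deviation from the mean count vector
        return sum(abs(w[e] - means[e]) for e in events)
    scores = [dev(w) for w in windows]
    avg = sum(scores) / len(scores)
    return [i for i, s in enumerate(scores) if s > threshold * avg]

# Usage with any loghub dataset, e.g.:
#   lines = open("HDFS.log").readlines()
#   print(flag_anomalies(window_counts(lines)))
```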