
    On Evaluating Commercial Cloud Services: A Systematic Review

    Background: Cloud Computing is booming in industry, with many competing providers and services. Accordingly, evaluation of commercial Cloud services is necessary. However, the existing evaluation studies are relatively chaotic: there is considerable confusion and a gap between practice and theory in Cloud services evaluation. Aim: To help relieve this chaos, this work synthesizes the existing evaluation implementations to outline the state of the practice and to identify research opportunities in Cloud services evaluation. Method: Based on a conceptual evaluation model comprising six steps, the Systematic Literature Review (SLR) method was employed to collect relevant evidence and investigate Cloud services evaluation step by step. Results: This SLR identified 82 relevant evaluation studies. The data collected from these studies represent the current practical landscape of Cloud services evaluation and can in turn be reused to facilitate future evaluation work. Conclusions: Evaluation of commercial Cloud services has become a worldwide research topic. Some findings of this SLR identify research gaps in the area of Cloud services evaluation (e.g., the Elasticity and Security evaluation of commercial Cloud services could be a long-term challenge), while other findings suggest trends in the adoption of commercial Cloud services (e.g., compared with PaaS, IaaS seems more suitable for customers and is particularly important in industry). The SLR study itself also confirms some previous experiences and reveals new Evidence-Based Software Engineering (EBSE) lessons.

    A context-aware framework for dynamic composition of process fragments in the internet of services

    In the last decade, many approaches to automated service composition have been proposed. However, most of them do not fully exploit the opportunities offered by the Internet of Services (IoS). In this article, we focus on the dynamicity of the execution environment, that is, any change occurring at run time that might affect the system, such as changes in service availability, service behavior, or characteristics of the execution context. We argue that any IoS-based application requires a composition framework that supports the automation of all phases of the composition life cycle, from requirements derivation, through synthesis, to deployment and execution. Our solution to this ambitious problem is an AI planning-based composition framework that features abstract composition requirements and context awareness. In the proposed approach, most human-dependent tasks can be accomplished at design time, and the few human interventions required at run time do not interrupt system execution. To demonstrate and evaluate our approach, we use the ASTRO-CAptEvo framework, simulating the operation of a fully automated IoS-based car logistics scenario in the Bremerhaven harbor.
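
    To make the run-time dynamicity concrete, the following minimal Python sketch shows one way a planning-based composer might chain service fragments by matching preconditions to effects, and simply re-plan when a service drops out. The domain model, service names, and the naive forward search are illustrative assumptions; the actual ASTRO-CAptEvo framework is considerably more sophisticated.

        from dataclasses import dataclass

        @dataclass
        class Service:
            name: str
            precond: frozenset      # facts that must hold before execution
            effects: frozenset      # facts that hold afterwards
            available: bool = True  # may change at any moment at run time

        def plan(state, goal, services, depth=10):
            """Naive forward search over currently available services."""
            if goal <= state:
                return []
            if depth == 0:
                return None
            for svc in services:
                # Skip unavailable services and no-ops whose effects already hold.
                if svc.available and svc.precond <= state and not svc.effects <= state:
                    rest = plan(state | svc.effects, goal, services, depth - 1)
                    if rest is not None:
                        return [svc.name] + rest
            return None

        # Toy car-logistics flow: register the car, unload it, store it.
        services = [
            Service("register_car", frozenset({"arrived"}), frozenset({"registered"})),
            Service("unload_ship", frozenset({"registered"}), frozenset({"unloaded"})),
            Service("store_car", frozenset({"unloaded"}), frozenset({"stored"})),
        ]
        print(plan(frozenset({"arrived"}), frozenset({"stored"}), services))
        # ['register_car', 'unload_ship', 'store_car']

        # If a service becomes unavailable, re-plan from the current state:
        services[1].available = False
        print(plan(frozenset({"registered"}), frozenset({"stored"}), services))  # None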

    All the Services Large and Micro: Revisiting Industrial Practice

    Services computing is both an academic field of study, looking back at close to 15 years of fundamental research, and a vibrant area of industrial software engineering. Industrial practice in this area is notorious for its ever-changing nature, with the state of the art shifting almost yearly with the ebb and flow of various hypes and trends. In this paper, we provide a look "across the wall" into industrial services computing. We conducted an empirical study based on the service ecosystems of 42 companies, and report, among other aspects, how service-to-service communication is implemented, how service discovery works in practice, which Quality-of-Service metrics practitioners are most interested in, and how services are deployed and hosted. We argue that not all assumptions typical of academic papers in the field are justified by industrial practice, and conclude the paper with recommendations for future research that is more closely aligned with the services industry.
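
    As a hedged illustration of one practice the study examines, the sketch below implements service discovery against a simple in-memory registry, filtered on a Quality-of-Service attribute. The registry API, service names, endpoints, and latency figures are invented for the example and are not drawn from the study's data.

        from dataclasses import dataclass, field

        @dataclass
        class ServiceEntry:
            name: str
            endpoint: str
            qos: dict = field(default_factory=dict)  # e.g. observed latency

        class Registry:
            def __init__(self):
                self._entries = {}

            def register(self, entry):
                self._entries.setdefault(entry.name, []).append(entry)

            def discover(self, name, max_latency_ms=None):
                """Return candidate instances, optionally filtered on a QoS bound."""
                candidates = self._entries.get(name, [])
                if max_latency_ms is not None:
                    candidates = [c for c in candidates
                                  if c.qos.get("latency_ms", float("inf")) <= max_latency_ms]
                return candidates

        registry = Registry()
        registry.register(ServiceEntry("payments", "http://10.0.0.5:8080", {"latency_ms": 40}))
        registry.register(ServiceEntry("payments", "http://10.0.0.6:8080", {"latency_ms": 95}))
        print([c.endpoint for c in registry.discover("payments", max_latency_ms=50)])
        # ['http://10.0.0.5:8080']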

    Investigating into the Prevalence of Complex Event Processing and Predictive Analytics in the Transportation and Logistics Sector: Initial Findings From Scientific Literature

    As ever more sensor solutions enter people's everyday lives and business processes, making use of the signals and events these devices provide poses a challenge. Innovative ways of handling the large amount of data promise an effective and efficient means to overcome that challenge. With the help of complex event processing and predictive techniques, added value can be created. While complex event processing is able to process the multitude of signals coming from the sensors in a continuous manner, predictive analytics addresses the likelihood of a certain future state or behavior by detecting patterns in the signal database and predicting the future according to those detections. For the transportation and logistics domain, processing the signal stream and predicting the future promise a big impact on operations, because the sector is known to be a very complex one. This complexity stems from the many stakeholders taking part in a variety of operations and from the partly high level of automation, often accompanied by manual processes. Hence, predictions help to prepare better for upcoming situations and challenges and, thus, to save resources and cost. The present paper investigates the prevalence of complex event processing and predictive analytics in logistics and transportation cases in the research literature, in order to motivate a subsequent systematic literature review as the next step in this research endeavor.
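
    To give a flavor of what complex event processing means in a logistics setting, the following small Python sketch scans a stream of terminal events and raises a complex event when a truck arrival is not followed by the start of unloading within a time window. The event types, fields, and the ten-minute window are assumptions made for illustration, not findings from the surveyed literature.

        from collections import deque

        WINDOW_SECONDS = 600  # pattern must complete within 10 minutes

        def detect_delayed_unloading(events):
            """Yield an alert when 'truck_arrived' is not followed by
            'unloading_started' within the window (a simple absence pattern)."""
            pending = deque()  # arrivals still waiting for unloading
            for ev in events:
                # Expire arrivals whose window passed without unloading.
                while pending and ev["ts"] - pending[0]["ts"] > WINDOW_SECONDS:
                    yield {"alert": "unloading_delayed", "truck": pending.popleft()["truck"]}
                if ev["type"] == "truck_arrived":
                    pending.append(ev)
                elif ev["type"] == "unloading_started":
                    pending = deque(p for p in pending if p["truck"] != ev["truck"])

        stream = [
            {"ts": 0,    "type": "truck_arrived",     "truck": "T1"},
            {"ts": 120,  "type": "truck_arrived",     "truck": "T2"},
            {"ts": 300,  "type": "unloading_started", "truck": "T1"},
            {"ts": 1000, "type": "unloading_started", "truck": "T2"},  # too late
        ]
        print(list(detect_delayed_unloading(stream)))  # T2 flagged as delayed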

    Cloud service discovery and analysis: a unified framework

    Over the past few years, cloud computing has become more and more attractive as a new computing paradigm due to its high flexibility in provisioning on-demand computing resources that are consumed as services over the Internet. Cloud service discovery has been considered by many researchers in recent years; given the highly dynamic and distributed nature of cloud computing, the lack of standardized description languages, the diversity of services offered at different levels, and the non-transparent nature of cloud services, this research area has gained significant attention. Robust cloud service discovery approaches will not only assist the promotion and growth of cloud service customers and providers, but will also make a meaningful contribution to the acceptance and development of cloud computing. In this dissertation, we propose an automated approach to the discovery of cloud services, and we conduct extensive experiments to validate it. The results demonstrate the applicability of our approach and its capability to effectively identify and categorize cloud services on the Internet. First, we develop a novel approach to building a cloud service ontology. The ontology is initially built based on the National Institute of Standards and Technology (NIST) cloud computing standard; we then add new concepts by automatically analyzing real cloud services with a cloud service ontology algorithm. We also propose a cloud service categorization that uses term frequency to weigh cloud service ontology concepts and cosine similarity to measure the similarity between cloud services; the categorization algorithm groups cloud services into clusters for effective categorization. In addition, we use machine learning techniques to identify cloud services in a real environment. Our cloud service identifier is built using features extracted from real cloud service providers; we determine several features, such as the similarity function, semantic ontology, cloud service descriptions, and cloud service components, to be used in identifying cloud services on the Web. We also build a unified model that exposes a cloud service's features to a search user, easing the process of searching and comparing among a large number of cloud services by building cloud service profiles. Furthermore, we develop a cloud service discovery engine that crawls the Web automatically and collects cloud services. The collected datasets include metadata of nearly 7,500 real-world cloud service providers and nearly 15,000 services (2.45 GB). The experimental results show that our approach (i) effectively builds the cloud service ontology automatically, (ii) is robust in identifying cloud services in a real environment, and (iii) scales well in providing more details about cloud services. Thesis (Ph.D.) -- University of Adelaide, School of Computer Science, 201
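
    The categorization step lends itself to a compact sketch. Under the assumption that service descriptions are plain text and ontology concepts are single terms (the concept list and descriptions below are invented), the similarity computation named in the abstract, term-frequency weighting followed by cosine similarity, can be outlined as follows.

        import math
        from collections import Counter

        CONCEPTS = ["storage", "compute", "database", "backup", "virtual", "machine"]

        def tf_vector(description):
            """Term-frequency weight for each ontology concept in a description."""
            words = Counter(description.lower().split())
            total = sum(words.values()) or 1
            return [words[c] / total for c in CONCEPTS]

        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = math.sqrt(sum(x * x for x in a))
            nb = math.sqrt(sum(x * x for x in b))
            return dot / (na * nb) if na and nb else 0.0

        s1 = "object storage with automatic backup and storage tiering"
        s2 = "block storage service with daily backup"
        s3 = "virtual machine compute instances"

        print(round(cosine(tf_vector(s1), tf_vector(s2)), 2))  # high: same cluster
        print(round(cosine(tf_vector(s1), tf_vector(s3)), 2))  # 0.0: different cluster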

    Corpus Statistics for Measuring Business Process Similarity

    In a rapidly changing environment, organizations must adapt their business processes continuously. While numerous methods enable enterprises to conceptualize and analyze their organizational structure, the task of business process modeling remains complex and time-consuming. By reusing and adapting existing process models, however, enterprises can reduce the task's complexity while improving the quality of results. To facilitate the identification of adaptable processes, several techniques for business process similarity (BPS) have been proposed in recent years. Although most approaches produce sound results in controlled evaluations, this paper argues that their applicability is limited when analyzing real-world processes, which do not fully comply with notational labeling specifications. Consequently, we aim to enhance existing BPS techniques by using corpus statistics to account for the explanatory power of words within the labels of process models. Results from our evaluation suggest that corpus statistics can improve BPS computations and positively influence the quality of practical implications.
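
    The following sketch illustrates the general idea with inverse document frequency as the corpus statistic, so that frequent, low-information words in activity labels count less when comparing them. Whether the paper uses IDF or a different measure of explanatory power is an assumption here, and the label corpus is made up.

        import math
        from collections import Counter

        corpus = [  # activity labels from a tiny, hypothetical model repository
            "check order", "check invoice", "approve order",
            "ship order", "archive invoice",
        ]

        df = Counter(word for label in corpus for word in set(label.split()))
        N = len(corpus)

        def idf(word):
            """Rarer words carry more explanatory power."""
            return math.log(N / df[word]) if df[word] else math.log(N)

        def weighted_overlap(label_a, label_b):
            """Jaccard-style label similarity with IDF-weighted words."""
            a, b = set(label_a.split()), set(label_b.split())
            inter = sum(idf(w) for w in a & b)
            union = sum(idf(w) for w in a | b)
            return inter / union if union else 0.0

        # 'order' is common in the corpus, so sharing it means little;
        # sharing the rarer 'approve' would weigh much more.
        print(round(weighted_overlap("check order", "ship order"), 2))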

    Reducing Run-Time Adaptation Space via Analysis of Possible Utility Bounds

    Self-adaptive systems often employ dynamic programming or similar techniques to select optimal adaptations at run time. These techniques suffer from the "curse of dimensionality", increasing the cost of run-time adaptation decisions. We propose a novel approach that improves upon state-of-the-art proactive self-adaptation techniques by reducing the number of possible adaptations that need to be considered for each run-time adaptation decision. The approach, realized in a tool called Thallium, employs a combination of automated formal modeling techniques to (i) analyze a structural model of the system showing which configurations are reachable from other configurations and (ii) compute the utility that can be generated by the optimal adaptation over a bounded horizon in both the best- and worst-case scenarios. It then constructs triangular possibility values from those optimized bounds to automatically compare adjacent adaptations for each configuration, keeping only the alternatives with the best range of potential results. The experimental results corroborate Thallium's ability to significantly reduce the number of states that need to be considered for each adaptation decision, freeing up vital resources at run time.
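
    The pruning step can be paraphrased in a few lines: if the best-case utility one adaptation can achieve over the horizon is still below the worst-case utility of an alternative, it can never be optimal and need not be considered at run time. The sketch below simplifies Thallium's triangular-possibility comparison to this plain dominance check, and all utility bounds are invented numbers.

        from dataclasses import dataclass

        @dataclass
        class Adaptation:
            name: str
            worst_case: float  # lower bound on utility over the bounded horizon
            best_case: float   # upper bound on utility over the bounded horizon

        def prune(options):
            """Keep only options not dominated by some alternative's worst case."""
            return [o for o in options
                    if not any(other.worst_case > o.best_case
                               for other in options if other is not o)]

        options = [
            Adaptation("add_server",      worst_case=4.0, best_case=9.0),
            Adaptation("reduce_fidelity", worst_case=5.0, best_case=7.0),
            Adaptation("do_nothing",      worst_case=1.0, best_case=3.5),
        ]
        print([o.name for o in prune(options)])  # 'do_nothing' is pruned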

    Dynamic and goal-based quality management for human-based electronic services

    Crowdsourcing in the form of human-based electronic services (people services) provides a powerful way of outsourcing tasks to a large crowd of remote workers over the Internet. Research has shown that multiple redundant results delivered by different workers can be aggregated to achieve a reliable result. However, basic implementations of this approach are rather inefficient, as they multiply the effort for task execution and cannot guarantee a certain quality level. In this paper, we address these challenges by elaborating on a statistical approach for the quality management of people services that we previously proposed. The approach combines elements of statistical quality management with dynamic group decisions. We present a comprehensive statistical model that enhances our original work and makes it more transparent, and we provide an extensible toolkit that implements the model and facilitates its application to real-time experiments as well as simulations. A quantitative analysis based on an optical character recognition (OCR) scenario confirms the efficiency and reach of our model.
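
    A much-simplified sketch of the dynamic group decision idea: rather than always collecting a fixed number of redundant answers, keep asking workers until the leading answer is sufficiently far ahead, then stop. The stopping rule below (an absolute lead of two votes) and the OCR answers are placeholders; the paper derives its thresholds from a statistical model of worker quality.

        from collections import Counter

        def dynamic_majority(worker_answers, lead=2, max_workers=7):
            """Consume answers one by one; stop once an answer leads by `lead` votes."""
            votes = Counter()
            used = 0
            for answer in worker_answers:
                used += 1
                votes[answer] += 1
                ranked = votes.most_common(2)
                ahead = ranked[0][1] - (ranked[1][1] if len(ranked) > 1 else 0)
                if ahead >= lead or used >= max_workers:
                    break
            winner = votes.most_common(1)[0][0] if votes else None
            return winner, used  # decision and number of workers actually paid

        # OCR-style task: workers transcribe the same word; early agreement
        # ends the round after only two answers instead of all four.
        answers = ["invoice", "invoice", "inv0ice", "invoice"]
        print(dynamic_majority(answers))  # ('invoice', 2)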