
    Big Data Analytics for QoS Prediction Through Probabilistic Model Checking

    As competitiveness increases, being able to guarantee the QoS of delivered services is key to business success. The ability to continuously monitor the workflow providing a service and to recognize breaches of the agreed QoS level in a timely manner is therefore of paramount importance. The ideal condition would be the possibility to anticipate, and thus predict, a breach and act to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predict the QoS of a formally described process. Continuous model checking is enabled by the use of a parametrized model of the monitored system, where the actual values of the parameters are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study.
    Comment: EDCC-2014, BIG4CIP-2014, Big Data Analytics, QoS Prediction, Model Checking, SLA compliance monitoring
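
    The paper's workflow model and tooling are not reproduced here; the following is only a minimal sketch of the general idea, assuming a toy three-state DTMC (nominal / degraded / breach) whose transition probabilities are re-estimated from hypothetical monitoring counts and then checked for the probability of reaching the breach state.

```python
# Minimal sketch (not the authors' implementation): a parametric DTMC whose
# transition probabilities are re-estimated from monitored data, then checked
# for the probability of reaching a QoS-breach state within a horizon.
# States, counts and model structure are illustrative assumptions.
import numpy as np

def estimate_transition_matrix(counts):
    """Turn observed transition counts (from monitoring logs) into probabilities."""
    counts = np.asarray(counts, dtype=float)
    return counts / counts.sum(axis=1, keepdims=True)

def breach_probability(P, start, breach, horizon):
    """P(reach `breach` within `horizon` steps), computed by iterating the DTMC."""
    dist = np.zeros(P.shape[0])
    dist[start] = 1.0
    reached = 0.0
    for _ in range(horizon):
        dist = dist @ P
        reached += dist[breach]   # mass that entered the breach state this step
        dist[breach] = 0.0        # breach is absorbing: stop evolving that mass
    return reached

# States: 0 = nominal, 1 = degraded, 2 = QoS breach (absorbing); counts are hypothetical.
observed_counts = [[90, 9, 1],
                   [20, 70, 10],
                   [0, 0, 1]]
P = estimate_transition_matrix(observed_counts)
print("P(breach within 10 steps) =", round(breach_probability(P, 0, 2, 10), 4))
```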

    Runtime model checking for SLA compliance monitoring and QoS prediction

    Sophisticated workflows, where multiple parties cooperate towards the achievement of a shared goal, are common today. In a market-oriented setup, it is key that effective mechanisms be available for providing accountability within the business process. The challenge is to continuously monitor the progress of the business process, ideally anticipating contract breaches and triggering corrective actions. In this paper we propose a novel QoS prediction approach which combines runtime monitoring of the real system with probabilistic model checking on a parametric system model. To cope with the huge amount of data generated by the monitored system, while ensuring that parameters are extracted in a timely fashion, we rely on big data analytics solutions. To validate the proposed approach, a prototype of the QoS prediction framework has been developed, and an experimental campaign has been conducted on a case study in the field of Smart Grids.
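
    As a complement to the model-checking sketch above, the runtime-monitoring side could keep the model parameters fresh with something as simple as a sliding window over recent observations; the event format and window size below are illustrative assumptions, not the paper's big-data pipeline.

```python
# Sketch of timely parameter extraction from a stream of monitored events:
# a sliding window that re-estimates a success-rate parameter for the model.
from collections import deque

class SlidingRateEstimator:
    """Estimates a success-rate parameter over the last `window` observations."""
    def __init__(self, window=1000):
        self.events = deque(maxlen=window)

    def observe(self, ok: bool):
        self.events.append(ok)

    def rate(self) -> float:
        return sum(self.events) / len(self.events) if self.events else 1.0

# Hypothetical stream of success/failure outcomes from the monitored workflow.
estimator = SlidingRateEstimator(window=5)
for outcome in [True, True, False, True, True, True, False]:
    estimator.observe(outcome)
print("current success-rate parameter:", estimator.rate())  # over the last 5 events
```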

    Comparing time series with machine learning-based prediction approaches for violation management in cloud SLAs

    In cloud computing, service level agreements (SLAs) are legal agreements between a service provider and a consumer that contain a list of obligations and commitments which need to be satisfied by both parties during the transaction. From a service provider's perspective, a violation of such a commitment leads to penalties in terms of money and reputation and thus has to be managed effectively. In the literature, this problem has been studied under the domain of cloud service management. One aspect of managing cloud services after the formation of SLAs is predicting the future Quality of Service (QoS) of cloud parameters to ascertain whether they will lead to violations. Various approaches in the literature perform this task using different prediction techniques; however, none of them study the accuracy of each. This is important, because the results of each prediction approach vary according to the pattern of the input data, and an incorrect choice of prediction algorithm could lead to service violations and penalties. In this paper, we test and report the accuracy of time series and machine learning-based prediction approaches. In each category, we test many different techniques and rank them according to their accuracy in predicting future QoS. Our analysis helps the cloud service provider to choose an appropriate prediction approach (whether time series or machine learning based) and, further, to utilize the best method depending on input data patterns, in order to obtain accurate prediction results and better manage their SLAs to avoid violation penalties.
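
    The paper does not publish its benchmark here; the sketch below merely illustrates the kind of comparison described, assuming a synthetic QoS series, simple exponential smoothing as the time-series representative, and a random-forest regressor over lagged values as the machine-learning representative, ranked by mean absolute error on a hold-out tail.

```python
# Hypothetical comparison of a time-series method vs. a machine-learning
# regressor for one-step-ahead QoS prediction; data and methods are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
qos = 100 + 10 * np.sin(np.arange(300) / 10) + rng.normal(0, 2, 300)  # e.g. response time (ms)
train, test = qos[:250], qos[250:]

def ses_forecasts(history, future, alpha=0.3):
    """Simple exponential smoothing, producing one-step-ahead forecasts."""
    level, preds = history[0], []
    for x in history[1:]:
        level = alpha * x + (1 - alpha) * level
    for x in future:
        preds.append(level)
        level = alpha * x + (1 - alpha) * level
    return np.array(preds)

def make_lagged(series, lags=5):
    """Build (previous `lags` values -> next value) training pairs."""
    X = np.array([series[i:i + lags] for i in range(len(series) - lags)])
    return X, series[lags:]

lags = 5
X_train, y_train = make_lagged(train, lags)
rf = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_train, y_train)
history = np.concatenate([train, test])
X_test = np.array([history[i - lags:i] for i in range(len(train), len(history))])

mae = lambda pred, true: float(np.mean(np.abs(pred - true)))
print("SES MAE:", round(mae(ses_forecasts(train, test), test), 2))
print("RF  MAE:", round(mae(rf.predict(X_test), test), 2))
```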

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has appeared in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and it classifies and presents the technical details of the state-of-the-art software and hardware approximation techniques.
    Comment: Under review at ACM Computing Surveys
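
    As a flavour of the software-level techniques such a survey covers, the sketch below shows loop perforation (processing only every k-th element of a loop) on a toy aggregation; the data and skip factor are illustrative assumptions, not examples taken from the survey.

```python
# Loop perforation: skip a fraction of loop iterations to trade accuracy for
# speed/energy. Toy illustration with an approximate mean over a large list.
def mean_exact(values):
    return sum(values) / len(values)

def mean_perforated(values, skip=2):
    """Process only every `skip`-th element, an approximate (perforated) mean."""
    sampled = values[::skip]
    return sum(sampled) / len(sampled)

data = [float(i % 97) for i in range(1_000_000)]
exact = mean_exact(data)
approx = mean_perforated(data, skip=4)   # roughly 4x less work
print(f"exact={exact:.3f} approx={approx:.3f} rel. error={abs(exact - approx) / exact:.2%}")
```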

    Performance Evaluation of Software using Formal Methods

    Formal Methods (FMs) can be used in a variety of application areas and to solve critical and fundamental problems of Performance Evaluation (PE). Modelling and analysis techniques can be used for both system and software performance evaluation. The functional features and performance properties of modern software have become closely intertwined. Traditional models and methods for performance evaluation have been studied widely and have culminated in modern models and methods for system and software engineering evaluation, such as formal methods. These techniques have extended from functional to performance modelling and analysis. Formal models help identify faulty reasoning far earlier than traditional design, and formal specification has proved useful even for already existing software and systems. A formal approach eliminates ambiguity. The basic and final goal of a performance evaluation technique is to conclude whether the software and system are working satisfactorily.
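
    As an illustration of formally grounded performance evaluation (not a method from this paper), the sketch below builds a tiny birth-death CTMC of a bounded request queue and solves for its steady-state distribution to obtain utilization and mean queue length; the rates and capacity are assumed values.

```python
# Analysing a performance model (a small CTMC) rather than simulating it:
# solve pi @ Q = 0 with sum(pi) = 1 for the steady-state distribution.
import numpy as np

def steady_state(Q):
    """Steady-state distribution of a CTMC with generator matrix Q."""
    n = Q.shape[0]
    A = np.vstack([Q.T, np.ones(n)])   # append the normalisation constraint
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

lam, mu, capacity = 3.0, 5.0, 4        # requests/s arriving, requests/s served, buffer size
Q = np.zeros((capacity + 1, capacity + 1))
for i in range(capacity + 1):
    if i < capacity:
        Q[i, i + 1] = lam              # arrival
    if i > 0:
        Q[i, i - 1] = mu               # service completion
    Q[i, i] = -Q[i].sum()

pi = steady_state(Q)
print("P(server busy)    =", round(1 - pi[0], 3))
print("mean queue length =", round(float(np.arange(capacity + 1) @ pi), 3))
```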

    White learning methodology: a case study of cancer-related disease factors analysis in real-time PACS environment

    A Bayesian network is a probabilistic model whose prediction accuracy may not be among the highest in the machine learning family. Deep learning (DL), on the other hand, possesses higher predictive power than many other models, but how reliable its results are, how they are deduced, and how interpretable its predictions are to users remain obscure: DL functions like a black box. As a result, many medical practitioners are reluctant to use deep learning as the only tool for critical machine learning applications, such as aiding cancer diagnosis. In this paper, a white learning (WL) framework is proposed which takes advantage of both black-box learning and white-box learning. Usually, black-box learning gives a high standard of accuracy, while white-box learning provides an explainable directed acyclic graph. In our design, there are three stages of White Learning, loosely coupled WL, semi-coupled WL, and tightly coupled WL, based on the degree of fusion of the white-box and black-box learning. A case of loosely coupled WL is tested on a breast cancer dataset. This approach uses deep learning and an incremental version of the Naïve Bayes network. White learning is broadly defined as a systematic fusion of machine learning models combining an explainable Bayes network, which can uncover the hidden relations between features and class, with deep learning, which gives a higher prediction accuracy than other algorithms. We designed a series of experiments for this loosely coupled WL model. The simulation results show that, compared to standard black-box deep learning, WL can enhance the levels of accuracy and kappa statistics by up to 50%. The performance of WL also appears more stable in extreme conditions such as noise and high-dimensional data. The relations discovered by the Bayesian network of WL are also more concise and stronger in affinity. The experimental results deliver positive signals that WL can output both high classification accuracy and an explainable relation graph between features and class.
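
    The paper's exact components (a deep network plus an incremental Naïve Bayes) are not reproduced here; the sketch below only mimics the loosely coupled idea with off-the-shelf scikit-learn stand-ins, GaussianNB as the white box and an MLP as the black box, on the breast cancer dataset.

```python
# Loosely coupled "white learning" sketch: train an interpretable model and a
# higher-capacity model independently; predict with the black box, explain with
# the white box. Components are stand-ins, not the paper's actual models.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)

white = GaussianNB().fit(X_tr, y_tr)                          # interpretable model
black = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                      random_state=0).fit(scaler.transform(X_tr), y_tr)

print("white-box accuracy:", round(white.score(X_te, y_te), 3))
print("black-box accuracy:", round(black.score(scaler.transform(X_te), y_te), 3))
# Loosely coupled use: report the black box's prediction, but explain it via the
# white box's per-class feature statistics (a stand-in for the explainable graph).
print("per-class mean of feature 0 (white box):", white.theta_[:, 0])
```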