14 research outputs found

    Big Data Analytics for QoS Prediction Through Probabilistic Model Checking

    As competitiveness increases, being able to guarantee the QoS of delivered services is key to business success. It is thus of paramount importance to be able to continuously monitor the workflow providing a service and to recognize breaches of the agreed QoS level in a timely manner. The ideal condition would be the ability to anticipate, and thus predict, a breach and act to avoid it, or at least to mitigate its effects. In this paper we propose a model-checking-based approach to predicting the QoS of a formally described process. Continuous model checking is enabled by the use of a parametrized model of the monitored system, whose parameter values are continuously evaluated and updated by means of big data tools. The paper also describes a prototype implementation of the approach and shows its usage in a case study.
    Comment: EDCC-2014, BIG4CIP-2014, Big Data Analytics, QoS Prediction, Model Checking, SLA compliance monitoring
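    A minimal sketch of the idea, assuming a hypothetical two-step workflow (all names and the closed-form formula below are illustrative assumptions, not the paper's actual model): a parametrized model yields a QoS probability as a function of step-success parameters, which monitoring continuously re-estimates.

```python
# A two-step workflow modeled as a tiny parametric Markov chain: the
# probability of completing within the SLA is a closed-form function of
# step-success parameters, re-evaluated as monitoring data arrives.
# All names are illustrative assumptions, not the paper's actual model.

def qos_success_probability(p_step1: float, p_step2: float, p_retry: float) -> float:
    """P(workflow succeeds) when step 1 may be retried once on failure."""
    p1_effective = p_step1 + (1 - p_step1) * p_retry * p_step1
    return p1_effective * p_step2

def update_parameter(samples: list) -> float:
    """Point estimate of a step's success rate from monitored outcomes."""
    return sum(samples) / len(samples)

# A monitoring stream continuously feeds fresh estimates into the formula.
p1 = update_parameter([True, True, False, True])  # 0.75
p2 = update_parameter([True, True, True, True])   # 1.0
breach_predicted = qos_success_probability(p1, p2, p_retry=0.9) < 0.95
```

    A real deployment would replace the closed-form expression with a probabilistic model checker run against the parametrized model.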

    Mining Service Abstractions (NIER Track)

    Several lines of research rely on the concept of service abstractions to enable the organization, composition, and adaptation of services. However, what is still missing is a systematic approach for extracting service abstractions out of the vast number of services that are available all over the Web. To deal with this issue, we propose an approach for mining service abstractions, based on an agglomerative clustering algorithm. Our experimental findings suggest that the approach is promising and can serve as a basis for future research.
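    A minimal sketch of agglomerative clustering over services, assuming Jaccard distance over operation names and single linkage (the abstract does not specify these details; the service names are invented for illustration):

```python
# Agglomerative clustering of services by operation-name overlap.
# The Jaccard metric, single linkage, and service data are assumptions.

def jaccard_distance(a: set, b: set) -> float:
    """1 minus the ratio of shared to total operation names."""
    return 1.0 - len(a & b) / len(a | b)

def agglomerate(services: dict, threshold: float) -> list:
    """Merge the closest clusters (single linkage) until the minimum
    inter-cluster distance exceeds the threshold."""
    clusters = [{name} for name in services]
    while len(clusters) > 1:
        pairs = [(i, j) for i in range(len(clusters))
                 for j in range(i + 1, len(clusters))]
        link = lambda i, j: min(jaccard_distance(services[x], services[y])
                                for x in clusters[i] for y in clusters[j])
        i, j = min(pairs, key=lambda ij: link(*ij))
        if link(i, j) > threshold:
            break
        clusters[i] |= clusters[j]  # merge and drop the absorbed cluster
        del clusters[j]
    return clusters

services = {
    "WeatherA": {"getForecast", "getTemperature"},
    "WeatherB": {"getForecast", "getHumidity"},
    "Payments": {"charge", "refund"},
}
abstractions = agglomerate(services, threshold=0.8)  # two clusters remain
```

    Each resulting cluster stands for one candidate service abstraction.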

    Model-based dynamic QoS-driven service composition


    Self-tuning of software systems through goal-based feedback control loop

    Quality requirements of a software system cannot always be optimally met, especially when the system is running in an uncertain and changing environment. In principle, a controller at runtime can monitor the impact of change on the quality requirements of the system, update the expectations and priorities derived from the environment, and take reasonable actions to improve overall satisfaction. In practice, however, existing controllers are mostly designed for tuning low-level performance indicators rather than high-level requirements. By maintaining a live goal model to represent the runtime requirements and linking overall satisfaction to an earned-business-value indicator as feedback, we propose a control-theoretic self-tuning method that can dynamically tune the preferences of different quality requirements and can autonomously make trade-off decisions among them through preference-based goal reasoning. The reasoning result is used to reconfigure the variation points of the goal model and is accordingly mapped to a reconfiguration of the system architecture. The effectiveness of our self-tuning method is evaluated by comparing its earned business value with that of static and ad hoc methods and by analysing the self-tuning process.
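    A minimal sketch of the preference-retuning idea, assuming a simple proportional update rule driven by per-goal satisfaction feedback (the rule, gain, and goal names are illustrative assumptions, not the paper's controller):

```python
# One cycle of a goal-based feedback loop: preference weights over quality
# requirements shift toward under-satisfied goals, then drive the next
# reconfiguration. The proportional rule and gain are assumptions.

def retune(preferences: dict, satisfaction: dict, gain: float = 0.5) -> dict:
    """Increase each goal's weight in proportion to its satisfaction gap,
    then renormalize so the weights sum to 1."""
    raw = {g: w * (1 + gain * (1.0 - satisfaction[g]))
           for g, w in preferences.items()}
    total = sum(raw.values())
    return {g: w / total for g, w in raw.items()}

prefs = {"performance": 0.5, "reliability": 0.5}
measured = {"performance": 0.9, "reliability": 0.4}  # reliability lagging
prefs = retune(prefs, measured)
# reliability now carries more weight than performance
```

    In the paper's setting, the updated weights would feed the preference-based goal reasoning that selects a variation-point configuration.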

    Many-Objective Optimization of Non-Functional Attributes based on Refactoring of Software Models

    Software quality estimation is a challenging and time-consuming activity, and models are crucial to facing the complexity of such an activity on modern software applications. In this context, software refactoring is a crucial activity within development life-cycles where requirements and functionalities evolve rapidly. One main challenge is that improving distinct quality attributes may require contrasting refactoring actions on the software, as in the trade-off between performance and reliability (or other non-functional attributes). In such cases, multi-objective optimization can provide the designer with a wider view of these trade-offs and, consequently, can help identify suitable refactoring actions that take into account independent or even competing objectives. In this paper, we present an approach that exploits NSGA-II as the genetic algorithm to search for optimal Pareto frontiers for software refactoring while considering many objectives. We consider the performance and reliability variations of a model alternative with respect to an initial model, the number of performance antipatterns detected on the model alternative, and the architectural distance, which quantifies the effort needed to obtain a model alternative from the initial one. We applied our approach to two case studies: a Train Ticket Booking Service and CoCoME. We observed that our approach is able to improve performance (by up to 42%) while preserving or even improving the reliability (by up to 32%) of the generated model alternatives. We also observed that there exists an order of preference among refactoring actions across model alternatives. We can state that performance antipatterns confirmed their ability to improve the performance of a subject model in the context of many-objective optimization. In addition, the metric that we adopted for the architectural distance seems suitable for estimating the refactoring effort.
    Comment: Accepted for publication in Information and Software Technologies. arXiv admin note: substantial text overlap with arXiv:2107.0612
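    The many-objective search rests on Pareto dominance. A sketch of the dominance check and front extraction that NSGA-II-style algorithms build on (the tuple encoding of the four objectives, all cast as minimization, is an assumption for illustration):

```python
# Each candidate refactoring is scored on objectives to minimize:
# (-perf gain, -reliability gain, antipattern count, architectural distance).
# The encoding and the example values are illustrative assumptions.

def dominates(a: tuple, b: tuple) -> bool:
    """a Pareto-dominates b: no worse on every objective, strictly better
    on at least one (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(candidates: list) -> list:
    """Keep only candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other != c)]

alternatives = [
    (-0.42, -0.32, 0, 3.0),  # large perf/reliability gains, more effort
    (-0.10, -0.05, 2, 1.0),  # modest gains, cheaper
    (-0.10, -0.05, 2, 5.0),  # dominated: same gains, strictly more effort
]
front = pareto_front(alternatives)  # keeps the first two alternatives
```

    NSGA-II layers non-dominated sorting and crowding distance on top of this check to steer the genetic search.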

    Variability in Software Systems – Extracted Data and Supplementary Material from a Systematic Literature Review


    Analysing Cloud QoS Prediction Approaches and Its Control Parameters: Considering Overall Accuracy and Freshness of a Dataset

    Service level agreement (SLA) management is one of the key issues in cloud computing. The primary goal of a service provider is to minimize the risk of service violations, as these result in penalties, in terms of both money and a decrease in trustworthiness. To avoid SLA violations, the service provider needs to predict the likelihood of violation for each SLO and its measurable characteristics (QoS parameters) and take immediate action to prevent violations from occurring. Several approaches to predicting service violations are discussed in the literature; however, none of them explores how changes in control parameters and the freshness of the data impact prediction accuracy and thus the effective management of a cloud service provider's SLA. The contribution of this paper is two-fold. First, we analyzed the accuracy of six widely used prediction algorithms (simple exponential smoothing, simple moving average, weighted moving average, Holt-Winters double exponential smoothing, extrapolation, and the autoregressive integrated moving average) by varying their individual control parameters. Each of the approaches is compared on 10 different datasets at different time intervals between 5 min and 4 weeks. Second, we analyzed the prediction accuracy of the simple exponential smoothing method while considering the freshness of the data, i.e., how the accuracy varies in the initial time period of prediction compared to later ones. To achieve this, we divided the cloud QoS dataset into sets of input values that range from 100 to 500 intervals, in sets of 1-100, 1-200, 1-300, 1-400, and 1-500. From the analysis, we observed that different prediction methods behave differently depending on the control parameter and the nature of the dataset. The analysis helps service providers choose a suitable prediction method with optimal control parameters so that they can obtain accurate prediction results, manage the SLA intelligently, and avoid violation penalties.
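    A sketch of the first of the six predictors, simple exponential smoothing, with its control parameter alpha swept to find the most accurate setting (the QoS series below is invented for illustration; the paper performs this kind of sweep per method and per dataset window):

```python
# Simple exponential smoothing with a one-step-ahead forecast and a sweep
# over the smoothing factor alpha, its control parameter.

def ses_forecasts(series: list, alpha: float) -> list:
    """One-step-ahead forecasts: f_t = alpha*x_{t-1} + (1 - alpha)*f_{t-1}."""
    f = [series[0]]  # seed the first forecast with the first observation
    for t in range(1, len(series)):
        f.append(alpha * series[t - 1] + (1 - alpha) * f[t - 1])
    return f

def mae(series: list, forecasts: list) -> float:
    """Mean absolute error, used here as the accuracy measure."""
    return sum(abs(x - f) for x, f in zip(series, forecasts)) / len(series)

# Hypothetical QoS series, e.g., response times at a fixed sampling interval.
response_times = [120.0, 118.0, 130.0, 125.0, 140.0, 138.0]

# Sweep the control parameter and keep the most accurate setting.
best_alpha = min((a / 10 for a in range(1, 10)),
                 key=lambda a: mae(response_times, ses_forecasts(response_times, a)))
```

    The freshness analysis in the paper amounts to repeating such a sweep on growing prefixes of the dataset (1-100, 1-200, and so on) and comparing the resulting errors.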

    Quality-driven dynamic software product lines: the case of human body monitoring networks

    Master's dissertation, Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2012.
    Nowadays, individuals take a more active stance in the investigation of diseases, in the sense that they want to monitor their health status continuously. Because it is not sustainable to have a dedicated health professional for each individual, more technology support has been applied to assist this monitoring process. In this context, automated solutions are being proposed, in particular Body Sensor Networks (BSN), in which an individual monitors his vital signs and the system aids him in the prevention and detection of emergency situations. A BSN must manage and balance conflicting requirements, such as availability and reliability, in such a way that if the patient is in a normal or low-risk health situation, the system can turn off some sensors or disable features to save power and processing; on the other hand, when the individual faints or his heartbeat changes dangerously, the opposite should happen with the sensors and features, in order to provide the best service in this high-risk situation. We explore how a Dynamic Software Product Line (DSPL) achieves this goal.
    A DSPL reconfigures itself based on context changes, e.g., the person's medical situation, to meet a new quality goal for that new situation, as specified by a reliability contract provided by a domain expert (a medical doctor). This contract is modeled as a state machine whose transitions are medical events (e.g., fall, stroke) and whose states are target reliability goals; non-conformance with the current goal prompts a reconfiguration. The reliability of any given configuration is measured by a single formula, parametrized over the presence or absence of the DSPL's features and the quality information associated with them. Besides reliability, we also explore other quality parameters such as estimated system lifetime, sensor sampling rate, and the quality and amount of information of the configurations. Strategies for calculating quality, namely the Simple MultiAttribute Rating Technique (SMART) and goal-oriented reasoning (GOAL), are compared in the BSN domain. We evaluated the proposed approach via simulations with real monitoring data and obtained results that favor the use of the proposed methodology in the BSN context.
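    A minimal sketch of the reliability-contract state machine (the event names and reliability targets below are illustrative assumptions, not the dissertation's case-study values):

```python
# Health events drive transitions between states, each carrying a target
# reliability goal; failing the current goal triggers reconfiguration.
# States, events, and targets are illustrative assumptions.

CONTRACT = {
    # state: (target reliability goal, {health event: next state})
    "normal":    (0.90, {"fall": "emergency", "abnormal_heartbeat": "risk"}),
    "risk":      (0.95, {"stabilized": "normal", "fall": "emergency"}),
    "emergency": (0.99, {"stabilized": "risk"}),
}

def step(state: str, event: str) -> str:
    """Follow a contract transition; unknown events leave the state unchanged."""
    _, transitions = CONTRACT[state]
    return transitions.get(event, state)

def needs_reconfiguration(state: str, measured_reliability: float) -> bool:
    """Non-conformance with the current state's goal triggers reconfiguration."""
    target, _ = CONTRACT[state]
    return measured_reliability < target

state = step("normal", "fall")                    # -> "emergency"
reconfigure = needs_reconfiguration(state, 0.93)  # 0.93 < 0.99 -> True
```

    In the dissertation's setting, the reconfiguration step would then enable or disable DSPL features (sensors) to meet the new target.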