
    Quantifying cloud performance and dependability: Taxonomy, metric design, and emerging challenges

    In only a decade, cloud computing has grown from the pursuit of a service-driven information and communication technology (ICT) into a significant fraction of the ICT market. Responding to the growth of the market, many alternative cloud services and their underlying systems are currently vying for the attention of cloud users and providers. To make informed choices between competing cloud service providers, permit the cost-benefit analysis of cloud-based systems, and enable system DevOps to evaluate and tune the performance of these complex ecosystems, appropriate performance metrics, benchmarks, tools, and methodologies are necessary. This requires re-examining old system properties and considering new ones, possibly leading to the re-design of classic benchmarking metrics, such as expressing performance as throughput and latency (response time). In this work, we address these requirements by focusing on four system properties: (i) elasticity of the cloud service, to accommodate large variations in the amount of service requested, (ii) performance isolation between the tenants of shared cloud systems and the resulting performance variability, (iii) availability of cloud services and systems, and (iv) the operational risk of running a production system in a cloud environment. Focusing on key metrics for each of these properties, we review the state of the art, then select or propose new metrics together with measurement approaches. We see the presented metrics as a foundation for upcoming industry-standard cloud benchmarks.
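The classic benchmarking metrics the abstract mentions, throughput and latency (response time), can be sketched in a few lines. This is an illustrative sketch, not the paper's metric definitions: the function names, the nearest-rank percentile method, and the sample data are assumptions.

```python
import math

def throughput(completions, duration_s):
    """Completed requests per second over a measurement window."""
    return len(completions) / duration_s

def latency_percentile(latencies_ms, p):
    """p-th percentile latency, nearest-rank method."""
    ordered = sorted(latencies_ms)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

lat = [12, 15, 9, 30, 22, 18, 11, 45, 16, 14]  # ms, one entry per request
print(throughput(lat, 2.0))         # 5.0 requests/s over a 2 s window
print(latency_percentile(lat, 95))  # 45 ms tail latency
```

Tail percentiles (p95, p99) rather than averages are what cloud benchmarks typically report, since performance variability between tenants shows up in the tail.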

    Robustness-Driven Resilience Evaluation of Self-Adaptive Software Systems

    An increasingly important requirement for certain classes of software-intensive systems is the ability to self-adapt their structure and behavior at run-time in reaction to changes in the system, its environment, or its goals. A major challenge for self-adaptive software systems is providing assurances of their resilience in the face of such changes. Since the components that act as controllers of a target system incorporate highly complex software, the impact that controller failures might have on the services delivered by the system needs to be analyzed. In this paper, we present a novel approach for evaluating the resilience of self-adaptive software systems by applying robustness testing techniques to the controller to uncover failures that can affect system resilience. The approach, which is based on probabilistic model checking, quantifies the probability that system properties are satisfied when the target system is subject to controller failures. The feasibility of the proposed approach is evaluated in the context of an industrial middleware system used to monitor and manage highly populated networks of devices, implemented using the Rainbow framework for architecture-based self-adaptation.
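The core quantity here, the probability that a system property is satisfied under controller failures, can be illustrated with a toy absorbing Markov chain rather than the paper's actual models. In this hedged sketch, each adaptation attempt succeeds with probability `succeed`, the controller fails permanently with probability `fail`, and otherwise the system retries; all parameters are illustrative assumptions.

```python
def p_property_satisfied(fail=0.05, succeed=0.60, horizon=1000):
    """Probability that the 'service delivered' property eventually
    holds, for a retry loop with two absorbing outcomes:
    success (property holds) and permanent controller failure."""
    p_adapting, p_ok = 1.0, 0.0
    for _ in range(horizon):
        p_ok += p_adapting * succeed          # absorbed: property holds
        p_adapting *= (1.0 - succeed - fail)  # still retrying
    return p_ok

print(round(p_property_satisfied(), 6))  # ~0.923077, i.e. succeed / (succeed + fail)
```

A probabilistic model checker such as the ones used in this line of work computes exactly this kind of reachability probability, but symbolically and for far larger state spaces.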

    A risk index model for security incident prioritisation

    With thousands of incidents identified by security appliances every day, distinguishing important incidents from trivial ones is complicated. This paper proposes an incident prioritisation model, the Risk Index Model (RIM), which is based on risk assessment and the Analytic Hierarchy Process (AHP). The model uses indicators such as criticality, maintainability, replaceability, and dependability as decision factors to calculate each incident's risk index. The RIM was validated using the MIT DARPA LLDOS 1.0 dataset, and the results were compared against the combined priorities of the Common Vulnerability Scoring System (CVSS) v2 and Snort Priority. The experimental results show that 100% of incidents could be rated with RIM, compared to only 17.23% with CVSS. In addition, this study addresses the limitation of coarse group priorities in Snort Priority (e.g. high, medium, and low) by quantitatively ranking, sorting, and listing incidents according to their risk index. The study also investigates the effect of applying weighted indicators in the calculation of the risk index, as well as the effect of calculating them dynamically. The experiments show significant changes in the resulting risk index as well as in some of the top priority rankings.
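A weighted risk index in the spirit of RIM can be sketched as follows. The four indicator names come from the abstract; the 0-10 scoring scale, the equal-weight fallback, and the sample incidents are illustrative assumptions (the paper derives weights via AHP pairwise comparisons rather than assigning them directly).

```python
INDICATORS = ("criticality", "maintainability", "replaceability", "dependability")

def risk_index(scores, weights=None):
    """Weighted sum of indicator scores (0-10 scale assumed)."""
    if weights is None:  # equal weights when no AHP-derived weights are given
        weights = {k: 1.0 / len(INDICATORS) for k in INDICATORS}
    return sum(scores[k] * weights[k] for k in INDICATORS)

incidents = {
    "inc-01": {"criticality": 9, "maintainability": 4,
               "replaceability": 6, "dependability": 8},
    "inc-02": {"criticality": 3, "maintainability": 7,
               "replaceability": 2, "dependability": 5},
}
# Quantitative ranking instead of coarse high/medium/low groups:
ranked = sorted(incidents, key=lambda i: risk_index(incidents[i]), reverse=True)
print(ranked)  # ['inc-01', 'inc-02'] -- highest risk index first
```

Sorting on a continuous index is what lets RIM produce a total order over incidents where group-based schemes only produce three buckets.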

    Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study

    This is the author's version of a work accepted for publication in the Journal of Systems and Software. The definitive version was published as: Friginal López, J.; Martínez, M.; De Andrés, D.; Ruiz, J. (2016). Multi-criteria analysis of measures in benchmarking: Dependability benchmarking as a case study. Journal of Systems and Software, 111:105-118. https://doi.org/10.1016/j.jss.2015.08.052
    Benchmarks enable the comparison of computer-based systems according to a variable set of criteria, such as dependability, security, performance, cost, and/or power consumption. It is not so much its difficulty as the mathematical rigor it demands that keeps multi-criteria analysis of results a subjective process, rarely addressed in an explicit way in existing benchmarks. It is thus not surprising that industrial benchmarks rely only on a reduced set of easy-to-understand measures, especially when considering complex systems. This keeps the process of result interpretation straightforward, unambiguous, and accurate, but it limits the richness and depth of the analysis. As a result, academia prefers to characterize complex systems with a wider set of measures. Marrying the requirements of industry and academia in a single proposal remains a challenge today. This paper addresses the question by reducing the uncertainty of the analysis process using quality (score-based) models. At measure definition time, these models make explicit (i) the requirements imposed on each type of measure, which may vary from one context of use to another, and (ii) the type, and intensity, of the relation between the considered measures. At measure analysis time, they provide a consistent, straightforward, and unambiguous method to interpret the resulting measures. The methodology and its practical use are illustrated through three case studies from the dependability benchmarking domain, a domain where several different criteria, including both performance and dependability, are typically considered during the analysis of benchmark results. Although the proposed approach is restricted to dependability benchmarks in this document, its usefulness for any type of benchmark follows from the general formulation of the provided solution. © 2015 Elsevier Inc. All rights reserved.
    This work is partially supported by the Spanish project ARENES (TIN2012-38308-C02-01), the ANR French project AMORES (ANR-11-INSE-010), the Intel Doctoral Student Honour Programme 2012, and the "Programa de Ayudas de Investigación y Desarrollo" (PAID) of the Universitat Politècnica de València.
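A score-based quality model of the kind described here can be sketched as a two-step computation: normalise each raw measure against a required range fixed at measure-definition time, then aggregate with explicit weights. The measure names, ranges, and weights below are illustrative assumptions, not the paper's models.

```python
def normalise(value, worst, best):
    """Map a raw measure onto [0, 1]; handles both higher-is-better
    (worst < best) and lower-is-better (worst > best) measures."""
    lo, hi = min(worst, best), max(worst, best)
    clipped = min(max(value, lo), hi)
    return (clipped - worst) / (best - worst)

def quality_score(measures, model):
    """Weighted aggregate; model maps measure name -> (weight, worst, best)."""
    return sum(w * normalise(measures[name], worst, best)
               for name, (w, worst, best) in model.items())

model = {
    "throughput_ops":   (0.4, 0.0, 1000.0),  # higher is better
    "recovery_time_s":  (0.3, 60.0, 0.0),    # lower is better
    "availability_pct": (0.3, 90.0, 100.0),
}
candidate = {"throughput_ops": 800, "recovery_time_s": 12, "availability_pct": 99.5}
print(round(quality_score(candidate, model), 6))  # 0.845
```

Making the ranges and weights explicit at definition time is precisely what turns result interpretation from a subjective exercise into a reproducible one: two analysts applying the same model to the same measures obtain the same score.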