
    Quantifying the Impact of Replication on the Quality-of-Service in Cloud Databases

    No full text
    Cloud databases achieve high availability by automatically replicating data on multiple nodes. However, the overhead caused by the replication process can increase the mean and variance of transaction response times, causing unforeseen impacts on the offered quality-of-service (QoS). In this paper, we propose a measurement-driven methodology to predict the impact of replication in Database-as-a-Service (DBaaS) environments. Our methodology uses operational data to parameterize a closed queueing network model of the database cluster together with a Markov model that abstracts the dynamic replication process. Experiments on Amazon RDS show that our methodology predicts the mean and percentiles of response time with errors of just 1% and 15%, respectively, under operational conditions that are significantly different from the ones used for model parameterization. We show that our modeling approach surpasses standard modeling methods and illustrate the applicability of our methodology for automated DBaaS provisioning.
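
    The sketch below illustrates the kind of closed queueing network calculation such a methodology relies on: exact Mean-Value Analysis (MVA) turning per-node service demands into a mean response-time prediction for a fixed transaction population. The node names, service demands, and population are hypothetical illustration values, not the paper's Amazon RDS parameters, and the paper's accompanying Markov replication model is not reproduced here.

```python
# Minimal sketch: exact Mean-Value Analysis (MVA) for a closed queueing
# network of single-server FCFS nodes. Service demands and population are
# hypothetical example values.

def mva_response_time(service_demands, population):
    """Return per-node mean response times and system throughput for a
    closed network with the given per-node service demands (seconds)."""
    queues = {node: 0.0 for node in service_demands}   # mean queue lengths at population 0
    resp, throughput = {}, 0.0
    for n in range(1, population + 1):
        # Arrival theorem: an arriving job sees the queue length at population n-1.
        resp = {node: d * (1.0 + queues[node]) for node, d in service_demands.items()}
        throughput = n / sum(resp.values())
        queues = {node: throughput * resp[node] for node in resp}
    return resp, throughput

if __name__ == "__main__":
    # Hypothetical demands for a primary node and two read replicas.
    demands = {"primary": 0.020, "replica-1": 0.012, "replica-2": 0.012}
    resp, tput = mva_response_time(demands, population=50)
    print(f"mean response time: {sum(resp.values()):.4f} s, throughput: {tput:.1f} tx/s")
```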

    Scalable and Cost Efficient Algorithms for Virtual CDN Migration

    Full text link
    Virtual Content Delivery Network (vCDN) migration is necessary to optimize resource usage and improve the performance of the overall SDN/NFV-based CDN function, in terms of both network operator cost reduction and high streaming quality. Because of the huge amount of traffic to be delivered to end customers of the network, it requires intelligent joint SDN/NFV migration algorithms. In this paper, two approaches for finding the optimal and near-optimal path placement(s) and vCDN migration(s) are proposed (OPAC and HPAC). Moreover, several scenarios are considered to quantify the OPAC and HPAC behaviors and to compare their efficiency in terms of migration cost, migration time, vCDN replication number, and other cost factors. The algorithms are then implemented and evaluated under different network scales. Finally, the proposed algorithms are integrated into an SDN/NFV framework. Index Terms: vCDN; SDN/NFV Optimization; Migration Algorithms; Scalability Algorithms. Comment: 9 pages, 11 figures, 4 tables, Local Computer Networks (LCN) conference
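
    As a rough illustration of the cost trade-off the abstract describes, the sketch below shows a generic greedy placement heuristic that weighs migration cost against delivery cost under a migration-time constraint. This is not the paper's OPAC or HPAC formulation (which is not reproduced here); the candidate nodes, costs, and weighting are hypothetical.

```python
# Hypothetical greedy placement heuristic for vCDN migration: pick the
# feasible target node that minimizes a weighted migration + delivery cost.
# Not the OPAC/HPAC algorithm; illustration only.

from dataclasses import dataclass

@dataclass
class Candidate:
    node: str
    migration_cost: float   # cost of moving the vCDN image/state to this node
    delivery_cost: float    # expected streaming/delivery cost from this node
    migration_time: float   # seconds needed to complete the migration

def choose_placement(candidates, alpha=0.7, max_migration_time=120.0):
    """Return the feasible candidate minimizing
    alpha * migration_cost + (1 - alpha) * delivery_cost."""
    feasible = [c for c in candidates if c.migration_time <= max_migration_time]
    if not feasible:
        return None
    return min(feasible,
               key=lambda c: alpha * c.migration_cost + (1 - alpha) * c.delivery_cost)

if __name__ == "__main__":
    options = [
        Candidate("edge-1", migration_cost=40.0, delivery_cost=10.0, migration_time=90.0),
        Candidate("edge-2", migration_cost=25.0, delivery_cost=30.0, migration_time=60.0),
        Candidate("core-1", migration_cost=10.0, delivery_cost=55.0, migration_time=30.0),
    ]
    best = choose_placement(options)
    print("selected node:", best.node if best else "no feasible placement")
```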

    Data center resilience assessment: storage, networking and security.

    Get PDF
    Data centers (DC) are the core of the national cyber infrastructure. With the incredible growth of critical data volumes in financial institutions, government organizations, and global companies, data centers are becoming larger and more distributed, posing more challenges for operational continuity in the presence of experienced cyber attackers and occasional natural disasters. The main objective of this research work is to present a new methodology for data center resilience assessment. This methodology consists of:
    • Defining data center resilience requirements.
    • Devising a high-level metric for data center resilience.
    • Designing and developing a tool to validate the metric.
    Since computer networks are an important component of the data center architecture, this research work was extended to investigate computer network resilience enhancement opportunities in the areas of routing protocols, redundancy, and server load, with the goal of minimizing network downtime and increasing the period during which attacks can be resisted. Data center resilience assessment is a complex process, as it involves several aspects such as policies for emergencies, recovery plans, variation in data center operational roles, hosted/processed data types, and data center architectures; in this dissertation, however, storage, networking and security are emphasized. The need for resilience assessment emerged from the gap in existing reliability, availability, and serviceability (RAS) measures. Resilience as an evaluation metric leads to a better proactive perspective in system design and management. The proposed Data Center Resilience Assessment Portal (DC-RAP) is designed to easily integrate various operational scenarios. DC-RAP features a user-friendly interface to assess resilience in terms of performance analysis and speed of recovery by collecting the following information: time to detect attacks, time to resist, time to fail, and recovery time. Several sets of experiments were performed. Results obtained from investigating the impact of routing protocols and server load balancing algorithms on network resilience showed that using a particular routing protocol or server load balancing algorithm can enhance the network resilience level in terms of minimizing downtime and ensuring speedy recovery. Experimental results for investigating the use of social network analysis (SNA) for identifying important routers in a computer network showed that SNA was successful in identifying important routers; this list of important routers can be used to add redundancy for those routers and thereby ensure a high level of resilience. Finally, experimental results from testing and validating the data center resilience assessment methodology using DC-RAP showed the ability of the methodology to quantify data center resilience in terms of providing steady performance, minimal recovery time and maximum attack-resistance time. The main contributions of this work can be summarized as follows:
    • A methodology for evaluating data center resilience has been developed.
    • A Data Center Resilience Assessment Portal (DC-RAP) was implemented for resilience evaluations.
    • The usage of social network analysis to improve computer network resilience was investigated.
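
    The sketch below illustrates two ingredients the abstract mentions: combining the collected times (detect, resist, fail, recover) into a resilience score, and using social network analysis to rank routers worth making redundant. The scoring formula and the example topology are hypothetical and are not the dissertation's actual DC-RAP metric; the router ranking uses betweenness centrality from the networkx library as a stand-in for the SNA step.

```python
# Illustrative sketch only: a composite resilience score from the four
# collected times, plus an SNA-style router ranking via betweenness
# centrality. Formula and topology are hypothetical.

import networkx as nx

def resilience_score(detect_s, resist_s, fail_s, recover_s):
    """Higher is better: long resistance and fast detection/recovery
    improve the score (illustrative weighting only)."""
    resistance = resist_s / (resist_s + fail_s)          # share of time the system resisted
    responsiveness = 1.0 / (1.0 + detect_s + recover_s)  # penalize slow detection/recovery
    return resistance * responsiveness

def important_routers(edges, top_k=3):
    """Rank routers by betweenness centrality as a proxy for which nodes
    are most worth duplicating for redundancy."""
    g = nx.Graph(edges)
    centrality = nx.betweenness_centrality(g)
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

if __name__ == "__main__":
    print("score:", resilience_score(detect_s=30, resist_s=600, fail_s=120, recover_s=300))
    topology = [("r1", "r2"), ("r2", "r3"), ("r2", "r4"), ("r4", "r5"), ("r3", "r5")]
    print("routers to make redundant:", important_routers(topology, top_k=2))
```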

    Analysis of power consumption in heterogeneous virtual machine environments

    Get PDF
    Reduction of energy consumption in Cloud computing datacenters is a hot research topic today, as these datacenters consume large amounts of energy. Furthermore, much of that energy is used inefficiently because of improper usage of computational resources such as CPU, storage and network. A good balance between the computing resources and the performed workload is mandatory. In the context of data-intensive applications, a significant portion of energy is consumed just to keep virtual machines alive or to move data around without performing useful computation. Moreover, heterogeneity of resources makes it harder to achieve energy efficiency. Power consumption optimization requires identifying these inefficiencies in the underlying system and applications. Based on the relation between server load and energy consumption, we study the efficiency of data-intensive applications and the penalties, in terms of power consumption, introduced by different degrees of heterogeneity in the virtual machines' characteristics in a cluster.
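
    A minimal sketch of the load/power relation such a study builds on, using the widely used linear server power model P(u) = P_idle + (P_peak − P_idle)·u: energy per unit of useful work rises sharply at low utilization because idle power is paid regardless of load. The idle/peak wattages and utilizations below are hypothetical example values, not measurements from this work.

```python
# Illustrative linear power model relating server load to power draw.
# Wattages and utilizations are hypothetical example values.

def power_watts(utilization, p_idle, p_peak):
    """Estimated power draw (W) of a host at CPU utilization in [0, 1]."""
    return p_idle + (p_peak - p_idle) * utilization

def watts_per_useful_work(utilization, p_idle, p_peak):
    """Power normalized by utilization; grows sharply at low load because
    idle power is consumed regardless of useful computation."""
    if utilization == 0:
        return float("inf")   # all energy is spent just keeping the machine alive
    return power_watts(utilization, p_idle, p_peak) / utilization

if __name__ == "__main__":
    # Two heterogeneous hosts: (idle W, peak W).
    hosts = {"small": (70.0, 150.0), "large": (120.0, 300.0)}
    for name, (p_idle, p_peak) in hosts.items():
        for u in (0.1, 0.5, 0.9):
            print(f"{name} u={u:.1f}: "
                  f"power={power_watts(u, p_idle, p_peak):.0f} W, "
                  f"per-work={watts_per_useful_work(u, p_idle, p_peak):.0f} W")
```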