
    Software-Defined Cloud Computing: Architectural Elements and Open Challenges

    The variety of existing cloud services creates a challenge for service providers to enforce reasonable Service Level Agreements (SLAs) stating the Quality of Service (QoS) and the penalties applied when the QoS is not achieved. To avoid such penalties while operating the infrastructure with minimum energy and resource wastage, constant monitoring and adaptation of the infrastructure are needed. We refer to Software-Defined Cloud Computing, or simply Software-Defined Clouds (SDC), as an approach for automating the process of optimal cloud configuration by extending the virtualization concept to all resources in a data center. An SDC enables easy reconfiguration and adaptation of the physical resources in a cloud infrastructure to better accommodate the demand on QoS, through software that can describe and manage the various aspects comprising the cloud environment. In this paper, we present an architecture for SDCs on data centers with an emphasis on mobile cloud applications. We present an evaluation showcasing the potential of SDC in two use cases, QoS-aware bandwidth allocation and bandwidth-aware, energy-efficient VM placement, and discuss the research challenges and opportunities in this emerging area. Comment: Keynote Paper, 3rd International Conference on Advances in Computing, Communications and Informatics (ICACCI 2014), September 24-27, 2014, Delhi, India
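    Not from the paper: a minimal sketch of the kind of bandwidth-aware, energy-efficient VM placement evaluated in the second use case, written as a first-fit-decreasing heuristic that packs VMs onto as few hosts as possible while respecting per-host CPU and bandwidth capacities. All host/VM names and capacities below are illustrative assumptions.

```python
# Toy bandwidth-aware, energy-efficient VM placement (illustrative only).
# Hosts and VMs are hypothetical; a real SDC controller would pull this
# state from the data-center monitoring layer.

def place_vms(vms, hosts):
    """First-fit decreasing: pack VMs onto as few hosts as possible,
    respecting both CPU and network-bandwidth capacity."""
    placement = {}
    used = {h: {"cpu": 0, "bw": 0} for h in hosts}
    # Placing the largest VMs first tends to reduce the number of active hosts.
    for name, need in sorted(vms.items(), key=lambda kv: -(kv[1]["cpu"] + kv[1]["bw"])):
        for h, cap in hosts.items():
            if (used[h]["cpu"] + need["cpu"] <= cap["cpu"]
                    and used[h]["bw"] + need["bw"] <= cap["bw"]):
                placement[name] = h
                used[h]["cpu"] += need["cpu"]
                used[h]["bw"] += need["bw"]
                break
        else:
            placement[name] = None  # no feasible host: reject or scale out
    return placement

hosts = {"h1": {"cpu": 16, "bw": 10}, "h2": {"cpu": 16, "bw": 10}}
vms = {"vm1": {"cpu": 8, "bw": 6}, "vm2": {"cpu": 4, "bw": 3}, "vm3": {"cpu": 6, "bw": 2}}
print(place_vms(vms, hosts))  # vm1 and vm3 share h1; vm2 spills to h2
```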

    Scheduling Live-Migrations for Fast, Adaptable and Energy-Efficient Relocation Operations

    Every day, numerous VMs are migrated inside a datacenter to balance the load, save energy, or prepare production servers for maintenance. Although VM placement problems are carefully studied, the underlying migration schedulers rely on vague ad hoc models, which leads to unnecessarily long and energy-intensive migrations. We present mVM, a new and extensible migration scheduler. mVM takes the VM memory workload and the network topology into account to estimate migration durations precisely and make wiser scheduling decisions. mVM is implemented as a plugin of BtrPlace and can be customized with additional scheduling constraints to finely control the migrations. Experiments on a real testbed show that mVM outperforms schedulers that cap the migration parallelism at a constant to reduce the completion time. Compared with an optimal capping, mVM reduces the migration duration by 20.4% on average and the completion time by 28.1%. In a maintenance operation involving 96 VMs to migrate between 72 servers, mVM saves 21.5% of the energy consumed by BtrPlace. Finally, its current library of 6 constraints allows administrators to address temporal and energy concerns, for example to adapt the schedule and fit a power budget.
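    As a rough illustration of why memory workload and available bandwidth matter when scheduling live migrations (a generic iterative pre-copy model, not mVM's estimator), the sketch below estimates a migration duration; the formula and all parameter values are simplifying assumptions.

```python
# Rough pre-copy live-migration duration estimate (illustrative, not mVM's model).

def migration_duration(mem_gb, dirty_rate_gbps, link_gbps, rounds=30, stop_copy_gb=0.1):
    """Iterative pre-copy: each round retransmits the memory dirtied during the
    previous round, until the remainder is small enough to stop-and-copy.
    Returns the estimated duration in seconds (assumes dirty rate < link rate)."""
    total_s, to_send = 0.0, mem_gb
    for _ in range(rounds):
        round_s = to_send / link_gbps          # time to push the current dirty set
        total_s += round_s
        to_send = dirty_rate_gbps * round_s    # memory dirtied in the meantime
        if to_send <= stop_copy_gb:
            break
    return total_s + to_send / link_gbps       # final stop-and-copy phase

# A write-intensive VM on the same link migrates far slower than a mostly idle one.
print(migration_duration(mem_gb=8, dirty_rate_gbps=0.1, link_gbps=1.25))
print(migration_duration(mem_gb=8, dirty_rate_gbps=0.8, link_gbps=1.25))
```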

    Saving Energy in QoS Networked Data Centers

    One of the major challenges that cloud providers face is minimizing the power consumption of their data centers. To this end, the majority of current research focuses on energy-efficient management of resources in the Infrastructure as a Service model through virtualization and virtual machine consolidation. However, current virtualized data centers are not designed to support communication- and computing-intensive real-time applications, such as info-mobility applications and real-time video co-decoding. In fact, imposing hard limits on the overall per-job delay forces the overall networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. In parallel, a promising approach for making networked data centers more energy efficient is the use of traffic-engineering-based methods to dynamically adapt the number of active servers to the current workload. It is therefore desirable to develop a flexible and robust resource allocation algorithm that automatically adapts to time-varying workload and pays close attention to the energy consumed by computing and communication in virtualized networked data centers (VNetDCs). In this thesis, we propose three new dynamic and adaptive energy-aware scheduling policies that model and manage VNetDCs. Our focus is to propose: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated onto the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP mobile connection. Necessary and sufficient conditions for the feasibility and optimality of the proposed schedulers are also provided in closed form. Specifically, the first approach, called VNetDC, is an optimal minimum-energy scheduler for the joint adaptive load balancing and provisioning of computing-plus-communication resources; the considered VNetDC platforms operate under hard real-time constraints. VNetDC is able to adapt to the time-varying statistical features of the offered workload without requiring any a priori assumption or knowledge about the statistics of the processed data. GreenNetDC, the second scheduling policy, is a flexible and robust resource allocation algorithm that automatically adapts to time-varying workload and pays close attention to the energy consumed by computing and communication in VNetDCs. GreenNetDC not only ensures users the Quality of Service (through Service Level Agreements) but also achieves maximum energy saving and attains green cloud computing goals in a fully distributed fashion by exploiting DVFS-based CPU frequencies. Finally, the last algorithm is an efficient dynamic resource provisioning scheduler applied to Networked Data Centers (NetDCs) that are connected to (possibly mobile) clients through TCP/IP-based vehicular backbones. The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it is capable of providing hard QoS guarantees in terms of the minimum/maximum instantaneous rate of the traffic delivered to the client, the instantaneous goodput, and the total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-threshold consolidation costs of the underlying networked computing platform; and iv) abrupt changes of the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against the corresponding performance of some state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
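    A toy sketch of the DVFS trade-off these schedulers exploit, not the thesis' schedulers themselves: pick the lowest CPU frequency that still meets a hard per-job deadline, since dynamic power grows roughly with the cube of the frequency. The frequency set, workload, and power model below are assumptions made for illustration.

```python
# Toy DVFS frequency selection under a hard per-job deadline (illustrative).
# Dynamic CPU power is modeled as p ~ f^3, a common simplification.

def pick_frequency(giga_cycles, deadline_s, freqs_ghz):
    """Return the lowest frequency (GHz) that finishes the job within the
    deadline, or None if the deadline is infeasible at every frequency."""
    for f in sorted(freqs_ghz):
        if giga_cycles / f <= deadline_s:
            return f
    return None

def relative_energy(giga_cycles, f_ghz):
    """Energy ~ power * time ~ f^3 * (cycles / f) = cycles * f^2 (arbitrary units)."""
    return giga_cycles * f_ghz ** 2

freqs = [0.8, 1.2, 1.6, 2.0, 2.4]
f = pick_frequency(giga_cycles=3.0, deadline_s=2.0, freqs_ghz=freqs)  # needs >= 1.5 GHz
print(f, relative_energy(3.0, f), relative_energy(3.0, max(freqs)))
```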

    Efficient and Virtualized Scheduling for OFDM-Based High Mobility Wireless Communications Objects

    Service providers (SPs) operating Long Term Evolution (LTE) radio access systems are facing many challenges in accommodating the rapid expansion of mobile data usage. Modern technologies present new challenges to SPs, for example reducing capital and operating expenditures while supporting high data throughput per customer, extending the battery life per charge of cell phone devices, and supporting high-mobility communications with a fast and seamless handover (HO) networking architecture. In this thesis, a variety of optimized techniques aimed at providing innovative solutions for such challenges is explored. The thesis is divided into three parts. The first part outlines the benefits and challenges of deploying the virtualized resource sharing concept, wherein SPs applying different scheduling policies share an evolved Node B (eNodeB), allowing them to customize their efforts and meet service requirements; this is a promising solution for reducing operational and capital expenditures, leading to potential energy savings, and supporting higher peak rates. The second part formulates the optimized power allocation problem in a virtualized scheme in LTE uplink systems, aiming to extend the mobile devices' battery utilization time per charge. The third part proposes a hybrid HO (HY-HO) technique that can enhance the system performance in terms of latency and HO reliability at the cell boundary for high-mobility objects (up to 350 km/h, where HOs occur more frequently). The main contributions of this thesis are the design of optimal binary integer programming-based and suboptimal heuristic (complexity-reduced) scheduling algorithms subject to exclusive and contiguous allocation, maximum transmission power, and rate constraints; a sketch of the contiguity constraint follows this abstract. Moreover, the HY-HO design, based on a combination of soft and hard HO, was able to enhance the system performance in terms of latency, interruption time, and reliability during HO. The results prove that the proposed solutions effectively contribute to addressing the challenges caused by the demand for high data rates and power transmission in mobile networks, especially in virtualized resource sharing scenarios that can support high data rates while improving quality of service (QoS).
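    A small heuristic sketch, not the thesis' binary integer programming formulation, of the exclusive-and-contiguous allocation constraint mentioned above: each user receives one contiguous, exclusive block of resource blocks (RBs), chosen greedily by summed per-RB channel metric. User names, metrics, and requested RB counts are invented for illustration.

```python
# Toy contiguous resource-block (RB) allocation for an SC-FDMA-style uplink
# (illustrative heuristic only).

def best_window(metrics, width, taken):
    """Best contiguous window of `width` free RBs by summed channel metric."""
    best_score, best_start = None, None
    for s in range(len(metrics) - width + 1):
        if any(taken[s:s + width]):
            continue                        # exclusivity: skip occupied RBs
        score = sum(metrics[s:s + width])
        if best_score is None or score > best_score:
            best_score, best_start = score, s
    return best_start

def allocate(users, n_rbs):
    """users: {name: (per-RB channel metric list, requested RB count)}."""
    taken, alloc = [False] * n_rbs, {}
    # Serve users requesting more RBs first; they are hardest to fit contiguously.
    for name, (metrics, width) in sorted(users.items(), key=lambda kv: -kv[1][1]):
        start = best_window(metrics, width, taken)
        if start is not None:
            alloc[name] = (start, start + width)
            taken[start:start + width] = [True] * width
    return alloc

users = {"ue1": ([3, 5, 5, 2, 1, 1], 2), "ue2": ([1, 1, 2, 4, 4, 3], 3)}
print(allocate(users, n_rbs=6))  # ue2 gets RBs 3-5, ue1 gets RBs 1-2
```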

    Will SDN be part of 5G?

    For many, this is no longer a valid question and the case is considered settled, with SDN/NFV (Software-Defined Networking / Network Function Virtualization) providing the inevitable innovation enablers that solve many outstanding management issues regarding 5G. However, given the monumental task of softwarizing the radio access network (RAN), and with 5G just around the corner and some companies already unveiling their 5G equipment, there is a realistic concern that we may only see some point solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper identifies all the important obstacles in the way and looks at the state of the art of the relevant solutions. This survey differs from previous surveys on SDN-based RAN in that it focuses on the salient problems and discusses solutions proposed within and outside the SDN literature. Our main focus is on fronthaul, backward compatibility, the supposedly disruptive nature of SDN deployment, business cases and monetization of SDN-related upgrades, the latency of general-purpose processors (GPPs), and the additional security vulnerabilities that softwarization brings to the RAN. We also provide a summary of the architectural developments in the SDN-based RAN landscape, as not all work can be covered under the focused issues. This paper provides a comprehensive survey of the state of the art of SDN-based RAN and clearly points out the gaps in the technology. Comment: 33 pages, 10 figures

    Cloud computing: survey on energy efficiency

    Cloud computing is today's most emphasized Information and Communications Technology (ICT) paradigm and is directly or indirectly used by almost every online user. However, such great significance comes with the support of a great infrastructure that includes large data centers comprising thousands of server units and other supporting equipment. Their share of power consumption amounts to between 1.1% and 1.5% of total worldwide electricity use and is projected to rise even further. Such alarming numbers demand a rethinking of the energy efficiency of such infrastructures. However, before making any changes to the infrastructure, an analysis of the current status is required. In this article, we perform a comprehensive analysis of the infrastructure supporting the cloud computing paradigm with regard to energy efficiency. First, we define a systematic approach for analyzing the energy efficiency of the most important data center domains, including server and network equipment, as well as cloud management systems and appliances consisting of software utilized by end users. Second, we utilize this approach to analyze the available scientific and industrial literature on state-of-the-art practices in data centers and their equipment. Finally, we extract the existing challenges and highlight future research directions.

    Novel Resource and Energy Management for 5G Integrated Backhaul/Fronthaul (5G-Crosshaul)

    The integration of both fronthaul and backhaul into a single transport network (namely, 5G-Crosshaul) is envisioned for future 5G transport networks. This requires a fully integrated and unified management of the fronthaul and backhaul resources in a cost-efficient, scalable, and flexible way through the deployment of an SDN/NFV control framework. This paper presents the designed 5G-Crosshaul architecture and two selected SDN/NFV applications targeting cost-efficient resource and energy usage: the Resource Management Application (RMA) and the Energy Management and Monitoring Application (EMMA). The former manages 5G-Crosshaul resources (network, computing, and storage resources). The latter is a specialized version of the RMA focused on optimizing the energy consumption and minimizing the energy footprint of the 5G-Crosshaul infrastructure. In addition, EMMA is applied to mmWave mesh network and high-speed train scenarios. In particular, we present the key application designs with their main components and their interactions with each other and with the control plane, and we then present the proposed application optimization algorithms along with initial results. The first results demonstrate that the proposed RMA is able to cost-efficiently utilize Crosshaul resources of heterogeneous technologies, while EMMA can achieve significant energy savings through energy-efficient routing of traffic flows. For experiments in a real system, we also set up Proofs of Concept (PoCs) for both applications in order to perform real trials in the field.
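    A minimal sketch, under an invented topology and penalty model, of the kind of energy-efficient flow routing EMMA targets: prefer links that are already powered on so that idle links can stay asleep. This is not EMMA's actual algorithm; link names, costs, and the wake penalty are assumptions.

```python
# Toy energy-aware routing (illustrative): bias path selection toward links
# that are already active so idle links can remain switched off.
import heapq

def energy_aware_path(links, active, src, dst, wake_penalty=10.0):
    """links: {(u, v): base_cost}, treated as undirected. Returns (cost, path)."""
    graph = {}
    for (u, v), c in links.items():
        w = c if (u, v) in active or (v, u) in active else c + wake_penalty
        graph.setdefault(u, []).append((v, w))
        graph.setdefault(v, []).append((u, w))
    heap, seen = [(0.0, src, [src])], set()
    while heap:
        cost, node, path = heapq.heappop(heap)   # Dijkstra over penalized weights
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, w in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(heap, (cost + w, nxt, path + [nxt]))
    return float("inf"), []

links = {("a", "b"): 1, ("b", "d"): 1, ("a", "c"): 1, ("c", "d"): 1}
active = {("a", "b"), ("b", "d")}   # the a-c and c-d links are currently asleep
print(energy_aware_path(links, active, "a", "d"))  # routes a-b-d, keeps c asleep
```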

    Study, evaluation and contributions to new algorithms for the embedding problem in a network virtualization environment

    Network virtualization is recognized as an enabling technology for the future Internet. It aims to overcome the resistance of the current Internet to architectural change and to enable a new business model that decouples network services from the underlying infrastructure. The problem of embedding virtual networks in a substrate network is the main resource allocation challenge in network virtualization and is usually referred to as the Virtual Network Embedding (VNE) problem. VNE deals with the allocation of virtual resources both in nodes and in links. It can therefore be divided into two sub-problems: Virtual Node Mapping, where virtual nodes have to be allocated on physical nodes, and Virtual Link Mapping, where the virtual links connecting these virtual nodes have to be mapped to paths connecting the corresponding nodes in the substrate network. The application of network virtualization relies on algorithms that can instantiate virtualized networks on a substrate infrastructure, optimizing the layout for service-relevant metrics. This class of algorithms is commonly known as VNE algorithms. This thesis proposes a set of contributions to solve research challenges of VNE that have not yet been tackled by the research community, building on a deep and comprehensive survey of virtual network embedding. The first research challenge identified is the lack of proposals that solve the virtual link mapping stage of VNE using a single path in the physical network. As this problem is NP-hard, existing proposals solve it using well-known shortest-path algorithms that limit the mapping to a single constraint. This thesis proposes the use of a mathematical multi-constraint routing framework called paths algebra to solve the virtual link mapping stage. In addition, the thesis introduces a new type of demand, caused by virtual link demands, on physical nodes acting as intermediate (hidden) hops in a path of the physical network. Most current VNE approaches are centralized; they suffer from scalability issues, constitute a single point of failure, and are unable to embed in parallel virtual network requests that arrive at the same time. To address this challenge, this thesis proposes a distributed, parallel, and universal virtual network embedding framework. The proposed framework can be used to run any existing embedding algorithm in a distributed way, so that the computational load of embedding multiple virtual networks is spread across the substrate network. Energy efficiency is one of the main challenges in future networking environments. Network virtualization can be used to tackle this problem by sharing hardware instead of requiring dedicated hardware for each instance. Until now, VNE algorithms have not considered energy as a factor in the mapping. This thesis introduces energy-aware VNE, whose main objective is to switch off as many network nodes and interfaces as possible by allocating the virtual demands to a consolidated subset of active physical networking equipment. To evaluate and validate the aforementioned VNE proposals, this thesis contributed to the development of a software framework called ALgorithms for Embedding VIrtual Networks (ALEVIN). ALEVIN makes it possible to easily implement, evaluate, and compare different VNE algorithms according to a set of metrics that evaluate the algorithms and compute their results on a given scenario for arbitrary parameters.
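    A compact sketch of the two VNE stages described above (a greedy virtual node mapping followed by a shortest-path virtual link mapping); this is illustrative only, not one of the algorithms contributed by the thesis. The substrate topology, capacities, and virtual demands are invented.

```python
# Toy two-stage VNE sketch (illustrative only).
from collections import deque

def map_nodes(v_nodes, s_cpu):
    """Greedy virtual-node mapping: largest CPU demand first, onto the
    substrate node with the most spare CPU, one virtual node per host."""
    cpu, mapping = dict(s_cpu), {}
    for vn, demand in sorted(v_nodes.items(), key=lambda kv: -kv[1]):
        if not cpu:
            return None                     # no substrate nodes left: reject
        host = max(cpu, key=cpu.get)
        if cpu[host] < demand:
            return None                     # not enough spare capacity: reject
        mapping[vn] = host
        del cpu[host]                       # enforce one virtual node per host
    return mapping

def map_link(adj, src, dst):
    """Virtual-link mapping as an unweighted BFS shortest path in the substrate."""
    prev, queue = {src: None}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None                             # substrate cannot host this virtual link

s_cpu = {"A": 10, "B": 8, "C": 6}                  # substrate node capacities
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B"]}    # substrate topology
nodes = map_nodes({"v1": 7, "v2": 5}, s_cpu)       # virtual nodes and their demands
print(nodes, map_link(adj, nodes["v1"], nodes["v2"]))
```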