9 research outputs found

    Theory of Algorithm Suitability on Managing Radio Resources in Next Generation Mobile Networks

    Beyond 2020, the wireless networking model will change radically and become business-driven, as foreseen by the Next Generation Mobile Network (NGMN) Alliance. Because the spectrum granted to a given operator is physically limited, new radio resource management techniques are required to ensure massive connectivity for wireless devices. Given this situation, we investigate in this paper how key network functionalities such as the self-optimizing network (SON) must be designed to meet NGMN requirements. We therefore propose algorithm suitability theory (AST), combined with the notion of network operator infrastructure convergence. The approach is based on the software-defined networking (SDN) principle, which allows the load-balancing algorithm to adapt to the dynamic network status. In addition, we use the concept of network function virtualization (NFV), which relaxes the constraint of confining wireless devices to their home network operator only. Relying on these two technologies, we build AST through a lexicographic optimality criterion based on the SPC (Status, Performance, and Complexity) order. Numerical results demonstrate better network coverage, verified by improvements in metrics such as the call blocking rate, spectrum efficiency, energy efficiency, and load balance index.
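    As an illustration of the kind of lexicographic SPC selection the abstract refers to, the sketch below (Python, with purely hypothetical candidate algorithms and scores) picks the load-balancing algorithm whose (Status, Performance, Complexity) tuple is lexicographically best; it is a minimal sketch, not the paper's actual procedure.

# Minimal sketch of a lexicographic (Status, Performance, Complexity) choice.
# Candidate names and scores are illustrative only, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    status_fit: int     # compatibility with current network status (higher is better)
    performance: float  # expected performance gain (higher is better)
    complexity: float   # computational cost (lower is better)

def spc_key(c: Candidate):
    # Lexicographic order: maximize status fit, then performance, then
    # minimize complexity (negated so that a single max() call works).
    return (c.status_fit, c.performance, -c.complexity)

candidates = [
    Candidate("round_robin",  status_fit=2, performance=0.6, complexity=0.1),
    Candidate("least_loaded", status_fit=2, performance=0.8, complexity=0.3),
    Candidate("predictive",   status_fit=1, performance=0.9, complexity=0.7),
]

best = max(candidates, key=spc_key)
print(f"selected load-balancing algorithm: {best.name}")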

    Distributed and adaptive resource management in Cloud-assisted Cognitive Radio Vehicular Networks with hard reliability guarantees

    In this contribution, we design and test the performance of a distributed and adaptive resource management controller, which allows the optimal exploitation of Cognitive Radio and soft-input/soft-output data fusion in Vehicular Access Networks. The ultimate goal is to allow energy- and computing-limited car smartphones to utilize the available Vehicular-to-Infrastructure WiFi connections for offloading traffic towards local or remote Clouds by opportunistically accessing a spectrum-limited wireless backbone built up by multiple Roadside Units. For this purpose, we recast the resource management problem at hand into a suitable constrained stochastic Network Utility Maximization problem. We then derive the optimal cognitive resource management controller, which dynamically allocates the access time-windows at the serving Roadside Units (i.e., the access points) together with the access rates and traffic flows at the served Vehicular Clients (i.e., the secondary users of the wireless backbone). Interestingly, the developed controller provides hard reliability guarantees to the Cloud Service Provider (i.e., the primary user of the wireless backbone) on a per-slot basis. Furthermore, it is capable of self-acquiring context information about the currently available bandwidth-energy resources, so as to quickly adapt to the mobility-induced abrupt changes of the state of the vehicular network, even in the presence of fading, imperfect context information, and intermittent Vehicular-to-Infrastructure connectivity. Finally, we develop a related access protocol, which supports a fully distributed and scalable implementation of the optimal controller.
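    The abstract casts the problem as a constrained stochastic Network Utility Maximization. One common way such per-slot controllers are built is a drift-plus-penalty update with a virtual queue enforcing the per-slot constraint; the Python sketch below illustrates that generic pattern only, with an assumed log-utility, cost model, and parameters, and should not be read as the authors' actual controller.

# Generic per-slot drift-plus-penalty sketch for a constrained stochastic
# network utility maximization (NUM) problem. NOT the paper's controller;
# the utility, cost model, and budget are illustrative assumptions.
import math

V = 10.0       # utility/backlog trade-off weight (assumed)
budget = 0.2   # per-slot budget granted by the primary user (assumed)
Q = 0.0        # virtual queue tracking cumulative constraint violation

def best_rate(channel_gain: float, q: float) -> float:
    # One-dimensional grid search: pick the access rate r maximizing
    # V*log(1+r) - q*cost(r), with cost(r) = r / channel_gain.
    best_r, best_val = 0.0, float("-inf")
    for i in range(101):
        r = i / 100.0
        val = V * math.log(1.0 + r) - q * (r / channel_gain)
        if val > best_val:
            best_r, best_val = r, val
    return best_r

for slot in range(5):
    gain = 0.5 + 0.1 * slot           # stand-in for the sensed channel state
    r = best_rate(gain, Q)
    cost = r / gain
    # Virtual queue update: grows whenever the per-slot budget is exceeded.
    Q = max(Q + cost - budget, 0.0)
    print(f"slot {slot}: rate={r:.2f}, cost={cost:.2f}, Q={Q:.2f}")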

    Providing Secure and Reliable Communication for Next Generation Networks in Smart Cities

    Finding a framework that provides continuous, reliable, secure, and sustainable diversified smart city services proves challenging in today's traditional centralized cloud solutions. This article envisions a Mobile Edge Computing (MEC) solution that enables node collaboration among IoT devices to provide reliable and secure communication between devices and the fog layer on one hand, and between the fog layer and the cloud layer on the other. The solution assumes that collaboration is determined based on nodes' resource capabilities and cooperation willingness. Resource capabilities are defined using ontologies, while willingness to cooperate is described using three-factor node criteria, namely: nature, attitude, and awareness. A learning method is adopted to identify candidates for the service composition and delivery process. We show that the system does not require extensive training for services to be delivered correctly and accurately. The proposed solution reduces the amount of unnecessary traffic flowing to and from the edge by relying on node-to-node communication protocols. Communication with the fog and cloud layers is reserved for more data- and computing-intensive applications, hence ensuring secure communication protocols to the cloud. Preliminary simulations are conducted to showcase the effectiveness of adopting the proposed framework to achieve smart city sustainability through service reliability and security. Results show that the proposed solution outperforms other semi-cooperative and non-cooperative service composition techniques in terms of efficient service delivery and composition delay, service hit ratio, and suspicious node identification.
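    A minimal sketch of how the three-factor willingness score (nature, attitude, awareness) and the capability filtering described above might be computed is given below; the weights, thresholds, and node data are assumptions made for illustration, not values from the article.

# Sketch: score node cooperation willingness from three factors and filter
# candidates by resource capability. All weights, thresholds, and node data
# are illustrative assumptions.
nodes = [
    {"id": "n1", "nature": 0.9, "attitude": 0.8, "awareness": 0.7, "cpu": 2.0, "bw": 5.0},
    {"id": "n2", "nature": 0.4, "attitude": 0.9, "awareness": 0.6, "cpu": 1.0, "bw": 2.0},
    {"id": "n3", "nature": 0.8, "attitude": 0.3, "awareness": 0.9, "cpu": 4.0, "bw": 1.0},
]

WEIGHTS = {"nature": 0.4, "attitude": 0.3, "awareness": 0.3}  # assumed weights
REQUIRED = {"cpu": 1.5, "bw": 2.0}                            # assumed service requirement
THRESHOLD = 0.6                                               # assumed minimum willingness

def willingness(node):
    return sum(WEIGHTS[k] * node[k] for k in WEIGHTS)

def capable(node):
    return all(node[k] >= v for k, v in REQUIRED.items())

candidates = [n for n in nodes if capable(n) and willingness(n) >= THRESHOLD]
for n in sorted(candidates, key=willingness, reverse=True):
    print(f"{n['id']}: willingness={willingness(n):.2f}")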

    Saving Energy in QoS Networked Data Centers

    One of the major challenges that cloud providers face is minimizing the power consumption of their data centers. To this point, the majority of current research focuses on energy-efficient management of resources in the Infrastructure as a Service model using virtualization and virtual machine consolidation. However, current virtualized data centers are not designed to support communication- and computing-intensive real-time applications, such as info-mobility applications and real-time video co-decoding. In fact, imposing hard limits on the overall per-job delay forces the overall networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. At the same time, a promising approach for making networked data centers more energy-efficient is the use of traffic engineering-based methods to dynamically adapt the number of active servers to the current workload. It is therefore desirable to develop a flexible and robust resource allocation algorithm that automatically adapts to time-varying workloads and pays close attention to the energy consumed by computing and communication in virtualized networked data centers (VNetDCs). In this thesis, we propose three new dynamic and adaptive energy-aware scheduling policies that model and manage VNetDCs. Our focus is to propose: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated onto the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP mobile connection. Necessary and sufficient conditions for the feasibility and optimality of the proposed schedulers are also provided in closed form. Specifically, the first approach, called VNetDC, is the optimal minimum-energy scheduler for the joint adaptive load balancing and provisioning of the computing-plus-communication resources. The considered VNetDC platforms operate under hard real-time constraints, and VNetDC adapts to the time-varying statistical features of the offered workload without requiring any a priori assumption or knowledge about the statistics of the processed data. GreenNetDC, the second scheduling policy, is a flexible and robust resource allocation algorithm that automatically adapts to time-varying workloads and pays close attention to the energy consumed by computing and communication in VNetDCs. GreenNetDC not only ensures users Quality of Service (through Service Level Agreements) but also achieves maximum energy saving and attains green cloud computing goals in a fully distributed fashion by utilizing DVFS-based CPU frequencies. Finally, the last algorithm is an efficient dynamic resource provisioning scheduler applied to Networked Data Centers (NetDCs) that are connected to (possibly mobile) clients through TCP/IP-based vehicular backbones.
    The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it is capable of providing hard QoS guarantees in terms of minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources, in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-thresholds consolidation costs of the underlying networked computing platform; and iv) abrupt changes of the transport quality of the available TCP/IP mobile connection, is numerically tested and compared against that of some state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
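    To illustrate why DVFS-enabled VMs can save energy under hard per-job delay limits, the sketch below uses a common cubic power model P(f) = K*f^3 (an assumption for illustration, not the thesis's model): with the workload fixed, energy grows with frequency, so the minimum-energy feasible choice is the slowest frequency that still meets the deadline.

# Sketch of a DVFS-style minimum-energy frequency choice under a hard per-job
# deadline. The power model P(f) = K * f**ALPHA and all parameters are
# illustrative assumptions.
ALPHA = 3.0   # assumed power-model exponent
K = 1.0       # assumed power-model constant

def min_energy_frequency(workload_cycles: float, deadline_s: float,
                         f_min: float, f_max: float) -> float:
    """Slowest feasible frequency for one DVFS-enabled VM (sketch)."""
    needed = workload_cycles / deadline_s   # frequency needed to just meet the deadline
    if needed > f_max:
        raise ValueError("deadline infeasible even at f_max")
    return max(needed, f_min)

def energy(workload_cycles: float, f: float) -> float:
    # E = P(f) * processing_time = K * f**ALPHA * (workload / f)
    return K * f ** ALPHA * (workload_cycles / f)

f = min_energy_frequency(workload_cycles=2.0e9, deadline_s=0.5,
                         f_min=1.0e9, f_max=6.0e9)
print(f"chosen frequency: {f/1e9:.1f} GHz, energy: {energy(2.0e9, f):.2e} J")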

    Energy Saving in QoS Fog-supported Data Centers

    One of the most important challenges that cloud providers face, given the explosive growth of data, is reducing the energy consumption of their modern data centers. The majority of current research focuses on energy-efficient resource management in the Infrastructure as a Service (IaaS) model through resource virtualization, i.e., virtual machine and physical machine consolidation. However, actual virtualized data centers do not support communication- and computing-intensive real-time applications, such as big data stream computing (info-mobility applications, real-time video co-decoding). Indeed, imposing hard limits on the overall per-job computing-plus-communication delay forces the overall networked computing infrastructure to quickly adapt its resource utilization to the (possibly unpredictable and abrupt) time fluctuations of the offered workload. Recently, Fog Computing centers have emerged as a promising Internet virtual computing commodity, but their rising energy consumption is becoming a critical issue on such platforms. Green solutions (i.e., energy-aware provisioning) are therefore needed to support fog-assisted, delay-sensitive web applications. Moreover, traffic engineering-based methods can dynamically adapt the number of active servers to match the current workload. It is thus desirable to develop a flexible, reliable technological paradigm and a resource allocation algorithm that pays attention to the consumed energy. Such algorithms should automatically adapt themselves to time-varying workloads and jointly reconfigure and orchestrate the virtualized computing-plus-communication resources available at the computing nodes, while enabling IoT devices to operate under real-time constraints on the allowed computing-plus-communication delay and service latency. The purpose of this thesis is: i) to propose a novel technological paradigm, the Fog of Everything (FoE) paradigm, detailing the main building blocks and services of the corresponding technological platform and protocol stack; ii) to propose a dynamic and adaptive energy-aware algorithm that models and manages virtualized networked data center Fog Nodes (FNs), in order to minimize the resulting networking-plus-computing average energy consumption; and iii) to propose a novel Software-as-a-Service (SaaS) Fog Computing platform to integrate user applications over the FoE. The emerging use of SaaS Fog Computing centers as an Internet virtual computing commodity is aimed at supporting delay-sensitive applications. The virtualized Fog node operates at the Middleware layer of the underlying protocol stack and comprises: i) admission control of the offered input traffic; ii) balanced control and dispatching of the admitted workload; iii) dynamic reconfiguration and consolidation of the Dynamic Voltage and Frequency Scaling (DVFS)-enabled Virtual Machines (VMs) instantiated onto the parallel computing platform; and iv) rate control of the traffic injected into the TCP/IP connection.
    The salient features of this algorithm are that: i) it is adaptive and admits a distributed, scalable implementation; ii) it can provide hard QoS guarantees in terms of minimum/maximum instantaneous rate of the traffic delivered to the client, instantaneous goodput, and total processing delay; and iii) it explicitly accounts for the dynamic interaction between computing and networking resources in order to maximize the resulting energy efficiency. The actual performance of the proposed scheduler in the presence of: i) client mobility; ii) wireless fading; iii) reconfiguration and two-thresholds consolidation costs of the underlying networked computing platform; and iv) abrupt changes of the transport quality of the available TCP/IP mobile connection, is numerically tested and compared to that of some state-of-the-art static schedulers, under both synthetically generated and measured real-world workload traces.
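    The "two-thresholds consolidation" mentioned above can be illustrated by a simple hysteresis rule on the number of active VMs: scale out above an upper utilization threshold, scale in below a lower one, and hold otherwise so that reconfiguration costs are not paid on every fluctuation. The thresholds, per-VM capacity, and workload trace in the Python sketch below are assumptions for illustration only, not the thesis's settings.

# Sketch of a two-thresholds (hysteresis) consolidation rule for the pool of
# active VMs. Thresholds, per-VM capacity, and the workload trace are assumed.
UPPER, LOWER = 0.8, 0.3      # assumed utilization thresholds
CAPACITY_PER_VM = 10.0       # assumed per-VM processing capacity

def reconfigure(active_vms: int, offered_load: float) -> int:
    utilization = offered_load / (active_vms * CAPACITY_PER_VM)
    if utilization > UPPER:
        return active_vms + 1                  # scale out
    if utilization < LOWER and active_vms > 1:
        return active_vms - 1                  # consolidate (scale in)
    return active_vms                          # hold: hysteresis band

vms = 2
for load in [12.0, 18.0, 25.0, 9.0, 4.0]:      # synthetic workload trace
    vms = reconfigure(vms, load)
    print(f"load={load:5.1f} -> active VMs={vms}")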