10 research outputs found

    VNE solution for network differentiated QoS and security requirements: from the perspective of deep reinforcement learning

    The rapid development and deployment of network services has brought a series of challenges to researchers. On the one hand, the needs of Internet end users and applications are increasingly differentiated, and they pursue service quality from different perspectives. On the other hand, with the explosive growth of information in the era of big data, a great deal of private information is stored in the network, so end users and applications naturally pay attention to network security. To address these differentiated quality of service (QoS) and security requirements, this paper proposes a virtual network embedding (VNE) algorithm based on deep reinforcement learning (DRL) that takes into account the CPU, bandwidth, delay, and security attributes of the substrate network. A DRL agent is trained in a network environment constructed from these attributes to derive a mapping probability for each substrate node, and virtual nodes are mapped according to these probabilities. Finally, a breadth-first search (BFS) strategy is used to map the virtual links. In the experiments, the DRL-based algorithm is compared with other representative algorithms in three respects: long-term average revenue, long-term revenue-to-consumption ratio, and acceptance rate. The results show that the proposed algorithm achieves good performance, demonstrating that it can effectively address the differentiated QoS and security requirements of end users and applications.
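
    To make the node-mapping idea concrete, here is a minimal Python sketch of the two stages the abstract describes: a softmax policy that turns substrate-node attributes (CPU, bandwidth, delay, security) into mapping probabilities, and a BFS link mapper. The feature values, scoring weights, and toy topology are illustrative assumptions, and the policy-gradient training of the DRL agent is omitted.

```python
# Minimal sketch of probability-based node mapping plus BFS link mapping.
# Illustrative only: feature values, weights, and the toy topology are
# assumptions, and the DRL training loop is omitted.
import numpy as np
from collections import deque

def node_probabilities(features, weights):
    """Score substrate nodes (columns: CPU, bandwidth, delay, security)
    and turn the scores into mapping probabilities via softmax."""
    scores = features @ weights
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

def bfs_path(adj, src, dst):
    """Breadth-first search for a substrate path hosting a virtual link."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Toy substrate: 4 nodes with (CPU, bandwidth, delay, security) features.
features = np.array([[0.8, 0.6, 0.2, 0.9],
                     [0.4, 0.9, 0.1, 0.5],
                     [0.7, 0.3, 0.4, 0.8],
                     [0.5, 0.5, 0.3, 0.6]])
weights = np.array([1.0, 1.0, -1.0, 1.0])   # higher delay lowers the score
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}

probs = node_probabilities(features, weights)
rng = np.random.default_rng(0)
u, v = rng.choice(len(probs), size=2, replace=False, p=probs)
print("virtual nodes mapped to", u, v, "path:", bfs_path(adj, u, v))
```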

    Optimizing resource allocation for secure SDN-based virtual network migration

    Recent evolutions in cloud infrastructures have allowed service providers to tailor new services for demanding customers. Providing these services confronts infrastructure providers with cost and constraint considerations. In particular, security constraints are a major concern for today's businesses, as a leak of personal information would tarnish their reputation. Recent works provide examples of how an attacker may leverage an infrastructure's weaknesses to steal sensitive information from users; specifically, an attacker can exploit maintenance processes inside the infrastructure to conduct an attack. In this paper, we consider the migration of a virtual network as the maintenance process and determine the optimal allocation of monitoring resources in this context with a Markov Decision Process (MDP). The model takes into account the impact of monitoring the infrastructure, the migration process, and how the attacker may choose particular targets in the infrastructure. We provide a working prototype implemented in Python.
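
    As a rough illustration of the approach, the sketch below solves a toy monitoring-allocation MDP by value iteration. The states (safe, partially compromised, data stolen), actions (light vs. heavy monitoring), transitions, and rewards are invented for illustration; the paper's actual model of migration and attacker targeting is richer.

```python
# Value-iteration sketch for a toy monitoring-allocation MDP.
# States, actions, transitions, and rewards are illustrative assumptions.
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.95
# P[a][s, s'] : transition probabilities; R[s, a] : expected reward.
P = np.array([
    [[0.9, 0.1, 0.0],    # action 0: light monitoring
     [0.0, 0.8, 0.2],
     [0.0, 0.0, 1.0]],
    [[0.99, 0.01, 0.0],  # action 1: heavy monitoring (slower compromise)
     [0.0, 0.95, 0.05],
     [0.0, 0.0, 1.0]],
])
R = np.array([[0.0, -1.0],     # state 0: safe (heavy monitoring costs 1)
              [-2.0, -3.0],    # state 1: partially compromised
              [-50.0, -50.0]]) # state 2: data stolen

V = np.zeros(n_states)
for _ in range(500):  # iterate until (approximate) convergence
    Q = R + gamma * np.einsum("ast,t->sa", P, V)
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new
print("optimal monitoring action per state:", Q.argmax(axis=1))
```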

    Secure Virtual Network Embedding in a Multi-Cloud Environment

    Recently proposed virtualization platforms give cloud users the freedom to specify their network topologies and addressing schemes. These platforms have, however, been targeting a single datacenter of a cloud provider, which is insufficient to support (critical) applications that need to be deployed across multiple trust domains while enforcing diverse security requirements. This paper addresses the problem by presenting a novel solution for a central component of network virtualization: online network embedding, which finds efficient mappings of virtual network requests onto the substrate network. Our solution treats security as a first-class citizen, enabling the definition of flexible policies in three central areas: on communications, where alternative security compromises can be explored (e.g., encryption); on computations, supporting redundancy if necessary while capitalizing on hardware-assisted trusted execution; and across multiple clouds, including public and private facilities with their associated trust levels. We formulate the solution as a Mixed Integer Linear Program (MILP) and evaluate our proposal against the most commonly used alternative. Our analysis gives insight into the trade-offs involved in including security and trust in network virtualization, providing evidence that this notion can enhance profits under appropriate cost models.
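
    The following PuLP sketch conveys the flavor of such a formulation for the node-mapping part only, with a capacity constraint and a security-level constraint. It is a simplified assumption, not the paper's full MILP, which also covers link mapping, multiple clouds, and trust levels.

```python
# Minimal MILP sketch of security-aware node mapping with PuLP.
# Simplified assumption: only node capacity and a security-level
# constraint; the paper's MILP also handles links, clouds, and trust.
import pulp

virt = {"a": {"cpu": 2, "sec": 2}, "b": {"cpu": 1, "sec": 1}}
subs = {"s1": {"cpu": 4, "sec": 3, "cost": 2.0},
        "s2": {"cpu": 2, "sec": 1, "cost": 1.0}}

prob = pulp.LpProblem("secure_vne_nodes", pulp.LpMinimize)
x = {(v, s): pulp.LpVariable(f"x_{v}_{s}", cat="Binary")
     for v in virt for s in subs}

# Objective: total embedding cost (CPU demand times substrate unit cost).
prob += pulp.lpSum(virt[v]["cpu"] * subs[s]["cost"] * x[v, s]
                   for v in virt for s in subs)
for v in virt:  # each virtual node is placed exactly once
    prob += pulp.lpSum(x[v, s] for s in subs) == 1
for s in subs:  # substrate CPU capacity
    prob += pulp.lpSum(virt[v]["cpu"] * x[v, s] for v in virt) <= subs[s]["cpu"]
for v in virt:  # security: host level must meet the virtual node's demand
    for s in subs:
        if subs[s]["sec"] < virt[v]["sec"]:
            prob += x[v, s] == 0

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({v: s for (v, s) in x if x[v, s].value() == 1})
```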

    A deep reinforcement learning-based algorithm for reliability-aware multi-domain service deployment in smart ecosystems

    The final publication is available at Springer via http://dx.doi.org/10.1007/s00521-020-05372-x

    The transition towards full network virtualization will see services for smart ecosystems, including smart metering, healthcare, and transportation, being deployed as Service Function Chains (SFCs) comprising an ordered set of virtual network functions. However, since such services are usually deployed in remote cloud networks, SFCs may transcend multiple domains belonging to different Infrastructure Providers (InPs), possibly with differing policies regarding billing and Quality of Service (QoS) guarantees. Efficiently allocating exhaustible network resources to the different SFCs while meeting stringent service requirements such as delay and QoS therefore remains a complex challenge, especially under limited information disclosure by the InPs. In this work, we formulate the SFC deployment problem across multiple domains with a focus on delay constraints and propose a framework for SFC orchestration that adheres to the privacy requirements of the InPs. We then propose a reinforcement learning (RL)-based algorithm for partitioning the SFC request across the different InPs while considering service reliability across the participating InPs. Such RL-based algorithms have the intelligence to infer undisclosed InP information from historical data obtained from past experience. Simulation results, considering both online and offline scenarios, reveal that the proposed algorithm yields up to 10% improvement in acceptance ratio and provisioning cost compared to the benchmark algorithms, with savings of more than 90% in execution time for large networks. In addition, the paper proposes an enhancement to a state-of-the-art algorithm that yields up to 5% improvement in provisioning cost.

    This work has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 777067 (NECOS project) and the national project TEC2015-71329-C2-2-R (MINECO/FEDER). This work is also supported by the "Fundamental Research Funds for the Central Universities" of China University of Petroleum (East China) under Grant 18CX02139A.
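
    A toy Q-learning sketch of the partitioning idea appears below: each VNF in the chain is assigned to an InP, and the reward trades off deployment cost against reliability. Costs, reliabilities, and the reward shaping are invented for illustration and do not reflect the paper's state or action design.

```python
# Toy Q-learning sketch for partitioning an SFC across InPs.
# Costs, reliabilities, and reward shaping are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
n_vnfs, n_inps = 4, 3
cost = np.array([[1.0, 2.0, 1.5]] * n_vnfs)   # deployment cost per InP
reliability = np.array([0.95, 0.99, 0.97])    # per-InP reliability

Q = np.zeros((n_vnfs, n_inps))
alpha, gamma, eps = 0.1, 0.9, 0.2
for _ in range(5000):
    for v in range(n_vnfs):  # walk the chain, one VNF at a time
        a = rng.integers(n_inps) if rng.random() < eps else Q[v].argmax()
        # Reward favors cheap, reliable InPs.
        r = -cost[v, a] + 2.0 * np.log(reliability[a])
        nxt = Q[v + 1].max() if v + 1 < n_vnfs else 0.0
        Q[v, a] += alpha * (r + gamma * nxt - Q[v, a])

print("InP chosen per VNF:", Q.argmax(axis=1))
```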

    Efficient cloud computing system operation strategies

    Cloud computing systems have emerged as a new paradigm of computing by providing on-demand services backed by large pools of computing resources. Service providers offer Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) according to demand, and users pay only for the resources they use. The cloud has become a successful business model and is expanding its scope through collaboration with applications such as big data processing, the Internet of Things (IoT), robotics, and 5G networks. Cloud computing systems are composed of large numbers of computing, network, and storage devices spread across geographically distributed areas, and multiple tenants use them simultaneously with heterogeneous resource requirements, so efficient operation is extremely difficult for service providers. To maximize profit, a cloud system should serve as many tenants as possible while minimizing OPerational EXpenditure (OPEX); serving many tenants with limited resources requires efficient resource allocation that satisfies users' requirements. At the same time, cloud infrastructure consumes a significant amount of energy: according to recent disclosures, Google data centers consumed nearly 300 million watts and Facebook's data centers consumed 60 million watts. Traffic demand on data centers will keep growing with the expansion of mobile and cloud traffic, so without efficient energy management, service providers face substantial power consumption in running their infrastructures.

    First, this thesis considers optimal dataset allocation in distributed cloud computing systems, with the objective of minimizing processing time and cost. Processing time includes virtual machine processing time, communication time, and data transfer time; in distributed clouds, the latter two are important components because data centers are geographically dispersed, and placing data sets far apart increases communication and transfer time. The cost objective includes virtual machine cost, communication cost, and data transfer cost: providers charge for virtual machines by usage time, while communication and transfer costs depend on transmission speed and data set size. The problem of allocating data sets to VMs in distributed heterogeneous clouds is formulated as a linear programming model with two objectives, cost and processing time. After finding the optimal solution for each objective separately, we use a heuristic to find the Pareto front of the multi-objective linear program. In the simulation experiment, we consider a heterogeneous cloud infrastructure with resource information from five different cloud service providers and optimize data set placement while guaranteeing Pareto optimality of the solutions.

    Second, this thesis proposes an adaptive data center activation model that consolidates adaptive activation of switches and hosts, integrated with a statistical request prediction algorithm. The algorithm predicts user requests over predetermined intervals using a cyclic window learning algorithm, and the data center then activates the optimal number of switches and hosts to minimize power consumption based on the prediction. The model follows a cognitive cycle of three steps: data collection, prediction, and activation. In the prediction step, the algorithm forecasts the Poisson arrival parameter lambda for each interval using Maximum Likelihood Estimation (MLE) and Local Linear Regression (LLR); adaptive activation is then carried out with the predicted parameter in each interval. The activation model is formulated as a Mixed Integer Linear Programming (MILP) model in which switches and hosts are modeled as M/M/1 and M/M/c queues, and it minimizes the number of activated switches, hosts, and memory modules while guaranteeing Quality of Service (QoS). Since the problem is NP-hard, we solve it with the Simulated Annealing algorithm. We use Google cluster trace data to drive the prediction model and then test the activation model, observing energy savings of 30 to 50% compared to a fully operational data center at practical utilization rates.

    Third, Network Function Virtualization (NFV) has emerged as a game changer for efficient operation of network infrastructure. Because NFV transforms dedicated physical devices designed for specific network functions into software-based Virtual Machines (VMs), network operators expect significant reductions in Capital Expenditure (CAPEX) and Operational Expenditure (OPEX). Softwarized VMs can run on any commodity server, so operators can design flexible and scalable architectures through efficient VM placement and migration algorithms. We study the joint problem of Virtualized Network Function (VNF) resource allocation and NFV Service Chain (NFV-SC) placement in a Software Defined Network (SDN) based hyper-scale distributed cloud computing infrastructure, with the objective of minimizing power consumption while enforcing users' Service Level Agreements (SLAs). We employ an M/G/1/K queueing network approximation for the NFV-SC model and account for the communication time between VNFs in placement, since it influences NFV-SC performance in a highly distributed environment. The joint problem is modeled as a Mixed Integer Non-linear Programming (MINP) model, which is intractable for large infrastructures due to NP-hardness; we therefore propose a heuristic that splits it into two sub-problems, resource allocation and NFV-SC embedding. Numerical analysis shows that the proposed algorithm outperforms traditional bin packing algorithms in terms of power consumption and SLA assurance.

    In summary, this thesis proposes efficient cloud infrastructure management strategies, from a single data center to hyper-scale distributed cloud computing infrastructure, for profitable cloud system operation. The schemes target objectives including Quality of Service (QoS), performance, latency, and power consumption. We use mathematical modeling techniques such as Linear Programming (LP), Mixed Integer Linear Programming (MILP), Mixed Integer Non-linear Programming (MINP), convex programming, queueing theory, and probabilistic modeling, and demonstrate the efficiency of the proposed strategies through various simulations.
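
    As one small worked example of the queueing side of the activation model, the sketch below picks the fewest active hosts, modeled as an M/M/c queue, that keep the probability of queueing (Erlang C) below a QoS target, with the Poisson arrival rate estimated by its MLE, the sample mean. The trace, service rate, and target are illustrative assumptions.

```python
# Sketch: choose the fewest active hosts (M/M/c servers) that keep the
# probability of queueing below a QoS target. The arrival rate is the
# Poisson MLE (sample mean); all numbers are illustrative assumptions.
import math

def erlang_c(c, a):
    """Probability an arriving request must wait in an M/M/c queue
    with offered load a = lambda / mu (requires a < c)."""
    tail = (a ** c / math.factorial(c)) * (c / (c - a))
    head = sum(a ** k / math.factorial(k) for k in range(c))
    return tail / (head + tail)

def min_hosts(lam, mu, qos_wait_prob):
    """Smallest c with a stable queue and wait probability under target."""
    c = max(1, math.ceil(lam / mu))
    while True:
        if c * mu > lam and erlang_c(c, lam / mu) <= qos_wait_prob:
            return c
        c += 1

requests_per_interval = [96, 104, 99, 101, 100]  # toy request trace
lam = sum(requests_per_interval) / len(requests_per_interval)  # MLE of lambda
mu = 12.0   # service rate per host (assumed)
print("hosts to activate:", min_hosts(lam, mu, qos_wait_prob=0.1))
```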

    Towards a Virtualized Next Generation Internet

    A promising solution to overcome the ossification of the Internet is network virtualization, in which Internet Service Providers (ISPs) are decoupled into two tiers: service providers (SPs) and infrastructure providers (InPs). The former maintain and customize virtual networks to meet the service requirements of end users; these virtual networks are mapped onto the physical network infrastructure managed and deployed by the latter via the Virtual Network Embedding (VNE) process. VNE consists of two major components, node assignment and link mapping, and can be shown to be NP-complete. In the first part of this dissertation, we present a path-based ILP model for the VNE problem. Our solution employs a branch-and-bound framework to resolve the integrality constraints, while embedding a column generation process to effectively obtain lower bounds for branch pruning. Unlike existing approaches, the proposed solution obtains either an optimal solution or a near-optimal solution with a guarantee on solution quality. A common strategy in VNE algorithm design is to decompose the problem into two sequential sub-problems: node assignment (NA) and link mapping (LM). With this approach, some solution quality is inevitably sacrificed, since the node assignment is neither holistic nor reversible. In the second part, we are motivated to answer the question: is it possible to maintain the simplicity of the divide-and-conquer strategy while still achieving optimality? Our answer is based on a decomposition framework supported by a primal-dual analysis of the path-based ILP model. This dissertation also addresses issues on two frontiers of network virtualization: survivability, and the integration of an optical substrate. In the third part, we address the survivable network embedding (SNE) problem from a network flow perspective, considering both splittable and non-splittable flows. Finally, the explosive growth of Internet traffic calls for the support of a bandwidth-abundant optical substrate, despite the extra dimensions of complexity introduced by the heterogeneity of optical resources and the physical characteristics of optical transmission. In the fourth part, we present a holistic view of the motivation, architecture, and challenges on the way towards a virtualized optical substrate that supports network virtualization.
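
    To illustrate the column generation machinery, the sketch below performs the pricing step of a path-based model: given dual prices on substrate links, it searches for a path with negative reduced cost, which would be added as a new column to the master LP. The graph, dual values, and sign convention are illustrative assumptions.

```python
# Sketch of the column-generation pricing step for a path-based VNE LP:
# with dual prices on substrate links, a new path variable is worth
# adding only if its reduced cost is negative. Graph, duals, and the
# sign convention are illustrative assumptions.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([("A", "B", 1.0), ("B", "C", 1.0),
                           ("A", "C", 3.0), ("B", "D", 2.0),
                           ("C", "D", 1.0)], weight="cost")
duals = {("A", "B"): 0.5, ("B", "C"): 0.2, ("A", "C"): 0.0,
         ("B", "D"): 1.5, ("C", "D"): 0.1}   # duals of capacity constraints
dual_demand = 4.0   # dual of the "route this virtual link" constraint

# Price each edge by its cost plus its capacity dual, then find the
# cheapest path; reduced cost = path price - demand dual.
for u, v in G.edges:
    G[u][v]["price"] = G[u][v]["cost"] + duals.get((u, v), duals.get((v, u), 0.0))
path = nx.shortest_path(G, "A", "D", weight="price")
reduced_cost = nx.path_weight(G, path, weight="price") - dual_demand
print(path, "reduced cost:", reduced_cost,
      "-> add column" if reduced_cost < 0 else "-> master LP optimal")
```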

    Security-aware Resource Allocation for Space-Air-Ground Integrated Network using Deep Reinforcement Learning

    A Space-Air-Ground Integrated Network (SAGIN) has been proposed to extend communication network service coverage to consumer-oriented and industrial sectors where network coverage is limited or unavailable. To use the space, air, and ground hardware resources effectively, Network Function Virtualization (NFV) is introduced into SAGIN. NFV enables the deployment and management of services, represented as Virtual Networks (VNs) composed of Virtual Network Functions (VNFs), onto the SAGIN hardware through hardware virtualization, allowing SAGIN to support services with distinct demands from both sectors. However, introducing NFV into SAGIN creates new security vulnerabilities: for instance, if a malicious entity gains access to the virtualized hardware, all services utilizing that hardware are exposed to attack. When deploying a VN onto the SAGIN hardware, also known as the Substrate Network (SN), it must be decided which SN Node (SNN) should host each VN Node (VNN) and which SN Links (SNLs) should host each VN Link (VNL); this is the Virtual Network Embedding (VNE) problem. This thesis proposes a solution to VNE in SAGIN using Deep Reinforcement Learning (DRL) while accounting for the security concerns introduced by NFV, which, to our knowledge, has yet to be explored by other works. We compare our solution with the well-known Global Resource Capacity (GRC) solution strategy using the acceptance rate, revenue, cost, and revenue-to-cost metrics; our DRL-based strategy shows competitive performance on all of them.
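
    For reference, the baseline GRC ranking can be computed as a PageRank-style fixed point that blends normalized CPU capacity with bandwidth-weighted neighbor scores; the sketch below follows that scheme on a toy topology, with the damping factor and all numbers being illustrative assumptions.

```python
# Sketch of the Global Resource Capacity (GRC) node ranking used as the
# baseline: a PageRank-style fixed point combining normalized CPU with
# bandwidth-weighted neighbor scores. Topology and d are illustrative.
import numpy as np

cpu = np.array([4.0, 2.0, 3.0, 1.0])
bw = np.array([[0, 2, 1, 0],   # symmetric link bandwidths
               [2, 0, 0, 3],
               [1, 0, 0, 1],
               [0, 3, 1, 0]], dtype=float)

c = cpu / cpu.sum()            # normalized CPU capacity
M = bw / bw.sum(axis=0)        # column-normalized bandwidth matrix
d = 0.85                       # damping factor (assumed)
grc = np.ones(len(cpu)) / len(cpu)
for _ in range(200):           # iterate to the fixed point
    nxt = (1 - d) * c + d * M @ grc
    if np.abs(nxt - grc).max() < 1e-10:
        break
    grc = nxt
print("GRC node ranking (best first):", np.argsort(-grc))
```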

    Security-aware virtual network embedding
