An Energy-driven Network Function Virtualization for Multi-domain Software Defined Networks
Network Functions Virtualization (NFV) in Software Defined Networks (SDN)
emerged as a new technology for creating virtual instances for smooth execution
of multiple applications. Their amalgamation provides flexible and programmable
platforms to utilize the network resources for providing Quality of Service
(QoS) to various applications. In SDN-enabled NFV setups, the underlying
network services can be viewed as a series of virtual network functions (VNFs)
and their optimal deployment on physical/virtual nodes is considered a
challenging task to perform. However, SDNs have evolved from single-domain to
multi-domain setups in recent years. Thus, the complexity of the underlying
VNF deployment problem in multi-domain setups has increased manifold. Moreover,
the energy utilization aspect is relatively unexplored with respect to an
optimal mapping of VNFs across multiple SDN domains. Hence, in this work, the
VNF deployment problem in multi-domain SDN setup has been addressed with a
primary emphasis on reducing the overall energy consumption for deploying the
maximum number of VNFs with guaranteed QoS. The problem at hand is initially
formulated as a "Multi-objective Optimization Problem" based on Integer Linear
Programming (ILP) to obtain an optimal solution. However, the formulated ILP
becomes complex to solve with an increasing number of decision variables and
constraints with an increase in the size of the network. Thus, we leverage the
benefits of the popular evolutionary optimization algorithms to solve the
problem under consideration. To identify the most appropriate evolutionary
optimization algorithm for the considered problem, we subject it to different
variants of evolutionary algorithms on the widely used MOEA Framework (an
open-source Java framework based on multi-objective evolutionary algorithms).

Comment: Accepted for publication in IEEE INFOCOM 2019 Workshop on Intelligent
Cloud Computing and Networking (ICCN 2019).
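The energy-versus-deployment trade-off described in this abstract can be illustrated with a small sketch: enumerate all placements of a handful of VNFs on nodes spread over two domains and keep only the Pareto-optimal (energy, deployed-VNF) pairs. The node names, capacities, and energy costs below are invented for illustration and are not the paper's model; the paper's actual formulation is an ILP solved via evolutionary algorithms.

```python
from itertools import product

# Hypothetical toy instance: three VNFs, three nodes across two SDN domains.
# Each node has a placement capacity and a per-VNF energy cost.
NODES = {"d1-n1": {"cap": 2, "energy": 5},
         "d1-n2": {"cap": 1, "energy": 3},
         "d2-n1": {"cap": 2, "energy": 4}}
VNFS = ["vnf1", "vnf2", "vnf3"]

def evaluate(assignment):
    """Return (energy, -deployed), so smaller is better on both axes,
    or None if the assignment violates a node capacity constraint."""
    load = {n: 0 for n in NODES}
    energy, deployed = 0, 0
    for node in assignment:
        if node is None:          # VNF left undeployed
            continue
        load[node] += 1
        if load[node] > NODES[node]["cap"]:
            return None
        energy += NODES[node]["energy"]
        deployed += 1
    return (energy, -deployed)

def pareto_front():
    """Brute-force the Pareto front over all feasible placements."""
    points = set()
    for assignment in product([None] + list(NODES), repeat=len(VNFS)):
        obj = evaluate(assignment)
        if obj is not None:
            points.add(obj)
    # Keep only non-dominated points (no other point is at least as good
    # on both objectives and different).
    return sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in points))
```

Brute force is only viable at this toy scale; as the abstract notes, the number of decision variables grows quickly with network size, which is precisely why the authors turn to evolutionary multi-objective algorithms.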
Algorithms for advance bandwidth reservation in media production networks
Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we propose an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
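The core feasibility question behind advance bandwidth reservation can be sketched in a few lines: given reservations already booked on a link, does a new request fit without ever exceeding capacity? This is a single-link admission check with made-up capacity and time units, not the paper's network-wide ILP.

```python
# Illustrative single-link admission check for advance reservations.
# A reservation is (start, end, bandwidth), with times in abstract slots.
CAPACITY = 10  # hypothetical link capacity, e.g. in Gbit/s

def admissible(existing, new):
    """True if 'new' fits alongside 'existing' without exceeding CAPACITY."""
    start, end, bw = new
    # The peak load within the new interval can only change at its own start
    # or where an existing reservation begins inside it.
    points = {start} | {s for s, e, _ in existing if start <= s < end}
    for t in points:
        load = bw + sum(b for s, e, b in existing if s <= t < e)
        if load > CAPACITY:
            return False
    return True
```

For example, with two back-to-back 6 Gbit/s bookings on a 10 Gbit/s link, an overlapping 4 Gbit/s request fits but a 5 Gbit/s one does not. The paper's ILP generalizes this by jointly choosing routes and start times across the whole network.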
Towards delay-aware container-based Service Function Chaining in Fog Computing
Recently, the fifth-generation mobile network (5G) has been getting significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive, etc.). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research only focuses on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end-users. Bi-directional communication between edges and the cloud is not considered in MEC, whereas it is highly important in a Fog environment, for instance in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored for Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing end-to-end (E2E) latency. Results show that the proposed approach can lower network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
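A latency-aware scheduler of the kind this abstract describes must, for each function in the chain, filter nodes with enough free resources and then prefer the one closest (in latency) to the previous function. The sketch below shows that filter-and-rank step in isolation; the node names, RTT figures, and CPU units are invented for illustration and do not reproduce the authors' Kubernetes extension.

```python
# Hypothetical latency-aware node ranking for placing one function of an SFC.

def score_nodes(nodes, rtt_ms, cpu_request):
    """Filter nodes with enough free CPU, then rank by round-trip time to
    the previously placed function (lower RTT first, then more free CPU)."""
    feasible = [n for n in nodes if nodes[n]["free_cpu"] >= cpu_request]
    return sorted(feasible, key=lambda n: (rtt_ms[n], -nodes[n]["free_cpu"]))

# Toy Fog topology: two edge nodes near the user, one distant cloud node.
nodes = {"edge-1": {"free_cpu": 2},
         "edge-2": {"free_cpu": 4},
         "cloud-1": {"free_cpu": 16}}
rtt = {"edge-1": 2.0, "edge-2": 2.0, "cloud-1": 25.0}
```

With a request of 3 CPU units, `edge-1` is filtered out and `edge-2` wins over `cloud-1` on latency; with a request of 1 unit, the two edge nodes tie on RTT and the one with more spare CPU is preferred. In a real Kubernetes deployment this logic would sit behind the scheduler's own filtering and scoring phases rather than replace them.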
Introducing mobile edge computing capabilities through distributed 5G Cloud Enabled Small Cells
Current trends in broadband mobile networks are directed towards the placement of different capabilities at the edge of the mobile network in a centralised way. On one hand, the split of the eNB between baseband processing units and remote radio heads makes it possible to process some of the protocols in centralised premises, likely with virtualised resources. On the other hand, mobile edge computing makes use of processing and storage capabilities close to the air interface in order to deploy optimised services with minimum delay. The confluence of both trends is a hot topic in the definition of future 5G networks. The full centralisation of both technologies in cloud data centres imposes stringent requirements on the fronthaul connections in terms of throughput and latency. Therefore, all those cells with limited network access would not be able to offer these types of services. This paper proposes a solution for these cases, based on the placement of processing and storage capabilities close to the remote units, which is especially well suited for the deployment of clusters of small cells. The proposed cloud-enabled small cells include a highly efficient microserver with a limited set of virtualised resources offered to the cluster of small cells. As a result, a light data centre is created and commonly used for deploying centralised eNB and mobile edge computing functionalities. The paper covers the proposed architecture, with special focus on the integration of both aspects, and possible scenarios of application.