
    Towards delay-aware container-based Service Function Chaining in Fog Computing

    Recently, the fifth-generation mobile network (5G) has been getting significant attention. Empowered by Network Function Virtualization (NFV), 5G networks aim to support diverse services coming from different business verticals (e.g. Smart Cities, Automotive, etc.). To fully leverage NFV, services must be connected in a specific order, forming a Service Function Chain (SFC). SFCs allow mobile operators to benefit from the high flexibility and low operational costs introduced by network softwarization. Additionally, Cloud computing is evolving towards a distributed paradigm called Fog Computing, which aims to provide a distributed cloud infrastructure by placing computational resources close to end-users. However, most SFC research focuses only on Multi-access Edge Computing (MEC) use cases, where mobile operators aim to deploy services close to end-users. Bi-directional communication between the Edge and the Cloud is not considered in MEC, yet it is highly important in a Fog environment, for instance in distributed anomaly detection services. Therefore, in this paper, we propose an SFC controller to optimize the placement of service chains in Fog environments, specifically tailored for Smart City use cases. Our approach has been validated on the Kubernetes platform, an open-source orchestrator for the automatic deployment of micro-services. Our SFC controller has been implemented as an extension to the scheduling features available in Kubernetes, enabling the efficient provisioning of container-based SFCs while optimizing resource allocation and reducing the end-to-end (E2E) latency. Results show that the proposed approach can lower the network latency by up to 18% for the studied use case while conserving bandwidth, compared to the default scheduling mechanism.
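
    The paper does not include code here, but the delay-aware placement idea can be sketched as a scoring step: rank each candidate node for the next function in the chain by its measured latency towards the previous hop and its residual bandwidth, then place the container on the best-scoring node. The node names, RTT and bandwidth figures, and weights below are illustrative assumptions rather than values from the paper, and the sketch stands in for, rather than reproduces, the actual Kubernetes scheduler extension.

```python
# Minimal sketch of delay-aware node scoring for placing the next function
# of a container-based SFC. All names, numbers, and weights are hypothetical.
from typing import Dict

def score_node(rtt_ms: float, free_bw_mbps: float,
               max_rtt_ms: float = 50.0, max_bw_mbps: float = 1000.0,
               w_latency: float = 0.7, w_bw: float = 0.3) -> float:
    """Higher is better: low RTT towards the previous chain hop is rewarded,
    as is residual bandwidth on the candidate node."""
    latency_score = max(0.0, 1.0 - rtt_ms / max_rtt_ms)
    bw_score = min(1.0, free_bw_mbps / max_bw_mbps)
    return w_latency * latency_score + w_bw * bw_score

def pick_node(candidates: Dict[str, Dict[str, float]]) -> str:
    """candidates maps node name -> {'rtt_ms': ..., 'free_bw_mbps': ...}."""
    return max(candidates,
               key=lambda n: score_node(candidates[n]["rtt_ms"],
                                        candidates[n]["free_bw_mbps"]))

if __name__ == "__main__":
    nodes = {
        "fog-node-a": {"rtt_ms": 4.0, "free_bw_mbps": 600.0},
        "fog-node-b": {"rtt_ms": 12.0, "free_bw_mbps": 800.0},
        "cloud-node": {"rtt_ms": 35.0, "free_bw_mbps": 950.0},
    }
    print(pick_node(nodes))  # fog-node-a wins on latency for these numbers
```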

    Resource orchestration strategies with retrials for latency-sensitive network slicing over distributed telco clouds

    The new radio technologies (i.e. 5G and beyond) will allow a new generation of innovative services operated by vertical industries (e.g. robotic cloud, autonomous vehicles) with more stringent QoS requirements, especially in terms of end-to-end latency. Other technological changes, such as Network Function Virtualization (NFV) and Software-Defined Networking (SDN), will bring unique service capabilities to networks by enabling flexible network slicing that can be tailored to the needs of vertical services. However, effective orchestration strategies need to be put in place to minimize latency while also maximizing resource utilization, so that telco providers can address vertical requirements and increase their revenue. With this objective, this paper addresses a latency-sensitive orchestration problem by proposing different strategies for the coordinated selection of virtual resources (network, computational, and storage resources) in distributed data centers (DCs) while meeting vertical requirements (e.g., bandwidth demand) for network slicing. Three orchestration strategies are presented to minimize latency or the blocking probability through effective resource utilization. To further reduce slice request blocking, the orchestration strategies also encompass a retrial mechanism applied to rejected slice requests. Regarding latency, two components were considered, namely processing and network latency. An extensive set of simulations was carried out over a wide and composite telco cloud infrastructure in which different types of data centers coexist, characterized by different network locations, sizes, and processing capacities. The results compare the behavior of the strategies in addressing latency minimization and service request fulfillment, also considering the impact of the retrial mechanism. This work was supported in part by the Department of Excellence in Robotics and Artificial Intelligence by Ministero dell’Istruzione, dell’Università e della Ricerca (MIUR) to Scuola Superiore Sant’Anna, and in part by the Project 5GROWTH under Agreement 856709.
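
    As a rough illustration of the retrial idea only, the sketch below re-queues rejected slice requests and retries them a bounded number of times before counting them as blocked. The paper's strategies, simulation model, and parameter values are not reproduced; the retry bound, the field names, and the omission of departures that free capacity between retries are simplifying assumptions.

```python
# Hypothetical sketch of a retrial mechanism for network slice requests:
# rejected requests are re-queued and retried a bounded number of times,
# trading extra waiting for a lower blocking probability. Field names and
# the retry bound are assumptions; capacity-freeing departures are omitted.
from collections import deque

MAX_RETRIES = 3  # illustrative bound

def try_admit(request, datacenters):
    """Admit the request on the first DC (ordered by the chosen strategy,
    e.g. lowest estimated latency) with enough residual capacity."""
    for dc in datacenters:
        if dc["free_cpu"] >= request["cpu"] and dc["free_bw"] >= request["bw"]:
            dc["free_cpu"] -= request["cpu"]
            dc["free_bw"] -= request["bw"]
            return dc["name"]
    return None  # rejected by every DC

def orchestrate(requests, datacenters):
    retrial_queue = deque(dict(r, retries=0) for r in requests)
    placed, blocked = [], []
    while retrial_queue:
        req = retrial_queue.popleft()
        dc = try_admit(req, datacenters)
        if dc is not None:
            placed.append((req["id"], dc))
        elif req["retries"] < MAX_RETRIES:
            req["retries"] += 1
            retrial_queue.append(req)  # retry after the remaining requests
        else:
            blocked.append(req["id"])  # finally blocked after MAX_RETRIES
    return placed, blocked
```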

    Resource Allocation in Edge Computing Environments of Mobile Networks

    The evolution of information technology is increasing the diversity of connected devices and leading to the expansion of new application areas. These applications require ultra-low latency, which cannot be achieved by legacy cloud infrastructures given their distance from users. By placing resources closer to users, the recently developed edge computing paradigm aims to meet the needs of these applications. Edge computing is inspired by cloud computing and extends it to the edge of the network, in proximity to where the data is generated. This paradigm leverages the proximity between the processing infrastructure and the users to ensure ultra-low latency and high data throughput. The aim of this thesis is to improve resource allocation at the network edge to provide an improved quality of service and experience for low-latency applications. For better resource allocation, it is necessary to have reliable knowledge of the resources available at any moment. The first contribution of this thesis is a resource representation that allows the supervisory entity to acquire information about the resources available on each device. This information is then used by the resource allocation scheme to allocate resources appropriately for the different services. The resource allocation scheme is based on Lyapunov optimization, and it is executed only when resource allocation is required, which reduces the latency and resource consumption on each edge device. The second contribution of this thesis focuses on resource allocation for edge services. The services are created by chaining a set of virtual network functions. Resource allocation for services consists of finding an adequate placement, routing, and scheduling of these virtual network functions. We propose a solution based on game theory and machine learning to find a suitable placement and routing, as well as an appropriate scheduling, of these functions at the network edge. Finding the placement and routing of network functions is formulated as a mean field game solved by iterative Ishikawa-Mann learning. In addition, the scheduling of the network functions on the different edge nodes is formulated as a matching problem, which is solved using an improved version of the deferred acceptance algorithm that we propose. The third contribution of this thesis is resource allocation for vehicular services at the edge of the network. In this contribution, the services are migrated and moved across the different edge infrastructures to ensure service continuity. Vehicular services are particularly delay-sensitive and relate mainly to road safety and security. Therefore, the migration of vehicular services is a complex operation. We propose an approach based on deep reinforcement learning to proactively migrate the different services while ensuring their continuity under high mobility constraints.
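
    For the matching-based scheduling step, the sketch below shows the textbook deferred-acceptance (Gale-Shapley) procedure that the proposed improved variant builds on; the improvement itself and any realistic construction of the preference lists are not reproduced, and the VNF and node names and their preferences are illustrative assumptions.

```python
# Textbook deferred-acceptance (Gale-Shapley) matching of VNFs to edge nodes,
# as a baseline sketch only; the thesis proposes an improved variant that is
# not reproduced here. Preference lists are illustrative assumptions.

def deferred_acceptance(vnf_prefs, node_prefs):
    """vnf_prefs: VNF -> ordered list of preferred nodes.
    node_prefs: node -> ordered list of preferred VNFs (one VNF per node).
    Returns a stable one-to-one matching as a dict node -> VNF."""
    free = list(vnf_prefs)                       # VNFs still unmatched
    next_choice = {v: 0 for v in vnf_prefs}      # next node each VNF proposes to
    match = {}                                   # node -> currently held VNF
    rank = {n: {v: i for i, v in enumerate(p)} for n, p in node_prefs.items()}
    while free:
        v = free.pop(0)
        n = vnf_prefs[v][next_choice[v]]         # best node not yet proposed to
        next_choice[v] += 1
        if n not in match:
            match[n] = v                         # node tentatively accepts
        elif rank[n][v] < rank[n][match[n]]:
            free.append(match[n])                # node prefers v: evict holder
            match[n] = v
        else:
            free.append(v)                       # rejected, proposes elsewhere
    return match

if __name__ == "__main__":
    vnf_prefs = {"fw": ["edge1", "edge2"], "nat": ["edge1", "edge2"]}
    node_prefs = {"edge1": ["nat", "fw"], "edge2": ["fw", "nat"]}
    print(deferred_acceptance(vnf_prefs, node_prefs))
    # {'edge1': 'nat', 'edge2': 'fw'}
```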

    Enabling Scalable and Sustainable Softwarized 5G Environments

    The fifth generation of telecommunication systems (5G) is foreseen to play a fundamental role in our socio-economic growth by supporting various and radically new vertical applications (such as Industry 4.0, eHealth, and Smart Cities/Electrical Grids, to name a few), as a one-size-fits-all technology enabled by emerging softwarization solutions, specifically the Fog, Multi-access Edge Computing (MEC), Network Functions Virtualization (NFV) and Software-Defined Networking (SDN) paradigms. Notwithstanding the notable potential of the aforementioned technologies, a number of open issues still need to be addressed to ensure their complete rollout. This thesis addresses the scalability and sustainability issues in softwarized 5G environments through contributions in three research axes: a) Infrastructure Modeling and Analytics, b) Network Slicing and Mobility Management, and c) Network/Services Management and Control. The main contributions include a model-based analytics approach for real-time workload profiling and estimation of network key performance indicators (KPIs) in NFV infrastructures (NFVIs), as well as an SDN-based multi-clustering approach to scale geo-distributed virtual tenant networks (VTNs) and to support seamless user/service mobility; building on these, solutions to the problems of resource consolidation, service migration, and load balancing are also developed in the context of 5G. All in all, this entails the adoption of Stochastic Models, Mathematical Programming, Queueing Theory, Graph Theory and Team Theory principles, in the context of Green Networking, NFV and SDN.
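
    As a minimal illustration of the model-based analytics direction, the sketch below derives simple KPIs (utilization and mean response time) for a single VNF instance under an M/M/1 queueing assumption, given a profiled arrival rate and service rate. The M/M/1 choice and the numbers are assumptions for illustration; the thesis's actual models are richer and are not reproduced here.

```python
# Hypothetical illustration of model-based KPI estimation for one VNF
# instance modelled as an M/M/1 queue. The model choice and the numbers
# are assumptions, not taken from the thesis.

def mm1_kpis(arrival_rate: float, service_rate: float) -> dict:
    """Return utilization, mean number in system, and mean response time
    for an M/M/1 queue; requires arrival_rate < service_rate."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must stay below service rate")
    rho = arrival_rate / service_rate                          # utilization
    mean_in_system = rho / (1.0 - rho)                         # L = rho / (1 - rho)
    mean_response_time = 1.0 / (service_rate - arrival_rate)   # W = 1 / (mu - lambda)
    return {"utilization": rho,
            "mean_in_system": mean_in_system,
            "mean_response_time_s": mean_response_time}

if __name__ == "__main__":
    # e.g. a profiled load of 800 req/s against a VNF serving 1000 req/s
    print(mm1_kpis(800.0, 1000.0))
    # utilization 0.8, about 4 requests in system, 5 ms mean response time
```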