Self-Modeling Based Diagnosis of Software-Defined Networks
Networks built using SDN (Software-Defined Networks) and NFV (Network
Functions Virtualization) approaches are expected to face several challenges
such as scalability, robustness and resiliency. In this paper, we propose a
self-modeling based diagnosis to enable resilient networks in the context of
SDN and NFV. We focus on solving two major problems. On the one hand, we lack
today a model or template that describes the managed elements in the context
of SDN and NFV. On the other hand, the highly dynamic networks enabled by
softwarisation require generating a diagnosis model at runtime from
which root causes can be identified. In this paper, we propose
finer-grained templates that model not only network nodes but also their
sub-components, enabling a more detailed diagnosis suited to the SDN and NFV
context. In addition, we specify and validate a self-modeling based diagnosis
using Bayesian Networks. This approach differs from the state of the art in
discovering network and service dependencies at run-time and building
the diagnosis model of any SDN infrastructure using our templates.
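The kind of Bayesian-network diagnosis the abstract describes can be sketched with a toy two-element template: a switch and a VNF behind it, each a possible root cause of observed symptoms. All priors, symptom probabilities, and element names below are hypothetical illustrations, not the paper's templates; inference is exact enumeration over the tiny network.

```python
from itertools import product

# Hypothetical priors for the two candidate root causes.
P_SWITCH_FAULT = 0.01
P_VNF_FAULT = 0.05

def p_link_alarm(switch_fault):
    # Link-down alarm depends only on the switch (small false-alarm rate).
    return 0.95 if switch_fault else 0.02

def p_service_down(switch_fault, vnf_fault):
    # Noisy-OR: either fault can independently break the end-to-end service.
    p_ok = 1 - 0.01                      # leak term for unmodeled causes
    if switch_fault:
        p_ok *= 1 - 0.8
    if vnf_fault:
        p_ok *= 1 - 0.9
    return 1 - p_ok

def posterior_switch_fault(link_alarm, service_down):
    """P(switch fault | observed symptoms), by exact enumeration."""
    num = den = 0.0
    for sw, vnf in product((False, True), repeat=2):
        p = (P_SWITCH_FAULT if sw else 1 - P_SWITCH_FAULT)
        p *= (P_VNF_FAULT if vnf else 1 - P_VNF_FAULT)
        p *= p_link_alarm(sw) if link_alarm else 1 - p_link_alarm(sw)
        p *= (p_service_down(sw, vnf) if service_down
              else 1 - p_service_down(sw, vnf))
        den += p
        if sw:
            num += p
    return num / den
```

A link alarm together with a service outage sharply raises the posterior on the switch, while a service outage alone points instead to the VNF; generating such a network from runtime-discovered dependencies is the self-modeling step.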
Engineering software for next-generation networks in a sustainable way.
The virtualization and softwarization of network functions is the networking industry's latest achievement. Software-Defined Networks (SDN) and Network Function Virtualization (NFV) propose novel software architectures and development processes adapted to, for instance, mobile networks (e.g., 6G). However, these architectures and processes are mainly defined by the telecommunications community, without much regard for the contributions of software engineering to generic software processes. This paper explores how the fields of software engineering (SE) and telecommunications can work together to improve service virtualization, cloud computing, and edge computing in the context of next-generation networks. It also highlights the potential of SE fields like software architecture, variability, and configuration to greatly enhance the development of virtual network functions (VNFs). In addition, the new contributions should be energy efficient, since this is a primary goal in next-gen networks. Finally, current software processes should consider the impact of communication networks on the correct functioning of software products, since network behavior can affect the QoE of users. Work supported by the projects IRIS PID2021-122812OB-I00 (co-financed by FEDER funds) and DAEMON H2020-101017109, and by Universidad de Málaga.
Towards 5G Software-Defined Ecosystems: Technical Challenges, Business Sustainability and Policy Issues
Techno-economic drivers are creating the conditions for a radical change of paradigm in the design and operation of future telecommunications infrastructures. In fact, SDN, NFV, Cloud and Edge-Fog Computing are converging into a single systemic transformation termed "Softwarization" that will find concrete exploitations in 5G systems. The IEEE SDN Initiative has elaborated a vision, an evolutionary path and some techno-economic scenarios of this transformation: specifically, the major technical challenges, business sustainability and policy issues have been investigated. This white paper presents: 1) an overview of the main techno-economic drivers steering the "Softwarization" of telecommunications; 2) an introduction to the Open Mobile Edge Cloud vision (covered in a companion white paper); 3) the main technical challenges in terms of operations, security and policy; 4) an analysis of the potential role of open source software; 5) some use case proposals for proof-of-concepts; and 6) a short description of the main socio-economic impacts being produced by "Softwarization". Along these directions, IEEE SDN is also developing an open catalogue of software platforms, toolkits, and functionalities aiming at a step-by-step development and aggregation of test-beds/field-trials on SDN-NFV-5G.
Multicasting Over 6G Non-Terrestrial Networks: A Softwarization-Based Approach
Multicast/broadcast delivery is a critical challenge of future 6G mobile networks where massive Internet of Things (IoT) deployment and extended reality multimedia such as teleportation are target application scenarios. Non-terrestrial networks (NTNs) are considered essential for the success of 6G, which aims to provide true 'global' services by extending mobile access worldwide, thus overcoming the coverage limit of current terrestrial networks (TNs). This article discusses how the main distinguishing features of NTNs can be effectively exploited for 6G multicasting. Furthermore, in line with the evolution of future 6G networks toward softwarized systems, we evaluate the potential of using the softwarization paradigm in the heterogeneous TN-NTN architecture to deliver multicast services.
Softwarized resource allocation in digital twins-empowered networks for future quantum-enabled consumer applications
Network softwarization (NetSoft), recognized as a crucial attribute of 6G networks, promises to provide enhanced and advanced services, including future quantum-enabled consumer applications. Softwarized resource allocation is the core issue in the NetSoft concept. Digital twin (DT) technology promises to generate a corresponding digital world that reflects and interacts with the original physical world seamlessly. With DT empowerment, a digital replica of softwarized networks can be generated to predict, simulate, and analyze softwarized resource allocation in more economical, convenient and scalable ways. In this paper, we study the softwarized resource allocation of requested services, usually called slices, in DT-empowered networks for future quantum-enabled consumer applications. We focus on developing an efficient softwarized resource allocation algorithm. First, we present models of the DT-empowered networks and service requests using graph theory and hypergraph theory. Then, we design a softwarized resource management framework, labeled DT-Slice-Soft-6G. This framework manages softwarized resources, calculates resource allocation solutions in the digital replica, and sends the calculated solutions back to the softwarized 6G networks. Thereafter, an efficient and fine-grained softwarized resource allocation algorithm, embedded in DT-Slice-Soft-6G, is detailed. This algorithm, labeled Heu-DT-Slice-6G, is based on efficient heuristic methods. To validate the highlights of DT-Slice-Soft-6G and Heu-DT-Slice-6G, we conduct simulations in our self-developed simulator.
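The abstract does not detail Heu-DT-Slice-6G, so the sketch below is only a generic stand-in for what such a heuristic computes in the digital replica: greedily mapping each slice's per-VNF resource demands onto substrate nodes, committing a slice only if all of its demands fit. Node names, capacities, and the first-fit policy are all hypothetical.

```python
def allocate_slices(node_capacity, slice_requests):
    """Greedy heuristic sketch (not the paper's Heu-DT-Slice-6G algorithm):
    place each demand of a slice on the node with the most residual capacity,
    committing the slice atomically or rejecting it."""
    residual = dict(node_capacity)
    placement = {}
    for slice_id, demands in slice_requests.items():
        trial = dict(residual)           # tentative copy: commit or roll back
        mapping = []
        for d in demands:
            node = max(trial, key=trial.get)   # most residual capacity first
            if trial[node] < d:
                mapping = None           # reject slice: insufficient resources
                break
            trial[node] -= d
            mapping.append(node)
        if mapping is not None:
            residual = trial             # commit only fully placed slices
        placement[slice_id] = mapping
    return placement, residual
```

In a DT-empowered workflow, this computation would run against the replica's state, and only the resulting placement would be pushed back to the live softwarized network.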
Network slice selection in softwarization-based mobile networks
Recently, network slicing (NS) has been introduced as a key enabler to accommodate diversified services in network functions virtualization-enabled software-defined mobile networks. Although there has been some research on network slice deployment and configuration, how user equipments select the most appropriate network slice is still an essential yet challenging issue, as slice selection may substantially affect resource utilization and user Quality of Service (QoS). In this paper, we investigate the optimal selection of end-to-end slices with the aim of improving network resource utilization while guaranteeing the QoS of users. We formulate the optimal slice selection problem as maximizing the users' satisfaction degree and prove it is NP-hard. We thus resort to a genetic algorithm (GA) to find a suboptimal solution and develop a GA-based heuristic algorithm. The effectiveness of our proposed NS selection algorithm is validated via simulation experiments.
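A GA-based heuristic of the kind described can be sketched as follows: a chromosome assigns one slice to each user, and fitness is total satisfaction minus a penalty for exceeding any slice's admission capacity. The encoding, operators, penalty weight, and satisfaction values are hypothetical illustrations, not the paper's formulation.

```python
import random

def ga_slice_selection(satisfaction, capacity, pop_size=30, gens=60, seed=1):
    """GA sketch for slice selection. satisfaction[u][s] is user u's
    satisfaction on slice s; capacity[s] is how many users slice s admits."""
    rng = random.Random(seed)
    n_users, n_slices = len(satisfaction), len(capacity)

    def fitness(assign):
        total = sum(satisfaction[u][s] for u, s in enumerate(assign))
        load = [0] * n_slices
        for s in assign:
            load[s] += 1
        overload = sum(max(0, load[s] - capacity[s]) for s in range(n_slices))
        return total - 10.0 * overload   # heavy penalty keeps solutions feasible

    popn = [[rng.randrange(n_slices) for _ in range(n_users)]
            for _ in range(pop_size)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        elite = popn[: pop_size // 2]            # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_users)
            child = a[:cut] + b[cut:]            # one-point crossover
            if rng.random() < 0.2:               # random mutation
                child[rng.randrange(n_users)] = rng.randrange(n_slices)
            children.append(child)
        popn = elite + children
    best = max(popn, key=fitness)
    return best, fitness(best)
```

Since the exact problem is NP-hard, the GA trades optimality for polynomial runtime; elitism guarantees the best assignment found so far is never lost between generations.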
Split Federated Learning for 6G Enabled-Networks: Requirements, Challenges and Future Directions
Sixth-generation (6G) networks anticipate intelligently supporting a wide
range of smart services and innovative applications. Such a context urges a
heavy usage of Machine Learning (ML) techniques, particularly Deep Learning
(DL), to foster innovation and ease the deployment of intelligent network
functions/operations, which are able to fulfill the various requirements of the
envisioned 6G services. Specifically, collaborative ML/DL consists of deploying
a set of distributed agents that collaboratively train learning models without
sharing their data, thus improving data privacy and reducing the
time/communication overhead. This work provides a comprehensive study on how
collaborative learning can be effectively deployed over 6G wireless networks.
In particular, our study focuses on Split Federated Learning (SFL), a technique
recently emerged promising better performance compared with existing
collaborative learning approaches. We first provide an overview of three
emerging collaborative learning paradigms, including federated learning, split
learning, and split federated learning, as well as of 6G networks along with
their main vision and timeline of key developments. We then highlight the need
for split federated learning towards the upcoming 6G networks in every aspect,
including 6G technologies (e.g., intelligent physical layer, intelligent edge
computing, zero-touch network management, intelligent resource management) and
6G use cases (e.g., smart grid 2.0, Industry 5.0, connected and autonomous
systems). Furthermore, we review existing datasets along with frameworks that
can help in implementing SFL for 6G networks. We finally identify key technical
challenges, open issues, and future research directions related to SFL-enabled
6G networks.
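SFL's core mechanic can be illustrated on a deliberately tiny split model: each client holds the front part, the server holds the back part, clients exchange activations and gradients with the server at the cut layer, and the client-side weights are federated-averaged every round. The model (a two-parameter chain y = w_server * (w_client * x)), the data, and the learning rate are all hypothetical toy choices, not an SFL implementation for real networks.

```python
def sfl_round(clients, w_server, lr=0.05):
    """One round of split federated learning on a toy split model:
    clients run the front part and send activations up; the server runs
    the back part and returns gradients; client-side weights are then
    federated-averaged (FedAvg)."""
    new_ws, server_grads = [], []
    for w_client, data in clients:
        for x, y in data:
            a = w_client * x                 # client-side forward (activation)
            pred = w_server * a              # server-side forward
            err = pred - y                   # d(0.5 * err**2) / d pred
            server_grads.append(err * a)     # gradient for the server part
            w_client -= lr * err * w_server * x   # gradient sent back to client
        new_ws.append(w_client)
    w_server -= lr * sum(server_grads) / len(server_grads)
    w_avg = sum(new_ws) / len(new_ws)        # FedAvg of the client-side parts
    return [(w_avg, d) for _, d in clients], w_server

# Toy task (hypothetical data): jointly learn y = 2x across two clients
clients = [(1.0, [(x, 2 * x) for x in (0.5, 1.0, 1.5)]),
           (1.0, [(x, 2 * x) for x in (0.8, 1.2, 2.0)])]
w_server = 1.0
for _ in range(200):
    clients, w_server = sfl_round(clients, w_server)
```

Note how raw samples never leave the clients: only cut-layer activations and gradients cross the network, which is the privacy and communication argument made for SFL in 6G settings.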
Five Facets of 6G: Research Challenges and Opportunities
Whilst the fifth-generation (5G) systems are being rolled out across the
globe, researchers have turned their attention to the exploration of radical
next-generation solutions. At this early evolutionary stage we survey five main
research facets of this field, namely Facet 1: next-generation
architectures, spectrum and services; Facet 2: next-generation networking;
Facet 3: Internet of Things (IoT); Facet 4: wireless positioning and sensing;
as well as Facet 5: applications of deep learning in 6G networks. In this
paper, we have provided a critical appraisal of the literature of promising
techniques ranging from the associated architectures, networking, applications
as well as designs. We have portrayed a plethora of heterogeneous architectures
relying on cooperative hybrid networks supported by diverse access and
transmission mechanisms. The vulnerabilities of these techniques are also
addressed and carefully considered to highlight the most promising
future research directions. Additionally, we have listed a rich suite of
learning-driven optimization techniques. We conclude by observing the
evolutionary paradigm-shift that has taken place from pure single-component
bandwidth-efficiency, power-efficiency or delay-optimization towards
multi-component designs, as exemplified by the twin-component ultra-reliable
low-latency mode of the 5G system. We advocate a further evolutionary step
towards multi-component Pareto optimization, which requires the exploration of
the entire Pareto front of all optimal solutions, where none of the components
of the objective function may be improved without degrading at least one of the
other components.
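The closing notion of a Pareto front can be made concrete with a simple dominance filter: a design survives if no other design is at least as good on every component and strictly better on one. The candidate designs and metric values below are hypothetical, with all components treated as minimized.

```python
def pareto_front(designs):
    """Return the non-dominated subset of candidate designs.
    Each design maps metric name -> value; all metrics are minimized
    (e.g., delay, power, the reciprocal of bandwidth-efficiency)."""
    def dominates(a, b):
        return (all(a[k] <= b[k] for k in a)
                and any(a[k] < b[k] for k in a))
    return [d for d in designs
            if not any(dominates(other, d)
                       for other in designs if other is not d)]

# Hypothetical candidate transceiver designs (delay in ms, power in W)
designs = [
    {"delay": 1.0, "power": 5.0},   # low delay, high power
    {"delay": 4.0, "power": 1.0},   # high delay, low power
    {"delay": 2.0, "power": 3.0},   # balanced trade-off
    {"delay": 3.0, "power": 4.0},   # dominated by the balanced design
]
front = pareto_front(designs)
```

The first three designs survive as mutually incomparable trade-offs, while the fourth is eliminated; exploring the entire front, rather than collapsing it into a single weighted objective, is the multi-component design philosophy the survey advocates.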
- …