Algorithms for advance bandwidth reservation in media production networks
Media production generally requires many geographically distributed actors (e.g., production houses, broadcasters, advertisers) to exchange huge amounts of raw video and audio data. Traditional distribution techniques, such as dedicated point-to-point optical links, are highly inefficient in terms of installation time and cost. To improve efficiency, shared media production networks that connect all involved actors over a large geographical area are currently being deployed. The traffic in such networks is often predictable, as the timing and bandwidth requirements of data transfers are generally known hours or even days in advance. As such, the use of advance bandwidth reservation (AR) can greatly increase resource utilization and cost efficiency. In this paper, we present an Integer Linear Programming formulation of the bandwidth scheduling problem that takes into account the specific characteristics of media production networks. Two novel optimization algorithms based on this model are thoroughly evaluated and compared by means of in-depth simulation results.
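The core of advance bandwidth reservation is an admission check: a transfer with a known start time, end time, and bandwidth is accepted only if the residual capacity suffices over its whole window. The following is a minimal sketch of that idea on a single link; the function, the capacity value, and the reservation tuples are illustrative assumptions, not the paper's ILP model.

```python
# Minimal sketch of advance bandwidth reservation (AR) admission control on a
# single link. Names and values are illustrative, not the paper's formulation.

LINK_CAPACITY = 10.0  # Gbit/s, assumed example capacity

def admits(reservations, start, end, bw, capacity=LINK_CAPACITY):
    """Check whether a new transfer (start, end, bw) fits alongside the
    already-accepted reservations without exceeding link capacity."""
    # Load only changes at reservation boundaries, so checking those
    # instants inside [start, end) is sufficient.
    events = sorted({start, end} | {t for r in reservations for t in (r[0], r[1])})
    for t in events:
        if start <= t < end:
            load = bw + sum(r[2] for r in reservations if r[0] <= t < r[1])
            if load > capacity:
                return False
    return True

accepted = [(0, 4, 6.0), (2, 6, 3.0)]  # (start h, end h, Gbit/s)
print(admits(accepted, 1, 3, 1.0))     # True: peak load 6 + 3 + 1 = 10
print(admits(accepted, 1, 3, 2.0))     # False: 6 + 3 + 2 = 11 > 10
```

An ILP formulation generalizes this check to many links and flexible start times, optimizing over all requests jointly rather than admitting them one by one.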
Will SDN be part of 5G?
For many, this is no longer a valid question and the case is considered
settled with SDN/NFV (Software Defined Networking/Network Function
Virtualization) providing the inevitable innovation enablers solving many
outstanding management issues regarding 5G. However, given the monumental task
of softwarizing the radio access network (RAN) while 5G is just around the
corner, and with some companies having already started unveiling their 5G
equipment, there is a realistic concern that we may only see some point
solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper
identifies all important obstacles in the way and looks at the state of the art
of the relevant solutions. This survey is different from the previous surveys
on SDN-based RAN as it focuses on the salient problems and discusses solutions
proposed within and outside SDN literature. Our main focus is on fronthaul,
backward compatibility, supposedly disruptive nature of SDN deployment,
business cases and monetization of SDN related upgrades, latency of general
purpose processors (GPP), and the additional security vulnerabilities that
softwarization brings to the RAN. We have also provided a summary of the
architectural developments in SDN-based RAN landscape as not all work can be
covered under the focused issues. This paper provides a comprehensive survey on
the state of the art of SDN-based RAN and clearly points out the gaps in the
technology. Comment: 33 pages, 10 figures
Machine Learning Meets Communication Networks: Current Trends and Future Challenges
The growing network density and unprecedented increase in network traffic, caused by the massively expanding number of connected devices and online services, require intelligent network operations. Machine Learning (ML) has been applied in this regard in different types of networks and networking technologies to meet the requirements of future communicating devices and services. In this article, we provide a detailed account of current research on the application of ML in communication networks and shed light on future research challenges. Research on the application of ML in communication networks is described across: i) the three layers, i.e., physical, access, and network layers; and ii) novel computing and networking concepts such as Multi-access Edge Computing (MEC), Software Defined Networking (SDN), and Network Functions Virtualization (NFV); a brief overview of ML-based network security is also included. Important future research challenges are identified and presented to help spur further research in key areas in this direction.
Energy-Sustainable IoT Connectivity: Vision, Technological Enablers, Challenges, and Future Directions
Technology solutions must effectively balance economic growth, social equity,
and environmental integrity to achieve a sustainable society. Notably, although
the Internet of Things (IoT) paradigm constitutes a key sustainability enabler,
critical issues such as the increasing maintenance operations, energy
consumption, and manufacturing/disposal of IoT devices have long-term negative
economic, societal, and environmental impacts and must be efficiently
addressed. This calls for self-sustainable IoT ecosystems requiring minimal
external resources and intervention, effectively utilizing renewable energy
sources, and recycling materials whenever possible, thus encompassing energy
sustainability. In this work, we focus on energy-sustainable IoT during the
operation phase, although our discussions sometimes extend to other
sustainability aspects and IoT lifecycle phases. Specifically, we provide a
fresh look at energy-sustainable IoT and identify energy provision, transfer,
and energy efficiency as the three main energy-related processes whose
harmonious coexistence pushes toward realizing self-sustainable IoT systems.
Their main related technologies, recent advances, challenges, and research
directions are also discussed. Moreover, we overview relevant performance
metrics to assess the energy-sustainability potential of a certain technique,
technology, device, or network and list some target values for the next
generation of wireless systems. Overall, this paper offers insights that are
valuable for advancing sustainability goals for present and future generations.
Comment: 25 figures, 12 tables, submitted to IEEE Open Journal of the
Communications Society
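Among the performance metrics the abstract alludes to, a widely used one for energy sustainability is energy efficiency measured in bits delivered per joule consumed. The sketch below illustrates the metric only; the function name and all numbers are assumed example values, not figures from the paper.

```python
# Illustrative sketch (not from the paper): energy efficiency in bits per
# joule, a common metric for assessing the energy sustainability of a
# wireless link or device. All values below are assumed examples.

def energy_efficiency(bits_delivered, energy_joules):
    """Bits successfully delivered per joule of energy consumed."""
    return bits_delivered / energy_joules

# Example: a sensor delivers 2 Mbit while drawing 50 mW for 10 s.
bits = 2e6
energy = 0.05 * 10  # P * t = 0.5 J
print(energy_efficiency(bits, energy))  # 4000000.0 bit/J
```

Target values for such metrics in next-generation wireless systems are exactly the kind of figures the surveyed paper lists.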
Is Cloud RAN a Feasible Option for Industrial Communication Network?
Cloud RAN (C-RAN) is a promising paradigm for the next generation radio access network infrastructure, which offers centralised and coordinated baseband signal processing in a cloud-based BBU pool. This requires extremely low latency responses to achieve real-time signal processing. In this paper, we analysed the challenges of introducing a cloud native model for signal processing in C-RAN. We studied the difficulties of achieving real-time processing in a cloud infrastructure by addressing its latency constraint. To evaluate the performance of such a system, we mainly investigated a massive MIMO pilot scheduling process in a C-RAN infrastructure under a factory automation scenario. We considered the stochastic delays incurred by the cloud execution environment as the main constraint impacting scheduling performance. We use simulations to provide insights on the feasibility of C-RAN deployment for industrial communication, which has stringent criteria to meet Industry 4.0 standards under this constraint. Our experimental results show that, concerning the pilot scheduling problem, the C-RAN system is capable of meeting the industrial criteria when the fronthaul and the cloud execution environment introduce latency in the order of milliseconds.
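The feasibility question above boils down to how often a stochastic cloud-induced delay pushes a scheduling decision past its real-time deadline. The following is a rough Monte Carlo sketch of that trade-off; the delay model (fixed fronthaul latency plus exponential cloud jitter) and all parameter values are assumptions for illustration, not the paper's simulator.

```python
# Rough sketch (assumed model, not the paper's simulator): estimate how often
# a pilot-scheduling round misses its real-time deadline when the cloud
# execution environment adds stochastic delay on top of fronthaul latency.
import random

def deadline_miss_rate(deadline_ms, mean_cloud_delay_ms, trials=100_000, seed=0):
    """Fraction of scheduling rounds whose total delay exceeds the deadline."""
    rng = random.Random(seed)
    misses = 0
    for _ in range(trials):
        # Fixed fronthaul latency (0.1 ms, assumed) plus exponentially
        # distributed cloud execution jitter.
        delay = 0.1 + rng.expovariate(1.0 / mean_cloud_delay_ms)
        if delay > deadline_ms:
            misses += 1
    return misses / trials

print(deadline_miss_rate(deadline_ms=1.0, mean_cloud_delay_ms=0.2))  # rare misses
print(deadline_miss_rate(deadline_ms=1.0, mean_cloud_delay_ms=2.0))  # frequent misses
```

With millisecond-scale deadlines, the miss rate is dominated by the tail of the cloud delay distribution, which is why bounding the execution environment's latency is the central feasibility condition.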
Fair Selection of Edge Nodes to Participate in Clustered Federated Multitask Learning
Clustered federated multitask learning is introduced as an efficient
technique when data is unbalanced and distributed amongst clients in a
non-independent and identically distributed manner. While a similarity metric
can provide client groups with specialized models according to their data
distribution, this process can be time-consuming because the server needs to
capture all data distribution first from all clients to perform the correct
clustering. Due to resource and time constraints at the network edge, only a
fraction of devices is selected every round, necessitating an efficient
scheduling technique to address these issues. Thus, this paper
introduces a two-phased client selection and scheduling approach to improve the
convergence speed while capturing all data distributions. This approach ensures
correct clustering and fairness between clients by leveraging bandwidth reuse
for participants who spent a longer time training their models and exploiting the
heterogeneity in the devices to schedule the participants according to their
delay. The server then performs the clustering depending on predetermined
thresholds and stopping criteria. When a specified cluster approximates a
stopping point, the server employs a greedy selection for that cluster by
picking the devices with lower delay and better resources. The convergence
analysis is provided, showing the relationship between the proposed scheduling
approach and the convergence rate of the specialized models to obtain
convergence bounds under non-i.i.d. data distribution. We carry out extensive
simulations, and the results demonstrate that the proposed algorithms reduce
training time and improve the convergence speed while equipping every user with
a customized model tailored to its data distribution.
Comment: To appear in IEEE Transactions on Network and Service Management,
Special Issue on Federated Learning for the Management of Networked Systems
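The greedy per-cluster selection step described in the abstract (once a cluster nears its stopping point, pick the devices with lower delay and better resources) can be sketched as a simple ranked selection. The field names and the exact ranking rule below are assumptions for illustration, not the paper's formulation.

```python
# Hedged sketch of greedy client selection for a cluster that is close to its
# stopping criterion: prefer devices with the lowest delay, breaking ties in
# favour of better resources. Field names and scoring are assumed, not taken
# from the paper.

def greedy_select(cluster_devices, k):
    """Pick k device ids: lowest delay first, ties broken by higher cpu_score."""
    ranked = sorted(cluster_devices,
                    key=lambda d: (d["delay_ms"], -d["cpu_score"]))
    return [d["id"] for d in ranked[:k]]

devices = [
    {"id": "a", "delay_ms": 40, "cpu_score": 2.0},
    {"id": "b", "delay_ms": 15, "cpu_score": 1.0},
    {"id": "c", "delay_ms": 15, "cpu_score": 3.0},
    {"id": "d", "delay_ms": 90, "cpu_score": 5.0},
]
print(greedy_select(devices, 2))  # ['c', 'b']: lowest delay, ties by resources
```

In the full scheme this greedy phase only kicks in near convergence; earlier rounds instead spread participation across clients so the server can observe all data distributions before clustering.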