Throughput Maximization in Cloud Radio Access Networks using Network Coding
This paper is interested in maximizing the total throughput of cloud radio access networks (CRANs), in which multiple remote radio heads (RRHs) are connected to a central computing unit known as the cloud. The transmit frame of each RRH consists of multiple radio resource blocks (RRBs), and the cloud is responsible for synchronizing these RRBs and scheduling them to users. Unlike previous works that allocate each RRB to only a single user at each time instance, this paper proposes to mix the flows of multiple users in each RRB using instantly decodable network coding (IDNC). The proposed scheme jointly schedules the users to the different RRBs, chooses the encoded file sent in each RRB, and selects the rate at which it is transmitted. Hence, the paper maximizes the throughput, defined as the number of correctly received bits. To fulfill this objective jointly, we design a graph in which each vertex represents a possible user-RRB association, encoded file, and transmission rate. By appropriately choosing the weights of the vertices, the scheduling problem is shown to be equivalent to a maximum weight clique problem over the newly introduced graph. Simulation results illustrate the significant gains of the proposed scheme compared to classical coding and uncoded solutions.
Comment: 7 pages, 7 figures
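As a rough illustration of the clique formulation sketched in this abstract, the following toy example builds a scheduling graph whose vertices are (user, RRB, rate) associations and runs a simple greedy heuristic for a heavy clique. All names, weights, and the greedy heuristic itself are illustrative assumptions, not the paper's actual construction; here two vertices conflict only if they schedule the same RRB.

```python
import itertools

# Hypothetical toy instance: users, RRBs, rates and weights are illustrative.
users, rrbs, rates = ["u1", "u2"], ["b1", "b2"], [1.0, 2.0]

# Each vertex is a candidate (user, RRB, rate) association; its weight stands in
# for the number of correctly received bits if that association is scheduled.
vertices = [(u, b, r) for u, b, r in itertools.product(users, rrbs, rates)]

def weight(v):
    _, _, rate = v
    return rate  # placeholder weight: real weights depend on channel conditions

def compatible(v1, v2):
    # Two vertices conflict if they use the same RRB (one encoded file per RRB).
    return v1[1] != v2[1]

def greedy_max_weight_clique(vertices):
    """Greedy heuristic: add the heaviest vertex compatible with all chosen so far."""
    clique = []
    for v in sorted(vertices, key=weight, reverse=True):
        if all(compatible(v, c) for c in clique):
            clique.append(v)
    return clique

schedule = greedy_max_weight_clique(vertices)  # one association per RRB
```

The greedy pass is only a stand-in for an exact maximum weight clique solver; it illustrates how vertex weights and conflict edges jointly determine the schedule.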
Instantly Decodable Network Coding: From Centralized to Device-to-Device Communications
From its introduction to its quindecennial, network coding has built a strong reputation for enhancing packet recovery and achieving maximum information flow in both wired and wireless networks. Traditional studies focused on optimizing the throughput of the system by proposing elaborate schemes able to reach the network capacity. With the shift toward distributed computing on mobile devices, performance and complexity have both become critical factors that affect the efficiency of a coding strategy. Instantly decodable network coding presents itself as a new paradigm in network coding that trades off these two aspects. This paper reviews instantly decodable network coding schemes by identifying, categorizing, and evaluating various algorithms proposed in the literature. The first part of the manuscript investigates conventional centralized systems, in which all decisions are carried out by a central unit, e.g., a base station. In particular, two successful approaches, known as strict and generalized instantly decodable network coding, are compared in terms of reliability, performance, complexity, and packet selection methodology. The second part considers the use of instantly decodable codes in a device-to-device communication network, in which devices speed up the recovery of the missing packets by exchanging network-coded packets. Although performance improvements generally come at the price of increased computational complexity, numerous schemes that are successful from both the performance and complexity viewpoints are identified.
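The XOR-based decoding that makes these codes "instantly decodable" can be shown in a few lines. The toy byte strings below are illustrative and do not correspond to any specific scheme in the survey; the point is that a device already holding one of the two mixed packets recovers the other with a single XOR, with no matrix inversion.

```python
# Toy sketch of instant decodability via XOR (illustrative packets only).
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

p1, p2 = b"\x0f\xa0", b"\x33\x55"
coded = xor_bytes(p1, p2)         # sender broadcasts p1 XOR p2
recovered = xor_bytes(coded, p1)  # a device that already has p1 decodes p2 instantly
```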
Instantly Decodable Network Coding: From Point to Multi-Point to Device-to-Device Communications
The network coding paradigm enhances transmission efficiency by combining information flows and has drawn significant attention in information theory, networking, communications and data storage. Instantly decodable network coding (IDNC), a subclass of network coding, has demonstrated its ability to improve the quality of service of time-critical applications thanks to its attractive properties, namely throughput enhancement, delay reduction, simple XOR-based encoding and decoding, and small coefficient overhead. Nonetheless, for point-to-multipoint (PMP) networks, IDNC cannot guarantee the decoding of a specific new packet at individual devices in each transmission. Furthermore, in device-to-device (D2D) networks, the transmitting devices may possess only a subset of the packets that can be used to form coded packets. These challenges require IDNC algorithms to be optimized for different application requirements and network configurations.
In this thesis, we first study a scalable live video broadcast over a wireless PMP network, where the devices receive video packets from a base station. Such layered live video has a hard deadline and imposes a decoding order on the video layers. We design two prioritized IDNC algorithms that provide a high level of priority to the most important video layer before considering additional video layers in coding decisions. These prioritized algorithms are shown to increase the number of decoded video layers at the devices compared to the existing network coding schemes.
We then study video distribution over a partially connected D2D network, where a group of devices cooperate with each other to recover their missing video content. We introduce a cooperation-aware IDNC graph that defines all feasible coding and transmission conflict-free decisions. Using this graph, we propose an IDNC solution that avoids coding and transmission conflicts, and meets the hard deadline for high-importance video packets. It is demonstrated that the proposed solution delivers an improved video quality to the devices compared to video- and cooperation-oblivious coding schemes.
We also consider a heterogeneous network wherein devices use two wireless interfaces to receive packets from the base station and another device concurrently. For such a network, we are interested in applications with reliable in-order packet delivery requirements. We represent all feasible coding opportunities and conflict-free transmissions using a dual-interface IDNC graph. We select a maximal independent set over the graph by considering the dual interfaces of individual devices, the in-order delivery requirements of packets and lossy channel conditions. This graph-based solution is shown to reduce the in-order delivery delay compared to the existing network coding schemes.
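The maximal-independent-set step described here can be sketched with a toy conflict graph. The vertex names, weights, and conflict edge below are hypothetical; the thesis's actual graph additionally encodes dual interfaces, in-order requirements and channel losses in its vertices and edges.

```python
# Greedy sketch: vertices are candidate coded transmissions, edges mark conflicts,
# and we keep the highest-weight vertices that conflict with nothing already chosen.
def maximal_independent_set(weights, conflicts):
    chosen = set()
    for v in sorted(weights, key=weights.get, reverse=True):
        if all((v, u) not in conflicts and (u, v) not in conflicts for u in chosen):
            chosen.add(v)
    return chosen

weights = {"tx1": 3.0, "tx2": 2.5, "tx3": 1.0}  # e.g. expected delivery gain (toy values)
conflicts = {("tx1", "tx2")}                     # tx1 and tx2 cannot be served together
selected = maximal_independent_set(weights, conflicts)
```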
Finally, we consider a D2D network with a group of devices experiencing heterogeneous channel capacities. For such cooperative scenarios, we address the problem of minimizing the completion time required to recover all missing packets at the devices using IDNC and physical-layer rate adaptation. Our proposed IDNC algorithm balances the adopted transmission rate against the number of targeted devices that can successfully receive the transmitted packet. We show that the proposed rate-aware IDNC algorithm reduces the completion time compared to the rate-oblivious coding scheme.
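The rate-versus-targets trade-off at the heart of this chapter can be illustrated with toy numbers (the capacities and the simple product objective below are assumptions for illustration, not the thesis's actual optimization): a low rate reaches more devices per transmission but carries fewer bits, so a useful proxy is the rate times the number of devices able to decode at that rate.

```python
# Hypothetical per-device sustainable rates (Mbps); values are illustrative only.
device_capacities = [1.0, 2.0, 2.0, 4.0]

def best_rate(capacities, candidate_rates):
    def targeted(rate):
        # Devices whose channel supports at least this rate can decode the packet.
        return sum(c >= rate for c in capacities)
    # Throughput proxy: transmission rate times number of targeted devices.
    return max(candidate_rates, key=lambda r: r * targeted(r))

chosen = best_rate(device_capacities, [1.0, 2.0, 4.0])
```

With these numbers, rate 1.0 targets all four devices (proxy 4.0), rate 2.0 targets three (proxy 6.0), and rate 4.0 targets one (proxy 4.0), so the middle rate wins.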
Design and analysis of network coding schemes for efficient fronthaul offloading of fog-radio access networks
In the era of the Internet of Things (IoT), everything will be connected. Smart homes and cities, connected cars, smart agriculture, wearable technologies, smart healthcare, smart sport, and fitness are all becoming a reality. However, the current cloud architecture cannot manage the tremendous number of connected devices and skyrocketing data traffic while providing the speeds promised by 5G and beyond. Centralised cloud data centres are physically too far from where the data originate (edge of the network), inevitably leading to data transmission speeds that are too slow for delay-sensitive applications. Thus, researchers have proposed fog architecture as a solution to the ever-increasing number of connected devices and data traffic.
The main idea of fog architecture is to bring content physically closer to end users, thus reducing data transmission times. This thesis considers a type of fog architecture in which smart end devices have storage and processing capabilities and can communicate and collaborate with each other. The major goal of this thesis is to develop methods of efficiently governing communication and collaboration between smart end devices so that their requests to upper network layers are minimised. This is achieved by incorporating principles from graph theory, network coding and machine learning to model the problem and design efficient network-coded scheduling algorithms that further enhance the achieved performance. Maximising end users' self-sufficiency decreases the load on the system and increases its capacity. This will allow the central processing unit to manage more devices, which is vital, given that more than 29 billion devices will connect to the infrastructure by 2023 \cite{Cisco1823}.
Specifically, given that the limitations of the smart end devices and the system in general lead to various communication conflicts, a novel network coding graph is developed that takes into account all possible conflicts and enables the search for an efficient feasible solution. The thesis designs heuristic algorithms that search for the solution over the novel network coding graph, investigates the complexity of the proposed algorithms, and shows the offloading strategy's asymptotic optimality.
Although the main aim of this work is to decrease the involvement of upper fog layers in serving smart end devices, it also takes into account how much energy end devices would use during collaborations. Unfortunately, a higher system capacity comes at the price of more energy spent by smart end devices; thus, service providers' interests and end users' interests are conflicting. Finally, this thesis investigates how multihop communication between end devices influences the offloading of upper fog layers. Smart end devices are equipped with machine learning capabilities that allow them to find efficient paths to their peers, further improving offloading.
In conclusion, the work in this thesis shows that by smartly designing and scheduling communication between end devices, it is possible to significantly reduce the load on the system, increase its capacity and achieve fast transmissions between end devices, allowing them to run latency-critical applications.
Zero-padding Network Coding and Compressed Sensing for Optimized Packets Transmission
Ubiquitous Internet of Things (IoT) is destined to connect everybody and everything on a never-before-seen scale. Such networks, however, have to tackle the inherent issues created by the presence of very heterogeneous data transmissions over the same shared network. This very diverse communication, in turn, produces network packets of various sizes ranging from very small sensory readings to comparatively humongous video frames. Such a massive amount of data itself, as in the case of sensory networks, is also continuously captured at varying rates and contributes to increasing the load on the network itself, which could hinder transmission efficiency. However, they also open up possibilities to exploit various correlations in the transmitted data due to their sheer number. Reductions based on this also enable the networks to keep up with the new wave of big data-driven communications by simply investing in the promotion of select techniques that efficiently utilize the resources of the communication systems. One of the solutions to tackle the erroneous transmission of data employs linear coding techniques, which are ill-equipped to handle the processing of packets with differing sizes. Random Linear Network Coding (RLNC), for instance, generates unreasonable amounts of padding overhead to compensate for the different message lengths, thereby suppressing the pervasive benefits of the coding itself. We propose a set of approaches that overcome such issues, while also reducing the decoding delays at the same time. Specifically, we introduce and elaborate on the concept of macro-symbols and the design of different coding schemes. Due to the heterogeneity of the packet sizes, our progressive shortening scheme is the first RLNC-based approach that generates and recodes unequal-sized coded packets. Another of our solutions is deterministic shifting that reduces the overall number of transmitted packets. 
Moreover, the RaSOR scheme employs coding using XORing operations on shifted packets, without the need for coding coefficients, thus favoring linear encoding and decoding complexities.
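The padding overhead that motivates this work is easy to quantify. The sketch below uses toy packet lengths (illustrative values, not measurements from the thesis): conventional RLNC zero-pads every packet to the longest length before mixing, so combining a small sensor reading with a large video frame wastes almost a full frame's worth of bytes.

```python
# Illustrative packet sizes in bytes: sensor reading, control message, video frame.
packet_lengths = [8, 64, 1500]

def rlnc_padding_overhead(lengths):
    # Conventional RLNC pads every packet to the longest one before coding;
    # the overhead is the total number of zero-padding bytes introduced.
    longest = max(lengths)
    return sum(longest - n for n in lengths)

overhead = rlnc_padding_overhead(packet_lengths)  # bytes of pure padding per generation
```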
Another facet of IoT applications can be found in sensory data known to be highly correlated, where compressed sensing is a potential approach to reduce the overall transmissions. In such scenarios, network coding can also help. Our proposed joint compressed sensing and real network coding design fully exploits the correlations in cluster-based wireless sensor networks, such as those advocated by Industry 4.0. This design focuses on performing one-step decoding to reduce the computational complexities and delays of the reconstruction process at the receiver, and investigates the effectiveness of combined compressed sensing and network coding.