807 research outputs found
Fronthaul evolution: From CPRI to Ethernet
It is proposed that using Ethernet in the fronthaul, between base station baseband unit (BBU) pools and remote radio heads (RRHs), can bring a number of advantages: use of lower-cost equipment, shared use of infrastructure with fixed access networks, and statistical multiplexing and optimised performance through probe-based monitoring and software-defined networking. However, a number of challenges exist: ultra-high bit-rate requirements arising from the transport of increased-bandwidth radio streams for multiple antennas in future mobile networks, and the low latency and jitter needed to meet delay requirements and the demands of joint processing. A new fronthaul functional division is proposed which can alleviate the most demanding bit-rate requirements by transporting baseband signals instead of sampled radio waveforms, and enable statistical multiplexing gains. Delay and synchronisation issues remain to be solved.
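The "ultra-high bit-rate" problem the abstract refers to can be made concrete with a back-of-envelope CPRI line-rate estimate. The sketch below assumes standard parameters for a single 20 MHz LTE carrier (30.72 Msps, 15-bit I/Q samples, one control word per 15 data words, 8b/10b line coding); these numbers are illustrative assumptions, not taken from the abstract.

```python
# Back-of-envelope CPRI line-rate estimate (illustrative parameters).
SAMPLE_RATE = 30.72e6      # I/Q samples per second for a 20 MHz LTE carrier
BITS_PER_SAMPLE = 2 * 15   # 15-bit I and 15-bit Q components
CW_OVERHEAD = 16 / 15      # one CPRI control word per 15 data words
LINE_CODING = 10 / 8       # 8b/10b line-coding expansion

def cpri_rate_gbps(antennas):
    """Approximate CPRI line rate in Gbit/s for a given antenna count."""
    return antennas * SAMPLE_RATE * BITS_PER_SAMPLE * CW_OVERHEAD * LINE_CODING / 1e9

print(f"{cpri_rate_gbps(1):.4f} Gbit/s per antenna-carrier")   # ~1.2288
print(f"{cpri_rate_gbps(8):.4f} Gbit/s for 8 antenna ports")   # ~9.8304
```

Even a single 8-antenna, 20 MHz carrier approaches 10 Gbit/s of sampled-waveform traffic, which is why the proposed functional split (transporting baseband signals instead of sampled waveforms) matters.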
Separation Framework: An Enabler for Cooperative and D2D Communication for Future 5G Networks
Soaring capacity and coverage demands dictate that future cellular networks
need to soon migrate towards ultra-dense networks. However, network
densification comes with a host of challenges that include compromised energy
efficiency, complex interference management, cumbersome mobility management,
burdensome signaling overheads and higher backhaul costs. Interestingly, most
of the problems, that beleaguer network densification, stem from legacy
networks' one common feature i.e., tight coupling between the control and data
planes regardless of their degree of heterogeneity and cell density.
Consequently, in the wake of 5G, the control and data planes separation architecture
(SARC) has recently been conceived as a promising paradigm with the potential
to address most of the aforementioned challenges. In this article, we review
various proposals that have been presented in the literature so far to enable SARC.
More specifically, we analyze how and to what degree various SARC proposals
address the four main challenges in network densification namely: energy
efficiency, system level capacity maximization, interference management and
mobility management. We then focus on two salient features of future cellular
networks that have not yet been adopted in legacy networks at wide scale and
thus remain a hallmark of 5G, i.e., coordinated multipoint (CoMP) and
device-to-device (D2D) communications. After providing the necessary background on
CoMP and D2D, we analyze how SARC can act as a major enabler for
CoMP and D2D in the context of 5G. This article thus serves as both a tutorial
and an up-to-date survey on SARC, CoMP and D2D. Most importantly, the
article provides an extensive outlook on the challenges and opportunities that lie
at the crossroads of these three mutually entangled emerging technologies.
Comment: 28 pages, 11 figures, IEEE Communications Surveys & Tutorials 201
Network coding for transport protocols
With the proliferation of smart devices that require Internet connectivity anytime, anywhere, and the recent technological
advances that make it possible, current networked systems will have to provide a diverse range of services, such as content
distribution, in a wide range of settings, including wireless environments. Wireless links may experience temporary losses;
however, TCP, the de facto protocol for robust unicast communications, reacts by reducing the congestion window drastically
and injecting less traffic into the network. Consequently, the wireless links are underutilized and the overall performance of the
TCP protocol in wireless environments is poor. As content delivery (i.e. multicasting) services, such as BBC iPlayer, become
popular, the network needs to support the reliable transport of the data at high rates, and with specific delay constraints. A
typical approach to deliver content in a scalable way is to rely on peer-to-peer technology (used by BitTorrent, Spotify and
PPLive), where users share their resources, including bandwidth, storage space, and processing power. Still, these systems
suffer from the lack of incentives for resource sharing and cooperation, and this problem is exacerbated in the presence of
heterogeneous users, where a tit-for-tat scheme is difficult to implement.
Due to the issues highlighted above, current network architectures need to be changed in order to accommodate the users'
demands for reliable and quality communications. In other words, the emergent need for advanced modes of information
transport requires revisiting and improving network components at various levels of the network stack.
The innovative paradigm of network coding has been shown to be a promising technique for changing the design of networked
systems, by providing a shift from how data flows traditionally move through the network. This shift implies that data flows are
no longer kept separate, according to the "store-and-forward" model, but are instead processed and mixed in the network. By
appropriately combining data by means of network coding, significant benefits are expected in several areas of
network design and architecture.
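The "mix, don't just forward" idea can be sketched with the classic butterfly example: a bottleneck relay transmits the XOR of two packets, and each sink combines the coded packet with the packet it already holds to recover the one it is missing. This is a minimal illustration of the principle, not the schemes developed in the thesis; the helper name is hypothetical.

```python
# Butterfly-style network coding over GF(2): the relay forwards a XOR of two
# packets instead of forwarding each packet separately.
def xor_bytes(p, q):
    """XOR two equal-length byte strings (coding/decoding are the same op)."""
    return bytes(a ^ b for a, b in zip(p, q))

pkt_a = b"payload-A"
pkt_b = b"payload-B"

coded = xor_bytes(pkt_a, pkt_b)   # the single packet the bottleneck relay sends

# Sink 1 already received pkt_a on a side link and decodes pkt_b:
assert xor_bytes(coded, pkt_a) == pkt_b
# Sink 2 already received pkt_b and decodes pkt_a:
assert xor_bytes(coded, pkt_b) == pkt_a
```

One coded transmission thus serves both sinks, which is the source of the multicast-rate gains that store-and-forward routing cannot achieve.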
In this thesis, we set out to show the benefits of including network coding in three communication paradigms, namely point-to-point
communications (e.g. unicast), point-to-multipoint communications (e.g. multicast), and multipoint-to-multipoint
communications (e.g. peer-to-peer networks). For the first direction, we propose a network coding-based multipath scheme and
show that TCP unicast sessions are feasible in highly volatile wireless environments. For point-to-multipoint communications,
we give an algorithm to optimally achieve all the rate pairs from the rate region in the case of degraded multicast over the
combination network. We also propose a system for live streaming that ensures reliability and quality of service to heterogeneous
users, even if data transmissions occur over lossy wireless links. Finally, for multipoint-to-multipoint communications, we design
a system to provide incentives for live streaming in a peer-to-peer setting, where users have subscribed to different levels of
quality.
Our work shows that network coding enables reliable transport of data, even in highly volatile environments or in delay-sensitive
scenarios such as live streaming, and facilitates the implementation of an efficient incentive system, even in the
presence of heterogeneous users. Thus, network coding can solve the challenges faced by next-generation networks
in order to support advanced information transport.
Distributed multimedia systems
A distributed multimedia system (DMS) is an integrated communication, computing, and information system that enables the processing, management, delivery, and presentation of synchronized multimedia information with quality-of-service guarantees. Multimedia information may include discrete media data, such as text, data, and images, and continuous media data, such as video and audio. Such a system enhances human communications by exploiting both visual and aural senses and provides the ultimate flexibility in work and entertainment, allowing one to collaborate with remote participants, view movies on demand, access on-line digital libraries from the desktop, and so forth. In this paper, we present a technical survey of a DMS. We give an overview of distributed multimedia systems, examine the fundamental concept of digital media, identify the applications, and survey the important enabling technologies.
Deliverable JRA1.1: Evaluation of current network control and management planes for multi-domain network infrastructure
This deliverable includes a compilation and evaluation of available control and management architectures and protocols applicable to a multilayer infrastructure in a multi-domain Virtual Network environment. The scope of this deliverable is mainly focused on the virtualisation of the resources within a network and at processing nodes. The virtualisation of the FEDERICA infrastructure allows the provisioning of its available resources to users by means of FEDERICA slices. A slice is seen by the user as a real physical network under his/her domain; however, it maps to a logical partition (a virtual instance) of the physical FEDERICA resources. A slice is built to exhibit to the highest degree all the principles applicable to a physical network (isolation, reproducibility, manageability, ...). Currently, there are no standard definitions available for network virtualisation or its associated architectures. Therefore, this deliverable proposes the Virtual Network layer architecture and evaluates a set of management and control planes that can be used for the partitioning and virtualisation of the FEDERICA network resources. This evaluation has been performed taking into account an initial set of FEDERICA requirements; a possible extension of the selected tools will be evaluated in future deliverables. The studies described in this deliverable define the virtual architecture of the FEDERICA infrastructure. During this activity, the need has been recognised to establish a new set of basic definitions (a taxonomy) for the building blocks that compose the so-called slice, i.e. the virtual network instantiation (which is virtual with regard to the abstracted view of the building blocks of the FEDERICA infrastructure) and its architectural plane representation. These definitions will be established as a common nomenclature for the FEDERICA project. Another important aspect when defining a new architecture is the user requirements.
It is crucial that the resulting architecture fits the demands that users may have. Since this deliverable has been produced at the same time as the contact process with users, carried out by the project activities related to the Use Case definitions, JRA1 has proposed a set of basic Use Cases to be considered as a starting point for its internal studies. When researchers want to experiment with their developments, they need not only network resources on their slices, but also a slice of the processing resources. These processing slice resources are understood as virtual machine instances that users can configure to behave as software routers or end nodes, on which to deploy the software protocols or applications they have produced and want to assess in a realistic environment. Hence, this deliverable also studies the APIs of several virtual machine management software products in order to identify which best suits FEDERICA's needs.
Fronthaul-Constrained Cloud Radio Access Networks: Insights and Challenges
As a promising paradigm for fifth generation (5G) wireless communication
systems, cloud radio access networks (C-RANs) have been shown to reduce both
capital and operating expenditures, as well as to provide high spectral
efficiency (SE) and energy efficiency (EE). The fronthaul in such networks,
defined as the transmission link between a baseband unit (BBU) and a remote
radio head (RRH), requires high capacity, but is often constrained. This
article comprehensively surveys recent advances in fronthaul-constrained
C-RANs, including system architectures and key techniques. In particular, key
techniques for alleviating the impact of constrained fronthaul on SE/EE and
quality of service for users, including compression and quantization,
large-scale coordinated processing and clustering, and resource allocation
optimization, are discussed. Open issues in terms of software-defined
networking, network function virtualization, and partial centralization are
also identified.
Comment: 5 figures, accepted by IEEE Wireless Communications. arXiv admin note: text overlap with arXiv:1407.3855 by other author
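One of the key techniques the survey lists for constrained fronthaul is compression and quantization of I/Q samples. The trade-off can be sketched with the ideal uniform-quantizer rule of thumb (~6.02 dB of signal-to-quantization-noise ratio per bit) against the fronthaul payload rate, which grows linearly with sample width. All numbers below are illustrative assumptions, not figures from the article.

```python
# Rate/fidelity trade-off behind fronthaul I/Q quantization (illustrative).
SAMPLE_RATE = 30.72e6  # assumed I/Q sample rate for one 20 MHz LTE carrier

def sqnr_db(bits):
    """Ideal uniform-quantizer SQNR for a full-scale sinusoid, in dB."""
    return 6.02 * bits + 1.76

def fronthaul_gbps(bits, sample_rate=SAMPLE_RATE):
    """Raw fronthaul payload rate in Gbit/s (I and Q components, one antenna)."""
    return sample_rate * 2 * bits / 1e9

for b in (4, 8, 15):
    print(f"{b:2d} bits: ~{sqnr_db(b):5.1f} dB SQNR, {fronthaul_gbps(b):.3f} Gbit/s")
```

Halving the sample width roughly halves the fronthaul load at the cost of ~24 dB of quantization SNR per 4 bits removed, which is why quantizer design must be balanced against the SE/EE targets the article discusses.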
Planning assistance for the NASA 30/20 GHz program. Network control architecture study.
The network control architecture for a 30/20 GHz flight experiment system operating in the Time Division Multiple Access (TDMA) mode was studied. Architecture development, identification of processing functions, and performance requirements for the Master Control Station (MCS), diversity trunking stations, and Customer Premises Service (CPS) stations are covered. Preliminary hardware and software processing requirements, as well as budgetary cost estimates for the network control system, are given. For the trunking system control, the areas covered include on-board SS-TDMA switch organization, frame structure, acquisition and synchronization, channel assignment, fade detection and adaptive power control, on-board oscillator control, and terrestrial network timing. For the CPS control, they include on-board processing and adaptive forward error correction control.
On-board processing satellite network architecture and control study
The market for telecommunications services needs to be segmented into user classes having similar transmission requirements and hence similar network architectures. Use of the following transmission architectures was considered: satellite-switched TDMA; TDMA up, TDM down; scanning (hopping) beam TDMA; FDMA up, TDM down; satellite-switched MF/TDMA; and switching hub earth stations with double-hop transmission. A candidate network architecture will be selected that: comprises multiple access subnetworks optimized for each user; interconnects the subnetworks by means of a baseband processor; and optimizes the marriage of interconnection and access techniques. An overall network control architecture will be provided that serves the needs of the baseband and satellite-switched RF interconnected subnetworks. The results of the studies shall be used to identify the elements of network architecture and control that require the greatest degree of technology development to realize an operational system. These will be specified in terms of: requirements of the enabling technology; differences from currently available technology; and an estimate of the development effort needed to achieve an operational system. The results obtained for each of these tasks are presented.