Will SDN be part of 5G?
For many, this is no longer a valid question: the case is considered
settled, with SDN/NFV (Software Defined Networking / Network Function
Virtualization) regarded as the inevitable innovation enablers that will
solve many outstanding management issues in 5G. However, given the
monumental task of softwarizing the radio access network (RAN) while 5G is
just around the corner, and with some companies already unveiling their 5G
equipment, there is a realistic concern that we may see only point
solutions involving SDN technology instead of a fully SDN-enabled RAN. This survey paper
identifies all important obstacles in the way and looks at the state of the art
of the relevant solutions. This survey is different from the previous surveys
on SDN-based RAN as it focuses on the salient problems and discusses solutions
proposed within and outside the SDN literature. Our main focus is on
fronthaul, backward compatibility, the supposedly disruptive nature of SDN
deployment, business cases and monetization of SDN-related upgrades, the
latency of general-purpose processors (GPPs), and the additional security
vulnerabilities that softwarization brings to the RAN. We have also provided a summary of the
architectural developments in the SDN-based RAN landscape, as not all work
can be covered under the focus issues. This paper provides a comprehensive
survey of the state of the art of SDN-based RAN and clearly points out the
gaps in the technology.

Comment: 33 pages, 10 figures
Tars: Timeliness-aware Adaptive Replica Selection for Key-Value Stores
In current large-scale distributed key-value stores, a single end-user
request may lead to key-value access across tens or hundreds of servers. The
tail latency of these key-value accesses is crucial to the user experience and
greatly impacts revenue. To cut the tail latency, clients must choose the
fastest replica server whenever possible to serve each key-value access. To
address the challenges of time-varying performance across servers and herd
behavior, an adaptive replica selection scheme, C3, was recently proposed.
In C3, feedback from individual servers is incorporated into replica
ranking to reflect their time-varying performance, and a distributed
rate-control and backpressure mechanism is introduced. Despite C3's good
performance, we reveal a timeliness issue in C3 that strongly impacts both
replica ranking and rate control, and we propose the Tars
(timeliness-aware adaptive replica selection) scheme. Following the same
framework as C3, Tars improves replica ranking by taking the timeliness of
the feedback information into consideration and revises C3's rate control.
Simulation results confirm that Tars outperforms C3.

Comment: 10 pages, submitted to ICDCS 201
Diluting the Scalability Boundaries: Exploring the Use of Disaggregated Architectures for High-Level Network Data Analysis
Traditional data centers are designed with a rigid architecture of
fit-for-purpose servers that provision resources beyond the average workload in
order to deal with occasional peaks of data. Heterogeneous data centers are
pushing towards more cost-efficient architectures with better resource
provisioning. In this paper we study the feasibility of using disaggregated
architectures for intensive data applications, in contrast to the monolithic
approach of server-oriented architectures. In particular, we tested a
proactive network-analysis system whose workload demands are highly
variable. In the context of the dReDBox disaggregated architecture, the
results show that the overhead caused by using remote memory resources is
significant, between 66% and 80%, but we also observed that memory usage is
one order of magnitude higher in the stress case than under average
workloads. Dimensioning memory for the worst case in conventional systems
therefore results in a notable waste of resources. Finally, we found that,
for the selected use case, parallelism is limited by memory, so a
disaggregated architecture will allow for increased parallelism, which in
turn mitigates the overhead caused by remote memory.

Comment: 8 pages, 6 figures, 2 tables, 32 references. Pre-print. The paper
will be presented at the IEEE International Conference on High Performance
Computing and Communications in Bangkok, Thailand, 18-20 December 2017, and
published in the conference proceedings.