2,528 research outputs found
The Road Ahead for Networking: A Survey on ICN-IP Coexistence Solutions
In recent years, the current Internet has experienced an unexpected paradigm
shift in the usage model, which has pushed researchers towards the design of
the Information-Centric Networking (ICN) paradigm as a possible replacement of
the existing architecture. Even though both Academia and Industry have
investigated the feasibility and effectiveness of ICN, achieving the complete
replacement of the Internet Protocol (IP) is a challenging task.
Some research groups have already addressed this coexistence by designing their own architectures, but none of them is the definitive solution for moving towards the future Internet, given the largely unaltered state of today's networking. To design such an architecture, the research community now needs a comprehensive overview of the existing solutions that have so far addressed coexistence.
The purpose of this paper is to reach this goal by providing the first comprehensive survey and classification of coexistence architectures according to their features (i.e., deployment approach, deployment scenarios, addressed coexistence requirements, and architecture or technology used) and evaluation parameters (i.e., challenges emerging during deployment and the runtime behaviour of an architecture). We believe that this paper will finally fill this gap and pave the way towards the design of the final coexistence architecture.
Comment: 23 pages, 16 figures, 3 tables
Mobile Oriented Future Internet (MOFI)
This Special Issue consists of seven papers that discuss how to enhance mobility management and its associated performance in the mobile-oriented future Internet (MOFI) environment. The first two papers deal with the architectural design and experimentation of mobility management schemes, in which new schemes are proposed and real-world testbed experimentations are performed. The subsequent three papers focus on the use of software-defined networks (SDN) for effective service provisioning in the MOFI environment, together with real-world practices and testbed experimentations. The remaining two papers discuss network engineering issues in newly emerging mobile networks, such as flying ad-hoc networks (FANET) and connected vehicular networks.
Distributed top-k aggregation queries at large
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially for distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
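The optimizations above plug into the TPUT and KLEE frameworks; as a point of reference, the following is a minimal sketch of the plain TPUT-style three-phase thresholding those frameworks build on. The function name tput_topk, the in-memory dictionaries standing in for remote nodes, and the assumption that at least k distinct items are reported are illustrative choices, not the paper's implementation.

    from collections import defaultdict

    def tput_topk(nodes, k):
        """Simplified TPUT-style three-phase distributed top-k (sum aggregation).

        nodes: list of dicts mapping item -> local score, one dict per node; in a
        real deployment each dict lives on a remote node and every phase below is
        a network round trip. Assumes at least k distinct items get reported.
        """
        m = len(nodes)
        reported = defaultdict(dict)  # item -> {node index -> score seen so far}

        # Phase 1: every node ships its local top-k; the k-th largest partial sum
        # is a lower bound tau1 on the true k-th aggregate score.
        for i, node in enumerate(nodes):
            for item, score in sorted(node.items(), key=lambda kv: kv[1], reverse=True)[:k]:
                reported[item][i] = score
        partial = {item: sum(s.values()) for item, s in reported.items()}
        tau1 = sorted(partial.values(), reverse=True)[k - 1]

        # Phase 2: broadcast the uniform threshold tau1/m; nodes return every item
        # whose local score reaches it, which tightens the per-item bounds.
        threshold = tau1 / m
        for i, node in enumerate(nodes):
            for item, score in node.items():
                if score >= threshold:
                    reported[item][i] = score
        lower = {item: sum(s.values()) for item, s in reported.items()}
        upper = {item: lower[item] + threshold * (m - len(s)) for item, s in reported.items()}
        tau2 = sorted(lower.values(), reverse=True)[k - 1]
        candidates = [item for item in reported if upper[item] >= tau2]

        # Phase 3: fetch exact totals only for the surviving candidates.
        exact = {item: sum(node.get(item, 0.0) for node in nodes) for item in candidates}
        return sorted(exact.items(), key=lambda kv: kv[1], reverse=True)[:k]

The operator trees, data-adaptive scan depths, and source sampling described in the abstract are aimed at reducing the bandwidth and latency cost of exactly these phases.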
Hybrid SDN Evolution: A Comprehensive Survey of the State-of-the-Art
Software-Defined Networking (SDN) is an evolutionary networking paradigm
which has been adopted by large network and cloud providers, including the tech giants. However, embracing a new and futuristic paradigm as an alternative to the well-established and mature legacy networking paradigm requires considerable time, financial resources, and technical expertise. Consequently, many enterprises cannot afford it. A compromise solution is therefore a hybrid networking environment (a.k.a. Hybrid SDN (hSDN)) in which SDN functionalities are leveraged while existing traditional network infrastructures are retained. Recently, hSDN has been seen as a viable
networking solution for a diverse range of businesses and organizations.
Accordingly, the body of literature on hSDN research has grown remarkably. On this account, we present this paper as a comprehensive state-of-the-art survey that examines hSDN from many different perspectives.
Improving capacity-performance tradeoffs in the storage tier
Data-set sizes are growing. New techniques are emerging to organize and analyze these data-sets, and a key access pattern is emerging with them: large sequential file accesses. The trend toward bigger files exists to help amortize the cost of data accesses from the storage layer, as many workloads are recognized to be I/O bound. The storage layer is widely recognized as the slowest layer in the system. This work focuses on the tradeoff one can make with storage capacity to improve system performance.

Capacity can be leveraged for improved availability or improved performance. This tradeoff is key in the storage layer, as it allows for data loss prevention and bandwidth aggregation. Typically these tradeoffs do not allow much choice with regard to capacity use. This work leverages replication as the enabling mechanism to improve the capacity-performance tradeoff in the storage tier, while still providing for availability.

This capacity-performance tradeoff can be made at both the local and the distributed file system level. I propose two techniques that allow for an improved tradeoff of capacity. The local file system can be employed on scale-out or scale-up infrastructures to improve performance. The distributed file system is targeted at distributed frameworks, such as MapReduce, to improve cluster performance. The local file system design is MorphStore, and the distributed file system is BoostDFS.

MorphStore is a file system that significantly improves performance when accessing large files by using two innovations: it combines (a) load-adaptive I/O access scheduling to dynamically optimize throughput (aggregation), and (b) utility-driven replication to make the best use of capacity for performance. Additionally, adaptive-access scheduling can be used to optimize the scheduling of requests (for throughput) on systems with a large number of storage devices. Replication is used to make high-utility files highly available and then to optimize the throughput of these files based on system load.

BoostDFS is a distributed file system that allows a better capacity-performance tradeoff via inter-node file replication. BoostDFS is built on the observation that distributed file systems currently use inter-node replication for availability, but provide no mechanism to further improve performance. Replication for availability provides diminishing returns on performance because locality saturates. BoostDFS exploits this common case by improving the I/O performance of local tasks, via intra-node replication that leverages MorphStore as the local file system. This technique allows capacity to be traded for availability as well as performance, with a small capacity overhead under constant availability.

Both MorphStore and BoostDFS rely on replication, which allows for both bandwidth aggregation and availability. This work primarily focuses on the performance utility of replication, but does not sacrifice availability in the process. These techniques provide an improved capacity-performance tradeoff while allowing the desired level of availability.
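To make the utility-driven use of spare capacity concrete, the following is a minimal sketch of a greedy replica-planning heuristic in the spirit described above. The function plan_replicas, its inputs, and the utility-per-block ranking are illustrative assumptions for the sketch, not MorphStore's or BoostDFS's actual policy.

    def plan_replicas(files, capacity_blocks, max_replicas=4):
        """Greedy, utility-driven replica planning (illustrative sketch).

        files: dict mapping file name -> (size_in_blocks, access_rate). Every file
        keeps one base copy; leftover capacity is spent on extra replicas of the
        files with the highest utility per block (access rate / size), so hot
        files gain aggregate bandwidth while cold files stay at a single copy.
        """
        replicas = {name: 1 for name in files}
        free = capacity_blocks - sum(size for size, _ in files.values())

        # Rank files by utility density and replicate the most useful ones first.
        ranked = sorted(files.items(), key=lambda kv: kv[1][1] / kv[1][0], reverse=True)
        for name, (size, _rate) in ranked:
            while replicas[name] < max_replicas and free >= size:
                replicas[name] += 1
                free -= size
        return replicas

A load-adaptive scheduler would then spread reads for a hot file across its replicas, which is where the bandwidth-aggregation benefit described above comes from.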
Overview of Caching Mechanisms to Improve Hadoop Performance
In today's distributed computing environments, large amounts of data are generated from different sources at high velocity, rendering the data difficult to capture, manage, and process within existing relational databases.
Hadoop is a tool to store and process large datasets in a parallel manner
across a cluster of machines in a distributed environment. Hadoop brings many
benefits like flexibility, scalability, and high fault tolerance; however, it
faces some challenges in terms of data access time, I/O operations, and duplicate computations, resulting in extra overhead, resource wastage, and poor
performance. Many researchers have utilized caching mechanisms to tackle these
challenges. For example, they have presented approaches to improve data access
time, enhance data locality rate, remove repetitive calculations, reduce the
number of I/O operations, decrease the job execution time, and increase
resource efficiency. In the current study, we provide a comprehensive overview
of caching strategies to improve Hadoop performance. Additionally, a novel
classification is introduced based on cache utilization. Using this
classification, we analyze the impact on Hadoop performance and discuss the
advantages and disadvantages of each group. Finally, a novel hybrid approach
called Hybrid Intelligent Cache (HIC) that combines the benefits of two methods
from different groups, H-SVM-LRU and CLQLMRS, is presented. Experimental
results show that our hybrid method achieves an average improvement of 31.2% in
job execution time.
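The surveyed mechanisms differ in what they cache (input blocks, intermediate results, or job outputs) and in how they evict, but the basic effect is the same: a hit avoids a slow read or a repeated computation. As a minimal illustration of that idea only, and not of H-SVM-LRU or CLQLMRS specifically, here is a plain LRU block cache; the class name BlockCache and the read_from_disk callback are illustrative assumptions.

    from collections import OrderedDict

    class BlockCache:
        """Minimal LRU cache for HDFS-style data blocks (illustrative only)."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()  # block_id -> data, least recently used first

        def get(self, block_id, read_from_disk):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # hit: refresh recency
                return self.blocks[block_id]
            data = read_from_disk(block_id)        # miss: pay the slow read once
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict the least recently used block
            return data

The intelligent caching schemes discussed above augment or replace such a recency-only eviction rule; the sketch only shows the baseline behaviour they improve upon.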
QoE-Aware Content Distribution Systems for Adaptive Bitrate Video Streaming
A prodigious increase in video streaming content along with a simultaneous rise in end system capabilities has led to the proliferation of adaptive bit rate video streaming users in the Internet. Today, video streaming services range from Video-on-Demand services like traditional IP TV to more recent technologies such as immersive 3D experiences for live sports events. In order to meet the demands of these services, the multimedia and networking research community continues to strive toward efficiently delivering high quality content across the Internet while also trying to minimize content storage and delivery costs.
The introduction of flexible and adaptable technologies such as compute and storage clouds, Network Function Virtualization, and Software Defined Networking continues to fuel content provider revenue. Today, content providers such as Google and Facebook build their own Software-Defined WANs to efficiently serve millions of users worldwide, while Netflix partners with ISPs such as AT&T (using OpenConnect) and cloud providers such as Amazon EC2 to serve its content and manage the delivery of several petabytes of high-quality video content for millions of subscribers at a global scale. In recent years, the unprecedented growth of video traffic in the Internet has seen the emergence of several innovative systems, such as Software Defined Networks and Information Centric Networks, as well as inventive protocols such as QUIC, in an effort to keep up with the effects of this remarkable growth. While most existing systems continue to sub-optimally satisfy user requirements, future video streaming systems will require optimal management of storage and bandwidth resources that are several orders of magnitude larger than what is implemented today. Moreover, Quality-of-Experience metrics are becoming increasingly fine-grained in order to accurately quantify diverse content and consumer needs.
In this dissertation, we design and investigate innovative adaptive bit rate video streaming systems and analyze the implications of recent technologies on traditional streaming approaches using real-world experimentation methods. We provide useful insights for current and future content distribution network administrators to tackle Quality-of-Experience dilemmas and serve high-quality video content to users at a global scale. In order to show how Quality-of-Experience can benefit from core network architectural modifications, we design and evaluate prototypes for video streaming in Information Centric Networks and Software-Defined Networks. We also present a real-world, in-depth analysis of adaptive bitrate video streaming over protocols such as QUIC and MPQUIC to show how end-to-end protocol innovation can contribute to substantial Quality-of-Experience benefits for adaptive bit rate video streaming systems. Finally, we investigate a cross-layer approach based on QUIC and observe that application-layer information can be successfully used to determine transport-layer parameters for ABR streaming applications.
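As a concrete illustration of the adaptation decision at the heart of ABR streaming, and not of the specific controllers evaluated in this dissertation, here is a minimal throughput-based rate-selection sketch; the ladder values, safety factor, and buffer guard are assumptions chosen for the example.

    def choose_bitrate(ladder_kbps, throughput_kbps, buffer_s, safety=0.8, low_buffer_s=5.0):
        """Pick the highest rung of the bitrate ladder the client can sustain.

        Throughput-based rule with a simple buffer guard: treat a safety fraction
        of the measured throughput as the sustainable budget, and drop to the
        lowest rung when the playback buffer is nearly empty to avoid stalls.
        """
        if buffer_s < low_buffer_s:
            return min(ladder_kbps)  # rebuffering risk: play it safe
        budget = throughput_kbps * safety
        feasible = [r for r in ladder_kbps if r <= budget]
        return max(feasible) if feasible else min(ladder_kbps)

    # Example: a 6-rung ladder, 4 Mbit/s measured throughput, healthy buffer -> 3000 kbps.
    ladder = [300, 750, 1500, 3000, 4500, 6000]
    print(choose_bitrate(ladder, throughput_kbps=4000, buffer_s=12.0))

A cross-layer variant, as investigated above, would feed application-layer signals such as the buffer level or the selected rung into transport-layer decisions (e.g., QUIC stream scheduling) instead of keeping the two layers independent.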