Fairness in a data center
Existing data centers utilize several networking technologies in order to handle the performance requirements of different workloads. Maintaining diverse networking technologies increases complexity and is not cost effective. This results in the current trend to converge all traffic into a single networking fabric. Ethernet is both cost-effective and ubiquitous, and as such it has been chosen as the technology of choice for the converged fabric. However, traditional Ethernet does not satisfy the needs of all traffic workloads, largely due to its lossy nature, and therefore has to be enhanced to allow for full convergence. The resulting technology, Data Center Bridging (DCB), is a new set of standards defined by the IEEE to make Ethernet lossless even in the presence of congestion. As with any new networking technology, it is critical to analyze how the different protocols within DCB interact with each other as well as how each protocol interacts with existing technologies in other layers of the protocol stack.
This dissertation presents two novel schemes that address critical issues in DCB networks: fairness with respect to packet lengths and fairness with respect to flow control and bandwidth utilization. The Deficit Round Robin with Adaptive Weight Control (DRR-AWC) algorithm actively monitors the incoming streams and adjusts the scheduling weights of the outbound port. The algorithm was implemented on a real DCB switch and shown to increase fairness for traffic consisting of mixed-length packets. Targeted Priority-based Flow Control (TPFC) provides a hop-by-hop flow control mechanism that restricts the flow of aggressor streams while allowing victim streams to continue unimpeded. Two variants of the targeting mechanism within TPFC are presented and their performance evaluated through simulation.
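The abstract does not spell out DRR-AWC's internals, but the baseline it extends, Deficit Round Robin, can be sketched compactly. The class name, quanta, and deficit-reset rule below are illustrative assumptions, not the dissertation's implementation:

```python
from collections import deque

class DRRScheduler:
    """Minimal Deficit Round Robin: each non-empty queue earns a quantum
    per round; a packet is dequeued only if the accumulated deficit
    covers its length, which is what makes DRR fair in bytes rather
    than in packets."""

    def __init__(self, quanta):
        self.quanta = list(quanta)              # per-queue quantum (bytes)
        self.deficits = [0] * len(quanta)
        self.queues = [deque() for _ in quanta]

    def enqueue(self, q, pkt_len):
        self.queues[q].append(pkt_len)

    def round(self):
        """Run one DRR round; return the (queue, pkt_len) pairs sent."""
        sent = []
        for q, queue in enumerate(self.queues):
            if not queue:
                continue
            self.deficits[q] += self.quanta[q]
            while queue and queue[0] <= self.deficits[q]:
                pkt = queue.popleft()
                self.deficits[q] -= pkt
                sent.append((q, pkt))
            if not queue:
                self.deficits[q] = 0            # drained queues forfeit leftover deficit
        return sent
```

An adaptive-weight extension in the spirit of DRR-AWC would, roughly, monitor per-stream mean packet length and rescale each entry of `quanta` accordingly; that adaptive step is not shown here.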
ATP: a Datacenter Approximate Transmission Protocol
Many datacenter applications such as machine learning and streaming systems
do not need the complete set of data to perform their computation. Current
approximate applications in datacenters run on a reliable network layer like
TCP. To improve performance, they either let the sender select a subset of the
data and transmit it to the receiver, or transmit all the data and let the
receiver drop some of it. These approaches are network oblivious and
unnecessarily transmit more data, affecting both application runtime and
network bandwidth usage. On the other hand, running approximate applications
on a lossy network with UDP cannot guarantee the accuracy of application
computation. We propose to run
approximate applications on a lossy network and to allow packet loss in a
controlled manner. Specifically, we designed a new network protocol called
Approximate Transmission Protocol, or ATP, for datacenter approximate
applications. ATP opportunistically exploits available network bandwidth as
much as possible, while performing a loss-based rate control algorithm to avoid
bandwidth waste and re-transmission. It also ensures bandwidth fair sharing
across flows and improves accurate applications' performance by leaving more
switch buffer space to accurate flows. We evaluated ATP with both simulation
and real implementation using two macro-benchmarks and two real applications,
Apache Kafka and Flink. Our evaluation results show that ATP reduces
application runtime by 13.9% to 74.6% compared to a TCP-based solution that
drops packets at the sender, and it improves accuracy by up to 94.0% compared to
UDP.
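ATP's actual rate control algorithm is not given in this abstract; the sketch below shows only the generic shape of a loss-based controller (probe upward while loss stays near a target, back off multiplicatively when loss signals congestion). All parameter names and values are assumptions for illustration:

```python
def update_rate(rate, loss_fraction, target_loss=0.01,
                increase=1.05, decrease=0.7,
                min_rate=1.0, max_rate=10_000.0):
    """One step of a generic loss-based rate controller (rate in Mbps).

    Losses above the target indicate the bottleneck queue is
    overflowing, so the sender backs off multiplicatively; otherwise
    it probes upward to exploit available bandwidth.
    """
    if loss_fraction > target_loss:
        rate *= decrease      # congestion signal: back off
    else:
        rate *= increase      # headroom: probe for more bandwidth
    return max(min_rate, min(rate, max_rate))
```

Run per feedback interval, this keeps the loss rate oscillating around `target_loss` instead of eliminating loss entirely, which matches ATP's premise that controlled loss is acceptable for approximate applications.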
Analytical Modelling of Scheduling Schemes under Self-similar Network Traffic. Traffic Modelling and Performance Analysis of Centralized and Distributed Scheduling Schemes.
High-speed transmission over contemporary communication networks has
attracted many research efforts. Traffic scheduling schemes, which play a critical role in
managing network transmission, have been extensively studied and widely
implemented in various practical communication networks. In a sophisticated
communication system, a variety of applications co-exist and require differentiated
Quality-of-Service (QoS). Innovative scheduling schemes and hybrid scheduling
disciplines which integrate multiple traditional scheduling mechanisms have
emerged for QoS differentiation. This study aims to develop novel analytical models
for scheduling schemes of common interest in communication systems under more
realistic network traffic, and to use these models to investigate issues in the design
and development of traffic scheduling schemes.
In the open literature, it is commonly recognized that network traffic exhibits
a self-similar nature, which has a serious impact on the performance of communication
networks and protocols. To study self-similar traffic in depth, real-world
traffic datasets are measured and evaluated in this study. The results reveal that
self-similar traffic is a ubiquitous phenomenon in high-speed communication networks
and highlight the importance of the developed analytical models under self-similar
traffic.
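A standard way to produce (asymptotically) self-similar traffic, and a common baseline in studies like this one, is to superpose many ON/OFF sources whose period lengths are heavy-tailed. The sketch below uses Pareto-distributed periods; the function name and parameters are illustrative, not taken from the thesis:

```python
import random

def pareto_on_off_load(n_sources=50, horizon=1000, alpha=1.4, seed=7):
    """Aggregate load from ON/OFF sources with Pareto-distributed
    period lengths. Superposing many heavy-tailed sources yields
    asymptotically self-similar traffic with Hurst parameter
    H = (3 - alpha) / 2 for 1 < alpha < 2."""
    rng = random.Random(seed)
    load = [0] * horizon
    for _ in range(n_sources):
        t, on = 0, rng.random() < 0.5
        while t < horizon:
            dur = max(1, int(rng.paretovariate(alpha)))
            if on:
                # Source emits one unit per time slot while ON.
                for u in range(t, min(t + dur, horizon)):
                    load[u] += 1
            t += dur
            on = not on
    return load
```

With `alpha=1.4` the resulting series should show burstiness across time scales (H ≈ 0.8), unlike Poisson traffic, which smooths out under aggregation.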
Original analytical models are then developed for centralized
scheduling schemes, including Deficit Round Robin, the hybrid PQGPS scheme (which
integrates traditional Priority Queueing (PQ) and Generalized Processor Sharing (GPS)), and the Automatic Repeat reQuest (ARQ) error control
discipline, in the presence of self-similar traffic.
Most recently, research on Cognitive Radio (CR) techniques in wireless
networks has become popular. However, most of the existing analytical models still
employ traditional Poisson traffic to examine the performance of CR-involved
systems. In addition, few studies have been reported for estimating the residual
service left by primary users. Instead, extensive existing studies use an ON/OFF
source to model the residual service regardless of the primary traffic. In this
thesis, PQ theory is adopted to investigate and model the possible service left by
self-similar primary traffic and to derive the queue length distribution of
individual secondary users under the distributed spectrum random access protocol.
Defending against low-rate TCP attack: dynamic detection and protection.
Sun Haibin. Thesis (M.Phil.)--Chinese University of Hong Kong, 2005. Includes bibliographical references (leaves 89-96). Abstracts in English and Chinese.
Contents:
Chapter 1 Introduction
Chapter 2 Background Study and Related Work
  2.1 Victim Exhaustion DoS/DDoS Attacks
    2.1.1 Direct DoS/DDoS Attacks
    2.1.2 Reflector DoS/DDoS Attacks
    2.1.3 Spoofed Packet Filtering
    2.1.4 IP Traceback
    2.1.5 Location Hiding
  2.2 QoS Based DoS Attacks
    2.2.1 Introduction to the QoS Based DoS Attacks
    2.2.2 Countermeasures to the QoS Based DoS Attacks
  2.3 Worm Based DoS Attacks
    2.3.1 Introduction to the Worm Based DoS Attacks
    2.3.2 Countermeasures to the Worm Based DoS Attacks
  2.4 Low-rate TCP Attack and RoQ Attacks
    2.4.1 General Introduction of Low-rate Attack
    2.4.2 Introduction of RoQ Attack
Chapter 3 Formal Description of Low-rate TCP Attacks
  3.1 Mathematical Model of Low-rate TCP Attacks
  3.2 Other Forms of Low-rate TCP Attacks
Chapter 4 Distributed Detection Mechanism
  4.1 General Consideration of Distributed Detection
  4.2 Design of Low-rate Attack Detection Algorithm
  4.3 Statistical Sampling of Incoming Traffic
  4.4 Noise Filtering
  4.5 Feature Extraction
  4.6 Pattern Matching via the Dynamic Time Warping (DTW) Method
  4.7 Robustness and Accuracy of DTW
    4.7.1 DTW values for low-rate attack
    4.7.2 DTW values for legitimate traffic (Gaussian)
    4.7.3 DTW values for legitimate traffic (Self-similar)
Chapter 5 Low-Rate Attack Defense Mechanism
  5.1 Design of Defense Mechanism
  5.2 Analysis of Deficit Round Robin Algorithm
Chapter 6 Fluid Model of TCP Flows
  6.1 Fluid Mathematical Model of TCP under DRR
    6.1.1 Model of TCP on a Droptail Router
    6.1.2 Model of TCP on a DRR Router
  6.2 Simulation of TCP Fluid Model
    6.2.1 Simulation of Attack with Single TCP Flow
    6.2.2 Simulation of Attack with Multiple TCP Flows
Chapter 7 Experiments
  7.1 Experiment 1 (Single TCP flow vs. single source attack)
  7.2 Experiment 2 (Multiple TCP flows vs. single source attack)
  7.3 Experiment 3 (Multiple TCP flows vs. synchronized distributed low-rate attack)
  7.4 Experiment 4 (Network model of low-rate attack vs. multiple TCP flows)
Chapter 8 Conclusion
Appendix A Lemmas and Theorem Derivation
Bibliography
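The detection mechanism in Chapter 4 matches sampled traffic against the periodic signature of a low-rate attack using Dynamic Time Warping. The thesis's exact feature pipeline is not reproduced here, but the core DTW distance it relies on is standard and can be sketched directly:

```python
def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    DTW aligns the sequences by stretching or compressing them in
    time, so a sampled traffic envelope can match a square-wave
    attack signature even when the burst period drifts."""
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # stretch b
                                 d[i][j - 1],      # stretch a
                                 d[i - 1][j - 1])  # direct match
    return d[n][m]
```

A low DTW value against the attack template (and a high value against Gaussian or self-similar legitimate-traffic profiles) is the separation property Sections 4.7.1-4.7.3 evaluate.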
Delivering Consistent Network Performance in Multi-tenant Data Centers
Data centers are growing rapidly in size and have recently begun acquiring a new role as cloud hosting platforms, allowing outside developers to deploy their own applications on large scales. As a result, today's data centers are multi-tenant environments that host an increasingly diverse set of applications, many of which have very demanding networking requirements. This has prompted research into new data center architectures that offer increased capacity by using topologies that introduce multiple paths between servers. To achieve consistent network performance in these networks, traffic must be effectively load balanced among the available paths. In addition, some form of system-wide traffic regulation is necessary to provide performance guarantees to tenants.
To address these issues, this thesis introduces several software-based mechanisms that were inspired by techniques used to regulate traffic in the interconnects of scalable Internet routers. In particular, we borrow two key concepts that serve as the basis for our approach. First, we investigate packet-level routing techniques that are similar to those used to balance load effectively in routers. This work is novel in the data center context because most existing approaches route traffic at the level of flows to prevent their packets from arriving out-of-order. We show that routing at the packet-level allows for far more efficient use of the network's resources and we provide a novel resequencing scheme to deal with out-of-order arrivals.
Second, we introduce distributed scheduling as a means to engineer traffic in data centers. In routers, distributed scheduling controls the rates between ports on different line cards, enabling traffic to move efficiently through the interconnect. We apply the same basic idea to schedule rates between servers in the data center. We show that scheduling can prevent congestion from occurring and can be used as a flexible mechanism to support network performance guarantees for tenants. In contrast to previous work, which relied on centralized controllers to schedule traffic, our approach is fully distributed and we provide a novel distributed algorithm to control rates. In addition, we introduce an optimization problem called backlog scheduling to study scheduling strategies that facilitate more efficient application execution.
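Packet-level routing delivers packets out of order, so some resequencing buffer must sit in front of the application. The thesis's actual scheme is not reproduced here; the sketch below is only the minimal hold-and-release idea, ignoring duplicates and losses for brevity:

```python
import heapq

class Resequencer:
    """Minimal resequencing buffer: releases packets strictly in
    sequence order, holding out-of-order arrivals until the gap
    before them is filled."""

    def __init__(self):
        self.next_seq = 0   # lowest sequence number not yet delivered
        self.heap = []      # buffered out-of-order sequence numbers

    def arrive(self, seq):
        """Accept packet `seq`; return packets now deliverable in order."""
        heapq.heappush(self.heap, seq)
        out = []
        while self.heap and self.heap[0] == self.next_seq:
            out.append(heapq.heappop(self.heap))
            self.next_seq += 1
        return out
```

A production design would also bound the buffer and use a timeout to declare missing packets lost, trading head-of-line blocking against reordering tolerance.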
A hybrid rate control mechanism for forwarding and congestion control in named data network
Named Data Networking (NDN) is an emerging Internet architecture that employs a pull-based, in-path caching, hop-by-hop, and multi-path transport architecture. Consequently, transport algorithms that use conventional paradigms do not work correctly in the NDN environment, since the content source location frequently changes. These changes raise forwarding and congestion control problems, and they directly affect the link utilization, fairness, and stability of the network. This study proposes a Hybrid Rate Control Mechanism (HRCM) to control the forwarding rate and link congestion in order to enhance network scalability, stability, and fairness. HRCM consists of three schemes, namely Shaping Deficit Weight Round Robin (SDWRR), Queue-delay Parallel Multipath (QPM), and Explicit Control Agile-based conservative window adaptation (EC-Agile). The SDWRR scheme schedules different flows at router interfaces while fairly detecting and signalling link congestion. The QPM scheme forwards Interest packets over all available paths to utilize idle bandwidth. The EC-Agile scheme controls forwarding rates by examining each packet received. The proposed HRCM was evaluated against two mechanisms, Practical Congestion Control (PCON) and Hop-by-hop Interest Shaping (HIS), through ndnSIM simulation. The findings show that HRCM enhances the forwarding rate and fairness, outperforming HIS and PCON in throughput by 75%, delay by 20%, queue length by 55%, link utilization by 41%, fairness by 20%, and download time by 20%. The proposed HRCM thus provides an enhanced forwarding rate and fairness in NDN under different types of traffic flow, and the SDWRR, QPM, and EC-Agile schemes can be used in monitoring, controlling, and managing congestion and forwarding for the Internet of the future.
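The abstract reports a 20% fairness improvement without naming the metric; Jain's fairness index is the standard choice for such comparisons (whether HRCM's evaluation uses it is an assumption here):

```python
def jain_fairness(throughputs):
    """Jain's fairness index over per-flow throughputs.

    Returns 1.0 when all flows get equal shares and approaches
    1/n when a single flow takes everything."""
    n = len(throughputs)
    s = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (s * s) / (n * sq) if sq else 0.0
```

The index is scale-free, so it compares fairness across experiments with different absolute link capacities.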
Security related self-protected networks: autonomous threat detection and response (ATDR)
Doctor Educationis
Cybersecurity defense tools, techniques and methodologies are constantly faced with increasing
challenges, including the evolution of highly intelligent and powerful new-generation threats. The
main challenge posed by these modern digital multi-vector attacks is their ability to adapt with
machine learning. Research shows that many existing defense systems fail to provide adequate
protection against these latest threats. Hence, there is an ever-growing need for self-learning
technologies that can autonomously adjust according to the behaviour and patterns of offensive
actors and systems. The accuracy and effectiveness of existing methods are dependent on decision
making and manual input by human experts. This dependence causes 1) administration overhead,
2) variable and potentially limited accuracy, and 3) delayed response time.
In this thesis, Autonomous Threat Detection and Response (ATDR) is proposed as a general method
aimed at contributing toward security-related self-protected networks. Through a combination
of unsupervised machine learning and deep learning, ATDR is designed as an intelligent and
autonomous decision-making system that uses big-data processing and data-frame
pattern identification layers to learn sequences of patterns and derive real-time data formations.
This system enhances threat detection and response capabilities, accuracy and speed. The research
provides a solid foundation for the proposed method, situating it within the scope of existing
methods and the problem statements and findings reported by other authors.
Evolution of High Throughput Satellite Systems: Vision, Requirements, and Key Technologies
High throughput satellites (HTS), with their digital payload technology, are
expected to play a key role as enablers of the upcoming 6G networks. HTS are
mainly designed to provide higher data rates and capacities. Fueled by
technological advancements including beamforming, advanced modulation
techniques, reconfigurable phased array technologies, and electronically
steerable antennas, HTS have emerged as a fundamental component of future
network generations. This paper offers a comprehensive state-of-the-art survey
of HTS systems, with a focus on standardization, patents, channel multiple
access techniques, routing, load balancing, and the role of software-defined
networking (SDN). In addition, we provide a vision for next-generation
satellite systems, which we name extremely-HTS (EHTS), toward autonomous
satellites supported by the main requirements and key technologies expected
for these systems. The EHTS system will be designed such that it maximizes
spectrum reuse and data rates, and flexibly steers capacity to satisfy user
demand. We introduce a novel architecture for future regenerative payloads
while summarizing the challenges imposed by this architecture.
On distributed ledger technology for the internet of things: design and applications
Distributed ledger technology (DLT) can be used to store information in such a way that no individual or organisation can compromise its veracity, contrary to a traditional centralised ledger. This nascent technology has received a great deal of attention from both researchers and practitioners in recent years due to the vast array of open questions related to its design and the assortment of novel applications it unlocks. In this thesis, we are especially interested in the design of DLTs suitable for application in the domain of the internet of things (IoT), where factors such as efficiency, performance and scalability are of paramount importance. This work confronts the challenges of designing IoT-oriented distributed ledgers through analysis of ledger properties, development of design tools and the design of a number of core protocol components. We begin by introducing a class of DLTs whose data structures consist of directed acyclic graphs (DAGs) and which possess properties that make them particularly well suited to IoT applications. With a focus on the DAG structure, we then present analysis through mathematical modelling and simulations which provides new insights into the properties of this class of ledgers and allows us to propose novel security enhancements. Next, we shift our focus away from the DAG structure itself to another open problem for DAG-based distributed ledgers, that of access control. Specifically, we present a networking approach which removes the need for an expensive and inefficient mechanism known as Proof of Work, solving an open problem for IoT-oriented distributed ledgers. We then draw upon our analysis of the DAG structure to integrate and test our new access control with other core components of the DLT. Finally, we present a mechanism for orchestrating the interaction between users of a DLT and its operators, seeking to improve the usability of DLTs for IoT applications.
In the appendix, we present two projects also carried out during this PhD which showcase applications of this technology in the IoT domain.
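The defining feature of this class of ledgers is that each new transaction approves earlier ones, forming a DAG rather than a chain. The toy structure below illustrates only that attach/approve relationship; the thesis's consensus, tip-selection, and access-control mechanisms are not modeled:

```python
class DAGLedger:
    """Toy DAG-structured ledger: each new transaction approves two
    earlier transactions, the structure underlying DAG-based DLTs."""

    def __init__(self):
        self.parents = {0: ()}   # transaction 0 is the genesis
        self.tips = {0}          # transactions with no approvers yet

    def attach(self, tx, parent_a, parent_b):
        """Attach `tx`, directly approving two existing transactions."""
        assert parent_a in self.parents and parent_b in self.parents
        self.parents[tx] = (parent_a, parent_b)
        self.tips.discard(parent_a)
        self.tips.discard(parent_b)
        self.tips.add(tx)

    def confirmed_by(self, tx):
        """All transactions directly or indirectly approved by `tx`."""
        seen, stack = set(), [tx]
        while stack:
            for p in self.parents[stack.pop()]:
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen
```

Because attaching a transaction implicitly validates its whole approval cone, throughput can scale with load, which is a key reason DAG ledgers suit IoT settings better than sequential blockchains.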