Network overload avoidance by traffic engineering and content caching
Internet traffic volume continues to grow rapidly, now driven by video and TV distribution. For network operators it is important to avoid congestion in the network and to meet service level agreements with their customers. This thesis presents work on two methods operators can use to reduce link loads in their networks: traffic engineering and content caching.
This thesis studies access patterns for TV and video and the potential for caching. The investigation uses both simulation and analysis of logs from a large TV-on-Demand system covering four months.
The results show that a small set of programs accounts for a large fraction of requests and that a comparatively small local cache can significantly reduce peak link loads during prime time. The investigation also demonstrates how the popularity of programs changes over time and shows that the access pattern in a TV-on-Demand system depends strongly on the content type.
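The caching result can be pictured with a small simulation: when program popularity is heavily skewed (Zipf-like, a common modelling assumption for video-on-demand workloads; the parameters below are illustrative, not taken from the thesis), even an LRU cache holding 1% of the catalogue absorbs a large share of requests.

```python
import random
from collections import OrderedDict

def zipf_weights(n_programs, skew=0.8):
    """Unnormalised Zipf weights: a few programs draw most requests."""
    return [1.0 / (rank ** skew) for rank in range(1, n_programs + 1)]

def cache_hit_ratio(n_requests, n_programs, cache_size, seed=1):
    """Replay Zipf-distributed requests through a small LRU cache."""
    rng = random.Random(seed)
    weights = zipf_weights(n_programs)
    cache = OrderedDict()
    hits = 0
    for _ in range(n_requests):
        program = rng.choices(range(n_programs), weights=weights)[0]
        if program in cache:
            cache.move_to_end(program)      # refresh recency
            hits += 1
        else:
            cache[program] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)   # evict least recently used
    return hits / n_requests

# A cache holding 1% of a 10,000-program catalogue:
print(cache_hit_ratio(n_requests=100_000, n_programs=10_000, cache_size=100))
```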
For traffic engineering the objective is to avoid congestion in the network and to make better use of available resources by adapting the routing to the current traffic situation. The main challenge for traffic engineering in IP networks is to cope with the dynamics of Internet traffic demands.
This thesis proposes L-balanced routing, which routes traffic on the shortest paths possible while ensuring that no link is utilised beyond a given level L. L-balanced routing gives efficient routing of traffic and controlled spare capacity to handle unpredictable changes in traffic. We present an L-balanced routing algorithm and a heuristic search method for finding L-balanced weight settings for the legacy routing protocols OSPF and IS-IS, and we show that the search and the resulting weight settings work well in real network scenarios.
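A minimal sketch of the weight-search idea, not the thesis's actual algorithm: route demands on shortest paths under the current link weights and, while any link is utilised above L, raise the weight of the worst link so traffic shifts to alternative paths. It assumes the networkx library; the network and demand are illustrative.

```python
import networkx as nx

def link_utilisation(g, demands):
    """Route each demand on its shortest path under the current weights
    and return per-link utilisation (load / capacity)."""
    load = {e: 0.0 for e in g.edges}
    for (src, dst), volume in demands.items():
        path = nx.shortest_path(g, src, dst, weight="weight")
        for e in zip(path, path[1:]):
            load[e] += volume
    return {e: load[e] / g.edges[e]["capacity"] for e in g.edges}

def l_balanced_search(g, demands, level, rounds=100):
    """Greedy weight search: while some link exceeds L, raise the weight
    of the worst link so traffic moves to alternative short paths."""
    for _ in range(rounds):
        util = link_utilisation(g, demands)
        worst, u = max(util.items(), key=lambda kv: kv[1])
        if u <= level:
            return True                  # weight setting is L-balanced
        g.edges[worst]["weight"] += 1    # push traffic off the worst link
    return False

# Direct link a->c is small; the search reroutes via b to stay under L.
g = nx.DiGraph()
g.add_edge("a", "b", weight=1, capacity=10.0)
g.add_edge("b", "c", weight=1, capacity=10.0)
g.add_edge("a", "c", weight=1, capacity=5.0)
print(l_balanced_search(g, {("a", "c"): 4.0}, level=0.7))   # True
```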
Calibration and Analysis of Enterprise and Edge Network Measurements
With the growth of the Internet over the past several decades, the field of Internet and network measurement has attracted the attention of many researchers. Measurement has enabled a better understanding of the inner workings of both the global Internet and its specific parts. But undertaking a measurement study in a sound fashion is no easy task: given the complexity of modern networks, one has to take great care in anticipating, detecting, and eliminating measurement errors and biases.
In this thesis we pave the way for a more systematic calibration of network traces. Such calibration ensures the soundness and robustness of the analysis results by revealing and fixing flaws in the data. We collect our measurement data in two environments: in a medium-sized enterprise and at the Internet edge. For the former we perform two rounds of data collection from the enterprise switches. We use the differences in the way we recorded the network traces during the first and second rounds to develop and assess the methodology for five calibration aspects: measurement gain, measurement loss, measurement reordering, timing, and topology. For the dataset gathered at the Internet edge, we perform calibration in the form of extensive checks of data consistency and sanity.
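As a rough sketch of what such calibration checks might look like, the following scans a trace of per-packet records (a hypothetical format with a capture timestamp, a flow key, and a per-flow sequence number) and counts symptoms of measurement gain (duplicates), measurement loss (sequence gaps), reordering, and timing problems; the thesis's methodology is considerably more extensive.

```python
from dataclasses import dataclass

@dataclass
class Rec:
    ts: float     # capture timestamp (seconds)
    flow: tuple   # flow key, e.g. (src, dst, sport, dport)
    seq: int      # per-flow sequence counter carried in the traffic

def calibration_report(trace):
    """Count symptoms of trace flaws: duplicate records (measurement
    gain), sequence gaps (measurement loss), sequence numbers running
    backwards (reordering), and non-monotone capture timestamps
    (timing problems)."""
    last_seq, prev_ts = {}, None
    gain = loss = reorder = ts_backwards = 0
    for r in trace:                       # in recorded order
        if prev_ts is not None and r.ts < prev_ts:
            ts_backwards += 1
        prev_ts = r.ts
        prev = last_seq.get(r.flow)
        if prev is not None:
            if r.seq == prev:
                gain += 1
            elif r.seq < prev:
                reorder += 1
            elif r.seq > prev + 1:
                loss += r.seq - prev - 1
        last_seq[r.flow] = max(last_seq.get(r.flow, 0), r.seq)
    return dict(gain=gain, loss=loss, reorder=reorder,
                ts_backwards=ts_backwards)

f = ("10.0.0.1", "10.0.0.2", 12345, 80)
trace = [Rec(0.00, f, 1), Rec(0.01, f, 1), Rec(0.02, f, 4), Rec(0.015, f, 5)]
print(calibration_report(trace))   # gain=1, loss=2, ts_backwards=1
```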
After calibrating the data, we analyse its various aspects. For the enterprise dataset we look at TCP dynamics in the enterprise environment. We first give a high-level overview of TCP connection characteristics such as termination status, size, duration, and rate. We then assess the parameters important for TCP performance, such as retransmissions, out-of-order deliveries, and channel utilization. Finally, using the Internet edge dataset, we gauge the performance characteristics of the edge connectivity.
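One common trace-analysis heuristic for the retransmission and out-of-order assessment, shown here purely as an illustration (the record format is hypothetical, and the thesis applies finer rules to separate the two cases), is to flag any segment that ends at or below the highest byte already seen on its connection.

```python
def suspicious_segments(segments):
    """Per-connection count of segments ending at or below the highest
    byte already seen: candidates for retransmission or out-of-order
    delivery.  Input records: (conn_id, seq, payload_len)."""
    high, counts = {}, {}
    for conn, seq, length in segments:
        end = seq + length
        if conn in high and end <= high[conn]:
            counts[conn] = counts.get(conn, 0) + 1
        high[conn] = max(high.get(conn, 0), end)
    return counts

segs = [("c1", 0, 1448), ("c1", 1448, 1448), ("c1", 0, 1448)]  # last is a retransmit
print(suspicious_segments(segs))   # {'c1': 1}
```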
PABO: Mitigating Congestion via Packet Bounce in Data Center Networks
In today's data centers, a diverse mix of throughput-sensitive long flows and delay-sensitive short flows is commonly present in shallow-buffered switches. Long flows can block the transmission of delay-sensitive short flows, degrading performance. Congestion can also be caused by the synchronization of multiple TCP connections for short flows, as typically seen in the partition/aggregate traffic pattern. While multiple end-to-end transport-layer solutions have been proposed, none of them have tackled the real challenge: reliable transmission in the network. In this paper, we fill this gap by presenting PABO -- a novel link-layer design that can mitigate congestion by temporarily bouncing packets to upstream switches. PABO's design fulfills the following goals: i) providing per-flow flow control on the link layer, ii) handling transient congestion without the intervention of end devices, and iii) gradually back-propagating the congestion signal to the source when the network is not capable of handling the congestion. Experimental results show that PABO provides a clear advantage in mitigating transient congestion and achieves significant gains in end-to-end delay.
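A toy rendition of the bounce idea, not the paper's implementation: when the egress queue toward a packet's next hop is full, return the packet to the upstream switch it came from instead of dropping it. This sketch stops after one bounce level, whereas PABO keeps back-propagating the congestion signal hop by hop while congestion persists.

```python
from collections import deque

QUEUE_CAPACITY = 8   # shallow-buffered switch

class Switch:
    """Toy packet-bounce forwarding."""
    def __init__(self, name):
        self.name = name
        self.queues = {}              # next hop -> egress queue

    def enqueue(self, pkt, next_hop, came_from=None):
        q = self.queues.setdefault(next_hop, deque())
        if len(q) < QUEUE_CAPACITY:
            q.append(pkt)             # normal forwarding
            return "forwarded"
        if came_from is not None:     # transient congestion: park upstream
            result = came_from.enqueue(pkt, next_hop=self)
            return "bounced" if result == "forwarded" else result
        return "dropped"              # at the edge: signal the source

up, down = Switch("upstream"), Switch("congested")
for i in range(10):
    print(down.enqueue(f"pkt{i}", next_hop="server", came_from=up))
# -> 8x "forwarded", then "bounced" for the overflow packets
```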
Latency-driven performance in data centres
Data-centre-based cloud computing has revolutionised the way businesses use computing infrastructure. Instead of building their own data centres, companies rent computing resources and deploy their applications on cloud hardware. Providing customers with well-defined application performance guarantees is of paramount importance to ensure transparency and to build a lasting collaboration between users and cloud operators. A user's application performance is subject to the constraints of the resources it has been allocated and to the impact of the network conditions in the data centre.
In this dissertation, I argue that application performance in data centres can be improved through cluster scheduling of applications informed by predictions of application performance under given network latency and by measurements of current network latency between hosts in the data centre.
Firstly, I show how to use the Precision Time Protocol (PTP), through an open-source software implementation, PTPd, to measure network latency and packet loss in data centres. I propose PTPmesh, which uses PTPd, as a cloud network monitoring tool for tenants. Furthermore, I conduct a measurement study using PTPmesh in different cloud providers, finding that network latency variability in data centres is still common. Normal latency values in data centres are on the order of tens or hundreds of microseconds, while unexpected events, such as network congestion or packet loss, can lead to latency spikes on the order of milliseconds.
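The measurement rests on the standard two-way time-transfer arithmetic that PTP uses. A minimal worked version (the timestamps are illustrative, and the symmetric-path assumption is exactly what congestion can violate):

```python
def ptp_delay_and_offset(t1, t2, t3, t4):
    """Two-way time transfer as in PTP.
    t1: Sync sent by master      t2: Sync received by slave
    t3: Delay_Req sent by slave  t4: Delay_Req received by master
    Assumes symmetric paths -- the assumption congestion breaks."""
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # mean one-way path delay
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock offset
    return delay, offset

# 50 us each way, slave clock running 10 us ahead of the master:
print(ptp_delay_and_offset(t1=0.0, t2=60e-6, t3=100e-6, t4=140e-6))
# -> (5e-05, 1e-05)
```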
Secondly, I show that network latency matters for certain distributed applications even in amounts as small as tens or hundreds of microseconds, significantly reducing their performance. I propose a methodology to determine the impact of network latency on distributed application performance by injecting artificial delay into the network of an experimental setup. Based on the experimental results, I build functions that predict the performance of an application for a given network latency.
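A minimal sketch of building such a prediction function, with hypothetical data points (in practice, delay can be injected with a tool such as Linux netem, and the dissertation fits per-application models from real experiments):

```python
import numpy as np

# Hypothetical measurements: injected one-way delay (us) against
# normalised application performance, as the methodology would produce.
delay_us    = np.array([0, 50, 100, 200, 400, 800])
performance = np.array([1.00, 0.97, 0.92, 0.81, 0.63, 0.40])

# A low-degree polynomial is one simple choice of prediction function.
predict = np.poly1d(np.polyfit(delay_us, performance, deg=2))
print(predict(300.0))   # predicted performance at 300 us of latency
```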
Given the network latency variability observed in data centres, an application's performance is determined by its placement within the data centre. Thirdly, I propose latency-driven, application performance-aware cluster scheduling as a way to provide performance guarantees to applications. I introduce NoMora, a cluster scheduling architecture that places applications by leveraging predictions of application performance dependent upon network latency, combined with dynamic network latency measurements taken between pairs of hosts in data centres. Moreover, I show that NoMora improves application performance by choosing better placements than other scheduling policies.
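A minimal sketch of the latency-driven placement idea (hypothetical hosts, latencies, and model; NoMora's real scheduler handles many more constraints): among candidate host pairs, pick the pair whose measured latency yields the best predicted performance under the application's fitted model.

```python
from itertools import combinations

def place_application(hosts, latency_us, perf_model):
    """Pick the host pair whose measured latency gives the highest
    predicted performance under the application's fitted model."""
    best_pair, best_perf = None, float("-inf")
    for a, b in combinations(hosts, 2):
        perf = perf_model(latency_us[(a, b)])
        if perf > best_perf:
            best_pair, best_perf = (a, b), perf
    return best_pair, best_perf

lat = {("h1", "h2"): 40.0, ("h1", "h3"): 250.0, ("h2", "h3"): 90.0}
model = lambda d: 1.0 / (1.0 + d / 100.0)   # stand-in for a fitted model
print(place_application(["h1", "h2", "h3"], lat, model))   # ('h1', 'h2')
```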
Funding: MEASUREMENT FOR EUROPE: TRAINING AND RESEARCH FOR INTERNET COMMUNICATIONS SCIENCE, European Commission FP7 Marie Curie Innovative Training Networks (ITN); ENDEAVOUR, European Commission Horizon 2020 (H2020) Industrial Leadership (IL).
Smartphone traffic characteristics and context dependencies
Smartphone traffic contributes a considerable share of Internet traffic. Recent reports suggest that, with the increasing popularity of smartphones, smartphone traffic has been growing ten times faster than traffic generated from fixed networks. However, little is known about the characteristics of smartphone traffic. A few recent studies have analyzed smartphone traffic and given some insight into its characteristics, but many questions remain inadequately answered. This thesis analyzes traffic characteristics and explores some important issues related to smartphone traffic. An application on the Android platform was developed to capture network traffic. A user study was then conducted in which 39 participants were given HTC Magic phones with data collection applications installed for 37 days. The collected data was analyzed to understand the workload characteristics of smartphone traffic and to study the relationship between participant context and smartphone usage.
The collected dataset suggests that even in a small group of participants a variety of very different smartphone usage patterns occur. Participants accessed different types of Internet content at different times and under different circumstances. Differences between the usage of Wi-Fi and cellular networks for individual participants are observed. Download-intensive activities occurred more frequently over Wi-Fi networks.
Dependencies between smartphone usage and context (where users are, who they are with, at what time of day, and over which physical interface) are investigated in this work. Strong location dependencies are found at both the aggregate and the individual-user level. Potential relationships between time of day and access patterns are investigated, and a time-of-day-dependent access pattern is observed for some participants. Potential relationships between movement, proximity to other users, and smartphone usage are also investigated. The collected data suggests that moving participants used map applications more, and that participants generated more traffic and primarily downloaded apps when they were alone. The analyses performed in this thesis improve basic understanding and knowledge of smartphone use in different scenarios.
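A small sketch of the kind of aggregation behind such analyses, assuming a hypothetical per-flow log format of (unix timestamp, interface, bytes):

```python
from collections import defaultdict
from datetime import datetime, timezone

def traffic_by_context(records):
    """Sum bytes per (interface, hour-of-day) to compare Wi-Fi and
    cellular usage across times of day."""
    totals = defaultdict(int)
    for ts, iface, nbytes in records:
        hour = datetime.fromtimestamp(ts, tz=timezone.utc).hour
        totals[(iface, hour)] += nbytes
    return dict(totals)

log = [(1_300_000_000, "wifi", 1_500_000), (1_300_003_600, "cellular", 20_000)]
print(traffic_by_context(log))
```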
Improved Worm Simulator and Simulations
According to the latest Microsoft Security Intelligence Report (SIR), worms were the second most prevalent information security threat detected in the first half of 2010 – the top threat being Trojans. Given the prevalence and damaging effects of worms, research and development of worm counter-strategies are garnering an increased level of attention. However, it is extremely risky to test and observe worm spread behavior on a public network. What is needed is a packet-level worm simulator that allows researchers to develop and test counter-strategies against rapidly spreading worms in a controlled and isolated environment. Jyotsna Krishnaswamy, a recent SJSU graduate student, successfully implemented a packet-level worm simulator called the Wormulator, specifically designed to simulate the behavior of the SQL Slammer worm. This project improves the Wormulator by addressing some of its limitations; the resulting implementation is called the Improved Worm Simulator.
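At the core of any such simulator is the epidemic dynamics of random scanning. A time-stepped sketch (parameters are illustrative, and the packet-level detail that the Wormulator models is deliberately omitted):

```python
import random

def simulate_scanning_worm(space, vulnerable, scans_per_tick, ticks, seed=7):
    """Random-scanning epidemic: each infected host probes random
    addresses every tick; hitting a vulnerable, uninfected address
    infects it."""
    rng = random.Random(seed)
    infected = {next(iter(vulnerable))}          # patient zero
    history = []
    for _ in range(ticks):
        newly = set()
        for _ in range(len(infected) * scans_per_tick):
            target = rng.randrange(space)
            if target in vulnerable and target not in infected:
                newly.add(target)
        infected |= newly
        history.append(len(infected))
    return history                               # S-shaped growth curve

vulnerable = set(random.Random(1).sample(range(1_000_000), 10_000))
print(simulate_scanning_worm(1_000_000, vulnerable,
                             scans_per_tick=100, ticks=15))
```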
End-to-End Resilience Mechanisms for Network Transport Protocols
The universal reliance on, and hence the need for, resilience in network communications has been well established. Current transport protocols are designed to provide fixed mechanisms for error remediation (if any), using techniques such as ARQ, and offer little or no adaptability to underlying network conditions or to different sets of application requirements. The ubiquitous TCP transport protocol makes too many assumptions about underlying layers to provide resilient end-to-end service in all network scenarios, especially those which include significant heterogeneity. Additionally, the properties of reliability, performability, availability, dependability, and survivability are not explicitly addressed in its design, so there is no support for resilience. This dissertation presents considerations that must be taken into account when designing new resilience mechanisms for future transport protocols to meet service requirements in the face of various attacks and challenges. The primary mechanisms addressed include diverse end-to-end paths and multi-mode operation for changing network conditions.
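One way to picture the diverse-paths mechanism, purely as a sketch (the send function is a stand-in for a real transport, and the dissertation's mechanisms are richer than simple failover):

```python
def resilient_send(data, paths, send_fn, tries_per_path=3):
    """Failover across diverse end-to-end paths: retry on the current
    path (ARQ-style), then move to the next disjoint path.
    send_fn(path, data) -> bool stands in for the real transport."""
    for path in paths:
        for _ in range(tries_per_path):
            if send_fn(path, data):
                return path                      # delivered on this path
    raise ConnectionError("all diverse paths failed")

# A dead primary path and a working backup:
attempts = iter([False, False, False, True])
print(resilient_send(b"payload", ["primary", "backup"],
                     lambda p, d: next(attempts)))   # -> 'backup'
```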
Modelling and Design of Resilient Networks under Challenges
Communication networks, in particular the Internet, face a variety of challenges that can disrupt our daily lives, in the worst cases resulting in loss of human life and significant financial costs. We define challenges as external events that trigger faults that eventually result in service failures. Understanding these challenges is accordingly essential for improving current networks and for designing Future Internet architectures. This dissertation presents a taxonomy of challenges that can help evaluate design choices for the current and Future Internet. Graph models for analysing critical infrastructures are examined, and a multilevel graph model is developed to study interdependencies between different networks. Furthermore, graph-theoretic heuristic optimisation algorithms are developed. These heuristic algorithms add links to increase the resilience of networks in the least costly manner and are computationally less expensive than an exhaustive search algorithm. The performance of networks under random failures, targeted attacks, and correlated area-based challenges is evaluated using the challenge simulation module that we developed. The GpENI Future Internet testbed is used to conduct experiments evaluating the performance of the heuristic algorithms developed.
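A sketch of the greedy flavour of such heuristics, assuming networkx (with SciPy installed) and algebraic connectivity as one possible resilience metric; the dissertation evaluates its own algorithms and metrics:

```python
import networkx as nx                  # algebraic_connectivity needs SciPy
from itertools import combinations

def add_links_greedily(g, budget):
    """Add `budget` links one at a time, each time choosing the missing
    link that most improves algebraic connectivity.  The greedy
    per-link choice avoids exhaustive search over whole link sets."""
    added = []
    for _ in range(budget):
        best_edge, best_score = None, nx.algebraic_connectivity(g)
        for u, v in combinations(g.nodes, 2):
            if g.has_edge(u, v):
                continue
            g.add_edge(u, v)
            score = nx.algebraic_connectivity(g)
            g.remove_edge(u, v)
            if score > best_score:
                best_edge, best_score = (u, v), score
        if best_edge is None:          # no link improves the metric
            break
        g.add_edge(*best_edge)
        added.append(best_edge)
    return added

ring = nx.cycle_graph(8)               # a fragile ring topology
print(add_links_greedily(ring, budget=2))
```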