4 research outputs found
Procedia Computer Science Flow-based Partitioning of Network Testbed Experiments
Abstract: Understanding the behavior of large-scale systems is challenging, but essential when designing new Internet protocols and applications. It is often infeasible or undesirable to conduct experiments directly on the Internet. Thus, simulation, emulation, and testbed experiments are important techniques for researchers to investigate large-scale systems. In this paper, we propose a platform-independent mechanism to partition a large network experiment into a set of small experiments that are sequentially executed. Each of the small experiments can be conducted on a given number of experimental nodes, e.g., the available machines on a testbed. Results from the small experiments approximate the results that would have been obtained from the original large experiment. We model the original experiment using a flow dependency graph. We partition this graph, after pruning uncongested links, to obtain a set of small experiments. We execute the small experiments iteratively. Starting with the second iteration, we model dependent partitions using information gathered about both the traffic and the network conditions during the previous iteration. Experimental results from several simulation and testbed experiments demonstrate that our techniques approximate performance characteristics, even with closed-loop traffic and congested links. We expose the fundamental tradeoff between the simplicity of the partitioning and experimentation process and the loss of experimental fidelity.
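The core partitioning step described in the abstract can be sketched as follows. This is a reading of the idea, not the authors' implementation: flows that share a congested link are treated as dependent, uncongested links are pruned, and the connected components of the resulting flow dependency graph become the small experiments. The function name `partition_flows` and the utilization threshold are illustrative assumptions.

```python
from collections import defaultdict

def partition_flows(flows, link_utilization, threshold=0.9):
    """Sketch of flow-dependency-graph partitioning.

    flows: dict mapping flow_id -> set of links the flow traverses.
    link_utilization: dict mapping link -> utilization in [0, 1].
    Returns a list of flow-id sets, one per small experiment.
    """
    # Prune uncongested links: only congested links couple flows.
    congested = {l for l, u in link_utilization.items() if u >= threshold}

    # Flows sharing a congested link are dependent (graph edges).
    by_link = defaultdict(set)
    for f, links in flows.items():
        for l in links & congested:
            by_link[l].add(f)
    adj = defaultdict(set)
    for group in by_link.values():
        for f in group:
            adj[f] |= group - {f}

    # Connected components of the dependency graph = partitions.
    seen, parts = set(), []
    for f in flows:
        if f in seen:
            continue
        stack, comp = [f], set()
        while stack:
            g = stack.pop()
            if g in comp:
                continue
            comp.add(g)
            stack.extend(adj[g] - comp)
        seen |= comp
        parts.append(comp)
    return parts
```

For example, two flows sharing a 95%-utilized link end up in one partition, while a flow touching only lightly loaded links forms its own small experiment. The paper's iterative re-modeling of dependent partitions across iterations is not captured in this sketch.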
Available Bandwidth Estimation Tools Metrics, Approaches and Performance
The estimation of available bandwidth (av_bw) between two end nodes across the Internet has motivated researchers worldwide over the last twenty years to build faster and more accurate tools, owing to its utility in network applications such as routing management, intrusion detection systems, and the performance of transport protocols. Different tools use different estimation techniques, but evaluations generally analyze only the three most common metrics: av_bw, relative error, and estimation time. This work expands the evaluation literature on current Available Bandwidth Estimation Tools (ABETs) by analyzing their estimation techniques, metrics, cross-traffic generation tools, and evaluation testbeds, concentrating on the techniques and estimation methodologies used, as well as the challenges open-source tools face in high-performance networks of 10 Gbps or higher.
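One common estimation technique in this tool family is the probe gap model (used, for instance, by Spruce): the expansion of the gap between two probe packets on the tight link is proportional to the cross-traffic rate. A minimal sketch, assuming the tight-link capacity is known; the function name and units are illustrative, not from any particular tool.

```python
def probe_gap_avbw(capacity_bps, gap_in_s, gap_out_s):
    """Probe-gap-model estimate of available bandwidth.

    capacity_bps: capacity of the tight link (bits/s), assumed known.
    gap_in_s:  inter-packet gap of the probe pair at the sender (s).
    gap_out_s: inter-packet gap measured at the receiver (s).
    """
    # Gap expansion is attributed to cross traffic queued between
    # the two probes on the tight link.
    cross_rate = capacity_bps * (gap_out_s - gap_in_s) / gap_in_s
    # Available bandwidth = capacity minus cross-traffic rate,
    # clamped so transient gap compression cannot exceed capacity.
    return capacity_bps - max(0.0, cross_rate)
```

For example, a gap that grows from 100 µs to 150 µs on a 100 Mbps tight link implies roughly 50 Mbps of cross traffic, leaving about 50 Mbps available. Real tools average over many probe pairs and must handle noise, interrupt coalescing, and timestamping limits, which is precisely where the surveyed challenges at 10 Gbps and above arise.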