11 research outputs found

    A Survey on Emulation Testbeds for Mobile Ad-hoc Networks

    A Mobile Ad hoc Network (MANET) is a collection of mobile nodes that forms a dynamic topology and a resource-constrained network. In this paper, we present a survey of various testbeds for Mobile Ad hoc Networks. An emulator provides an environment for validating software solutions for ad hoc networks without requiring modifications to the software. A field test shows whether simulation work is on the right track, avoiding the jump from the simulator directly to the real system when analyzing performance and comparing routing protocols and mobility models. Analyzing and choosing an appropriate emulator for a given environment is a time-consuming process. We contribute a survey of emulation testbeds to guide the choice of appropriate research tools in mobile ad hoc networks.

    Parallel and Distributed Immersive Real-Time Simulation of Large-Scale Networks


    Understanding link-level 802.11 behavior: replacing convention with measurement


    Benchmarking in Wireless Networks

    Experimentation is evolving into a viable and realistic performance-analysis approach in wireless networking research. Realism is provided by deploying real software (network stack, drivers, OS) and hardware (wireless cards, network equipment, etc.) in the actual physical environment. However, the experimenter is likely to be dogged by tricky issues caused by calibration problems and bugs in the software/hardware tools. This, coupled with the difficulty of dealing with a multitude of controllable and uncontrollable hardware/software parameters and the unpredictable characteristics of the wireless channel in the wild, poses significant challenges to the repeatability and reproducibility of experiments. Furthermore, experimentation has been impeded by the lack of standard definitions, measurement methodologies and full-disclosure reports, which are particularly important for understanding the suitability of protocols and services to emerging wireless application scenarios. The lack of tools to manage experiments and large amounts of data, and to facilitate reproducible analysis, further complicates the process. In this report, we present a holistic view of benchmarking in wireless networks, introduce key definitions, and formulate a procedure, complemented by a step-by-step case study, to help drive future efforts on benchmarking of wireless network applications and protocols.

    Repeatable and Realistic Wireless Experimentation through Physical Emulation

    In wireless networking research, there has long existed a fundamental tension between experimental realism on one hand, and control and repeatability on the other. Hardware-based experimentation provides realism, but is tightly coupled to the physical environment and circumstances under which experiments are carried out. To overcome this, researchers have understandably embraced simulation as a means of evaluation. Unfortunately, wireless simulation is plagued with inherent inaccuracies. To overcome the stark tradeoff between the realism of hardware-based experimentation and the repeatability of simulation-based experimentation, we are developing a wireless emulator that enables both realistic and repeatable experimentation. Unlike previous emulators, our approach simultaneously achieves both a high degree of realism and fine-grained repeatability by leveraging physical-layer emulation.

    T-SIMn: Towards a Framework for the Trace-Based Simulation of 802.11n Networks

    With billions of WiFi devices now in use, and growing, combined with the rising popularity of high-bandwidth applications such as streaming video, demands on WiFi networks continue to rise. To increase performance for end users, the 802.11n WiFi standard introduces several new features that increase Physical Layer Data Rates (PLDRs). However, the higher rates are less robust (i.e., more prone to error). Optimizing throughput in an 802.11n network requires choosing the combination of features that best balances PLDRs against error rates, which is highly dependent on environmental conditions. While the faster PLDRs are an important factor in the throughput gains afforded by 802.11n, it is only when they are used in combination with the new MAC-layer features, namely Frame Aggregation (FA) and Block Acknowledgements (BAs), that 802.11n achieves significant gains over the older 802.11g standard. FA allows multiple frames to be combined into one large frame so that they can be transmitted and acknowledged as a single aggregated packet, which uses the channel more efficiently. Unfortunately, it is challenging to experimentally evaluate and compare the performance of WiFi networks using different combinations of 802.11n features. WiFi networks operate in the 2.4 and 5 GHz bands, which are shared by the WiFi devices in computers, cell phones and tablets, as well as by Bluetooth devices, wireless keyboards/mice, cordless phones, microwave ovens and many others. Competition for the shared medium can negatively impact throughput by increasing transmission delays or error rates. This makes it difficult to perform repeatable experiments that are representative of the conditions in which WiFi devices are typically used. Therefore, we need new methodologies for understanding and evaluating how best to use these new 802.11n features.
An existing trace-based simulation framework, called T-RATE, has been shown to be an accurate alternative to experimentally evaluating throughput in 802.11g networks. We propose T-SIMn, an extension of the T-RATE framework that adds support for the newer 802.11n WiFi standard. In particular, we implement a new 802.11n network simulator, which we call SIMn. Furthermore, we develop a new implementation of the trace-collection phase that incorporates FA. We demonstrate that SIMn accurately simulates throughput for one-, two- and three-antenna PLDRs in 802.11n with FA. We also show that SIMn accurately simulates delay due to WiFi and non-WiFi interference, as well as error due to path loss in mobile scenarios. Finally, we evaluate the T-SIMn framework (including trace collection) by collecting traces using an iPhone, which is representative of a wide variety of one-antenna devices. We find that our framework can be used to accurately simulate these scenarios, and we demonstrate the fidelity of SIMn by uncovering problems with our initial evaluation methodology. We expect that the T-SIMn framework will be suitable for easily and fairly evaluating rate adaptation, frame aggregation and channel-bandwidth adaptation algorithms for 802.11n networks, which are challenging to evaluate experimentally.
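The efficiency argument behind Frame Aggregation in the abstract above can be sketched numerically. The model and every parameter below (preamble and ACK airtime, delimiter size, the 300 Mbps PLDR) are illustrative assumptions, not values taken from the T-SIMn work:

```python
# Illustrative model (hypothetical parameters) of why 802.11n Frame
# Aggregation with a single Block ACK raises goodput: the fixed
# per-transmission overhead is amortized over many aggregated frames.
def effective_throughput(pldr_mbps, frames, payload_bytes=1500,
                         phy_overhead_us=44.0, ack_us=44.0,
                         mpdu_delim_bytes=4):
    """Approximate goodput (Mbps) when `frames` MPDUs form one A-MPDU."""
    payload_bits = frames * payload_bytes * 8
    delim_bits = frames * mpdu_delim_bytes * 8
    airtime_us = (phy_overhead_us               # one PHY preamble per A-MPDU
                  + (payload_bits + delim_bits) / pldr_mbps
                  + ack_us)                     # one Block ACK for all frames
    return payload_bits / airtime_us

single = effective_throughput(300.0, frames=1)       # ~94 Mbps
aggregated = effective_throughput(300.0, frames=32)  # ~280 Mbps
```

Even at a 300 Mbps PLDR, a single 1500-byte frame yields under a third of the nominal rate in this model; aggregation recovers most of it, which is why the abstract stresses FA and BAs rather than raw PLDRs alone.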

    Investigating TCP performance in mobile ad hoc networks

    Mobile ad hoc networks (MANETs) have become increasingly important in view of their promise of ubiquitous connectivity beyond traditional fixed infrastructure networks. Such networks, consisting of potentially highly mobile nodes, have provided new challenges by introducing special considerations stemming from the unique characteristics of the wireless medium and the dynamic nature of the network topology. The TCP protocol, which has been widely deployed on a multitude of internetworks including the Internet, is naturally viewed as the de facto reliable transport protocol for use in MANETs. However, assumptions made at TCP’s inception reflected characteristics of the prevalent wired infrastructure of networks at the time, and can consequently lead to sub-optimal performance when used in wireless ad hoc environments. The basic presupposition underlying TCP congestion control is that packet losses are predominantly an indication of congestion in the network. The detrimental effect of this assumption on TCP’s performance in MANET environments has been a long-standing research problem. Hence, previous work has focused on addressing the ambiguity behind the cause of packet loss as perceived by TCP by proposing changes at various levels across the network protocol stack, such as at the MAC mechanism of the transceiver or via coupling with the routing protocol at the network layer. The main challenge addressed by the current work is to propose new methods to ameliorate the ill effects of TCP’s misinterpretation of the causes of packet loss in MANETs. An assumed restriction on any proposed modification is that the resulting performance increase should be achievable through limited changes confined to the transport layer. Such a restriction aids incremental adoption and ease of deployment by requiring minimal implementation effort.
Further, the issue of packet-loss ambiguity, from a transport-layer perspective, must by definition be dealt with in an end-to-end fashion. As such, a proposed solution may involve implementation at the sender, the receiver or both to address TCP shortcomings. Some attempts at describing TCP behaviour in MANETs have previously been reported in the literature. However, a thorough enquiry into the performance of those TCP agents most popular in terms of research and adoption has been lacking. Specifically, very little work has exhaustively analysed TCP variants across different MANET routing protocols and under various mobility conditions. The first part of the dissertation addresses this shortcoming through extensive simulation evaluation, in order to ascertain the relative performance merits of each TCP variant in terms of achieved goodput over dynamic topologies. Careful examination reveals the sub-par performance of TCP Reno and the largely equivalent performance of NewReno and SACK, whilst the effectiveness of a proactive TCP variant (Vegas) is explicitly stated and justified for the first time in a dynamic MANET environment. Examination of the literature reveals that, in addition to losses caused by route breakages, the hidden-terminal effect contributes significantly to non-congestion-induced packet losses in MANETs, which in turn has a noticeably negative impact on TCP goodput. By adapting the conservative slow-start mechanism of TCP Vegas into a form suitable for reactive TCP agents, like Reno, NewReno and SACK, the second part of the dissertation proposes a new Reno-based congestion-avoidance mechanism which increases TCP goodput considerably across long paths by mitigating the negative effects of hidden terminals and alleviating some of the ambiguity of non-congestion-related packet loss in MANETs. The proposed changes keep the end-to-end semantics of TCP intact and are applicable solely at the sender.
The new mechanism is further contrasted with an existing transport-layer-focused solution and is shown to perform significantly better in a range of dynamic scenarios. As a solution from an end-to-end perspective may be applicable to either or both communicating ends, the idea of implementing receiver-side alterations is also explored. Previous work has been primarily concerned with reducing receiver-generated cumulative ACK responses by “bundling” them into as few packets as possible, thereby reducing misinterpretations of packet loss due to hidden terminals. However, a thorough evaluation of such receiver-side solutions reveals limitations both in common evaluation practices and in the solutions themselves. In an effort to address this shortcoming, the third part of this research work first specifies a tighter problem domain, identifying the circumstances under which the problem may be tackled by an end-to-end solution. Subsequent original analysis reveals that by taking into account optimisations possible in wireless communications, namely the partial or complete omission of the RTS/CTS handshake, noticeable improvements in TCP goodput are achievable, especially over long paths. This novel modification is activated in a variety of topologies and is assessed using new metrics to more accurately gauge its effectiveness in a wireless multihop environment.
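The Vegas mechanism that the dissertation abstract says it adapts can be sketched as follows. This is the textbook Vegas congestion-avoidance check, not the dissertation's actual Reno-based mechanism; the alpha/beta thresholds are the classic Vegas defaults, in packets:

```python
# Sketch of a Vegas-style congestion check (illustrative only).
# Vegas estimates how many packets are queued in the network from RTT
# inflation, so it can back off BEFORE losses occur, rather than
# treating packet loss as the sole congestion signal.
def vegas_adjust(cwnd, base_rtt, current_rtt, alpha=2, beta=4):
    """Return the next congestion window, in packets.

    expected = cwnd / base_rtt     (sending rate with empty queues)
    actual   = cwnd / current_rtt  (rate actually achieved)
    (expected - actual) * base_rtt approximates the sender's backlog
    sitting in router queues.
    """
    expected = cwnd / base_rtt
    actual = cwnd / current_rtt
    backlog = (expected - actual) * base_rtt
    if backlog < alpha:        # network underused: grow linearly
        return cwnd + 1
    if backlog > beta:         # queues building before any loss: back off
        return cwnd - 1
    return cwnd                # within the target operating band
```

Because the backlog estimate rises before any packet is dropped, a Vegas-style sender can reduce its window without interpreting loss as congestion, which is what makes the approach attractive in MANETs where losses are often caused by hidden terminals or route breakages rather than congestion.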

    Evaluating and Characterizing the Performance of 802.11 Networks

    The 802.11 standard has become the dominant protocol for Wireless Local Area Networks (WLANs). As an indication of its current and growing popularity, it is estimated that over 20 billion WiFi chipsets will be shipped between 2016 and 2021. In a span of less than 20 years, the speed of these networks has increased from 11 Mbps to several Gbps. The ever-increasing demand for bandwidth from applications such as large downloads, 4K video streaming and virtual reality, along with the problems caused by interfering WiFi and non-WiFi devices operating on a shared spectrum, has made evaluating, understanding and optimizing the performance of 802.11 networks an important research topic. In 802.11 networks, highly variable channel conditions make conducting valid, repeatable and realistic experiments extremely challenging. Such conditions, although representative of what devices actually experience, are often avoided in order to conduct repeatable experiments. In this thesis, we study existing methodologies for the empirical evaluation of 802.11 networks. We show that commonly used methodologies, such as running experiments multiple times and reporting the average along with a confidence interval, can produce misleading results in some environments. We propose and evaluate a new empirical evaluation methodology that expands the environments in which repeatable evaluations can be conducted for the purpose of comparing competing alternatives. Even with our new methodology, in environments with highly variable channel conditions, distinguishing statistically significant differences can be very difficult because variations in channel conditions lead to large confidence intervals. Moreover, running many experiments is usually very time consuming. Therefore, we propose and evaluate a trace-based approach that combines the realism of experiments with the repeatability of simulators.
A key to our approach is that we capture data related to properties of the channel that affect throughput. These traces can be collected under conditions representative of those in which devices are likely to be used, and then used to evaluate different algorithms or systems, resulting in fair comparisons because the alternatives are exposed to identical channel conditions. Finally, we characterize the relationships between the numerous transmission rates in 802.11n networks, with the goal of reducing the complexity caused by the large number of transmission rates when searching for the optimal combination of physical-layer features. We find that there are strong relationships between most of the transmission rates over extended periods of time, even in environments that involve mobility and experience interference. This work demonstrates that there are significant opportunities for exploiting relationships between rate configurations when designing algorithms that must choose the best combination of physical-layer features from a very large space of possibilities.
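The confidence-interval comparison that this thesis argues can mislead looks, in its standard form, like the sketch below. The throughput samples are hypothetical, chosen to mimic a highly variable channel, and the t critical value is the standard 95% value for 9 degrees of freedom:

```python
# Sketch (hypothetical data): comparing two algorithms by mean
# throughput with a t-based 95% confidence interval. Under highly
# variable channel conditions the intervals balloon and overlap,
# so the comparison is inconclusive however many runs are averaged.
import math
import statistics

def mean_ci(samples, t_crit=2.262):  # t for 95% CI, 9 degrees of freedom
    """Return (mean, half-width) of a 95% confidence interval."""
    m = statistics.mean(samples)
    s = statistics.stdev(samples)    # sample standard deviation
    half = t_crit * s / math.sqrt(len(samples))
    return m, half

# Ten hypothetical throughput runs (Mbps) for two competing algorithms,
# with the large run-to-run swings typical of a busy wireless channel:
a = [92, 88, 95, 70, 91, 64, 90, 93, 67, 89]
b = [85, 83, 97, 72, 94, 66, 92, 88, 69, 84]
mean_a, half_a = mean_ci(a)
mean_b, half_b = mean_ci(b)
overlap = abs(mean_a - mean_b) < half_a + half_b  # overlapping CIs
```

The half-widths dwarf the difference in means, so the two alternatives are statistically indistinguishable even if one is consistently better under matched conditions. A trace-based approach sidesteps this by replaying identical channel conditions to both alternatives.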