NeuRA: Using Neural Networks to Improve WiFi Rate Adaptation
Although a variety of rate adaptation algorithms have been proposed for 802.11 devices, sampling-based algorithms are preferred in practice because they require only frame loss information, which is available on all devices. Unfortunately, sampling can impose significant overhead because it can lead to excessive frame loss or the choice of suboptimal rates. In this thesis, we design a novel neural-network-based rate adaptation algorithm, called NeuRA. NeuRA significantly improves the efficiency of sampling in rate adaptation algorithms by using a neural network model to predict the expected throughput of many rates, rather than sampling their throughput. Furthermore, we propose a feature selection technique to select the best set of rates to sample.
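The core idea, predicting throughput rather than probing for it, can be sketched as a tiny pure-Python network. The feature layout, weight names (W1, b1, W2, b2), and function names below are hypothetical simplifications; NeuRA's actual model and features are described in the thesis.

```python
import math

def predict_throughputs(features, W1, b1, W2, b2):
    """One-hidden-layer network: statistics about recently observed
    rates go in, one predicted throughput per candidate rate comes out.
    In a real system the weights would be trained offline on traces;
    here they are just parameters."""
    hidden = [math.tanh(sum(w * x for w, x in zip(row, features)) + b)
              for row, b in zip(W1, b1)]
    return [sum(w * h for w, h in zip(row, hidden)) + b
            for row, b in zip(W2, b2)]

def rates_to_sample(predicted, k=2):
    """Probe only the k rates with the highest predicted throughput,
    instead of sampling every rate in turn."""
    return sorted(range(len(predicted)),
                  key=lambda r: predicted[r], reverse=True)[:k]
```

The payoff is in `rates_to_sample`: the device spends sampling airtime only on the few rates the model already considers promising.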
Despite decades of research on rate adaptation in 802.11 networks, there are no definitive results which determine which algorithm is the best or if any algorithm is close to optimal. We design an offline algorithm that uses information about the fate of future frames to make statistically optimal frame aggregation and rate adaptation decisions. This algorithm provides an upper bound on the throughput that can be obtained by practical online algorithms and enables us to evaluate rate adaptation algorithms with respect to this upper bound.
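A minimal sketch of such a hindsight bound, assuming we know for each transmission opportunity which rates would have succeeded (this toy version ignores the aggregation and retransmission decisions the actual offline algorithm optimizes):

```python
def offline_optimal_goodput(fates, pldrs):
    """fates[t][r] records whether the frame at opportunity t would
    have succeeded at rate r (known only in hindsight).  Always picking
    the fastest succeeding rate yields an upper bound that no online
    algorithm, which cannot see the future, can exceed."""
    total = 0.0
    for frame in fates:
        delivered = [pldrs[r] for r, ok in enumerate(frame) if ok]
        if delivered:               # at least one rate would succeed
            total += max(delivered)
    return total
```

Any practical algorithm's goodput on the same trace can then be reported as a fraction of this bound.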
Our trace-based evaluations using a wide variety of real-world scenarios show that NeuRA outperforms the widely used Minstrel HT algorithm by up to 24% (16% on average) and the Intel iwl-mvm-rs algorithm by up to 32% (13% on average). Moreover, the upper bound given by the offline optimal algorithm shows a throughput up to 58% (30% on average) higher than Minstrel HT and up to 31% (12% on average) higher than NeuRA. Hence, NeuRA halves the throughput gap between Minstrel HT and the offline optimal algorithm. Additionally, our results show that the several-fold improvements over Minstrel HT reported in previous work are unlikely to be obtained in real-world scenarios. Finally, we implement NeuRA in the Linux ath9k driver to show that the neural network's processing requirements are low enough to be practical and that NeuRA obtains statistically significant improvements in throughput compared with Minstrel HT.
T-SIMn: Towards a Framework for the Trace-Based Simulation of 802.11n Networks
With billions of WiFi devices now in use, and that number still growing,
combined with the rising popularity of high-bandwidth applications such as
streaming video, demands on WiFi networks continue to rise. To increase
performance for end users, the 802.11n WiFi standard introduces several new
features that increase Physical
Layer Data Rates (PLDRs). However, these higher rates are less robust (i.e.,
more prone to error). Optimizing throughput in an 802.11n network requires
choosing the combination of features that strikes the best balance between
PLDRs and error rates, which is highly dependent on environmental conditions.
While
the faster PLDRs are an important factor in the throughput gains afforded by
802.11n, it is only when they are used in combination with the new MAC layer
features, namely Frame Aggregation (FA) and Block Acknowledgements (BAs), that
802.11n achieves significant gains when compared to the older 802.11g standard.
FA allows multiple frames to be combined into one large frame so that they can
be transmitted together and acknowledged with a single BA, which results in the
channel being used more efficiently.
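The efficiency gain from FA can be illustrated with a toy airtime model in which a fixed per-transmission overhead (preamble, contention, acknowledgement) is amortized over all aggregated frames. The 100 µs overhead and other numbers below are illustrative, not taken from the standard.

```python
def effective_throughput(pldr_mbps, frame_bytes, n_aggregated,
                         overhead_us=100.0):
    """Effective throughput in Mbps under a simplified airtime model:
    one fixed overhead per transmission, payload sent at the PLDR.
    At 1 Mbps, one bit takes one microsecond, so bits / Mbps = us."""
    payload_bits = frame_bytes * 8 * n_aggregated
    airtime_us = overhead_us + payload_bits / pldr_mbps
    return payload_bits / airtime_us
```

With a 300 Mbps PLDR and 1500-byte frames, a single frame spends more time on overhead than on payload, while aggregating a few dozen frames pushes the effective throughput much closer to the PLDR.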
Unfortunately, it is challenging to experimentally evaluate and compare the
performance of WiFi networks using different combinations of 802.11n features.
WiFi networks operate in the 2.4 and 5 GHz bands, which are shared by WiFi
devices in computers, cell phones, and tablets, as well as by Bluetooth
devices, wireless keyboards and mice, cordless phones, microwave ovens, and
many others.
Competition for the shared medium can negatively impact throughput by increasing
transmission delays or error rates. This makes it difficult to perform
repeatable experiments that are representative of the conditions in which WiFi
devices are typically used. Therefore, we need new methodologies for
understanding and evaluating how to best use these new 802.11n features.
An existing trace-based simulation framework, called T-RATE, has been shown to
be an accurate alternative to experimentally evaluating throughput in 802.11g
networks. We propose T-SIMn, an extension of the T-RATE framework that includes
support for the newer 802.11n WiFi standard. In particular, we implement a new
802.11n network simulator, which we call SIMn. Furthermore, we develop a new
implementation of the trace collection phase that incorporates FA. We
demonstrate that SIMn accurately simulates throughput for one, two and
three-antenna PLDRs in 802.11n with FA. We also show that SIMn accurately
simulates delay due to WiFi and non-WiFi interference, as well as error due to
path loss in mobile scenarios. Finally, we evaluate the T-SIMn framework
(including trace collection) by collecting traces using an iPhone. The iPhone is
representative of a wide variety of one-antenna devices. We find that our
framework can be used to accurately simulate these scenarios and we demonstrate
the fidelity of SIMn by uncovering problems with our initial evaluation
methodology. We expect that the T-SIMn framework will be suitable for easily and
fairly evaluating rate adaptation, frame aggregation and channel bandwidth
adaptation algorithms for 802.11n networks, which are challenging to evaluate
experimentally.
Evaluating and Characterizing the Performance of 802.11 Networks
The 802.11 standard has become the dominant protocol for Wireless Local Area Networks (WLANs). As an indication of its current and growing popularity, it is estimated that over 20 billion WiFi chipsets will be shipped between 2016 and 2021. In a span of less than 20 years, the speed of these networks has increased from 11 Mbps to several Gbps. The ever-increasing demand for more bandwidth required by applications such as large downloads, 4K video streaming, and virtual reality, along with the problems caused by interfering WiFi and non-WiFi devices operating on a shared spectrum, has made the evaluation, understanding, and optimization of the performance of 802.11 networks an important research topic.
In 802.11 networks, highly variable channel conditions make conducting valid, repeatable, and realistic experiments extremely challenging. Highly variable channel conditions, although representative of what devices actually experience, are often avoided in order to conduct repeatable experiments. In this thesis, we study existing methodologies for the empirical evaluation of 802.11 networks. We show that commonly used methodologies, such as running experiments multiple times and reporting the average along with the confidence interval, can produce misleading results in some environments.
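The standard recipe, a mean with a normal-approximation confidence interval, is only meaningful when the runs are independent and identically distributed, an assumption that drifting channel conditions can violate. A minimal sketch of that recipe:

```python
import math
import statistics

def mean_ci(samples, z=1.96):
    """Mean and approximate 95% confidence interval.  The interval is a
    valid measure of uncertainty only if the samples are i.i.d.  If the
    channel degrades partway through an experiment, two algorithms
    measured back to back see different conditions, and the intervals
    end up comparing environments rather than algorithms."""
    m = statistics.mean(samples)
    half = z * statistics.stdev(samples) / math.sqrt(len(samples))
    return m, (m - half, m + half)
```

Non-overlapping intervals computed this way can therefore indicate a difference in conditions, not a difference between the systems under test.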
We propose and evaluate a new empirical evaluation methodology that expands the environments in which repeatable evaluations can be conducted for the purpose of comparing competing alternatives. Even with our new methodology, in environments with highly variable channel conditions, distinguishing statistically significant differences can be very difficult because variations in channel conditions lead to large confidence intervals. Moreover, running many experiments is usually very time consuming. Therefore, we propose and evaluate a trace-based approach that combines the realism of experiments with the repeatability of simulators. A key to our approach is that we capture data related to properties of the channel that impact throughput. These traces can be collected under conditions representative of those in which devices are likely to be used and then used to evaluate different algorithms or systems, resulting in fair comparisons because the alternatives are exposed to identical channel conditions.
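The fairness argument can be sketched as a replay loop: because every policy is driven by the identical recorded trace, differences in the result reflect the policies, not the channel. The trace format below, per-opportunity delivery probabilities per rate, is a simplification of what the framework actually records.

```python
def replay(trace, choose_rate, pldrs):
    """Replay one recorded channel trace against a rate-selection
    policy and return its expected goodput.  `trace` is a list of
    per-rate delivery-probability vectors (hypothetical format)."""
    goodput = 0.0
    for probs in trace:             # probs[r]: delivery prob. at rate r
        r = choose_rate(probs)      # policy under test picks a rate
        goodput += probs[r] * pldrs[r]   # expected payload this slot
    return goodput
```

Running `replay` twice, once per competing algorithm, on the same trace exposes both to identical channel conditions, which is exactly what repeated live experiments cannot guarantee.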
Finally, we characterize the relationships between the numerous transmission rates in 802.11n networks with the purpose of reducing the complexities caused by the large number of transmission rates when finding the optimal combination of physical-layer features. We find that there are strong relationships between most of the transmission rates over extended periods of time, even in environments that involve mobility and experience interference. This work demonstrates that there are significant opportunities for utilizing relationships between rate configurations in designing algorithms that must choose the best combination of physical-layer features to use from a very large space of possibilities.