Experimental Evaluation of Wireless Simulation Assumptions
All analytical and simulation research on ad hoc wireless networks must model radio propagation using simplifying assumptions. We provide a comprehensive review of six assumptions that are still part of many ad hoc network simulation studies, despite increasing awareness of the need to represent more realistic features, including hills, obstacles, link asymmetries, and unpredictable fading. We use an extensive set of measurements from a large outdoor routing experiment to demonstrate the weakness of these assumptions, and show how they cause simulation results to differ significantly from experimental results. We close with a series of recommendations for researchers, whether they develop protocols, analytic models, or simulators for ad hoc wireless networks.
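The gap between an idealized circular-range assumption and realistic fading can be made concrete with a toy link model. The sketch below contrasts a deterministic "flat earth, fixed radius" link rule with a log-distance path-loss model plus log-normal shadowing; all parameter values (path-loss exponent, shadowing sigma, receiver sensitivity) are illustrative assumptions, not the paper's measured values.

```python
import math
import random

def pathloss_db(d, d0=1.0, n=3.0, pl0=40.0):
    # log-distance path loss: PL(d) = PL(d0) + 10*n*log10(d/d0), in dB
    return pl0 + 10 * n * math.log10(d / d0)

def link_up_ideal(d, radio_range=100.0):
    # idealized assumption: a link exists iff distance <= a fixed radius
    return d <= radio_range

def link_up_shadowed(d, tx_dbm=0.0, sens_dbm=-85.0, sigma=6.0, rng=random):
    # log-normal shadowing adds a Gaussian term (in dB), so whether a link
    # is up becomes probabilistic -- and asymmetric if each direction
    # draws its own shadowing term
    rx_dbm = tx_dbm - pathloss_db(d) - rng.gauss(0.0, sigma)
    return rx_dbm >= sens_dbm

random.seed(1)
d = 30.0
trials = 10000
p = sum(link_up_shadowed(d) for _ in range(trials)) / trials
# the ideal model says the link is simply "up"; the shadowed model says
# it is up only some fraction of the time at the same distance
print(link_up_ideal(d), round(p, 2))
```

At this distance the ideal model returns a hard True while the shadowed model succeeds only about half the time, which is exactly the kind of divergence the measurements in the paper expose.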
Characterization of multi-channel interference
Multi-channel communication protocols in wireless networks usually assume perfect orthogonality between wireless channels or consider only the use of interference-free channels. The first approach may overestimate the performance, whereas the second may fail to utilize the spectrum efficiently. A more realistic approach is therefore the careful use of interfering channels, keeping the interference at an acceptable level. We present a methodology to estimate the packet error rate (PER) due to inter-channel interference in a wireless network. The methodology experimentally characterizes the multi-channel interference and analytically estimates it based on the observations from the experiments. Furthermore, the analytical estimation is used in simulations to derive estimates of the capacity in larger networks. Simulation results show that the achievable network capacity, defined as the number of simultaneous transmissions, significantly increases with realistic interfering channels compared with the use of only orthogonal channels. For the same number of channels, the achievable capacity with realistic interfering channels can be close to the capacity of idealistic orthogonal channels. This shows that overlapping channels, which constitute a much smaller band, provide more efficient use of the spectrum. Finally, we examine the validity of the channel-orthogonality assumption and show why it may fail in a practical setting.
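The idea of estimating PER from inter-channel interference can be sketched with a simple SIR-based model. The leakage roll-off per channel of separation and the SIR-to-PER curve below are illustrative stand-ins (the paper derives these from measurements), chosen only to show how partially overlapping channels degrade gracefully rather than binarily.

```python
import math

def leakage(channel_gap, rolloff_db_per_channel=10.0):
    # hypothetical adjacent-channel leakage: fraction of an interferer's
    # power that bleeds into a channel `channel_gap` steps away
    return 10 ** (-rolloff_db_per_channel * channel_gap / 10.0)

def sir_db(signal_mw, interferers):
    # interferers: list of (received_power_mw, channel_gap) pairs
    interference = sum(p * leakage(gap) for p, gap in interferers)
    if interference == 0:
        return float("inf")
    return 10 * math.log10(signal_mw / interference)

def per(sir):
    # toy SIR-to-PER mapping: clean above 10 dB, lost below 0 dB,
    # linear in between (a stand-in for a measured curve)
    if sir >= 10:
        return 0.0
    if sir <= 0:
        return 1.0
    return 1.0 - sir / 10.0

# a strong interferer one channel away causes partial loss...
print(per(sir_db(1.0, [(5.0, 1)])))
# ...while the same interferer four channels away is effectively harmless
print(per(sir_db(1.0, [(5.0, 4)])))
```

Under such a model, a scheduler can deliberately place transmissions on overlapping channels whenever the predicted PER stays below an application threshold, which is the capacity gain the abstract reports.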
Formal analysis techniques for gossiping protocols
We give a survey of formal verification techniques that can be used to corroborate existing experimental results for gossiping protocols in a rigorous manner. We present properties of interest for gossiping protocols and discuss how various formal evaluation techniques can be employed to predict them.
Agile Calibration Process of Full-Stack Simulation Frameworks for V2X Communications
Computer simulations and real-world car trials are essential to investigate
the performance of Vehicle-to-Everything (V2X) networks. However, simulations
are imperfect models of the physical reality and can be trusted only when they
indicate agreement with the real world. On the other hand, trials lack
reproducibility and are subject to uncertainties and errors. In this paper, we
will illustrate a case study where the interrelationship between trials,
simulation, and the reality-of-interest is presented. Results are then compared
in a holistic fashion. Our study will describe the procedure followed to
macroscopically calibrate a full-stack network simulator to conduct
high-fidelity full-stack computer simulations.
Comment: To appear in IEEE VNC 2017, Torino, Italy
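Macroscopic calibration of this kind can be sketched as tuning a free simulator parameter until a simulated metric matches the trial measurement. The toy `simulate_pdr` below is a placeholder for a full-stack simulation run, and the choice of a path-loss exponent as the calibration knob is an assumption for illustration; the structure of the loop is the point.

```python
import math

def simulate_pdr(pathloss_exp, distance=200.0, tx_dbm=23.0, sens_dbm=-90.0):
    # toy link model standing in for a full-stack simulation run:
    # packet delivery ratio as a soft step in the link margin
    rx_dbm = tx_dbm - 10 * pathloss_exp * math.log10(distance)
    margin = rx_dbm - sens_dbm
    return max(0.0, min(1.0, 0.5 + margin / 40.0))

def calibrate(measured_pdr, lo=2.0, hi=5.0, tol=1e-3):
    # PDR decreases monotonically in the path-loss exponent here,
    # so bisection over the exponent suffices to match the field trial
    for _ in range(60):
        mid = (lo + hi) / 2
        if simulate_pdr(mid) > measured_pdr:
            lo = mid            # simulation too optimistic: raise the exponent
        else:
            hi = mid            # simulation too pessimistic: lower it
        if hi - lo < tol:
            break
    return (lo + hi) / 2

n = calibrate(measured_pdr=0.8)
print(round(n, 2), round(simulate_pdr(n), 2))
```

In a real framework the inner call would launch a batch of simulator runs and the matched metric might be PDR, latency, or inter-packet gap, but the calibrate-compare-adjust loop has this shape.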
The Meeting of Acquaintances: A Cost-efficient Authentication Scheme for Light-weight Objects with Transient Trust Level and Plurality Approach
Wireless sensor networks consist of large numbers of distributed sensor nodes, so potential risks are becoming more and more unpredictable: new entrants pose risks when they move into the secure zone. To build a protective barrier that keeps the system safe and secure, many recent research works apply an initial authentication process. However, the majority of previous articles rely on a Central Authority (CA), which increases computation cost and energy consumption in certain Internet of Things (IoT) scenarios. Hence, in this article, we lessen the dependence on such third parties by proposing an enhanced authentication mechanism that includes key management and evaluation based on past interactions, helping objects join a secured area without any nearby CA. We use a mobility dataset from CRAWDAD collected at the University Politehnica of Bucharest and rebuild it into a new, larger random dataset, which serves as input to a simulated authentication algorithm for observing the communication cost and resource usage of devices. Our proposal makes authentication flexible while remaining strict with unknown devices entering the secured zone. The threshold on the maximum number of friends can be adjusted, based on optimization of the symmetric-key algorithm, to reduce communication costs (our experimental results show fewer than 2,000 bits, compared with previous schemes) and to increase flexibility in resource-constrained environments.
Comment: 27 pages
Is Our Model for Contention Resolution Wrong?
Randomized binary exponential backoff (BEB) is a popular algorithm for
coordinating access to a shared channel. With an operational history exceeding
four decades, BEB is currently an important component of several wireless
standards. Despite this track record, prior theoretical results indicate that
under bursty traffic (1) BEB yields poor makespan and (2) superior algorithms
are possible. To date, the degree to which these findings manifest in practice
has not been resolved.
To address this issue, we examine one of the strongest cases against BEB:
packets that simultaneously begin contending for the wireless channel. Using
Network Simulator 3, we compare against more recent algorithms that are
inspired by BEB, but whose makespan guarantees are superior. Surprisingly, we
discover that these newer algorithms significantly underperform. Through
further investigation, we identify as the culprit a flawed but common
abstraction regarding the cost of collisions. Our experimental results are
complemented by analytical arguments that the number of collisions -- and not
solely makespan -- is an important metric to optimize. We believe that these
findings have implications for the design of contention-resolution algorithms.
Comment: Accepted to the 29th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2017)
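The abstraction at issue can be made concrete with a toy slotted-channel simulation of BEB for packets that all begin contending at once. Note that this model charges one slot per collision, which is precisely the flat collision cost the paper identifies as flawed; real carrier-sensing MACs (and ns-3's models of them) make collisions more expensive. The window cap and parameters are illustrative assumptions.

```python
import random

def beb(n, rng, cap=1024, max_slots=10**6):
    """Slotted BEB for n packets contending simultaneously.

    A slot with exactly one transmitter succeeds; a slot with two or
    more is a collision, after which each collider doubles its window
    and re-draws a uniform backoff. Returns (makespan_slots, collisions).
    """
    windows = [1] * n         # per-packet contention window
    timers = [0] * n          # slots to wait before the next attempt
    done = [False] * n
    slots = collisions = 0
    while not all(done) and slots < max_slots:
        slots += 1
        attempting = [i for i in range(n) if not done[i] and timers[i] == 0]
        for i in range(n):
            if not done[i] and timers[i] > 0:
                timers[i] -= 1
        if len(attempting) == 1:
            done[attempting[0]] = True              # lone sender succeeds
        elif len(attempting) > 1:
            collisions += 1                          # flat one-slot cost
            for i in attempting:
                windows[i] = min(windows[i] * 2, cap)  # double on collision
                timers[i] = rng.randrange(windows[i])  # re-draw backoff
    return slots, collisions

slots, colls = beb(32, random.Random(7))
print(slots, colls)   # makespan in slots, number of collision slots
```

Comparing algorithms under this model versus one where each collision costs several slots (as with real packet transmission times) is enough to see how a lower-collision algorithm can win in practice despite a worse makespan bound.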