Improving Resilience of Communication in Information Dissemination for Time-Critical Applications
Severe weather impacts lives, and in such dire conditions people rely on communication to organize relief and stay in touch with their loved ones. In such situations, cellular network infrastructure\footnote{We refer to cellular network infrastructure as infrastructure for the entirety of this document} might be affected by power outages, link failures, etc. This urges us to look at ad-hoc modes of communication to offload traffic, partially or fully, from the infrastructure, depending on its status.
We take a threefold approach, ranging from the case where the infrastructure is completely unavailable to the case where it has been replaced by a makeshift, low-capacity mobile cellular base station.
First, we look into communication without infrastructure and the timely dissemination of weather alerts specific to geographical areas. We focus on the specific case of floods, as they affect a significant number of people. Due to the nature of the problem, we can utilize the properties of Information Centric Networking (ICN) in this context, namely: i) flexibility and high failure resistance: any node in the network that has the information can satisfy the query; ii) robustness: only the sensor and the car need to communicate; iii) fine-grained, geo-location-specific information dissemination. We analyze how message forwarding using ICN on top of an ad-hoc network compares to an infrastructure-based approach, which is less resilient in the case of disaster. In addition, we compare the performance of different message forwarding strategies in VANETs (Vehicular Ad-hoc Networks) using ICN. Our results show that the ICN strategy outperforms the infrastructure-based approach, being 100 times faster for 63\% of total messages delivered.
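The flexibility property above can be illustrated with a minimal sketch (the function and the naming scheme are our own illustrative assumptions, not the paper's implementation): in ICN, an interest is satisfied by any node whose content store holds the named data, regardless of which host it is.

```python
def satisfy_interest(name, content_stores):
    """ICN-style retrieval sketch: an interest for `name` is answered
    by the first encountered node whose content store holds the data,
    so any cached copy (car, sensor, roadside unit) can serve it."""
    for node, store in content_stores.items():
        if name in store:
            return node, store[name]
    return None  # no copy reachable; the interest would be re-forwarded
```

With geo-scoped names such as a hypothetical "/flood/zoneA", an alert can be served by any vehicle that previously cached it, which is what makes the scheme resilient to infrastructure failure.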
Then we look into the case where the cellular network infrastructure is present but under pressure, due to a rapid increase in the volume of network traffic (as seen during a major event), or where it has been replaced by a low-capacity mobile tower. In this case, we look at offloading as much traffic as possible from the infrastructure to device-to-device communication. However, the host-oriented model of the TCP/IP-based Internet poses challenges to this communication pattern. A scheme that uses an ICN model to fetch content from nearby peers increases the resiliency of the network in cases of outages and disasters. We collected content popularity statistics from social media to create a content request pattern and evaluate our approach through the simulation of realistic urban scenarios. Additionally, we analyze the scenario of large crowds in sports venues. Our simulation results show that we can offload traffic from the backhaul network by up to 51.7\%, suggesting an advantageous path to support the surge in traffic while keeping complexity and cost for the network operator at manageable levels.
Finally, we look at adaptive bit-rate (ABR) streaming, which has contributed significantly to the reduction of video playout stalling, mainly in highly variable bandwidth conditions. ABR clients continue to suffer from variations in bit rate quality over the duration of a streaming session. Similar to stalling, these variations in bit rate quality have a negative impact on the users’ Quality of Experience (QoE). We use a trace from a large-scale CDN to show that such quality changes occur in a significant share of streaming sessions, and we investigate an ABR video segment retransmission approach to reduce the number of such quality changes. As the new HTTP/2 standard is becoming increasingly popular, we also see an increase in the usage of HTTP/2 as an alternative protocol for the transmission of web traffic, including video streaming. Under various network conditions, we conduct a systematic comparison of existing transport layer approaches for HTTP/2 to determine which is best suited for ABR segment retransmissions. Since both protocols are well known to provide a series of improvements over HTTP/1.1, we perform experiments both in controlled environments and over transcontinental Internet links, and find that these benefits also “trickle up” into the application layer for ABR video streaming: HTTP/2 retransmissions can significantly improve the average quality bitrate while simultaneously minimizing bit rate variations over the duration of a streaming session. Taking inspiration from the first two approaches, we account for the resiliency of multi-path communication and further investigate a multi-path, multi-stream approach to ABR streaming, demonstrating that losses on one path of a multi-path connection have very little impact on the other, which increases the throughput and resiliency of communication.
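The segment-retransmission idea can be sketched as follows (the bitrate ladder, thresholds, and function names are our own illustrative assumptions, not the algorithm evaluated in the work): a client re-fetches an already-buffered low-quality segment at a higher bitrate only when the playout buffer and the estimated bandwidth leave enough headroom.

```python
# Minimal sketch of an ABR segment-retransmission decision.
# Ladder values and thresholds are illustrative assumptions.
LADDER = [400, 800, 1600, 3200]  # available bitrates in kbit/s

def should_retransmit(buffered_kbps, buffer_s, est_bw_kbps,
                      segment_s=4, min_buffer_s=10):
    """Return the higher bitrate at which to re-fetch a buffered
    segment, or None if retransmission is too risky."""
    better = [b for b in LADDER if b > buffered_kbps]
    affordable = [b for b in better if b <= est_bw_kbps]
    if not affordable or buffer_s < min_buffer_s:
        return None  # keep the buffered quality
    target = max(affordable)
    # the replacement download must fit within the buffer headroom
    download_s = target * segment_s / est_bw_kbps
    return target if download_s < buffer_s - min_buffer_s else None
```

Re-fetching only when headroom exists is what keeps the retransmission from causing the very stalls ABR is meant to avoid.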
Traffic-Adaptive and Link-Quality-Aware Communication in Wireless Sensor Networks
This paper is a summary of the main contributions of the PhD thesis published in [1]. The main research contributions of the thesis are driven by the research question of how to design simple, yet efficient and robust, run-time adaptive resource allocation schemes within the communication stack of Wireless Sensor Network (WSN) nodes. The thesis addresses several problem domains with contributions on different layers of the WSN communication stack. The main contributions can be summarized as follows: First, a novel run-time adaptive MAC protocol is introduced, which stepwise allocates the power-hungry radio interface in an on-demand manner when the encountered traffic load requires it. Second, the thesis outlines a methodology for robust, reliable, and accurate software-based energy estimation, calculated at network runtime on the sensor node itself. Third, the thesis evaluates several Forward Error Correction (FEC) strategies to adaptively allocate the correctional power of Error Correcting Codes (ECCs) to cope with temporally and spatially variable bit error rates. Fourth, in the context of TCP-based communications in WSNs, the thesis evaluates distributed caching and local retransmission strategies to overcome the performance-degrading effects of packet corruption and transmission failures when transmitting data over multiple hops. The performance of all developed protocols is evaluated on a self-developed real-world WSN testbed, achieving superior performance over selected existing approaches, especially where traffic load and channel conditions are subject to rapid variations over time.
On Maximizing the Efficiency of Multipurpose WSNs Through Avoidance of Over- or Under-Provisioning of Information
A wireless sensor network (WSN) is a distributed collection of sensor nodes, which are resource constrained and capable of operating with minimal user attendance. The core function of a WSN is to sample physical phenomena and their environment and transport the information of interest, such as current status or events, as required by the application. Furthermore, the operating conditions and/or user requirements of WSNs are often desired to be evolvable, either driven by changes of the monitored phenomena or by the properties of the WSN itself. Consequently, a key objective for setting up/configuring WSNs is to provide the desired information subject to user defined quality requirements (accuracy, reliability, timeliness etc.), while considering their evolvability at the same time.
The current state of the art only addresses the functional blocks of sampling and information transport in isolation; approaches targeting one block assume the respective other block to be perfect in maintaining the highest possible information contribution. In addition, some approaches concentrate on just a few information attributes, such as accuracy, and ignore others (e.g., reliability and timeliness). Existing research targeting these blocks usually tries to maximize information quality (accuracy, reliability, timeliness, etc.) regardless of user requirements, using more resources and leading to faster energy depletion. However, we argue that it is not always necessary to provide the highest possible information quality. In fact, it is essential to avoid under- or over-provisioning of information in order to save valuable resources such as energy while just satisfying evolvable user requirements. More precisely, we show the interdependence of the different user requirements and how to co-design them in order to tune the level of provisioning.
To discern the fundamental issues dictating the tunable co-design in WSNs, this thesis models and co-designs the sampling accuracy, information transport reliability, and timeliness, and compares existing techniques. We highlight the key problems of existing techniques and provide solutions that achieve the desired application requirements without under- or over-provisioning of information.
Our first research direction is to provide tunable information transport. We show that it is possible to drastically improve efficiency while satisfying evolvable user requirements on reliability and timeliness. In this regard, we provide a novel timeliness model and show the tradeoff between reliability and timeliness. In addition, we show that reliability and timeliness can work in composition to maximize efficiency in information transport. Second, we consider the sampling and information transport co-design, restricted to the attributes of spatial accuracy and transport reliability. We provide a mathematical model and then show the optimization of the sampling and information transport co-design. The approach is based on optimally choosing the number of samples in order to minimize the number of retransmissions in the information transport while maintaining the required reliability. Third, we consider representing the physical phenomena accurately while optimizing network performance. Therefore, we jointly model accuracy, reliability, and timeliness, and then derive the optimal combination of sampling and information transport. We provide an optimized model to choose the right representative sensor nodes to describe the phenomena and highlight the tunable co-design of sampling and information transport by avoiding over- or under-provisioning of information.
Our simulation and experimental results show that the proposed tunable co-design supports evolving user requirements, copes with dynamic network properties, and outperforms state-of-the-art solutions.
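The under-/over-provisioning tradeoff for transport reliability admits a small worked example (the model of independent loss per attempt is a standard textbook assumption, not the thesis's full model): the smallest number of transmissions k satisfying 1 - (1 - p)^k >= r meets the reliability target r exactly; sending more over-provisions energy, sending fewer under-provisions reliability.

```python
import math

def min_tx_for_reliability(p_success, r_target):
    """Smallest k such that at least one of k independent attempts,
    each succeeding with probability p_success, succeeds with
    probability >= r_target: solve 1 - (1 - p)^k >= r for k."""
    if not 0 < p_success <= 1 or not 0 <= r_target < 1:
        raise ValueError("need 0 < p <= 1 and 0 <= r < 1")
    if p_success == 1.0:
        return 1
    k = math.ceil(math.log(1 - r_target) / math.log(1 - p_success))
    return max(1, k)
```

For example, with a 50% per-attempt success probability, a 90% reliability target needs four transmissions, while three would fall short (87.5%).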
Adaptive epidemic dissemination as a finite-horizon optimal stopping problem
Wireless ad hoc networks are characterized by their limited capabilities and their routine deployment in unfavorable environments. This creates the strong requirement to regulate energy expenditure. We present a scheme to regulate energy cost through optimized transmission scheduling in a noisy epidemic dissemination environment. Building on the intrinsically cross-layer nature of the adaptive epidemic dissemination process, we strive to deliver an optimized mechanism where energy cost is regulated without compromising the network infection. Improvement of data freshness and applicability to routing are also investigated. Extensive simulations are used to support our proposal.
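The finite-horizon framing can be made concrete with a small backward-induction sketch (the gain sequence and cost are our own illustrative assumptions, not the paper's model): at each of T slots a node may transmit, paying an energy cost against the expected infection gain, or stop for good; working backwards from the horizon yields the optimal stopping slot.

```python
def optimal_stopping_time(gains, cost):
    """Finite-horizon optimal stopping by backward induction.
    gains[t] = expected infection gain of transmitting in slot t,
    cost     = energy cost per transmission.
    Returns (first slot where stopping is optimal, value of policy)."""
    T = len(gains)
    V = [0.0] * (T + 1)          # V[t] = value of acting optimally from slot t
    stop = [False] * T
    for t in range(T - 1, -1, -1):
        continue_value = gains[t] - cost + V[t + 1]
        V[t] = max(0.0, continue_value)   # stopping yields 0 onwards
        stop[t] = continue_value <= 0.0
    first_stop = next((t for t, s in enumerate(stop) if s), T)
    return first_stop, V[0]
```

With gains decaying as the infection spreads (fewer susceptible neighbours remain), the rule naturally stops transmitting once the marginal gain no longer covers the energy cost.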
CoAP Infrastructure for IoT
The Internet of Things (IoT) can be seen as a large-scale network of billions of smart devices. IoT devices often exchange data in small but numerous messages, which requires IoT services to be more scalable and reliable than ever. Traditional protocols known from the Web world do not fit well in the constrained environments these devices operate in. Therefore, many lightweight protocols specialized for the IoT have been studied, among which the Constrained Application Protocol (CoAP) stands out for its well-known REST paradigm and easy integration with the existing Web. At the same time, new paradigms such as Fog Computing have emerged, attempting to avoid the centralized bottleneck in IoT services by moving computation to the edge of the network. Since a Fog node essentially belongs to a relatively constrained environment, CoAP fits in well. Among the many attempts to build scalable and reliable systems, Erlang, a typical concurrency-oriented programming (COP) language, has been battle-tested in the telecom industry, which has requirements similar to those of the IoT. To explore the possibility of applying Erlang, and COP in general, to the IoT, this thesis presents an Erlang-based CoAP server/client prototype, ecoap, with a flexible concurrency model that can scale up to an unconstrained environment like the Cloud and scale down to a constrained environment like an embedded platform. The flexibility of the presented server renders the same architecture applicable from Fog to Cloud. To evaluate its performance, the proposed server is compared with the mainstream CoAP implementation on an Amazon Web Services (AWS) Cloud instance and a Raspberry Pi 3, representing the unconstrained and constrained environments respectively. The ecoap server achieves comparable throughput, lower latency, and in general scales better than the other implementation both in the Cloud and on the Raspberry Pi. The thesis yields positive results and demonstrates the value of the philosophy of Erlang in the IoT space.
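The one-lightweight-process-per-message idea behind ecoap (which is implemented in Erlang) can be approximated in Python with asyncio; this sketch is our own analogue, not ecoap's code, and the reply format is a placeholder rather than real CoAP parsing.

```python
import asyncio

class CoapLikeServer(asyncio.DatagramProtocol):
    """UDP server that spawns one cheap asyncio task per incoming
    datagram, mirroring Erlang's one-process-per-request model."""

    def connection_made(self, transport):
        self.transport = transport

    def datagram_received(self, data, addr):
        # each message gets its own task; a crash in one handler
        # cannot take down the others
        asyncio.get_running_loop().create_task(self.handle(data, addr))

    async def handle(self, data, addr):
        reply = b"ACK " + data[:4]   # placeholder for real CoAP parsing
        self.transport.sendto(reply, addr)
```

In a real deployment this protocol would be bound with `loop.create_datagram_endpoint(CoapLikeServer, local_addr=(host, 5683))`; the per-message isolation, not the transport setup, is the point of the sketch.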
Many-to-many data aggregation scheduling in wireless sensor networks with two sinks
Traditionally, wireless sensor networks (WSNs) have been deployed with a single sink. Due to the emergence of sophisticated applications, WSNs may require more than one sink. Moreover, deploying more than one sink may prolong the network lifetime and address fault tolerance issues. Several protocols have been proposed for WSNs with multiple sinks; however, most of them are routing protocols. In contrast, our main contribution in this paper is the development of a distributed data aggregation scheduling (DAS) algorithm for WSNs with two sinks. We also propose a distributed energy-balancing algorithm to balance the energy consumption of the aggregators. The energy-balancing algorithm first forms trees rooted at nodes termed virtual sinks and then balances the number of children at a given level to even out the energy consumption. Subsequently, the DAS algorithm takes the resulting balanced tree and assigns contiguous slots to sibling nodes, to avoid unnecessary energy waste due to frequent active-sleep transitions. We prove a number of theoretical results and the correctness of the algorithms. Through simulation and testbed experiments, we show the correctness and performance of our algorithms.
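The contiguous-slot idea can be sketched as follows (the tree representation and slot bookkeeping are our own illustrative assumptions, not the paper's DAS algorithm): siblings under the same aggregator receive consecutive TDMA slots, so the parent wakes once for a contiguous block instead of toggling repeatedly between sleep and active states.

```python
def assign_contiguous_slots(children):
    """children: dict mapping a parent node to the list of its children.
    Assign each child a TDMA slot so that siblings occupy a contiguous
    block; returns node -> slot.  A full DAS algorithm would also order
    the levels bottom-up; here we only show the contiguity property."""
    slots, next_slot = {}, 0
    for parent, kids in children.items():
        for kid in kids:          # siblings take consecutive slot numbers
            slots[kid] = next_slot
            next_slot += 1
    return slots
```

Because every sibling group maps to one consecutive slot range, each aggregator needs only a single wake-up interval per round to collect all of its children's reports.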
Improving the Performance of Wireless LANs
This book quantifies the key factors of WLAN performance and describes methods for improvement. It provides theoretical background and empirical results for the optimum planning and deployment of indoor WLAN systems, explaining the fundamentals while supplying guidelines for design, modeling, and performance evaluation. It discusses environmental effects on WLAN systems, protocol redesign for routing and MAC, and traffic distribution; examines emerging and future network technologies; and includes radio propagation and site measurements, simulations for various network design scenarios, numerous illustrations, practical examples, and learning aids.
Highly intensive data dissemination in complex networks
This paper presents a study of data dissemination in unstructured Peer-to-Peer (P2P) network overlays. The absence of structure in unstructured overlays eases network management, at the cost of non-optimal mechanisms for spreading messages in the network. Thus, dissemination schemes must be employed that cover a large portion of the network with high probability (e.g.~gossip-based approaches). We identify the principal metrics, provide a theoretical model, and perform the assessment using a high-performance simulator based on a parallel and distributed architecture. A main point of this study is that our simulation model considers technical implementation details, such as the use of caching and Time To Live (TTL) in message dissemination, that are usually neglected in simulations due to the additional overhead they cause. Outcomes confirm that these technical details have an important influence on the performance of dissemination schemes, and that the studied schemes are quite effective at spreading information in P2P overlay networks, whatever their topology. Moreover, the practical usage of such dissemination mechanisms requires the fine tuning of many parameters, the choice between different network topologies, and the assessment of behaviors such as free riding. All this can be done only with efficient simulation tools that support both the network design phase and, in some cases, runtime operation.
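The role of caching and TTL can be illustrated with a minimal push-gossip sketch (the fanout, TTL, and graph model are our own illustrative assumptions, not the paper's simulator): a node receiving a message for the first time records it in its cache, so later duplicates are dropped, and forwards it to a few random neighbours until the TTL is exhausted.

```python
import random
from collections import deque

def gossip_coverage(adj, source, fanout=3, ttl=4, seed=42):
    """Push-gossip on an unstructured overlay.  adj maps each node to
    its neighbour list.  `holders` stands in for the per-node caches:
    a node already in it drops the duplicate instead of forwarding.
    Returns the fraction of nodes reached."""
    rng = random.Random(seed)
    holders = {source}
    queue = deque([(source, ttl)])
    while queue:
        node, hops = queue.popleft()
        if hops == 0:
            continue                      # TTL exhausted, stop forwarding
        k = min(fanout, len(adj[node]))
        for target in rng.sample(adj[node], k):
            if target not in holders:     # cached copy => duplicate dropped
                holders.add(target)
                queue.append((target, hops - 1))
    return len(holders) / len(adj)
```

Even this toy model shows why the two details matter: the cache bounds redundant transmissions, while the TTL bounds how far (and at what cost) a message can travel.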
- …