
    Markov Decision Processes with Applications in Wireless Sensor Networks: A Survey

    Wireless sensor networks (WSNs) consist of autonomous and resource-limited devices. The devices cooperate to monitor one or more physical phenomena within an area of interest. WSNs operate as stochastic systems because of randomness in the monitored environments. For long service time and low maintenance cost, WSNs require adaptive and robust methods to address data exchange, topology formulation, resource and power optimization, sensing coverage and object detection, and security challenges. In these problems, sensor nodes must make optimized decisions from a set of accessible strategies to achieve design goals. This survey reviews numerous applications of the Markov decision process (MDP) framework, a powerful decision-making tool for developing adaptive algorithms and protocols for WSNs. Furthermore, various solution methods are discussed and compared to serve as a guide for using MDPs in WSNs.
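    As a concrete, deliberately simplified illustration of how the MDP framework applies to a WSN design problem, the sketch below runs value iteration for a hypothetical duty-cycling decision: a node with a small discretized battery chooses between sensing and sleeping. The state space, transition probabilities, rewards, and discount factor are illustrative assumptions, not values taken from the survey.

```python
# Minimal value-iteration sketch for a toy WSN duty-cycling MDP.
# States: battery-level buckets; actions: "sense" (earns reward, drains energy)
# or "sleep" (no reward, may harvest energy). All numbers are assumptions.

import numpy as np

states = range(4)            # battery levels 0 (empty) .. 3 (full)
actions = ["sleep", "sense"]
gamma = 0.95                 # discount factor

def transition(s, a):
    """Return a list of (probability, next_state, reward) triples."""
    if a == "sense":
        if s == 0:                        # empty battery: sensing impossible
            return [(1.0, 0, -1.0)]
        return [(0.8, s - 1, 1.0),        # sensing usually drains one level
                (0.2, s, 1.0)]
    # sleep: harvest one energy level with some probability
    return [(0.6, min(s + 1, 3), 0.0),
            (0.4, s, 0.0)]

V = np.zeros(len(states))
for _ in range(500):                      # value iteration
    V_new = np.empty_like(V)
    for s in states:
        V_new[s] = max(sum(p * (r + gamma * V[s2])
                           for p, s2, r in transition(s, a))
                       for a in actions)
    if np.max(np.abs(V_new - V)) < 1e-6:
        V = V_new
        break
    V = V_new

policy = {s: max(actions, key=lambda a: sum(p * (r + gamma * V[s2])
                                            for p, s2, r in transition(s, a)))
          for s in states}
print(V, policy)
```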

    TCP performance enhancement in wireless networks via adaptive congestion control and active queue management

    The transmission control protocol (TCP) exhibits poor performance when used in error-prone wireless networks. Remedy to this problem has been an active research area. However, a widely accepted and adopted solution is yet to emerge. Difficulties of an acceptable solution lie in the areas of compatibility, scalability, computational complexity and the involvement of intermediate routers and switches. This dissertation reviews the current state-of-the-art solutions to TCP performance enhancement, and pursues an end-to-end solution framework to the problem. The most noticeable cause of the performance degradation of TCP in wireless networks is the higher packet loss rate as compared to that in traditional wired networks. Packet loss type differentiation has been the focus of many proposed TCP performance enhancement schemes. Studies conducted by this dissertation research suggest that besides the standard TCP's inability to discriminate congestion packet losses from losses related to wireless link errors, the standard TCP's additive increase and multiplicative decrease (AIMD) congestion control algorithm itself needs to be redesigned to achieve better performance in wireless, and particularly, high-speed wireless networks. This dissertation proposes a simple, efficient, and effective end-to-end solution framework that enhances TCP's performance through techniques of adaptive congestion control and active queue management. By end-to-end, we mean a solution that does not require routers to be wireless-aware or wireless-specific. TCP-Jersey has been introduced as an implementation of the proposed solution framework, and its performance metrics have been evaluated through extensive simulations. TCP-Jersey consists of an adaptive congestion control algorithm at the source by means of the source's achievable rate estimation (ARE), an adaptive filter of packet inter-arrival times; a congestion indication algorithm at the links (i.e., AQM) by means of packet marking; and an effective loss differentiation algorithm at the source by careful examination of the congestion marks carried by the duplicate acknowledgment packets (DUPACK). Several improvements to the proposed TCP-Jersey have been investigated, including a more robust ARE algorithm, a less computationally intensive threshold marking algorithm as the AQM link algorithm, and a more stable congestion indication function based on virtual capacity at the link; performance results have been presented and analyzed via extensive simulations of various network configurations. Stability analysis of the proposed ARE-based additive increase and adaptive decrease (AJAD) congestion control algorithm has been conducted and the analytical results have been verified by simulations. Performance of TCP-Jersey has been compared to that of a perfect, but not practical, TCP scheme, and encouraging results have been observed. Finally, the framework of TCP-Jersey's source algorithm has been extended and generalized for rate-based congestion control, as opposed to TCP's window-based congestion control, to provide a design platform for applications, such as real-time multimedia, that do not use TCP as the transport protocol yet need to control network congestion as well as combat packet losses in wireless networks.
In conclusion, the framework architecture presented in this dissertation, which combines adaptive congestion control and active queue management to address the TCP performance degradation problem in wireless networks, has been shown to be a promising answer to the problem owing to its simple design philosophy, complete compatibility with current TCP/IP and AQM practice, end-to-end architecture for scalability, high effectiveness, and low computational overhead. The proposed implementation of the solution framework, namely TCP-Jersey, is a modification of the standard TCP protocol rather than a completely new design of the transport protocol. It is an end-to-end approach to the performance degradation problem, since it does not require split-mode connection establishment and maintenance using special wireless-aware software agents at the routers. The proposed solution also differs from other solutions that rely on link layer error notifications for packet loss differentiation. It is also unique among proposed end-to-end solutions in that it differentiates packet losses attributed to wireless link errors from congestion-induced packet losses directly from the explicit congestion indication marks in the DUPACK packets, rather than inferring the loss type from packet delay or delay jitter as in many other proposals, or relying on computationally expensive offline training of a classification model (e.g., an HMM) or on a Bayesian estimation/detection process that requires estimates of the a priori loss probability distributions of the different loss types. The proposed solution is also scalable and fully compatible with current practice in Internet congestion control and queue management, while adding a loss type differentiation function that effectively enhances TCP's performance over error-prone wireless networks. Limitations of the proposed solution architecture and areas for future research are also addressed.
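    The two source-side ingredients described above, an achievable-rate estimate driven by packet inter-arrival times and loss differentiation based on congestion marks carried in duplicate ACKs, can be sketched as follows. This is a schematic illustration under assumed filter constants and window updates, not the dissertation's exact TCP-Jersey algorithm.

```python
# Illustrative sketch (not TCP-Jersey's exact equations) of a sender that
# (1) smooths an achievable-rate estimate from ACK inter-arrival times and
# (2) reacts to a loss according to whether the duplicate ACKs carry a
# congestion mark (congestion loss) or not (wireless/random loss).

class JerseyLikeSender:
    def __init__(self, mss=1460, alpha=0.9):
        self.cwnd = 10.0          # congestion window, in packets
        self.rate = 0.0           # smoothed achievable-rate estimate (bytes/s)
        self.alpha = alpha        # EWMA weight (assumed value)
        self.mss = mss
        self.last_ack_time = None

    def on_ack(self, now, acked_bytes):
        """Update the rate estimate and grow the window additively."""
        if self.last_ack_time is not None:
            dt = max(now - self.last_ack_time, 1e-6)
            sample = acked_bytes / dt
            self.rate = self.alpha * self.rate + (1 - self.alpha) * sample
        self.last_ack_time = now
        self.cwnd += 1.0 / self.cwnd          # additive increase per ACK

    def on_dupack_loss(self, congestion_marked, rtt):
        """Differentiate the loss type using the congestion mark."""
        if congestion_marked:
            # congestion loss: shrink the window towards the estimated rate
            self.cwnd = max(2.0, self.rate * rtt / self.mss)
        else:
            # wireless link-error loss: retransmit but keep the window
            pass
```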

    Activity-Aware Sensor Networks for Smart Environments

    The efficient design of Wireless Sensor Network protocols, together with intelligent Machine Learning algorithms, has led to advancements in various systems and applications for Smart Environments. By definition, Smart Environments are the physical worlds of human daily life, seamlessly embedded with tiny smart devices equipped with sensors, actuators and computational elements. Since the human user is a key component of Smart Environments, human motion activity patterns are of key importance in building sensor network systems and applications for Smart Environments. Motivated by this, this thesis focuses on human motion activity-aware sensor networks for Smart Environments. The main contributions of this thesis lie in two important aspects: (i) designing event activity context-aware sensor networks for efficient performance optimization as well as resource usage; and (ii) using the collective data of binary motion sensing sensor networks for device-free real-time tracking of multiple users. Firstly, I describe our proposed event activity context-aware sensor network protocols and system design for Smart Environments. The main motivation behind this work is as follows. A sensor network, unlike a traditional communication network, provides a high degree of visibility into the environment's physical processes. Therefore its operation is driven by the activities in the environment. In long-term operation, these activities usually show certain patterns which can be learned and effectively utilized to optimize network design. In this thesis I have designed several novel protocols: (i) ActSee for activity-aware radio duty-cycling, (ii) EAR for activity-aware and energy-balanced routing, and (iii) ActiSen, a complete working system with protocol suites for activity-aware sensing, duty-cycling and routing. Secondly, I have proposed and designed FindingHuMo (Finding Human Motion), a Machine Learning based real-time user tracking algorithm for Smart Environments using Sensor Networks. This work has been motivated by the increasing adoption of sensor network enabled Ubiquitous Computing in key Smart Environment applications, like Smart Healthcare. Our proposed FindingHuMo protocol and system can perform device-free tracking of multiple (unknown and variable numbers of) users in hallway environments, just from non-invasive and anonymous binary motion sensor data.
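    To make the FindingHuMo data model concrete, the toy sketch below links anonymous binary firings of hallway motion sensors into per-user tracks with a greedy heuristic over an assumed adjacency graph. FindingHuMo itself is a machine-learning algorithm; the layout, the maximum inter-sensor gap, and the linking rule here are purely illustrative assumptions.

```python
# Toy sketch of device-free tracking from anonymous binary motion-sensor
# firings in a hallway. Sensors lie on a known adjacency graph; a firing is
# (timestamp, sensor_id). This greedy linker only illustrates the data model;
# it is not the FindingHuMo algorithm.

ADJACENT = {                     # hypothetical hallway layout
    "s1": {"s2"}, "s2": {"s1", "s3"}, "s3": {"s2", "s4"}, "s4": {"s3"},
}
MAX_GAP = 5.0                    # assumed max seconds between adjacent firings

def link_tracks(firings):
    """firings: list of (time, sensor) sorted by time -> list of tracks."""
    tracks = []                  # each track is a list of (time, sensor)
    for t, s in firings:
        best = None
        for track in tracks:
            last_t, last_s = track[-1]
            reachable = s in ADJACENT.get(last_s, set()) | {last_s}
            if reachable and 0 < t - last_t <= MAX_GAP:
                if best is None or track[-1][0] > best[-1][0]:
                    best = track          # prefer the most recently updated track
        if best is None:
            tracks.append([(t, s)])       # a new, previously unseen user
        else:
            best.append((t, s))
    return tracks

print(link_tracks([(0.0, "s1"), (1.2, "s2"), (2.0, "s4"), (2.8, "s3")]))
```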

    Stochastic Optimization and Machine Learning Modeling for Wireless Networking

    In recent years, the telecommunications industry has seen increasing interest in the development of advanced solutions that enable communicating nodes to exchange large amounts of data. Indeed, well-known applications such as VoIP, audio streaming, video on demand, real-time surveillance systems, safety vehicular requirements, and remote computing have increased the demand for the efficient generation, utilization, management and communication of larger and larger data quantities. New transmission technologies have been developed to permit more efficient and faster data exchanges, including multiple input multiple output architectures and software defined networking: as an example, the next generation of mobile communication, known as 5G, is expected to provide data rates of tens of megabits per second for tens of thousands of users and only 1 ms latency. In order to achieve such demanding performance, these systems need to effectively model the considerable level of uncertainty related to fading transmission channels, interference, or the presence of noise in the data. In this thesis, we will present how different approaches can be adopted to model these kinds of scenarios, focusing on wireless networking applications. In particular, the first part of this work will show how stochastic optimization models can be exploited to design energy management policies for wireless sensor networks. Traditionally, transmission policies are designed to reduce the total amount of energy drawn from the batteries of the devices; here, we consider energy harvesting wireless sensor networks, in which each device is able to scavenge energy from the environment and charge its battery with it. In this case, the goal of the optimal transmission policies is to efficiently manage the energy harvested from the environment, avoiding both energy outage (i.e., no residual energy in a battery) and energy overflow (i.e., the impossibility of storing scavenged energy when the battery is already full). In the second part of this work, we will explore the adoption of machine learning techniques to tackle a number of common wireless networking problems. These algorithms are able to learn from and make predictions on data, avoiding the need to follow limited static program instructions: models are built from sample inputs, thus allowing for data-driven predictions and decisions. In particular, we will first design an on-the-fly prediction algorithm for the expected time of arrival of WiFi transmissions. This predictor only exploits the network parameters available at each receiving node and does not require additional knowledge from the transmitter, hence it can be deployed without modifying existing standard transmission protocols. Secondly, we will investigate the use of particular neural network instances known as autoencoders for the compression of biosignals, such as electrocardiography and photoplethysmography sequences. A lightweight lossy compressor will be designed that can be deployed on wearable, battery-equipped devices with limited computational power. Thirdly, we will propose a predictor for the long-term channel gain in a wireless network. Differently from other works in the literature, this predictor will only exploit past channel samples, without resorting to additional information such as GPS data. An accurate estimate of this gain would enable, e.g., efficient resource allocation and the prediction of future handover procedures.
Finally, although not strictly related to wireless networking scenarios, we will show how deep learning techniques can be applied to the field of autonomous driving. This final section deals with state-of-the-art machine learning solutions, showing how these techniques can considerably outperform traditional approaches.
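    As a point of reference for the channel-gain prediction task described above (predicting from past channel samples only, without GPS or other side information), the sketch below fits a plain order-p autoregressive model by least squares. It is a generic baseline under an assumed model order and synthetic fading data, not the predictor developed in the thesis.

```python
# Minimal baseline: predict the next channel gain from its own past samples
# with an order-p linear autoregressive model fitted by least squares.
# Model order and synthetic data are assumptions; this is not the thesis's
# predictor, only a reference for the task it addresses.

import numpy as np

def fit_ar(gains_db, p=8):
    """Fit y[t] ~ w . (y[t-1..t-p], 1) on a 1-D array of past gains (dB)."""
    y = np.asarray(gains_db, dtype=float)
    X = np.column_stack([y[p - k - 1:len(y) - k - 1] for k in range(p)])
    X = np.column_stack([X, np.ones(len(X))])        # bias term
    w, *_ = np.linalg.lstsq(X, y[p:], rcond=None)
    return w

def predict_next(gains_db, w):
    p = len(w) - 1
    x = np.concatenate([np.asarray(gains_db[-p:])[::-1], [1.0]])
    return float(x @ w)

# toy usage on a synthetic slowly-fading channel
rng = np.random.default_rng(0)
t = np.arange(2000)
gains = -60 + 5 * np.sin(2 * np.pi * t / 200) + rng.normal(0, 0.5, t.size)
w = fit_ar(gains[:1500], p=8)
print(predict_next(gains[:1500], w), gains[1500])
```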

    Channel Access in Wireless Networks: Protocol Design of Energy-Aware Schemes for the IoT and Analysis of Existing Technologies

    The design of channel access policies has been an object of study since the deployment of the first wireless networks, as the Medium Access Control (MAC) layer is responsible for coordinating transmissions to a shared channel and plays a key role in the network performance. While the original target was the system throughput, over the years the focus has shifted to communication latency, Quality of Service (QoS) guarantees, energy consumption, spectrum efficiency, and any combination of such goals. The basic mechanisms for using a shared channel, such as ALOHA and TDMA- and FDMA-based policies, were introduced decades ago. Nonetheless, the continuous evolution of wireless networks and the emergence of new communication paradigms demand the development of new strategies to adapt and optimize the standard approaches so as to satisfy the requirements of applications and devices. This thesis proposes several channel access schemes for novel wireless technologies, in particular Internet of Things (IoT) networks, the Long-Term Evolution (LTE) cellular standard, and mmWave communication with the IEEE802.11ad standard. The first part of the thesis concerns energy-aware channel access policies for IoT networks, which typically include several battery-powered sensors. In scenarios with energy restrictions, traditional protocols that do not consider the energy consumption may lead to the premature death of the network and unreliable performance expectations. The proposed schemes show the importance of accurately characterizing all the sources of energy consumption (and inflow, in the case of energy harvesting), which need to be included in the protocol design. In particular, the schemes presented in this thesis exploit data processing and compression techniques to trade off QoS for lifetime. We investigate contention-free and contention-based channel access policies for different scenarios and application requirements. While the energy-aware schemes proposed for IoT networks are based on a clean-slate approach that is agnostic of the communication technology used, the second part of the thesis focuses on the LTE and IEEE802.11ad standards. As regards LTE, the study proposed in this thesis shows how to use machine-learning techniques to infer the collision multiplicity in the channel access phase, information that can be used to understand when the network is congested and to improve the contention resolution mechanism. This is especially useful for massive access scenarios; in recent years, in fact, the research community has been investigating the use of LTE for Machine-Type Communication (MTC). The IEEE802.11ad standard, instead, provides a hybrid MAC layer with contention-based and contention-free scheduled allocations, and a dynamic channel time allocation mechanism built on top of such a schedule. Although this hybrid scheme is expected to meet heterogeneous requirements, it is still not clear how to build a schedule based on the various traffic flows and their demands. A mathematical model is necessary to understand the performance and limits of the possible types of allocations and to guide the scheduling process. In this thesis, we propose a model for the contention-based access periods which is aware of the interleaving of the available channel time with contention-free allocations.
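    The following toy slotted-ALOHA simulation illustrates why energy consumption has to be modelled alongside throughput when designing contention-based access for battery-powered IoT nodes: as the transmission probability grows, collisions inflate the energy spent per delivered packet. The node count, transmission probabilities, and per-slot energy figures are illustrative assumptions, not parameters from the thesis.

```python
# Toy slotted-ALOHA simulation: throughput vs. energy per delivered packet
# for battery-powered nodes. All energy figures are illustrative assumptions.

import random

def simulate(n_nodes=50, p_tx=0.02, slots=100_000,
             e_tx=1.0, e_listen=0.05, seed=1):
    rng = random.Random(seed)
    delivered, energy = 0, 0.0
    for _ in range(slots):
        senders = [i for i in range(n_nodes) if rng.random() < p_tx]
        # transmitting nodes pay e_tx, idle nodes pay a smaller listening cost
        energy += len(senders) * e_tx + (n_nodes - len(senders)) * e_listen
        if len(senders) == 1:             # success only without a collision
            delivered += 1
    return delivered / slots, energy / max(delivered, 1)

for p in (0.005, 0.02, 0.08):
    thr, e_per_pkt = simulate(p_tx=p)
    print(f"p_tx={p:.3f}  throughput={thr:.3f}  energy/packet={e_per_pkt:.2f}")
```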

    Stochastic Event-Based Control and Estimation

    Digital controllers are traditionally implemented using periodic sampling, computation, and actuation events. As more control systems are implemented to share limited network and CPU bandwidth with other tasks, it is becoming increasingly attractive to use some form of event-based control instead, where precious events are used only when needed. Forms of event-based control have been used in practice for a very long time, but mostly in an ad-hoc way. Though optimal solutions to most event-based control problems are unknown, it should still be viable to compare performance between suggested approaches in a reasonable manner. This thesis investigates an event-based variation on the stochastic linear-quadratic (LQ) control problem, with a fixed cost per control event. The sporadic constraint of an enforced minimum inter-event time is introduced, yielding a mixed continuous-/discrete-time formulation. The quantitative trade-off between event rate and control performance is compared between periodic and sporadic control. Example problems for first-order plants are investigated, for a single control loop and for multiple loops closed over a shared medium. Path constraints are introduced to model and analyze higher-order event-based control systems. This component-based approach to stochastic hybrid systems makes it possible to express continuous- and discrete-time dynamics, state and switching constraints, control laws, and stochastic disturbances in the same model. Sum-of-squares techniques are then used to find bounds on control objectives using convex semidefinite programming. The thesis also considers state estimation for discrete-time linear stochastic systems from measurements with convex set uncertainty. The Bayesian observer is considered given log-concave process disturbances and measurement likelihoods. Strong log-concavity is introduced, and it is shown that the observer preserves log-concavity and propagates strong log-concavity like inverse covariance in a Kalman filter. A recursive state estimator is developed for systems with both stochastic and set-bounded process and measurement noise terms. A time-varying linear filter gain is optimized using convex semidefinite programming and ellipsoidal over-approximation, given a relative weight on the two kinds of error.
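    The trade-off between event rate and control performance discussed above can be illustrated with a toy discrete-time simulation of a noise-driven first-order (integrator) plant, comparing periodic resets with sporadic, threshold-triggered resets subject to a minimum inter-event time and a fixed cost per event. The model and all constants are illustrative assumptions, not the thesis's continuous-time formulation.

```python
# Toy comparison of periodic vs. event-based (sporadic) control of a
# first-order integrator driven by noise, with a fixed cost per control event
# and an enforced minimum inter-event time. All constants are assumptions.

import random

def run(policy, steps=200_000, threshold=1.0, period=10, t_min=3,
        event_cost=1.0, seed=7):
    rng = random.Random(seed)
    x, cost, events, since_event = 0.0, 0.0, 0, t_min
    for k in range(steps):
        x += rng.gauss(0.0, 0.3)             # integrator plus process noise
        cost += x * x                        # quadratic state penalty
        since_event += 1
        if policy == "periodic":
            trigger = (k % period == 0)
        else:                                # sporadic: threshold + min gap
            trigger = abs(x) > threshold and since_event >= t_min
        if trigger:
            x, events, since_event = 0.0, events + 1, 0   # impulse reset
            cost += event_cost               # fixed cost per control event
    return cost / steps, events / steps

for policy in ("periodic", "sporadic"):
    avg_cost, rate = run(policy)
    print(f"{policy:9s}  avg cost={avg_cost:.3f}  event rate={rate:.3f}")
```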

    On feedback-based rateless codes for data collection in vehicular networks

    The ability to transfer data reliably and with low delay over an unreliable service is intrinsic to a number of emerging technologies, including digital video broadcasting, over-the-air software updates, public/private cloud storage, and, recently, wireless vehicular networks. In particular, modern vehicles incorporate tens of sensors that provide vital sensor information to electronic control units (ECUs). In the current architecture, vehicle sensors are connected to ECUs via physical wires, which increase the cost, weight and maintenance effort of the car, especially as the number of electronic components keeps increasing. To mitigate the issues with physical wires, wireless sensor networks (WSNs) have been contemplated to replace the current wires with wireless links, making modern cars cheaper, lighter, and more efficient. However, the ability to reliably communicate with the ECUs is complicated by the dynamic channel properties that the car experiences as it travels through areas with different radio interference patterns, such as urban versus highway driving, or even different road quality, which may physically perturb the wireless sensors. This thesis develops a suite of reliable and efficient communication schemes built upon feedback-based rateless codes, with a target application of vehicular networks. In particular, we first investigate the feasibility of multi-hop networking for intra-car WSNs, and illustrate the potential gains of using the Collection Tree Protocol (CTP), the current state of the art in multi-hop data aggregation. Our results demonstrate, for example, that the packet delivery rate of a node using a single-hop topology protocol can be below 80% in practical scenarios, whereas CTP improves reliability performance beyond 95% across all nodes while simultaneously reducing radio energy consumption. Next, in order to migrate from a wired intra-car network to a wireless system, we consider an intermediate step that deploys a hybrid communication structure, wherein wired and wireless networks coexist. Towards this goal, we design a hybrid link scheduling algorithm that guarantees reliability and robustness under harsh vehicular environments. We further enhance the hybrid link scheduler with rateless codes such that information leakage to an eavesdropper is almost zero for finite block lengths. In addition to reliability, one key requirement for coded communication schemes is to achieve a fast decoding rate. This feature is vital in a wide spectrum of communication systems, including multimedia and streaming applications (possibly inside vehicles) with real-time playback requirements, and delay-sensitive services, where the receiver needs to recover some data symbols before the recovery of the entire frame. To address this issue, we develop feedback-based rateless codes with dynamically-adjusted nonuniform symbol selection distributions. Our simulation results, backed by analysis, show that feedback information paired with a nonuniform distribution significantly improves the decoding rate compared with state-of-the-art algorithms. We further demonstrate that the amount of feedback sent can be tuned to the specific transmission properties of a given feedback channel.
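    A minimal rateless (LT-style) encoder and peeling decoder is sketched below to make the coding mechanics concrete: each coded symbol is the XOR of a few uniformly chosen source symbols, and decoding repeatedly peels degree-one symbols. The feedback-based codes developed in the thesis would replace the uniform symbol-selection step with a dynamically adjusted nonuniform distribution; the degree distribution and overhead here are toy assumptions.

```python
# Toy rateless (LT-style) code: the encoder XORs a random subset of source
# symbols; the decoder peels degree-one coded symbols until it stalls.
# Uniform symbol selection and the degree distribution are toy assumptions;
# the thesis's feedback-based codes bias the selection towards undecoded symbols.

import random

K = 8                                         # number of source symbols
rng = random.Random(3)
source = [rng.randrange(256) for _ in range(K)]

def encode_one():
    degree = rng.choice([1, 2, 2, 3, 4])      # toy degree distribution
    idx = rng.sample(range(K), degree)        # uniform symbol selection
    value = 0
    for i in idx:
        value ^= source[i]
    return [set(idx), value]

def peel(coded):
    """coded: list of [index_set, value]; returns dict of recovered symbols."""
    recovered, progress = {}, True
    while progress:
        progress = False
        for entry in coded:
            idx, val = entry
            for i in list(idx):               # strip already-recovered symbols
                if i in recovered:
                    idx.discard(i)
                    val ^= recovered[i]
            entry[1] = val
            if len(idx) == 1:                 # degree one: decode it
                (i,) = idx
                if i not in recovered:
                    recovered[i] = val
                    progress = True
    return recovered

coded = [encode_one() for _ in range(3 * K)]  # generous reception overhead
rec = peel(coded)
ok = all(rec[i] == source[i] for i in rec)
print(f"recovered {len(rec)}/{K} source symbols, values correct: {ok}")
```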

    State Management for Efficient Event Pattern Detection

    Event stream processing systems continuously evaluate queries over event streams to detect user-specified patterns with low latency. However, the challenge is that query processing is stateful, and the systems maintain partial matches that grow exponentially in the number of processed events. State management is complicated by the dynamicity of streams and the need to integrate remote data. First, heterogeneous event sources yield dynamic streams with unpredictable input rates, data distributions, and query selectivities. During peak times, exhaustive processing is unreasonable, and systems shall resort to best-effort processing. Second, queries may require remote data to select a specific event for a pattern. Such dependencies are problematic: fetching the remote data interrupts the stream processing; yet, without event selection based on remote data, the growth of partial matches is amplified. In this dissertation, I present strategies for optimised state management in event pattern detection. First, I enable best-effort processing with load shedding that discards both input events and partial matches. I carefully select the shedding elements to satisfy a latency bound while striving for a minimal loss in result quality. Second, to efficiently integrate remote data, I decouple the fetching of remote data from its use in query evaluation by means of a caching mechanism. To this end, I hide the transmission latency by prefetching remote data based on anticipated use and by lazy evaluation that postpones the event selection based on remote data to avoid interruptions. A cost model is used to determine when to fetch which remote data items and how long to keep them in the cache. I evaluated the above techniques with queries over synthetic and real-world data. I show that the load shedding technique significantly improves the recall of pattern detection over baseline approaches, while the technique for remote data integration significantly reduces the pattern detection latency.
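    The load-shedding idea, dropping the least useful partial matches whenever the estimated processing latency would exceed a bound, can be sketched as follows. The utility scores, the per-match processing cost, and the latency model are illustrative assumptions, not the dissertation's cost model or shedding strategy.

```python
# Toy illustration of latency-bound load shedding for pattern detection:
# when the estimated time to process the buffered partial matches exceeds a
# latency bound, the lowest-utility partial matches are discarded first.
# Utilities, costs, and the latency model are illustrative assumptions.

import heapq

LATENCY_BOUND = 0.010        # seconds (assumed)
COST_PER_MATCH = 0.00005     # assumed processing cost per buffered match

class Shedder:
    def __init__(self):
        self.partial_matches = []          # min-heap of (utility, match_id)

    def add(self, utility, match_id):
        heapq.heappush(self.partial_matches, (utility, match_id))
        self.enforce_bound()

    def estimated_latency(self):
        return len(self.partial_matches) * COST_PER_MATCH

    def enforce_bound(self):
        shed = 0
        while self.partial_matches and self.estimated_latency() > LATENCY_BOUND:
            heapq.heappop(self.partial_matches)   # drop lowest-utility match
            shed += 1
        return shed

s = Shedder()
for i in range(500):
    s.add(utility=(i % 10) / 10.0, match_id=i)
print(len(s.partial_matches), "partial matches kept within the latency bound")
```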