7 research outputs found

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Get PDF
    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials

    Energy efficient and latency aware adaptive compression in wireless sensor networks

    Get PDF
    Wireless sensor networks are composed of a few to several thousand sensors deployed over an area or on specific objects to sense data and report it back to a sink, either directly or through a series of hops across other sensor nodes. There are many applications for wireless sensor networks, including environment monitoring, wildlife tracking, security, structural health monitoring, troop tracking, and many others. The sensors communicate wirelessly and are typically very small in size and powered by batteries. Wireless sensor networks are thus often constrained in bandwidth, processor speed, and power. Also, many wireless sensor network applications have a very low tolerance for latency and need to transmit data in real time. Data compression is a useful tool for minimizing the bandwidth and power required to transmit data from the sensor nodes to the sink; however, compression algorithms often add a significant amount of latency or require a great deal of additional processing. The following papers define and analyze multiple approaches for achieving effective compression while reducing latency and power consumption far below what would be required to process and transmit the data uncompressed. The algorithms target many different types of sensor applications, from lossless compression on a single sensor, to error-tolerant, collaborative compression across an entire network of sensors, to compression of XML data on sensors. Extensive analysis over many different real-life data sets and comparison with several existing compression methods show a significant contribution to efficient wireless sensor communication. --Abstract, page iv
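
    As a purely illustrative sketch (not one of the thesis's algorithms), the snippet below shows the kind of low-cost lossless coding a constrained node could run before transmission: delta-encode integer sensor samples, zig-zag map the signed deltas, and pack them as varints. All names and parameter values are hypothetical.

    def zigzag(n: int) -> int:
        """Map signed deltas to unsigned ints: 0, -1, 1, -2, 2, ... -> 0, 1, 2, 3, 4, ..."""
        return (n << 1) if n >= 0 else (-n << 1) - 1

    def varint(u: int) -> bytes:
        """Encode an unsigned int in 7-bit groups; the MSB flags continuation."""
        out = bytearray()
        while u >= 0x80:
            out.append((u & 0x7F) | 0x80)
            u >>= 7
        out.append(u)
        return bytes(out)

    def compress(samples):
        """Delta-encode consecutive samples, then varint-pack the zig-zagged deltas."""
        buf, prev = bytearray(), 0
        for s in samples:
            buf += varint(zigzag(s - prev))
            prev = s
        return bytes(buf)

    # Example: slowly varying temperature readings in hundredths of a degree
    packed = compress([2301, 2302, 2302, 2299, 2305])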

    Wireless Monitoring Systems for Long-Term Reliability Assessment of Bridge Structures based on Compressed Sensing and Data-Driven Interrogation Methods.

    Full text link
    The state of the nation’s highway bridges has garnered significant public attention due to large inventories of aging assets and insufficient funds for repair. Current management methods are based on visual inspections, which have many known limitations, including reliance on surface evidence of deterioration and subjectivity introduced by trained inspectors. To address the limitations of current inspection practice, structural health monitoring (SHM) systems can be used to provide quantitative measures of structural behavior and an objective basis for condition assessment. SHM systems are intended to be a cost-effective monitoring technology that also automates the processing of data to characterize damage and provide decision information to asset managers. Unfortunately, this realization of SHM systems does not currently exist. In order for SHM to be realized as a decision support tool for bridge owners engaged in performance- and risk-based asset management, technological hurdles must still be overcome. This thesis focuses on advancing wireless SHM systems. An innovative wireless monitoring system was designed for permanent deployment on bridges in cold northern climates, which pose an added challenge because the potential for solar harvesting is reduced and battery charging is slowed. First, energy-efficient usage strategies for wireless sensor networks (WSNs) are advanced. With WSN energy consumption proportional to the amount of data transmitted, data reduction strategies are prioritized. A novel data compression paradigm termed compressed sensing is advanced for embedment in a wireless sensor microcontroller. In addition, fatigue monitoring algorithms are embedded for local data processing, leading to dramatic data reductions. In the second part of the thesis, a radical top-down design strategy (in contrast to global vibration strategies) for a monitoring system is explored to target specific damage concerns of bridge owners. Data-driven algorithmic approaches are created for statistical performance characterization of long-term bridge response. Statistical process control and reliability index monitoring are advanced as a scalable and autonomous means of transforming data into information relevant to bridge risk management. Validation of the wireless monitoring system architecture is performed using the Telegraph Road Bridge (Monroe, Michigan), a multi-girder short-span highway bridge that represents a major fraction of the U.S. national inventory. PhD thesis, Civil Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/116749/1/ocosean_1.pd
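
    A minimal sketch of the node-side compressed-sensing encoder idea described in this entry (the matrix type, window length, and seed-sharing scheme are assumptions, not the thesis's embedded implementation): an N-sample window is projected onto M << N pseudo-random +/-1 rows generated from a seed shared with the sink, so only the M measurements need to be transmitted; the sink can then recover the window with a sparse-recovery solver, assuming the signal is sparse in a known basis.

    import numpy as np

    def cs_encode(x: np.ndarray, m: int, seed: int = 0) -> np.ndarray:
        """Project an N-sample window onto m pseudo-random rows: y = Phi @ x."""
        rng = np.random.default_rng(seed)                 # seed shared with the sink
        phi = rng.choice((-1.0, 1.0), size=(m, x.size))   # Bernoulli measurement matrix
        return phi @ x

    # Example: compress a 256-sample acceleration window to 64 measurements
    x = np.sin(2 * np.pi * 5 * np.arange(256) / 256)
    y = cs_encode(x, m=64, seed=42)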

    Distributed signal processing using nested lattice codes

    No full text
    Multi-Terminal Source Coding (MTSC) addresses the problem of compressing correlated sources without communication links among them. In this thesis, the constructive approach to this problem is considered in an algebraic framework, and a system design is provided that is applicable in a variety of settings. The Wyner-Ziv problem is investigated first: coding of an independent and identically distributed (i.i.d.) Gaussian source with side information available only at the decoder, in the form of a noisy version of the source to be encoded. Theoretical models are first established and used to derive distortion-rate functions. Then several novel practical code implementations are proposed using the strategy of multi-dimensional nested lattice/trellis coding. By investigating various lattices in the dimensions considered, an analysis is given of how lattice properties affect performance. Methods for choosing good sublattices in multiple dimensions are also proposed. By introducing scaling factors, the relationship between distortion and scaling factor is examined for various rates. The best high-dimensional lattice using our scale-rotate method can achieve performance within 1 dB of the Wyner-Ziv limit at low rates, and random nested ensembles can achieve a 1.87 dB gap from the limit. Moreover, the code design is extended to incorporate distributed compressive sensing (DCS). A theoretical framework is proposed and practical designs using nested lattices/trellises are presented for various scenarios. Using nested trellis codes, simulations show a 3.42 dB gap from our derived bound for the DCS plus Wyner-Ziv framework.
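
    A one-dimensional toy sketch of the nested-lattice idea this entry builds on (the actual designs are multi-dimensional lattice/trellis codes; the step size q and nesting ratio Q below are arbitrary assumptions): the encoder quantizes the source on a fine lattice qZ and transmits only the coset index modulo the coarse lattice (Qq)Z, and the decoder resolves the remaining ambiguity by picking the coset member closest to its side information.

    import numpy as np

    def wz_encode(x: float, q: float, Q: int) -> int:
        """Quantize on the fine lattice q*Z, send only the coset index (log2(Q) bits)."""
        return int(np.round(x / q)) % Q

    def wz_decode(coset: int, y: float, q: float, Q: int) -> float:
        """Pick the fine-lattice point in the received coset closest to side information y."""
        base = int(np.floor(y / (q * Q))) * Q + coset
        candidates = q * np.array([base - Q, base, base + Q])
        return float(candidates[np.argmin(np.abs(candidates - y))])

    # Example: source x and correlated side information y
    x, y = 3.27, 3.10
    x_hat = wz_decode(wz_encode(x, q=0.25, Q=4), y, q=0.25, Q=4)   # -> 3.25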

    Design and Evaluation of Compression, Classification and Localization Schemes for Various IoT Applications

    Get PDF
    Nowadays we are surrounded by a huge number of objects able to communicate, read information such as temperature, light or humidity, and infer new information by exchanging data. These kinds of objects are not limited to high-tech devices with high capabilities, such as desktop PCs, laptops, and new-generation mobile phones (i.e. smartphones), but also include commonly used objects, such as ID cards, driver licenses, clocks, etc., that can be made smart by allowing them to communicate. Thus, the analog world of just a few years ago is becoming the digital world of the Internet of Things (IoT), where the information from a single object can be retrieved from the Internet. The IoT paradigm opens several architectural challenges, including self-organization, self-management, and self-deployment of the smart objects, as well as the problem of how to minimize the usage of the limited resources of each device. The concept of IoT covers many communication paradigms, such as WiFi, Radio Frequency Identification (RFID), and Wireless Sensor Networks (WSNs). Each paradigm can be thought of as an IoT island where each device can communicate directly with other devices. The thesis is divided into sections in order to cover each problem mentioned above. The first step is to understand the possibility of inferring new knowledge from the devices deployed in a scenario. For this reason, the research focuses on the semantic web (Web 3.0) to assign a semantic meaning to each thing inside the architecture. The semantic concept alone is not sufficient to infer new information from the gathered data; in fact, it is necessary to organize the data in a hierarchical form defined by an Ontology. Through the exploitation of the Ontology, it is possible to apply semantic reasoning engines to infer new knowledge about the network. The second step of the dissertation deals with minimizing the resource usage of every node in a WSN. The main purpose of each node is to collect environmental data and to exchange them with other nodes. To minimize battery consumption, it is necessary to limit radio usage. Therefore, we implemented Razor, a new lightweight algorithm that is expected to improve data compression and classification by leveraging the advantages offered by data mining methods to optimize communications and by enhancing information transmission to simplify data classification. Data compression is performed by studying the well-known Vector Quantization (VQ) theory in order to create the codebooks necessary for signal compression (see the sketch below). At the same time, it is required to give a semantic meaning to unknown signals. In this way, the codebook is able not only to compress signals, but also to classify unknown ones. Razor is compared with state-of-the-art compression and signal classification techniques for WSNs. The third part of the thesis covers the smart-object concept applied to robotics research. A critical issue is how a robot can localize and retrieve smart objects in a real scenario without any prior knowledge. To achieve this, it is possible to exploit the smart-object concept and localize the objects through RSSI measurements. After the localization phase, the robot can exploit its own camera to retrieve the objects. Several filtering algorithms are developed to mitigate the multi-path issue caused by the wireless communication channel and to achieve better distance estimation from the RSSI measurements.
    The last part of the dissertation deals with the design and development of a Cognitive Network (CN) testbed using off-the-shelf devices. The device type is chosen considering cost, usability, configurability, mobility, and the possibility of modifying the Operating System (OS) source code. Thus, the best choice is to select devices based on a Linux kernel, such as Android OS. The ability to modify the operating system is required to extract the TCP/IP protocol stack parameters for the CN paradigm. It is necessary to monitor the network status in real time and to modify the critical parameters in order to improve performance metrics such as bandwidth consumption, the number of hops needed to exchange data, and throughput.
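
    The sketch below illustrates the codebook idea behind the compression and classification steps mentioned above; it is a generic vector-quantization example (k-means-trained codebooks, nearest-codeword coding), not Razor itself, and all names and sizes are assumptions.

    import numpy as np
    from scipy.cluster.vq import kmeans, vq

    def train_codebook(windows: np.ndarray, k: int = 16) -> np.ndarray:
        """Learn k codewords from training windows (one fixed-length window per row)."""
        codebook, _ = kmeans(windows.astype(float), k)
        return codebook

    def compress(windows: np.ndarray, codebook: np.ndarray) -> np.ndarray:
        """Replace each window by the index of its nearest codeword (log2(k) bits each)."""
        indices, _ = vq(windows.astype(float), codebook)
        return indices

    def classify(window: np.ndarray, codebooks: dict) -> str:
        """Assign the class whose codebook reconstructs the window with the least error."""
        errors = {label: vq(window[None, :].astype(float), cb)[1][0]
                  for label, cb in codebooks.items()}
        return min(errors, key=errors.get)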

    A Bayesian Analysis of Compressive Sensing Data Recovery in Wireless Sensor Networks

    No full text
    In this paper we address the task of accurately reconstructing a distributed signal through the collection of a small number of samples at a data gathering point, using Compressive Sensing (CS) in conjunction with Principal Component Analysis (PCA). Our scheme compresses real-world non-stationary signals in a distributed way, recovering them at the data collection point through the online estimation of their spatial/temporal correlation structures. The proposed technique is characterized under the framework of Bayesian estimation, showing under which assumptions it is equivalent to optimal maximum a posteriori (MAP) recovery. As the main contribution of this paper, we proceed with the analysis of data collected by our indoor wireless sensor network (WSN) testbed, proving that these assumptions hold with good accuracy in the considered real-world scenarios. This provides empirical evidence of the effectiveness of our approach and proves that CS is a legitimate tool for the recovery of real-world signals in WSNs.
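
    A minimal sketch of the recovery idea (assumptions: a PCA basis learned from past snapshots, random node subsampling, and a plain least-squares fit rather than the paper's full Bayesian/MAP estimator): the sink observes only a few nodes of the field, fits the dominant PCA coefficients to those observations, and expands them back to a full snapshot.

    import numpy as np

    def recover(y, sampled_idx, U_k, mean):
        """Fit PCA coefficients to the sampled nodes, then expand to the full field."""
        A = U_k[sampled_idx, :]                            # rows of the basis actually observed
        s, *_ = np.linalg.lstsq(A, y - mean[sampled_idx], rcond=None)
        return U_k @ s + mean                              # reconstructed snapshot

    # Toy example: a 50-node field with 3 dominant spatial modes, 10 nodes sampled
    rng = np.random.default_rng(0)
    U_k = np.linalg.qr(rng.standard_normal((50, 3)))[0]    # stand-in for a learned PCA basis
    mean = np.zeros(50)
    x_true = U_k @ rng.standard_normal(3)
    idx = rng.choice(50, size=10, replace=False)
    x_hat = recover(x_true[idx], idx, U_k, mean)           # ~= x_true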