23 research outputs found

    An Adaptive Lossless Data Compression Scheme for Wireless Sensor Networks

    Energy is an important consideration in the design and deployment of wireless sensor networks (WSNs), since sensor nodes are typically powered by batteries with limited capacity. Because the communication unit on a wireless sensor node is the major power consumer, data compression is one technique that can reduce the amount of data exchanged between nodes and thereby save power. However, wireless sensor networks have significant limitations in communication, processing, storage, bandwidth, and power, so any data compression scheme proposed for WSNs must be lightweight. In this paper, we present an adaptive lossless data compression (ALDC) algorithm for wireless sensor networks. Our proposed ALDC scheme performs compression losslessly using multiple code options; adaptive compression allows the encoder to adjust dynamically to a changing source. The data sequence to be compressed is partitioned into blocks, and the optimal code option is applied to each block. Using various real-world sensor datasets, we demonstrate the merits of our proposed algorithm in comparison with other recently proposed lossless compression algorithms for WSNs.
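
    The block-adaptive idea the abstract describes can be sketched minimally as follows, assuming Rice codes with a handful of parameters as the "multiple code options" (the paper's actual option set, header format, and block size are not given here; all names and constants below are illustrative):

```python
def zigzag(v: int) -> int:
    """Map signed residuals to non-negative integers: 0, -1, 1, -2, ..."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_bits(block, k: int) -> int:
    """Cost in bits of Rice-coding a block with parameter k
    (unary quotient + terminator bit + k-bit remainder per value)."""
    return sum((zigzag(v) >> k) + 1 + k for v in block)

def best_option(block, ks=(0, 1, 2, 3, 4), raw_bits=16):
    """Pick the cheapest of several code options for one block."""
    costs = {f"rice-{k}": rice_bits(block, k) for k in ks}
    costs["raw"] = raw_bits * len(block)       # verbatim fallback
    name = min(costs, key=costs.get)
    return name, costs[name]

def adaptive_cost(samples, block_size=16, header_bits=3):
    """Total cost: a small per-block header naming the chosen option,
    plus the payload of the winning code for each block."""
    total, choices = 0, []
    for i in range(0, len(samples), block_size):
        name, cost = best_option(samples[i:i + block_size])
        total += header_bits + cost
        choices.append(name)
    return total, choices
```

    Because the option is re-chosen per block, the encoder tracks a changing source at the cost of only a few header bits per block.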

    Compression of Wearable Body Sensor Network Data Using Improved Two-Threshold-Two-Divisor Data Chunking Algorithm

    Compression plays a significant role for Body Sensor Network (BSN) data, since the sensors in a BSN have limited battery power and memory, and data must be transmitted quickly and losslessly to provide near real-time feedback. The paper evaluates lossless data compression algorithms such as Run-Length Encoding (RLE), Lempel-Ziv-Welch (LZW), and Huffman coding on data from wearable devices and compares them in terms of Compression Ratio, Compression Factor, Savings Percentage, and Compression Time. It also evaluates a data-deduplication technique used in Low Bandwidth File Systems (LBFS), the Two Thresholds Two Divisors (TTTD) algorithm, to determine whether it could be used for BSN data. By varying the parameters and running the algorithm multiple times on the data, it arrives at a set of values that give a >50 compression ratio on BSN data; this is the paper's first contribution. Based on these performance-evaluation results for TTTD and the classical compression algorithms, it proposes a technique that combines multiple algorithms in sequence. The new algorithm, TTTD-H, which runs TTTD and Huffman coding in sequence, improves the Savings Percentage by 23 percent over TTTD and 31 percent over Huffman when each is executed independently, and improves the Compression Factor by 142 percent over TTTD, 52 percent over LZW, and 178 percent over Huffman for a 3.5 MB file. These results are the paper's second contribution.
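
    The two-stage idea of TTTD-H (deduplicate, then entropy-code what remains) can be sketched as below. Fixed-size chunks stand in for TTTD's content-defined two-threshold/two-divisor boundaries, and zlib stands in for the Huffman stage; the 4-byte chunk reference is an assumed overhead, so this is an illustrative simplification, not the paper's exact algorithm:

```python
import zlib

def dedup_then_compress(data: bytes, chunk_size: int = 64) -> int:
    """Approximate compressed size (bytes) of a dedup-then-entropy-code
    pipeline: duplicate chunks are stored once, then the unique chunks
    are compressed together."""
    seen, unique, refs = {}, [], []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        if chunk not in seen:
            seen[chunk] = len(unique)
            unique.append(chunk)
        refs.append(seen[chunk])
    payload = zlib.compress(b"".join(unique))
    return len(payload) + 4 * len(refs)   # assume 4 bytes per reference

def savings_percentage(original_size: int, compressed_size: int) -> float:
    return (1 - compressed_size / original_size) * 100
```

    On repetitive wearable-sensor streams, deduplication removes whole repeated chunks before the entropy coder ever sees them, which is why the combined pipeline outperforms either stage alone.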

    MICCS: A Novel Framework for Medical Image Compression Using Compressive Sensing

    Some applications, such as robot-guided remote surgery, require images of a patient's body to be captured by a smart visual sensor and sent in real time over a network whose bandwidth, although high, is still limited. The problem considered in this study is to develop a hybrid compression mechanism in which the Region of Interest (ROI) is compressed with lossless techniques and the non-ROI is compressed with Compressive Sensing (CS). The challenge is to achieve acceptable image quality for both ROI and non-ROI while exploiting sparsity for dimension reduction in the non-ROI; retaining acceptable visual quality in the compressed non-ROI region is essential to obtaining a good reconstructed image. This approach can bridge the trade-off between image quality and traffic load. The study outcomes were compared with traditional hybrid compression methods, showing that the proposed method achieves better compression performance than conventional hybrid techniques on parameters such as PSNR, MSE, and Compression Ratio.
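
    A minimal sketch of the ROI / non-ROI split described above: the ROI is compressed losslessly, while the non-ROI pixel vector x is reduced to m < len(x) compressive-sensing-style random projections y = Phi x with a Gaussian Phi. Reconstruction of the non-ROI (e.g. by l1 minimization) is omitted, and all names and parameters here are illustrative, not the MICCS framework itself:

```python
import random
import zlib

def hybrid_compress(image_rows, roi, m_ratio=0.4, seed=1):
    """Split an image (list of pixel rows) into an ROI rectangle
    (r0, r1, c0, c1) compressed losslessly with zlib, and non-ROI
    pixels reduced to CS-style random measurements."""
    r0, r1, c0, c1 = roi

    def in_roi(i, j):
        return r0 <= i < r1 and c0 <= j < c1

    roi_pixels = bytes(p for i, row in enumerate(image_rows)
                       for j, p in enumerate(row) if in_roi(i, j))
    x = [p for i, row in enumerate(image_rows)
         for j, p in enumerate(row) if not in_roi(i, j)]
    rng = random.Random(seed)
    m = max(1, int(m_ratio * len(x)))           # measurement budget
    y = [sum(rng.gauss(0, 1) * xi for xi in x)  # one row of Phi per pass
         for _ in range(m)]
    return zlib.compress(roi_pixels), y
```

    The measurement budget m_ratio is the knob that trades non-ROI quality against traffic load, while the ROI remains bit-exact.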

    Improving energy efficiency in a wireless sensor network by combining cooperative MIMO with data aggregation

    In wireless sensor networks where nodes are powered by batteries, it is critical to prolong the network lifetime by minimizing the energy consumption of each node. In this paper, cooperative multiple-input-multiple-output (MIMO) and data-aggregation techniques are jointly adopted to reduce the energy consumption per bit in wireless sensor networks, both by reducing the amount of data for transmission and by making better use of network resources through cooperative communication. For this purpose, we derive a new energy model for a cluster-based sensor network employing the combined techniques that accounts for the correlation between data generated by nodes and the distance between them. Using this model, the effect of the cluster size on the average energy consumption per node can be analyzed. It is shown that the energy efficiency of the network can be significantly enhanced in cooperative MIMO systems with data aggregation, compared with either cooperative MIMO systems without data aggregation or data-aggregation systems without cooperative MIMO, provided sensor nodes are properly clustered. Both centralized and distributed data-aggregation schemes for the cooperating nodes to exchange and compress their data are also proposed and appraised; these lead to different impacts of data correlation on the energy performance of the integrated cooperative MIMO and data-aggregation systems.
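
    The aggregation gain that drives this analysis can be illustrated with a first-order toy model (an assumption for illustration only, not the energy model derived in the paper): the cluster head forwards one full reading plus a correlation-discounted increment per additional node.

```python
def aggregated_bits(n_nodes: int, bits_per_reading: int, rho: float) -> float:
    """Illustrative first-order model: the first reading costs b bits and
    each additional node, whose data is correlated with coefficient rho,
    adds only (1 - rho) * b bits. rho = 1 collapses the cluster to a
    single reading; rho = 0 gives no aggregation gain."""
    b = bits_per_reading
    return b + (n_nodes - 1) * (1.0 - rho) * b

# e.g. a 10-node cluster of 16-bit readings with rho = 0.9
# forwards about 30.4 bits instead of 160.
```

    Under any such model, the payload reduction (and hence the energy per bit) improves with both cluster size and data correlation, which is the effect the paper's cluster-size analysis quantifies precisely.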

    An Algorithm for Image Compression Based on the LZW and RLE Algorithms

    This research addressed the following question: which of the IBK, LZW, and RLE algorithms achieves the best compression ratio, compression time, and decompression time on images? The objective was to determine which of these algorithms performs best on those three measures. The research took a quantitative approach with an experimental method and a pre-experimental design. A set of 150 images in a lossless format was taken, and each of the algorithms was applied to them. Results were recorded on a data-collection sheet logging each image's original size, compressed size, compression time, and decompression time. From this, the IBK algorithm achieved a better compression ratio than the other algorithms, but did not achieve a better compression time or a better decompression time.
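
    The simplest of the three compared algorithms, RLE, together with the study's size measurements, can be sketched as follows (byte-wise runs capped at 255, a common but not universal encoding choice):

```python
def rle_encode(data: bytes) -> bytes:
    """Byte-wise run-length encoding as (count, value) pairs, count <= 255."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i] and j - i < 255:
            j += 1
        out += bytes((j - i, data[i]))
        i = j
    return bytes(out)

def rle_decode(enc: bytes) -> bytes:
    """Invert rle_encode exactly (the scheme is lossless)."""
    out = bytearray()
    for k in range(0, len(enc), 2):
        out += bytes([enc[k + 1]]) * enc[k]
    return bytes(out)
```

    Recording the original size, len(rle_encode(data)), and the two timings per image reproduces the fields of the study's data-collection sheet.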

    Robust data protection and high efficiency for IoTs streams in the cloud

    Remotely generated streams of Internet of Things (IoT) data have become a vital category on which many applications rely. Smart meters collect readings for household activities such as power and gas consumption every second; the readings are transmitted wirelessly through various channels and public hops to the operation centres. Due to the unusually large stream sizes, the operation centres use cloud servers, where various entities process the data in real time for billing and power management. In smart pipe projects, where oil pipes are continuously monitored using sensors, the collected streams are likewise sent to the public cloud for real-time fault detection. Many other similar applications can make the world a more convenient place, contributing to climate-change mitigation and transportation improvement, to name a few. Despite the obvious advantages of these applications, some unique challenges arise, posing questions about a suitable balance between guaranteeing stream security (privacy, authenticity, and integrity) without hindering direct operations on those streams, while also handling data-management issues such as the volume of protected streams during transmission and storage. These challenges become more complicated when the streams reside on third-party cloud servers. In this thesis, several novel techniques are introduced to address these problems.

    We begin by protecting the privacy and authenticity of transmitted readings without disrupting direct operations, proposing two steganography techniques that rely on different mathematical security models. The results are promising: only an approved party holding the required security tokens can retrieve the hidden secret, and the distortion between the original and protected readings is almost zero. This means the streams can be used in their protected form at intermediate hops or on third-party servers. We then improve the integrity of the transmitted protected streams, which are prone to intentional or unintentional noise, by proposing a steganographic technique based on secure error detection and correction. This allows legitimate recipients to (1) detect and recover any noise loss in the hidden sensitive information without privacy disclosure, and (2) remedy the received protected readings using the corrected version of the secret hidden data. The experiments show that our technique has robust recovery capabilities (Root Mean Square (RMS) error < 0.01%, Bit Error Rate (BER) = 0, and PRD < 1%).

    To address the large size of the transmitted protected streams, two lossless compression algorithms for IoT readings are introduced to reduce the volume of protected readings at intermediate hops without revealing the hidden secrets. The first uses a Gaussian approximation function to represent IoT streams with a few parameters regardless of the roughness of the signal. The second reduces the randomness of the IoT streams into a smaller finite field by splitting, which enhances repetition and avoids floating-point rounding errors. Under the same conditions, both of our techniques were superior to existing models mathematically (the entropy was halved) and empirically (the achieved ratio was 3.8:1 to 4.5:1). Driven by the question "Can the size of multiple incoming compressed protected streams be further reduced on the cloud without decompression?", a novel lossless size-reduction algorithm was introduced to prove that already-compressed, protected IoT readings can be reduced again. This is achieved by employing similarity measurements to classify the compressed streams into subsets, reducing the effect of uncorrelated compressed streams; the values of each subset are then treated independently for further reduction. Both mathematical and empirical experiments proved that the entropy can be enhanced (reduced by almost 50%) with a resulting size reduction of up to 2:1.
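
    The effect of splitting a stream into a smaller finite field can be illustrated with a toy alphabet reduction (splitting bytes into 4-bit nibbles; this demonstrates the entropy effect only, not the thesis's exact construction):

```python
import math
from collections import Counter

def shannon_entropy(symbols) -> float:
    """Empirical Shannon entropy in bits per symbol."""
    symbols = list(symbols)
    n = len(symbols)
    counts = Counter(symbols)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def split_to_nibbles(data: bytes):
    """Split each byte into two 4-bit symbols, shrinking the alphabet
    from 256 to 16 so repeated symbols become far more likely."""
    out = []
    for b in data:
        out.extend((b >> 4, b & 0x0F))
    return out
```

    For a stream whose bytes are close to uniform, the per-symbol entropy drops from near 8 bits to near 4 bits after splitting, mirroring the "entropy was halved" observation; an entropy coder can then exploit the increased repetition in the smaller alphabet.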

    Compression vs Transmission Tradeoffs for Energy Harvesting Sensor Networks

    The operation of Energy Harvesting Wireless Sensor Networks (EHWSNs) is a very lively area of research, due both to the increasing inclination toward green systems, in order to reduce the energy consumption of human activities at large, and to the desire to design networks that can last unattended indefinitely (e.g., the nodes employed in Wireless Sensor Networks, WSNs). Notably, despite recent technological advances, batteries are expected to last less than ten years in many applications, and their replacement is often prohibitively expensive. This problem is particularly severe for urban sensing applications, such as sensors placed below street level to sense the presence of cars in parking lots, where the installation of new power cables is impractical. Other examples include body sensor networks and WSNs deployed in remote geographic areas. In contrast, EHWSNs powered by energy-scavenging devices (renewable power) offer potentially maintenance-free perpetual network operation, which is particularly appealing for a highly pervasive Internet of Things. Lossy temporal compression has been widely recognized as key for energy-constrained WSNs, where imperfect reconstruction of the signal is often acceptable at the data collector, subject to some maximum error tolerance. The first part of this thesis evaluates a number of lossy compression methods from the literature and analyzes their performance in terms of compression efficiency, computational complexity, and energy consumption. Specifically, as a first step, a performance evaluation of existing and new compression schemes, considering linear, autoregressive, FFT-/DCT-, and wavelet-based models, is carried out by examining their performance as a function of relevant signal statistics.

    After that, closed-form expressions for their overall energy consumption and signal-representation accuracy are obtained through numerical fittings. Lastly, the benefits that lossy compression methods bring in interference-limited multi-hop networks are evaluated; in this scenario, channel access is a source of inefficiency due to collisions and transmission scheduling. The results reveal that the DCT-based schemes are the best option in terms of compression efficiency but are inefficient in terms of energy consumption. Linear methods, instead, lead to substantial savings in energy expenditure while achieving satisfactory compression ratios, reduced network delay, and increased reliability. The subsequent part of the thesis addresses the problem of energy management for EHWSNs in which sensor batteries are recharged via the energy harvested through a solar panel and sensors can choose to compress data before transmission. A scenario where a single node communicates with a single receiver is considered: the task of the node is to periodically sense some physical signal and report the measurements to the receiver (sink). We assume that this task is delay tolerant, i.e., the sensor can store a certain number of measurements in a memory buffer and send one or more packets of data after some time. Since most physical signals exhibit strong temporal correlation, the data in the buffer can often be compressed by a lossy method to reduce the amount of data to be sent. Lossy compression schemes allow us to select the compression ratio and trade some accuracy in the data reconstruction at the receiver for energy savings at the transmitter.

    Specifically, our objective is to obtain the policy, i.e., the set of decision rules describing the node's behavior, that jointly maximizes throughput and reconstruction fidelity at the sink while meeting predefined energy constraints, e.g., that the battery charge level never drop below a guard threshold. To obtain this policy, the system is modeled as a Constrained Markov Decision Process (CMDP) and solved through Lagrangian relaxation and the Value Iteration Algorithm. The optimal policies are then compared with heuristic policies in different energy-budget scenarios, and the impact of delay in the knowledge of the Channel State Information is investigated. Two further parts of this thesis develop models for the generation of space-time-correlated signals and for the description of the energy harvested by outdoor photovoltaic panels. The former are very useful for proving the effectiveness of the proposed data-gathering solutions, as they can be used in the design of accurate simulation tools for WSNs; they can also serve as reference models for proving theoretical results on data gathering or compression algorithms. The latter are especially useful in the investigation and optimization of EHWSNs. These models are presented at the beginning of the thesis and then used intensively in the analysis and performance evaluation of the schemes treated in the remainder.

    Data Compression Techniques in Wireless Sensor Networks
