
    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, covering the period 2002-2013, of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. (Comment: Accepted for publication in IEEE Communications Surveys and Tutorials)

    Data aggregation in wireless sensor networks

    Energy efficiency is an important metric in resource-constrained wireless sensor networks (WSNs). Multiple approaches, such as duty cycling, energy-optimal scheduling, energy-aware routing and data aggregation, can be employed to reduce energy consumption throughout the network. This thesis addresses data aggregation during routing, since the energy expended in transmitting a single data bit is several orders of magnitude higher than that required for a single 32-bit computation. In the first paper, a novel nonlinear adaptive pulse coded modulation-based compression (NADPCMC) scheme is proposed for data aggregation. A rigorous analytical development of the proposed scheme is presented using Lyapunov theory. Satisfactory performance of the proposed scheme is demonstrated on several data sets in the NS-2 environment when compared to available compression schemes. Data aggregation is achieved by iteratively applying the proposed compression scheme at the cluster heads. The second paper deals with the hardware verification of the proposed data aggregation scheme in the presence of a Multi-interface Multi-Channel Routing Protocol (MMCR). Since sensor nodes are equipped with radios that can operate on multiple non-interfering channels, bandwidth availability on each channel is used to determine the appropriate channel for data transmission, thus increasing throughput. MMCR uses a metric defined by throughput, end-to-end delay and energy utilization to select Multi-Point Relay (MPR) nodes to forward data packets in each channel while minimizing packet losses due to interference. The proposed compression and aggregation are then applied to further improve the energy savings and network lifetime.
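    The thesis's NADPCMC algorithm and its Lyapunov-based adaptation are not reproduced here; as a rough illustration of the predictive, adaptively quantised compression that such schemes build on, here is a minimal ADPCM-style sketch in Python, with all step-size parameters chosen purely for illustration:

        import numpy as np

        def adpcm_encode(samples, step=1.0, grow=1.5, shrink=0.9):
            # Quantise the prediction error; adapt the step size so large
            # residuals coarsen the quantiser and small ones refine it.
            codes, pred = [], 0.0
            for x in samples:
                q = int(round((x - pred) / step))   # transmitted code
                codes.append(q)
                pred += q * step                    # decoder-side reconstruction
                step = max(step * (grow if abs(q) > 1 else shrink), 1e-3)
            return codes

        def adpcm_decode(codes, step=1.0, grow=1.5, shrink=0.9):
            # Mirrors the encoder's state updates, so both stay in sync.
            out, pred = [], 0.0
            for q in codes:
                pred += q * step
                out.append(pred)
                step = max(step * (grow if abs(q) > 1 else shrink), 1e-3)
            return np.array(out)

        readings = 20.0 + np.cumsum(np.random.default_rng(0).normal(0, 0.2, 50))
        recon = adpcm_decode(adpcm_encode(readings))
        print("max reconstruction error:", np.abs(recon - readings).max())

    Iteratively applying such a compressor at the cluster heads is what yields the data aggregation described above.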

    Metrics to evaluate compression algorithms for raw SAR data

    Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations since platforms are becoming smaller, while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing. Compression algorithms are utilised to reduce the data volume of the raw data. However, these algorithms can introduce degradation and losses that may reduce the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Therefore, metrics were established in two different domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed. In this study three well-known SAR compression algorithms were implemented and applied to three real SAR data sets that were obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, statistical parameters such as the skewness and kurtosis, which measure the deviation from the original distributions of the uncompressed data, and the dynamic range. The data histograms are an important visual representation of the effects of the compression algorithm on the data. An important error measure in the data domain is the signal-to-quantisation-noise ratio (SQNR), along with the phase error for applications where phase information is required to produce the output. Important metrics in the image domain include the dynamic range, the impulse response function, the image contrast, as well as the error measure, the signal-to-distortion-noise ratio (SDNR). The metrics suggested that all three algorithms performed well and are thus well suited for the compression of raw SAR data. The fast Fourier transform block adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but analysis of the computational complexity of its compression steps indicated that it has the highest complexity of the three algorithms. Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm. Dissertation (MEng), University of Pretoria, 2019. Electrical, Electronic and Computer Engineering.
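    As an illustration of the data-domain metrics listed above, the sketch below computes compression ratio, SQNR, phase error, entropy, skewness, kurtosis and dynamic range for complex raw samples; the bit widths, the peak-to-RMS dynamic-range convention and the toy uniform quantiser are assumptions, not the study's exact definitions:

        import numpy as np
        from scipy import stats

        def data_domain_metrics(original, reconstructed, bits_in=8, bits_out=4):
            # `original`/`reconstructed` are complex I/Q sample arrays.
            noise = original - reconstructed
            sqnr_db = 10 * np.log10(np.sum(np.abs(original) ** 2) /
                                    np.sum(np.abs(noise) ** 2))
            # Mean absolute phase error, relevant when phase must be preserved.
            phase_err = np.mean(np.abs(np.angle(original * np.conj(reconstructed))))
            # Shannon entropy of the reconstructed amplitude histogram (bits/sample).
            hist, _ = np.histogram(np.abs(reconstructed), bins=256)
            p = hist[hist > 0] / hist.sum()
            mag = np.abs(reconstructed)
            return {
                "compression_ratio": bits_in / bits_out,
                "sqnr_db": sqnr_db,
                "mean_phase_error_rad": phase_err,
                "entropy_bits": -np.sum(p * np.log2(p)),
                "skewness": stats.skew(reconstructed.real.ravel()),
                "kurtosis": stats.kurtosis(reconstructed.real.ravel()),
                "dynamic_range_db": 20 * np.log10(mag.max() / np.sqrt(np.mean(mag ** 2))),
            }

        rng = np.random.default_rng(1)
        raw = rng.normal(size=1024) + 1j * rng.normal(size=1024)
        quantised = np.round(raw * 4) / 4          # crude uniform quantiser stand-in
        print(data_domain_metrics(raw, quantised))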

    Genetic algorithm and tabu search approaches to quantization for DCT-based image compression

    Today there are several formal and experimental methods for image compression, some of which have grown to be incorporated into the Joint Photographic Experts Group (JPEG) standard. Of course, many compression algorithms are still used only for experimentation, mainly due to various performance issues. Lack of speed while compressing or expanding an image, a poor compression rate, and poor image quality after expansion are a few of the most common reasons for skepticism about a particular compression algorithm. This paper discusses current methods used for image compression. It also gives a detailed explanation of the discrete cosine transform (DCT) used by JPEG, and the efforts that have recently been made to optimize related algorithms. Some interesting articles regarding possible compression enhancements will be noted, and in association with these methods a new implementation of a JPEG-like image coding algorithm will be outlined. This new technique involves adapting between one and sixteen quantization tables for a specific image using either a genetic algorithm (GA) or tabu search (TS) approach. First, a few schemes, including pixel-neighborhood and Kohonen self-organizing map (SOM) algorithms, will be examined to find their effectiveness at classifying blocks of edge-detected image data. Next, the GA and TS algorithms will be tested to determine their effectiveness at finding the optimum quantization table(s) for a whole image. A comparison of the techniques utilized will be thoroughly explored.
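    The paper's exact GA is not reproduced here, but a minimal sketch conveys the idea: evolve an 8x8 quantization table under a rate-distortion style fitness (block reconstruction error plus a non-zero-coefficient rate proxy). Population size, mutation rate and the fitness weighting lam are illustrative assumptions:

        import numpy as np
        from scipy.fft import dctn, idctn

        rng = np.random.default_rng(0)

        def blockify(img, n=8):
            # Split the image into non-overlapping n x n blocks (JPEG-style).
            h, w = img.shape[0] - img.shape[0] % n, img.shape[1] - img.shape[1] % n
            return img[:h, :w].reshape(h // n, n, w // n, n).swapaxes(1, 2).reshape(-1, n, n)

        def cost(qtable, blocks, lam=0.05):
            # Fitness: block MSE after quantise/dequantise, plus a rate proxy
            # (fraction of non-zero quantised DCT coefficients).
            coeffs = dctn(blocks, axes=(1, 2), norm="ortho")
            q = np.round(coeffs / qtable)
            recon = idctn(q * qtable, axes=(1, 2), norm="ortho")
            return np.mean((blocks - recon) ** 2) + lam * 255.0 ** 2 * np.count_nonzero(q) / q.size

        def genetic_search(blocks, pop=20, gens=40):
            population = [rng.integers(1, 64, (8, 8)).astype(float) for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=lambda t: cost(t, blocks))
                parents = population[: pop // 2]               # elitist selection
                children = []
                while len(parents) + len(children) < pop:
                    i, j = rng.choice(len(parents), 2, replace=False)
                    mask = rng.random((8, 8)) < 0.5            # uniform crossover
                    child = np.where(mask, parents[i], parents[j])
                    child = child + rng.normal(0, 2, (8, 8)) * (rng.random((8, 8)) < 0.1)
                    children.append(np.clip(np.round(child), 1, 255))
                population = parents + children
            return min(population, key=lambda t: cost(t, blocks))

        image = rng.integers(0, 256, (64, 64)).astype(float)
        best_table = genetic_search(blockify(image))

    Tabu search would replace the crossover/mutation loop with neighbourhood moves and a tabu list over recently visited tables, under the same fitness.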

    In-Vitro Biological Tissue State Monitoring based on Impedance Spectroscopy

    The relationship between post-mortem state and changes in biological tissue impedance has been investigated to serve as a basis for developing an in-vitro measurement method for monitoring the freshness of meat. The main challenges are the reproducible measurement of the impedance of biological tissues and the classification of their type and state. In order to realize reproducible tissue bio-impedance measurements, a sensor that takes into account the anisotropy of the biological tissue has been developed. It consists of cylindrical penetrating multi-electrodes that ensure good contact between the electrodes and the tissue. Experimental measurements have been carried out on different tissues over long periods in order to monitor state degradation over time. Measured results have been evaluated by means of the modified Fricke-Cole-Cole model. Results are reproducible and correspond to the expected behavior due to aging. A method for feature extraction and classification has been proposed that uses the model parameters as features for classification with neural networks and fuzzy logic. A multilayer perceptron (MLP) neural network has been proposed for determining the muscle type and the age, and hence the freshness state, of the meat. The designed neural network generalizes well and correctly classifies new test data with a high recognition rate, reaching 100% on 972 generated inputs for each muscle. An investigation of the influence of noise on the classification algorithm shows that the MLP correctly classifies noisy test inputs, especially when the parameter noise is less than 0.6%. The success of classification is 100% for the muscles Longissimus Dorsi (LD) of beef, Semi-Membraneous (SM) of beef and Longissimus Dorsi (LD) of veal, and 92.3% for the muscle Rectus Abdominis (RA) of veal. Fuzzy logic provides a successful alternative for straightforward classification. Using Gaussian membership functions for muscle-type detection and trapezoidal membership functions for the freshness classifiers, fuzzy logic correctly generalizes inputs to their corresponding classes with a recognition rate of 100% for meat-type detection, and with freshness accuracies of 84.62% for the muscle LD beef, 92.31% for the muscle RA beef, 100% for the muscle SM veal and 61.54% for the muscle LD veal. [Translated from German:] Based on impedance spectroscopy, a novel in-vitro measurement method for monitoring the freshness of biological tissue has been developed. The main challenges are the reproducibility of the impedance measurement and the classification of the tissue type and its state. To achieve reproducible impedance measurements on biological tissues, a cylindrical multi-electrode sensor was realized that accounts for the 2D anisotropy of the tissue and ensures good contact with it. Experimental investigations were carried out on different tissues over an extended period and analyzed by means of a modified Fricke-Cole-Cole model. The results are reproducible and correspond to the physically expected behavior. The model parameters were used as features for classification.
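    The modified Fricke-Cole-Cole model is detailed in the thesis and not reproduced here; as a sketch of the general approach, the standard Cole impedance model below yields the kind of fitted parameters (R0, R_inf, tau, alpha) that can serve as classification features for the MLP or fuzzy classifier. All numeric values are illustrative:

        import numpy as np

        def cole_model(freq_hz, r_inf, r0, tau, alpha):
            # Standard Cole impedance model:
            #   Z(w) = R_inf + (R0 - R_inf) / (1 + (j*w*tau)**alpha)
            w = 2 * np.pi * np.asarray(freq_hz)
            return r_inf + (r0 - r_inf) / (1 + (1j * w * tau) ** alpha)

        freqs = np.logspace(2, 6, 50)                  # 100 Hz .. 1 MHz sweep
        z = cole_model(freqs, r_inf=50.0, r0=400.0, tau=1e-5, alpha=0.8)
        features = np.array([50.0, 400.0, 1e-5, 0.8])  # per-measurement feature vector
        print(abs(z[0]), np.angle(z[0]))

    In practice the four parameters would be fitted to each measured spectrum, and the resulting feature vectors fed to the classifier.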

    Multi-image classification and compression using vector quantization

    Vector Quantization (VQ) is an image processing technique based on statistical clustering, originally designed for image compression. In this dissertation, several methods for multi-image classification and compression based on a VQ design are presented. It is demonstrated that VQ can perform joint multi-image classification and compression by associating a class identifier with each multi-spectral signature codevector. We extend the Weighted Bayes Risk VQ (WBRVQ) method, previously used for single-component images, which explicitly incorporates a Bayes risk component into the distortion measure used in the VQ quantizer design and thereby permits a flexible trade-off between classification and compression priorities. In the specific case of multi-spectral images, we investigate the application of the Multi-scale Retinex algorithm as a preprocessing stage, before classification and compression, that performs dynamic range compression, reduces the dependence on lighting conditions, and generally enhances apparent spatial resolution. The goals of this research are four-fold: (1) to study the interrelationship between statistical clustering, classification and compression in a multi-image VQ context; (2) to study mixed-pixel classification and combined classification and compression for simulated and actual multispectral and hyperspectral multi-images; (3) to study the effects of multi-image enhancement on class spectral signatures; and (4) to study the preservation of scientific data integrity as a function of compression. In this research, a key issue is not just the subjective quality of the resulting images after classification and compression but also the effect of multi-image dimensionality on the complexity of the optimal coder design.
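    The exact WBRVQ distortion follows the cited work; the sketch below only illustrates the general idea of biasing codevector assignment with a class-risk term added to the squared-error distortion. Every array shape, the weighting lam and the 0/1 risk matrix are chosen purely for illustration:

        import numpy as np

        def wbr_assign(pixels, codebook, code_labels, class_post, risk, lam=0.5):
            # Squared-error distortion plus a weighted class-risk penalty:
            # risk[c, k] is the cost of labelling a class-c pixel with a
            # class-k codevector; class_post[i, c] = P(class c | pixel i).
            sq = ((pixels[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
            penalty = class_post @ risk[:, code_labels]                      # (N, K)
            return np.argmin(sq + lam * penalty, axis=1)

        rng = np.random.default_rng(2)
        pixels = rng.random((100, 4))          # 100 pixels, 4 spectral bands
        codebook = rng.random((8, 4))          # 8 codevectors
        code_labels = rng.integers(0, 3, 8)    # class identifier per codevector
        posteriors = rng.dirichlet(np.ones(3), 100)  # per-pixel class posteriors
        risk = 1.0 - np.eye(3)                 # 0/1 misclassification risk
        assignments = wbr_assign(pixels, codebook, code_labels, posteriors, risk)

    Setting lam to zero recovers plain compression-oriented VQ; increasing it shifts the trade-off toward classification accuracy.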

    Natural Disaster Detection Using Wavelet and Artificial Neural Network

    Owing to its geographic and geologic location, Indonesia has a high potential for natural disasters. The nation is traversed by three tectonic plates, namely the Indo-Australian, the Eurasian and the Pacific plates. One of the tools employed to detect danger and issue an early disaster warning is the ocean-wave sensor, but it has the drawback of a very limited time gap between the warning obtained and the actual disaster event, which is less than 30 minutes. A natural disaster early-detection information system is essential to prevent potential danger. Such a system can make use of pattern recognition on satellite imagery sequences captured before and during a natural disaster. This study is conducted to determine the right wavelet to compress the satellite image sequences and to perform the pattern-recognition process for a natural disaster employing an artificial neural network. The study makes use of satellite imagery sequences of tornadoes and hurricanes.
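    As a sketch of the compression front end described above (assuming the PyWavelets library; the Haar wavelet, decomposition level and coefficient budget are illustrative choices, not the study's), each image frame can be reduced to a compact coefficient vector before classification:

        import numpy as np
        import pywt

        def wavelet_features(frame, wavelet="haar", level=2, keep=256):
            # 2-D wavelet decomposition; keep only the largest-magnitude
            # coefficients as a fixed-length feature vector for the ANN.
            coeffs = pywt.wavedec2(frame, wavelet, level=level)
            flat = pywt.coeffs_to_array(coeffs)[0].ravel()
            top = np.sort(np.argsort(np.abs(flat))[-keep:])  # dominant coefficients
            return flat[top]

        # Hypothetical pipeline: every frame of a pre-/during-event satellite
        # sequence becomes one feature vector; a neural network is then trained
        # on labelled sequences to recognize disaster patterns.
        frame = np.random.default_rng(3).random((128, 128))
        print(wavelet_features(frame).shape)   # (256,)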

    Advanced Process Monitoring for Industry 4.0

    This book reports recent advances in Process Monitoring (PM) to cope with the many challenges raised by the new production systems, sensors and "extreme data" conditions that emerged with Industry 4.0. Concepts such as digital twins and deep learning are brought to the PM arena, pushing forward the capabilities of existing methodologies to handle more complex scenarios. The evolution of classical paradigms such as latent variable modeling, Six Sigma and FMEA is also covered. Applications span a wide range of domains such as microelectronics, semiconductors, chemicals, materials and agriculture, as well as the monitoring of rotating equipment, combustion systems and membrane separation processes.
