
    Energy-efficient information inference in wireless sensor networks based on graphical modeling

    This dissertation proposes a systematic approach, based on a probabilistic graphical model, to infer missing observations in wireless sensor networks (WSNs) for sustained environmental monitoring. This enables us to address two critical challenges in WSNs: (1) energy-efficient data gathering despite the planned communication disruptions caused by energy-saving sleep cycles, and (2) tolerance of sensor-node failures in harsh environments. In our approach, we develop a pairwise Markov Random Field (MRF) to model the spatial correlations in a sensor network. The MRF model is first learned automatically from historical sensed data using Iterative Proportional Fitting (IPF). Once the model is constructed, Loopy Belief Propagation (LBP) is employed to estimate the missing data given incomplete network observations. The proposed approach is then improved in terms of energy efficiency and robustness in three aspects: model building, inference, and parameter learning. The model and methods are empirically evaluated on multiple real-world sensor network data sets, and the results demonstrate the merits of the proposed approaches.
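    To make the inference step concrete, the sketch below runs sum-product loopy belief propagation on a small discrete pairwise MRF, filling in two missing sensor readings from their neighbors. The ring topology, three discretization levels, and potential values are illustrative assumptions, not the dissertation's learned model.

```python
import numpy as np

# Minimal loopy belief propagation (LBP) sketch on a small pairwise MRF.
# Hypothetical setup: 4 sensor nodes in a ring, readings discretized to 3 levels.
S, N = 3, 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
neighbors = {n: [] for n in range(N)}
for a, b in edges:
    neighbors[a].append(b)
    neighbors[b].append(a)

# Smoothness prior: adjacent sensors tend to observe similar levels (assumed values).
edge_pot = np.array([[0.6, 0.3, 0.1],
                     [0.3, 0.6, 0.3],
                     [0.1, 0.3, 0.6]])

node_pot = np.ones((N, S)) / S           # uniform evidence = missing reading
node_pot[0] = [0.8, 0.15, 0.05]          # node 0 observed a low level
node_pot[2] = [0.05, 0.15, 0.8]          # node 2 observed a high level

# msg[i][j]: message from node i to neighbor j
msg = {i: {j: np.ones(S) / S for j in neighbors[i]} for i in range(N)}

for _ in range(30):                      # fixed number of LBP sweeps
    new = {i: {} for i in range(N)}
    for i in range(N):
        for j in neighbors[i]:
            prod = node_pot[i].copy()
            for k in neighbors[i]:
                if k != j:
                    prod *= msg[k][i]
            m = edge_pot.T @ prod        # marginalize over node i's states
            new[i][j] = m / m.sum()
    msg = new

for n in range(N):                       # marginal beliefs (estimates for missing nodes)
    b = node_pot[n].copy()
    for k in neighbors[n]:
        b *= msg[k][n]
    b /= b.sum()
    print(f"node {n}: P(level) = {b.round(3)}")
```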

    Self-Calibration Methods for Uncontrolled Environments in Sensor Networks: A Reference Survey

    Continuing progress in sensor technology has expanded the range of low-cost, small, and portable sensors on the market, increasing the number and type of physical phenomena that can be measured with wirelessly connected sensors. Large-scale deployments of wireless sensor networks (WSN) involving hundreds or thousands of devices and limited budgets often constrain the choice of sensing hardware, which generally has reduced accuracy, precision, and reliability. It is therefore challenging to achieve good data quality and maintain error-free measurements over the whole system lifetime. Self-calibration or recalibration in ad hoc sensor networks to preserve data quality is essential, yet challenging, for several reasons, such as the existence of random noise and the absence of suitable general models. Calibration performed in the field, without accurate and controlled instrumentation, is said to be in an uncontrolled environment. This paper surveys current and fundamental self-calibration approaches and models for wireless sensor networks in uncontrolled environments.
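    As a simple illustration of field calibration, the sketch below fits a gain/offset correction for a low-cost sensor against a temporarily co-located reference instrument. This reference-based setup is only one scenario among those the survey covers (fully blind self-calibration would replace the reference with consensus among neighboring sensors), and all signals and noise levels here are synthetic assumptions.

```python
import numpy as np

# Hypothetical sketch: calibrate a low-cost sensor against a co-located reference
# by fitting a gain/offset model  reference ~ gain * raw + offset  (least squares).
rng = np.random.default_rng(0)

true_signal = 20 + 5 * np.sin(np.linspace(0, 6 * np.pi, 500))   # e.g. temperature
reference = true_signal + rng.normal(0, 0.1, 500)               # accurate instrument
raw = 1.3 * true_signal - 4.0 + rng.normal(0, 0.5, 500)         # miscalibrated sensor

# Fit calibration parameters from a period when both devices overlap.
A = np.column_stack([raw, np.ones_like(raw)])
(gain, offset), *_ = np.linalg.lstsq(A, reference, rcond=None)

calibrated = gain * raw + offset
rmse_before = np.sqrt(np.mean((raw - reference) ** 2))
rmse_after = np.sqrt(np.mean((calibrated - reference) ** 2))
print(f"gain={gain:.3f} offset={offset:.3f}  RMSE {rmse_before:.2f} -> {rmse_after:.2f}")
```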

    A distributed compressive sensing technique for data gathering in Wireless Sensor Networks

    Compressive sensing is a new technique utilized for energy-efficient data gathering in wireless sensor networks. It is characterized by simple encoding and complex decoding. The strength of compressive sensing is its ability to reconstruct sparse or compressible signals from a small number of measurements without requiring any a priori knowledge of the signal structure. Because wireless sensor nodes are often deployed densely, the correlation among them can be exploited for further compression. By utilizing this spatial correlation, we propose a joint sparsity-based compressive sensing technique in this paper. Our approach employs Bayesian inference to build a probabilistic model of the signals and then applies a belief propagation algorithm as the decoding method to recover the common sparse signal. Simulation results show that our approach achieves significant gains in signal reconstruction accuracy and energy consumption compared with existing approaches.
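    The sketch below illustrates the core compressive sensing idea: recover a sparse signal from far fewer random projections than samples. It uses ISTA soft-thresholding purely as a simple stand-in decoder; the paper's own decoder is Bayesian belief propagation, and the signal dimensions and sparsity level here are illustrative assumptions.

```python
import numpy as np

# Compressive sensing toy example: k-sparse signal, m << n random measurements,
# recovery via ISTA (iterative soft-thresholding).
rng = np.random.default_rng(1)

n, m, k = 200, 60, 8                     # signal length, measurements, sparsity
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(0, 1, k)   # k-sparse "sensor field"

Phi = rng.normal(0, 1 / np.sqrt(m), (m, n))                # random measurement matrix
y = Phi @ x                                                # compressed measurements

# ISTA for  min_z  0.5 * ||y - Phi z||^2 + lam * ||z||_1
lam = 0.01
step = 1.0 / np.linalg.norm(Phi, 2) ** 2                   # 1 / spectral_norm(Phi)^2
z = np.zeros(n)
for _ in range(500):
    grad = Phi.T @ (Phi @ z - y)                           # gradient of the quadratic term
    z = z - step * grad
    z = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft-threshold

print("relative reconstruction error:", np.linalg.norm(z - x) / np.linalg.norm(x))
```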

    Thirty Years of Machine Learning: The Road to Pareto-Optimal Wireless Networks

    Future wireless networks have substantial potential to support a broad range of complex and compelling applications in both military and civilian fields, where users can enjoy high-rate, low-latency, low-cost, and reliable information services. Achieving this ambitious goal requires new radio techniques for adaptive learning and intelligent decision making, because of the complex, heterogeneous nature of network structures and wireless services. Machine learning (ML) algorithms have had great success in supporting big data analytics, efficient parameter estimation, and interactive decision making. Hence, in this article, we review the thirty-year history of ML by elaborating on supervised learning, unsupervised learning, reinforcement learning, and deep learning. Furthermore, we investigate their employment in compelling applications of wireless networks, including heterogeneous networks (HetNets), cognitive radios (CR), the Internet of Things (IoT), machine-to-machine (M2M) networks, and so on. This article aims to assist readers in clarifying the motivation and methodology of the various ML algorithms, so as to invoke them for hitherto unexplored services and scenarios of future wireless networks.

    A dependability framework for WSN-based aquatic monitoring systems

    Wireless Sensor Networks (WSN) are being progressively used in several application areas, particularly to collect data and monitor physical processes. Sensor nodes used in environmental monitoring applications, such as aquatic sensor networks, are often subject to harsh environmental conditions while monitoring complex phenomena. Non-functional requirements, such as reliability, security, and availability, are increasingly important and must be accounted for during application development. For that purpose, there is a large body of knowledge on dependability techniques for distributed systems, which provides a good basis for understanding how to satisfy these non-functional requirements in WSN-based monitoring applications. Given the data-centric nature of monitoring applications, it is particularly important to ensure that data is reliable or, more generally, that it has the necessary quality. The problem of ensuring the desired quality of data for dependable monitoring using WSNs is studied herein. From a dependability-oriented perspective, the possible impairments to dependability and the prominent existing solutions to remove or mitigate these impairments are reviewed. Despite the variety of components that may form a WSN-based monitoring system, particular attention is given to understanding which faults can affect sensors, how they affect the quality of the information, and how this quality can be improved and quantified. Open research issues for the specific case of aquatic monitoring applications are also discussed. One of the challenges in achieving dependable system behavior is to overcome the external disturbances affecting sensor measurements and to detect failure patterns in sensor data. This is a particular problem in environmental monitoring, due to the difficulty of distinguishing a faulty behavior from the representation of a natural phenomenon. Existing solutions for failure detection assume that physical processes can be accurately modeled, or that there are large deviations that may be detected using coarse techniques, or, more commonly, that the sensor network is dense enough to contain value-redundant sensors. This thesis defines a new methodology for dependable data quality in environmental monitoring systems, aiming to detect faulty measurements and increase the quality of sensor data. The methodology is built around a generically applicable design that can be employed with any environmental sensor network dataset. It is evaluated on several datasets from different WSNs, where machine learning is used to model each sensor's behavior by exploiting correlated data provided by neighboring sensors. Data fusion strategies are explored to effectively detect potential failures at each sensor and, simultaneously, to distinguish truly abnormal measurements from deviations due to natural phenomena. This is accomplished through the successful application of the methodology to detect and correct outlier, offset, and drift failures in datasets from real monitoring networks. In the future, the methodology can be applied to optimize the data quality control processes of new and already operating monitoring networks, and to assist in network maintenance operations.
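    The sketch below illustrates the neighbor-correlation idea behind such fault detection: model one sensor from its neighbors with a simple linear predictor and flag measurements whose residual is unusually large (a potential outlier, offset, or drift). The synthetic signals, the linear model, and the 4-sigma threshold are illustrative assumptions, not the thesis's actual pipeline.

```python
import numpy as np

# Detect an injected offset fault in one sensor using predictions from neighbors.
rng = np.random.default_rng(2)

t = np.linspace(0, 10, 1000)
field = 15 + 3 * np.sin(t)                              # shared environmental signal
neighbors = np.column_stack([field + rng.normal(0, 0.2, t.size) for _ in range(3)])
target = field + rng.normal(0, 0.2, t.size)
target[600:] += 2.0                                     # inject an offset fault

# Fit a linear predictor on a clean training window, then score the rest.
train = slice(0, 400)
A = np.column_stack([neighbors[train], np.ones(400)])
w, *_ = np.linalg.lstsq(A, target[train], rcond=None)

pred = np.column_stack([neighbors, np.ones(t.size)]) @ w
residual = target - pred
sigma = residual[train].std()
flags = np.abs(residual) > 4 * sigma                    # flag suspicious samples

print("flagged fraction after fault injection:", flags[600:].mean().round(2))
print("false-alarm fraction before the fault:", flags[:600].mean().round(3))
```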

    Energy-aware Sparse Sensing of Spatial-temporally Correlated Random Fields

    This dissertation focuses on the development of theories and practices for energy-aware sparse sensing of random fields that are correlated in the space and/or time domains. The objective of sparse sensing is to reduce the number of sensing samples in space and/or time, and thus the energy consumption and complexity of the sensing system. Both centralized and decentralized sensing schemes are considered. First, we study energy-efficient level set estimation (LSE) of random fields correlated in time and/or space under a total power constraint. We consider uniform sampling schemes for a sensing system with a single sensor and for a linear sensor network with sensors distributed uniformly on a line, where sensors employ a fixed sampling rate to minimize the long-term LSE error probability. Exact analytical cost functions and their respective upper bounds for these sampling schemes are derived using an optimum thresholding-based LSE algorithm, and the design parameters of the schemes are optimized by minimizing their respective cost functions. With these analytical results, we can identify the optimum sampling period and/or node distance that minimizes the LSE error probability. Second, we propose active sparse sensing schemes with LSE of a spatial-temporally correlated random field using a limited number of spatially distributed sensors. In these schemes, a central controller dynamically selects a limited number of sensing locations according to the information revealed by past measurements, with the objective of minimizing the expected level set estimation error. The expected estimation error probability is expressed explicitly as a function of the selected sensing locations, and this result is used to formulate optimal sensing-location selection as a combinatorial problem. Two low-complexity greedy algorithms are developed using analytical upper bounds on the expected estimation error probability. Finally, we study distributed estimation of a spatially correlated random field with decentralized wireless sensor networks (WSNs). We propose a distributed iterative estimation algorithm that defines the procedures for both information propagation and local estimation in each iteration. The key parameters of the algorithm, including an edge weight matrix and a sample weight matrix, are designed following asymptotically optimum criteria. It is shown that asymptotically optimum performance can be achieved by distributively projecting the measurement samples onto a subspace related to the covariance matrices of the data and noise samples.
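    To illustrate the flavor of greedy sensing-location selection, the sketch below picks locations for a spatially correlated Gaussian field by repeatedly choosing the candidate that most reduces total posterior variance. This generic variance-reduction proxy stands in for the dissertation's level-set error bounds, and the covariance kernel, noise level, and budget are assumed values.

```python
import numpy as np

# Greedy selection of sensing locations for a spatially correlated Gaussian field.
rng = np.random.default_rng(3)

locs = np.linspace(0, 1, 50)[:, None]                  # candidate sensing locations
d = np.abs(locs - locs.T)
K = np.exp(-(d / 0.1) ** 2)                            # squared-exponential covariance
noise = 0.05                                           # measurement noise variance

selected = []
cov = K.copy()                                         # prior covariance of the field
for _ in range(5):                                     # budget: 5 active sensors
    # total variance reduction from observing candidate i:
    #   sum_j cov[j, i]^2 / (cov[i, i] + noise)
    gains = (cov ** 2).sum(axis=0) / (np.diag(cov) + noise)
    gains[selected] = -np.inf                          # do not reselect a location
    i = int(np.argmax(gains))
    selected.append(i)
    # rank-one posterior covariance update after observing location i
    ki = cov[:, [i]]
    cov = cov - (ki @ ki.T) / (cov[i, i] + noise)

print("greedy sensing locations:", np.sort(locs[selected].ravel()).round(2))
```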