
    Improving Temporal Coverage of an Energy-Efficient Data Extraction Algorithm for Environmental Monitoring Using Wireless Sensor Networks

    Collecting raw data from a wireless sensor network for environmental monitoring applications can be a difficult task due to the high energy consumption involved. This is especially true when the application requires specialized sensors with very high energy consumption, e.g., hydrological sensors for monitoring marine environments. This paper introduces a technique for reducing energy consumption by minimizing sensor sampling operations. In addition, we illustrate how a randomized algorithm can be used to improve temporal coverage, so that the time between the occurrence of an event and its detection is minimized. We evaluate our approach using real data collected from a sensor network deployment on the Great Barrier Reef.
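
    The paper's algorithm is not reproduced here; the sketch below illustrates the general idea of randomized sampling in Python, with hypothetical names (randomized_schedule, detection_delay) and parameters. Drawing each period's sample times uniformly at random, rather than at a fixed phase, keeps the sampling budget unchanged while preventing the event-to-detection delay from being tied to a fixed offset.

    import random

    def randomized_schedule(period_s, samples_per_period, horizon_s, seed=42):
        """Hypothetical illustration, not the paper's algorithm: draw each
        period's sample times uniformly at random, so the sampling budget
        stays fixed but detection delay no longer depends on a fixed phase."""
        rng = random.Random(seed)
        times, t0 = [], 0.0
        while t0 < horizon_s:
            times.extend(sorted(t0 + rng.uniform(0.0, period_s)
                                for _ in range(samples_per_period)))
            t0 += period_s
        return times

    def detection_delay(event_time, schedule):
        """Time from an event until the first sample at or after it."""
        return next((t - event_time for t in schedule if t >= event_time),
                    float("inf"))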

    Enabling Compression in Tiny Wireless Sensor Nodes

    A Wireless Sensor Network (WSN) is a network composed of sensor nodes communicating among themselves and deployed at large scale (from tens to thousands of nodes) for applications such as environmental, habitat, and structural monitoring, disaster management, equipment diagnostics, alarm detection, and target classification. In WSNs, sensor nodes are typically distributed randomly over the area under observation with very high density. Each node is a small device able to collect information from the surrounding environment through one or more sensors, to process this information locally, and to communicate it to a data collection centre called the sink or base station. WSNs are currently an active research area, mainly due to the potential of their applications. However, the deployment of a large-scale WSN still requires solutions to a number of technical challenges that stem primarily from the features of the sensor nodes, such as limited computational power, reduced communication bandwidth, and small storage capacity. Further, since sensor nodes are typically powered by batteries of limited capacity, energy is a primary constraint in the design and deployment of WSNs. Datasheets of commercial sensor nodes show that data communication is very expensive in terms of energy consumption, whereas data processing consumes significantly less: the energy cost of receiving or transmitting a single bit of information is approximately the same as that required by the processing unit to execute a thousand operations. The energy consumption of the sensing unit, on the other hand, depends on the specific sensor type; in several cases, however, it is negligible with respect to the energy consumed by the communication unit, and sometimes also by the processing unit. Thus, to extend the lifetime of a WSN, most of the energy conservation schemes proposed in the literature aim to minimize the energy consumption of the communication unit (Croce et al., 2008). To achieve this objective, two main approaches have been followed: power saving through duty cycling and in-network processing. Duty cycling schemes define coordinated sleep/wakeup schedules among the nodes in the network; a detailed description of these techniques applied to WSNs can be found in (Anastasi et al., 2009). In-network processing, on the other hand, reduces the amount of information to be transmitted by means of aggregation (Boulis et al., 2003; Croce et al., 2008; Di Bacco et al., 2004; Fan et al., 2007).
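
    As a rough illustration of the communication-versus-computation tradeoff described above (a sketch with assumed names and figures, not code from the chapter), a node can spend a few CPU operations summarizing a window of readings and avoid transmitting most of the raw bits:

    def summarize(window):
        """In-network aggregation sketch: condense a window of raw readings
        into a small summary so the radio sends a handful of values instead
        of the whole window. If transmitting one bit costs on the order of a
        thousand CPU operations, the arithmetic below is effectively free."""
        n = len(window)
        return {"count": n,
                "mean": sum(window) / n,
                "min": min(window),
                "max": max(window)}

    # Example: 100 raw 16-bit readings (~1600 bits) shrink to a four-field
    # summary (~64 bits), cutting the dominant radio cost by roughly 25x.
    print(summarize([20.1, 20.3, 19.8, 20.0] * 25))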

    Experiments and Analysis of Quality and Energy-Aware Data Aggregation Approaches in WSNs

    A wireless sensor network consists of autonomous devices able to collect various data from the area that surrounds them. However, the resources available to sensors are limited and, thus, in order to guarantee a longer life for all the network components, it is necessary to adopt energy-saving methods. Considering that the transmission phase is the main cause of energy dissipation, this paper presents an approach aimed at saving energy by capturing and aggregating signals instead of sending them in raw form. However, aggregation should not imply the loss of useful data. For this reason, information about possible outliers is preserved, and the aggregated values have to satisfy data quality (i.e., accuracy, precision, and timeliness) requirements. In order to show the correctness and validity of the proposed method, it has been tested on a real case study and its performance has been compared with two other established approaches.
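
    The paper's exact aggregation rule is not given in the abstract; the following sketch, with hypothetical names and a simple z-score rule standing in for the outlier test, shows the general pattern of aggregating a window while preserving outlier information and reporting a precision indicator alongside the aggregate:

    import statistics

    def aggregate_with_outliers(window, z_thresh=3.0):
        """Illustrative sketch, not the paper's method: readings more than
        z_thresh standard deviations from the mean are kept and reported
        separately rather than averaged away, so aggregation does not lose
        information about possible outliers."""
        mu = statistics.fmean(window)
        sigma = statistics.pstdev(window)
        outliers = [x for x in window
                    if sigma > 0 and abs(x - mu) > z_thresh * sigma]
        inliers = [x for x in window
                   if sigma == 0 or abs(x - mu) <= z_thresh * sigma]
        return {"aggregate": statistics.fmean(inliers) if inliers else mu,
                "outliers": outliers,   # preserved, not discarded
                "precision": sigma,     # stands in for a quality indicator
                "count": len(window)}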

    eSENSE: energy efficient stochastic sensing framework for wireless sensor platforms


    Outlier-Aware Data Aggregation in Sensor Networks

    In this paper we discuss a robust aggregation framework that can detect spurious measurements and refrain from incorporating them in the computed aggregate values. Our framework can consider different definitions of an outlier node, based on a specified minimum support. Our experimental evaluation demonstrates the benefits of our approach.
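
    The abstract does not spell out the minimum-support rule; a minimal sketch of one plausible reading, with hypothetical names and parameters, excludes a measurement from the aggregate unless enough peer readings fall within a tolerance of it:

    def robust_mean(readings, tolerance, min_support):
        """One possible reading of 'minimum support', not necessarily the
        paper's definition: a measurement contributes to the aggregate only
        if at least min_support other readings lie within tolerance of it;
        otherwise it is treated as spurious and left out."""
        supported = []
        for i, x in enumerate(readings):
            peers = sum(1 for j, y in enumerate(readings)
                        if j != i and abs(x - y) <= tolerance)
            if peers >= min_support:
                supported.append(x)
        return sum(supported) / len(supported) if supported else None

    # Example: 35.0 finds no peers within tolerance and is excluded.
    print(robust_mean([20.1, 20.4, 19.9, 35.0, 20.2],
                      tolerance=1.0, min_support=2))   # -> 20.15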

    Efficiently Maintaining Distributed Model-Based Views on Real-Time Data Streams

    Minimizing communication cost is a fundamental problem in large-scale federated sensor networks. Maintaining model-based views of data streams has attracted attention because it permits efficient data communication by transmitting the parameter values of models instead of the original data streams. We propose a framework that exploits the advantages of model-based views for communication-efficient stream data processing over federated sensor networks, yet significantly improves on state-of-the-art approaches. The framework is generic: any time-parameterized model can be plugged in, while accuracy guarantees for query results are ensured throughout the large-scale networks. In addition, we boost the performance of the framework with the coded model update, which enables efficient model updates from one node to another. It predetermines parameter values for the model, updates only identifiers of the parameter values, and compresses the identifiers using bitmaps. Moreover, we propose a correlation model, named the coded inter-variable model, that merges the efficiency of the coded model update with that of correlation models. Empirical studies with real data demonstrate that our proposal achieves substantial communication reduction, outperforming state-of-the-art methods.
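
    As an illustration of the model-based-view principle (a minimal sketch with the simplest possible model and hypothetical names, not the paper's framework), the sender transmits a parameter update only when the receiver's model would deviate from the true stream by more than a bound eps, which is what yields the accuracy guarantee:

    def model_view_updates(stream, eps):
        """Model-based view sketch using a trivial time-parameterized model,
        'predict the last transmitted value': an update (here, a single
        parameter) is sent only when the receiver's prediction would err by
        more than eps, so every reconstructed value stays within eps of the
        true stream."""
        last_sent = None
        updates = []
        for t, v in stream:
            if last_sent is None or abs(v - last_sent) > eps:
                last_sent = v
                updates.append((t, v))   # parameter update, not a raw sample
        return updates

    # Example: a slowly drifting signal of 1,000 samples needs only a
    # handful of updates to stay within eps = 0.1 of the original.
    stream = [(t, 20.0 + 0.001 * t) for t in range(1000)]
    print(len(model_view_updates(stream, eps=0.1)))   # far fewer than 1000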

    Large Scale and Streaming Time Series Segmentation and Piece-Wise Approximation (Extended Version)

    Segmenting a time series, or approximating it with a piecewise linear function, is often needed when handling data in the time domain, e.g., to detect outliers, clean data, and detect events. The data ranges from ECG signals and traffic monitors to stock prices and sensor networks. Modern datasets of this type are large, and in many cases infinite in the sense that the data is a stream rather than a finite sample. Therefore, in order to segment it, an algorithm has to scale gracefully with the size of the data. Dynamic Programming (DP) can find the optimal segmentation; however, the DP approach has a complexity of O(T^2) and thus can handle neither datasets with millions of elements nor streaming data, so various heuristics are used in practice. This study shows that if the approximation measure has an inverse triangle inequality property (ITIP), the solution of the dynamic program can be computed in linear time, and streaming data can be handled too. The ITIP is shown to hold in many cases of interest. The speedup due to the new algorithms is evaluated on a variety of datasets and ranges from 8x to 8,200x over the DP solution, without sacrificing accuracy. Confidence intervals for segmentations are derived as well.
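
    To make the baseline concrete, here is a minimal sketch of the classic O(T^2) dynamic program the abstract refers to, using a squared-error cost for piecewise-constant fitting and a per-segment penalty (the names and the penalty formulation are illustrative; the paper's linear-time ITIP-based algorithm is not reproduced here):

    import math

    def dp_segmentation(xs, lam):
        """Classic O(T^2) dynamic program: opt[t] = min over s < t of
        opt[s] + cost(s, t) + lam, where cost(s, t) is the squared error of
        fitting xs[s:t] by its mean and lam penalizes each extra segment.
        This is the quadratic baseline that the paper speeds up."""
        T = len(xs)
        p1 = [0.0] * (T + 1)   # prefix sums of x
        p2 = [0.0] * (T + 1)   # prefix sums of x^2
        for i, x in enumerate(xs):
            p1[i + 1] = p1[i] + x
            p2[i + 1] = p2[i] + x * x

        def cost(s, t):   # squared error of xs[s:t] around its mean
            n = t - s
            mean = (p1[t] - p1[s]) / n
            return (p2[t] - p2[s]) - n * mean * mean

        opt = [0.0] + [math.inf] * T
        cut = [0] * (T + 1)
        for t in range(1, T + 1):
            for s in range(t):
                c = opt[s] + cost(s, t) + lam
                if c < opt[t]:
                    opt[t], cut[t] = c, s

        segments, t = [], T   # recover (start, end) index pairs
        while t > 0:
            segments.append((cut[t], t))
            t = cut[t]
        return segments[::-1], opt[T]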