
    Data redundancy reduction for energy-efficiency in wireless sensor networks: a comprehensive review

    Wireless Sensor Networks (WSNs) play a significant role in providing an extraordinary infrastructure for monitoring environmental variations such as climate change, volcanoes, and other natural disasters. In a hostile environment, the sensors' energy is one of the crucial concerns in collecting and analyzing accurate data. However, various environmental conditions, short-distance adjacent devices, and the intensive use of resources, i.e., battery power, in WSNs lead to a high likelihood of redundant data. Accordingly, reducing redundant data is necessary both to conserve resources and to preserve information accuracy. In this context, this paper presents a comprehensive review of the existing energy-efficient data redundancy reduction schemes for WSNs, with their benefits and limitations. The concept of data redundancy reduction is classified into three levels: node, cluster head, and sink. Additionally, this paper highlights existing key issues and challenges and suggests directions for future research on reducing data redundancy.
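    The node-level reduction the abstract mentions is often a simple temporal filter: a sensor transmits a reading only when it deviates from the last transmitted value by more than a tolerance. The sketch below is illustrative only (the function name and threshold scheme are assumptions, not a scheme from the review):

```python
# Hypothetical sketch of node-level temporal redundancy suppression:
# a node forwards a reading only when it differs from the last
# transmitted value by more than a tolerance, dropping near-duplicates.

def suppress_redundant(readings, tolerance):
    """Yield only the readings that change by more than `tolerance`."""
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > tolerance:
            last_sent = value
            yield value

# Stable temperature samples are suppressed; only real changes are sent.
sent = list(suppress_redundant([20.0, 20.1, 20.05, 21.5, 21.4, 23.0],
                               tolerance=1.0))
```

    Cluster-head and sink-level schemes apply the same idea across nodes, e.g., discarding overlapping reports from spatially adjacent sensors.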

    Self-learning Anomaly Detection in Industrial Production


    IPv6 multicast forwarding in RPL-based wireless sensor networks

    In wireless sensor deployments, network-layer multicast can be used to improve the bandwidth and energy efficiency of a variety of applications, such as service discovery or network management. However, despite efforts to adopt IPv6 in networks of constrained devices, multicast has been somewhat overlooked. The Multicast Forwarding Using Trickle (Trickle Multicast) internet draft is one of the most noteworthy efforts. The specification of the IPv6 routing protocol for low power and lossy networks (RPL) also attempts to address the area but leaves many questions unanswered. In this paper we highlight our concerns about both of these approaches. Subsequently, we present our alternative mechanism, called the stateless multicast RPL forwarding algorithm (SMRF), which addresses the aforementioned drawbacks. Having extended the TCP/IP engine of the Contiki embedded operating system to support both trickle multicast (TM) and SMRF, we present an in-depth comparison, backed by simulated evaluation as well as by experiments conducted on a multi-hop hardware testbed. Results demonstrate that SMRF achieves significant delay and energy efficiency improvements at the cost of a small increase in packet loss. The outcome of our hardware experiments shows that the simulation results were realistic. Lastly, we evaluate both algorithms in terms of code size and memory requirements, highlighting SMRF's low implementation complexity. Both implementations have been made available to the community for adoption.
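    The "stateless" property of SMRF comes from reusing the RPL tree rather than keeping per-group forwarding state: a node processes a multicast datagram only if it arrived from its RPL preferred parent, so traffic flows down the DODAG. A loose sketch of that acceptance rule, with illustrative names and simplifications (the real algorithm also handles forwarding delays and duplicate detection):

```python
# Loose sketch of an SMRF-style stateless multicast acceptance rule.
# A node acts on a multicast packet only when it comes from its RPL
# preferred parent; no per-group forwarding state is required.

def handle_multicast(packet_src, preferred_parent, joined_groups, group):
    """Return (accept_locally, forward_down) for an incoming packet."""
    if packet_src != preferred_parent:
        return (False, False)           # drop: not from the preferred parent
    accept = group in joined_groups     # deliver up the stack if subscribed
    forward = True                      # rebroadcast toward the sub-DODAG
    return (accept, forward)
```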

    Dynamic Measurements with Scanning Probe Microscopy: Surface Studies Using Nanostructured Test Platforms of Metalloporphyrins, Nanoparticles and Amyloid Fibrils

    A hybrid imaging mode for the characterization of magnetic nanomaterials has been developed, using atomic force microscopy (AFM) combined with electromagnetic sample actuation. Instead of using a coated AFM probe as a magnetic sensor, our strategy is to use a nonmagnetic probe with contact-mode AFM to characterize the vibration of magnetic and superparamagnetic nanomaterials responding to the flux of an AC electromagnetic field. We refer to this hybrid imaging mode as magnetic sample modulation (MSM-AFM). An oscillating magnetic field is produced by applying an AC current to a wire-coil solenoid placed under the sample stage, enabling the driving frequency and strength of the magnetic field to be tuned. When the AC field is on, the AFM probe is scanned in contact with the sample to sense periodic changes in the force and motion of the vibrating nanomaterials. With MSM, amplitude and phase responses along with spatial maps of the topography channel can be collected simultaneously. A requirement for MSM is that the samples must be free to vibrate yet remain attached to the surface. Particle lithography was used to prepare well-defined test platforms of ring structures of magnetic or superparamagnetic nanomaterials. Capillary filling of polydimethylsiloxane (PDMS) molds was applied to generate microscale stripes of FeNi3 nanoparticles as test platforms. The MSM-AFM imaging mode was used successfully to characterize FeNi3 nanoparticles, cobalt nanoparticles, octa-substituted porphyrin nanocrystals, and ionic liquid nanoGUMBOS with dimensions ranging from 1 to 200 nm. Dynamic MSM-AFM measurements can be obtained by placing the tip on a vibrating nanoparticle and sweeping the frequency or field strength. Changes in frequency spectra and vibrational amplitude can be mapped for nanoparticles of different sizes, shapes, and compositions. The MSM-AFM imaging mode provides a useful tool for investigating changes in the size-dependent magnetic properties of materials at the nanoscale. Samples of designed amyloid proteins were characterized ex situ using scanning probe microscopy. The progressive growth and fibrillization of amyloid-β over extended time intervals was visualized with high resolution using AFM.

    Wireless sensor network as a distributed database

    Wireless sensor networks (WSNs) have played a role in various fields. In-network data processing is one of the most important and challenging techniques, as it affects key features of WSNs: energy consumption, node life cycles, and network performance. In in-network processing, an intermediate node, or aggregator, fuses or aggregates sensor data collected from a group of sensors before transferring them to the base station. The advantage of this approach is that it minimizes the amount of information transmitted by resource-constrained nodes. This thesis introduces the development of a hybrid in-network data processing scheme for WSNs that satisfies the WSN constraints. An architecture for in-network data processing is proposed at three levels: clustering, data compression, and data mining. At the clustering level, Neighbour-aware Multipath Cluster Aggregation (NMCA) combines cluster-based and multipath approaches to cope with different packet loss rates. At the compression level, data compression schemes and an Optimal Dynamic Huffman (ODH) algorithm compress data at the cluster head. At the data mining level, a semantic data-mining model for fire detection is developed to extract information from the raw data, improving data accuracy and detecting fire events in simulation. A demonstration indoor location system using the in-network data processing approach is built to test the energy reduction achieved by the designed strategy. In conclusion, the benefits that this technical work can provide for in-network data processing are discussed, and specific contributions and future work are highlighted.
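    The aggregator role described above can be pictured as a cluster head fusing its members' readings into one summary packet before transmission to the base station, trading a little computation for much less radio traffic. A minimal illustrative sketch (the summary format is an assumption, not the thesis's NMCA or ODH scheme):

```python
# Illustrative sketch of in-network aggregation at a cluster head:
# readings from all member nodes are fused into a single (min, max, mean)
# summary, so one packet replaces many raw-data transmissions.

def aggregate(cluster_readings):
    """Fuse per-node reading lists into one (min, max, mean) tuple."""
    values = [v for node in cluster_readings.values() for v in node]
    return (min(values), max(values), sum(values) / len(values))

summary = aggregate({"node1": [20.0, 22.0], "node2": [21.0]})
```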

    Scalable and fault-tolerant data stream processing on multi-core architectures

    With increasing data volumes and velocity, many applications are shifting from the classical “process-after-store” paradigm to a stream processing model: data is produced and consumed as continuous streams. Stream processing captures latency-sensitive applications as diverse as credit card fraud detection and high-frequency trading. These applications are expressed as queries of algebraic operations (e.g., aggregation) over the most recent data using windows, i.e., finite evolving views over the input streams. To guarantee correct results, streaming applications require precise window semantics (e.g., temporal ordering) for operations that maintain state. While high processing throughput and low latency are performance desiderata for stateful streaming applications, achieving both poses challenges. Computing the state of overlapping windows causes redundant aggregation operations: incremental execution (i.e., reusing previous results) reduces latency but prevents parallelization; at the same time, parallelizing window execution for stateful operations with precise semantics demands ordering guarantees and state access coordination. Finally, streams and state must be recovered to produce consistent and repeatable results in the event of failures. Given the rise of shared-memory multi-core CPU architectures and high-speed networking, we argue that it is possible to address these challenges in a single node without compromising window semantics, performance, or fault-tolerance. In this thesis, we analyze, design, and implement stream processing engines (SPEs) that achieve high performance on multi-core architectures. 
To this end, we introduce new approaches for in-memory processing that address the previous challenges: (i) for overlapping windows, we provide a family of window aggregation techniques that enable computation sharing based on the algebraic properties of aggregation functions; (ii) for parallel window execution, we balance parallelism and incremental execution by developing abstractions for both and combining them in a novel design; and (iii) for reliable single-node execution, we enable strong fault-tolerance guarantees without sacrificing performance by reducing the required disk I/O bandwidth using a novel persistence model. We combine the above to implement an SPE that processes hundreds of millions of tuples per second with sub-second latencies. These results reveal the opportunity to reduce resource and maintenance footprint by replacing cluster-based SPEs with single-node deployments.
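    The computation sharing for overlapping windows mentioned in (i) can be illustrated with the simplest algebraic case, an invertible aggregation such as sum: each sliding-window result reuses the previous one by adding the incoming tuple and subtracting the evicted one, instead of recomputing the whole window. A minimal sketch under that assumption (non-invertible functions such as max need different techniques, e.g., two-stacks style sharing):

```python
from collections import deque

# Minimal sketch of incremental sliding-window aggregation for an
# invertible function (sum): each result is derived from the previous
# one in O(1) by adding the new tuple and subtracting the evicted one.

def sliding_sums(stream, window_size):
    """Yield the sum of every full sliding window over the stream."""
    window, total = deque(), 0
    for value in stream:
        window.append(value)
        total += value
        if len(window) > window_size:
            total -= window.popleft()   # evict the oldest tuple's value
        if len(window) == window_size:
            yield total
```

    Incremental execution like this reduces latency but serializes the computation, which is exactly the tension with parallel window execution that point (ii) addresses.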