
    Early forest fire detection by vision-enabled wireless sensor networks

    Wireless sensor networks constitute a powerful technology particularly suitable for environmental monitoring. With regard to wildfires, they enable low-cost fine-grained surveillance of hazardous locations like wildland-urban interfaces. This paper presents work developed during the last 4 years targeting a vision-enabled wireless sensor network node for the reliable, early on-site detection of forest fires. The tasks carried out ranged from devising a robust vision algorithm for smoke detection to the design and physical implementation of a power-efficient smart imager tailored to the characteristics of such an algorithm. By integrating this smart imager with a commercial wireless platform, we endowed the resulting system with vision capabilities and radio communication. Numerous tests were arranged in different natural scenarios in order to progressively tune all the parameters involved in the autonomous operation of this prototype node. The last test carried out, involving the prescribed burning of a 95 × 20 m shrub plot, confirmed the high degree of reliability of our approach in terms of both successful early detection and a very low false-alarm rate.
    Funding: Ministerio de Ciencia e Innovación TEC2009-11812, IPT-2011-1625-430000; Office of Naval Research (USA) N000141110312; Centro para el Desarrollo Tecnológico e Industrial IPC-2011100
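
    The abstract describes a sense-detect-report pipeline but does not spell out the node's control loop. Below is a minimal Python sketch of how such autonomous operation might be organised; capture_frame, detect_smoke and radio_send are hypothetical placeholders, and the persistence counter is an illustrative assumption for keeping the false-alarm rate low, not the paper's actual algorithm.

        # Hypothetical sketch of an autonomous sense-detect-report loop.
        # capture_frame(), detect_smoke() and radio_send() are placeholder
        # names; the actual smoke-detection algorithm and radio API are not
        # given in the abstract.
        import time

        FRAME_PERIOD_S = 2.0    # assumed duty cycle for low-power operation
        ALARM_PERSISTENCE = 3   # assumed: N consecutive positives before alarming

        def run_node(capture_frame, detect_smoke, radio_send):
            consecutive_hits = 0
            while True:
                frame = capture_frame()          # pre-processed smart-imager output
                if detect_smoke(frame):          # vision algorithm tuned for smoke
                    consecutive_hits += 1
                    if consecutive_hits >= ALARM_PERSISTENCE:
                        radio_send({"event": "smoke", "time": time.time()})
                        consecutive_hits = 0
                else:
                    consecutive_hits = 0
                time.sleep(FRAME_PERIOD_S)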

    Focal-plane generation of multi-resolution and multi-scale image representation for low-power vision applications

    Early vision stages represent a considerably heavy computational load. A huge amount of data needs to be processed under strict timing and power requirements. Conventional architectures usually fail to meet the specifications in many application fields, especially when autonomous vision-enabled devices are to be implemented, as in lightweight UAVs, robotics or wireless sensor networks. A bioinspired architectural approach can be employed, consisting of a hierarchical division of the processing chain that conveys the highest computational demand to the focal plane. There, distributed processing elements, concurrent with the photosensitive devices, influence the image capture and generate a pre-processed representation of the scene where only the information of interest for subsequent stages remains. These focal-plane operators are implemented by analog building blocks, which may individually be somewhat imprecise but as a whole render the appropriate image processing very efficiently. As a proof of concept, we have developed a 176×144-pixel smart CMOS imager that delivers lighter but enriched representations of the scene. Each pixel of the array contains a photosensor and some switches and weighted paths allowing reconfigurable resolution and spatial filtering. An energy-based image representation is also supported. These functionalities greatly simplify the operation of the subsequent digital processor implementing the high-level logic of the vision algorithm. The resulting figures, 5.6 mW @ 30 fps, permit the integration of the smart image sensor with a wireless interface module (Imote2 from Memsic Corp.) for the development of vision-enabled WSN applications.
    Funding: Junta de Andalucía 2006-TIC-2352; Ministerio de Ciencia e Innovación TEC2009-11812; Office of Naval Research (USA) N000141110312
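
    Since the focal plane computes the multi-resolution and energy-based representations in analog, an equivalent digital formulation may help clarify what the per-pixel switches and weighted paths achieve. A minimal NumPy sketch follows; the 4×4 block size and the variance-based energy measure are illustrative assumptions, not the chip's actual circuit behaviour.

        import numpy as np

        def block_average(img, block):
            """Reduce resolution by averaging non-overlapping block x block neighbourhoods."""
            h, w = img.shape
            img = img[:h - h % block, :w - w % block]   # crop to a multiple of block
            return img.reshape(h // block, block, w // block, block).mean(axis=(1, 3))

        def block_energy(img, block):
            """One plausible 'energy' per block: the variance of its pixel intensities."""
            h, w = img.shape
            img = img[:h - h % block, :w - w % block]
            return img.reshape(h // block, block, w // block, block).var(axis=(1, 3))

        # Example at the imager's 176x144 resolution:
        frame = np.random.rand(144, 176)
        coarse = block_average(frame, 4)   # 36x44 reduced-resolution view
        energy = block_energy(frame, 4)    # energy-based representation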

    City Data Fusion: Sensor Data Fusion in the Internet of Things

    The Internet of Things (IoT) has gained substantial attention recently and plays a significant role in smart city application deployments. A number of such smart city applications depend on sensor fusion capabilities in the cloud from diverse data sources. We introduce the concept of IoT and present in detail ten different parameters that govern our sensor data fusion evaluation framework. We then evaluate the current state of the art in sensor data fusion against this framework. Our main goal is to examine and survey different sensor data fusion research efforts based on our evaluation framework. The major open research issues related to sensor data fusion are also presented.
    Comment: Accepted to be published in International Journal of Distributed Systems and Technologies (IJDST), 201

    Field test of multi-hop image sensing network prototype on a city-wide scale

    Open Access funded by Chongqing University of Posts and Telecommunications, under a Creative Commons license (https://creativecommons.org/licenses/by-nc-nd/4.0/).
    Wireless multimedia sensor networks drastically stretch the horizon of traditional monitoring and surveillance systems, and most existing research has utilised Zigbee or WiFi as the communication technology. Both technologies use ultra-high frequencies (mainly 2.4 GHz) and suffer from a relatively short transmission range (roughly 100 m line-of-sight). The objective of this paper is to assess the feasibility and potential of transmitting image information using RF modules at lower frequencies (e.g. 433 MHz) in order to achieve a larger-scale deployment, such as a city scenario. The Arduino platform is used for its low cost and simplicity. The details of the hardware properties are elaborated in the article, followed by an investigation of optimum configurations for the system. After initial range testing demonstrated a line-of-sight transmission distance of over 2000 m, the prototype network was installed in a real-life city plot for further examination of its performance. A range of suitable applications is proposed along with suggestions for future research.
    Peer reviewed
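
    Transmitting an image over a low-rate 433 MHz link requires splitting it into many small numbered packets and reassembling them at the sink. A minimal Python sketch of one such scheme follows; the 32-byte payload and the id/sequence/total header layout are assumptions for illustration, not the prototype's actual protocol.

        import struct

        PAYLOAD = 32  # assumed per-packet payload budget, typical for sub-GHz modules

        def packetize(image_bytes, image_id):
            """Yield packets of a 5-byte header (id, seq, total) plus a data chunk."""
            total = (len(image_bytes) + PAYLOAD - 1) // PAYLOAD
            for seq in range(total):
                chunk = image_bytes[seq * PAYLOAD:(seq + 1) * PAYLOAD]
                yield struct.pack(">BHH", image_id & 0xFF, seq, total) + chunk

        def reassemble(packets):
            """Rebuild the image once every sequence number has arrived."""
            chunks, total = {}, None
            for pkt in packets:
                _, seq, total = struct.unpack(">BHH", pkt[:5])
                chunks[seq] = pkt[5:]
            if total is not None and len(chunks) == total:
                return b"".join(chunks[i] for i in range(total))
            return None  # packets missing: a multi-hop network would retransmit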

    A sub-mW IoT-endnode for always-on visual monitoring and smart triggering

    This work presents a fully-programmable Internet of Things (IoT) visual sensing node that targets sub-mW power consumption in always-on monitoring scenarios. The system features a spatial-contrast 128×64 binary pixel imager with focal-plane processing. The sensor, when working in its lowest power mode (10 μW at 10 fps), provides as output the number of changed pixels. Based on this information, a dedicated camera interface, implemented on a low-power FPGA, wakes up an ultra-low-power parallel processing unit to extract context-aware visual information. We evaluate the smart sensor on three always-on visual triggering application scenarios. Triggering accuracy comparable to RGB image sensors is achieved at nominal lighting conditions, while consuming an average power between 193 μW and 277 μW, depending on context activity. The digital sub-system is extremely flexible, thanks to a fully-programmable digital signal processing engine, but still achieves 19× lower power consumption compared to MCU-based cameras with significantly lower on-board computing capabilities.
    Comment: 11 pages, 9 figures, submitted to IEEE IoT Journal
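
    The smart-triggering idea above reduces, in the lowest power mode, to comparing the sensor's changed-pixel count against a threshold before waking the parallel processing unit. A minimal Python sketch of that always-on loop; the threshold and the read_changed_pixel_count/wake_processor/sleep helpers are illustrative placeholders, not the node's firmware API.

        WAKE_THRESHOLD = 50   # assumed changed-pixel count that counts as activity
        FPS = 10              # low-power frame rate stated in the abstract

        def monitor(read_changed_pixel_count, wake_processor, sleep):
            """Always-on loop: stay in the 10 uW mode until the scene changes."""
            while True:
                n_changed = read_changed_pixel_count()  # sole output in low-power mode
                if n_changed >= WAKE_THRESHOLD:
                    wake_processor()   # extract context-aware visual information
                sleep(1.0 / FPS)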