13 research outputs found

    Experimental evaluation of a video streaming system for Wireless Multimedia Sensor Networks

    Wireless Multimedia Sensor Networks (WMSNs) are recently emerging as an extension of traditional scalar wireless sensor networks, with the distinctive feature of supporting the acquisition and delivery of multimedia content such as audio, images and video. In this paper, a complete framework is proposed and developed for streaming video flows in WMSNs. The framework is designed in a cross-layer fashion with three main building blocks: (i) a hybrid DPCM/DCT encoder; (ii) a congestion control mechanism; and (iii) a selective priority automatic repeat request (ARQ) mechanism at the MAC layer. The system has been implemented on the Intel Imote2 platform operated by TinyOS and thoroughly evaluated through testbed experiments on multi-hop WMSNs. The source code of the whole system is publicly available to enable reproducible research. © 2011 IEEE
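    The hybrid DPCM/DCT design mentioned above (temporal DPCM prediction followed by a block DCT of the residual) can be sketched as follows; the 8×8 block size, quantization step, and function names are illustrative assumptions, not the paper's actual implementation:

```python
import math

def dct_1d(x):
    """Naive 1-D DCT-II (orthonormal scaling)."""
    N = len(x)
    out = []
    for k in range(N):
        s = sum(x[n] * math.cos(math.pi * (n + 0.5) * k / N) for n in range(N))
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        out.append(c * s)
    return out

def dct_2d(block):
    """Separable 2-D DCT: transform rows, then columns."""
    rows = [dct_1d(r) for r in block]
    cols = [dct_1d(list(c)) for c in zip(*rows)]
    return [list(r) for r in zip(*cols)]

def encode_frame(frame, prev, q=8):
    """DPCM temporal prediction against the previous frame, then a
    quantized 8x8 DCT of each residual block (frame dims multiple of 8)."""
    h, w = len(frame), len(frame[0])
    residual = [[frame[y][x] - prev[y][x] for x in range(w)] for y in range(h)]
    coeffs = []
    for by in range(0, h, 8):
        for bx in range(0, w, 8):
            block = [residual[by + y][bx:bx + 8] for y in range(8)]
            coeffs.append([[round(v / q) for v in row] for row in dct_2d(block)])
    return coeffs
```

    A constant residual concentrates all energy in the DC coefficient of each block, which is what makes the scheme compressible.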

    A visual sensor network for object recognition: Testbed realization

    This work describes the implementation of an object recognition service on top of energy- and resource-constrained hardware. A complete pipeline for object recognition based on the BRISK visual features is implemented on Intel Imote2 sensor devices. The reference implementation is used to assess the performance of the object recognition pipeline in terms of processing time and recognition accuracy.
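    Binary descriptors such as BRISK are compared with the Hamming distance. A minimal pure-Python sketch of the matching stage, with descriptors as byte strings and a made-up distance threshold (not a value from the paper), might look like:

```python
def hamming(d1, d2):
    """Hamming distance between two equal-length binary descriptors (bytes)."""
    return sum(bin(a ^ b).count("1") for a, b in zip(d1, d2))

def recognize(query_descs, database, max_dist=40):
    """Return the database label whose reference descriptors best match
    the query image's descriptors. `database` maps label -> list of
    descriptors; `max_dist` is an illustrative acceptance threshold."""
    best_label, best_score = None, -1
    for label, ref_descs in database.items():
        # count query descriptors whose nearest reference descriptor is close
        score = sum(1 for q in query_descs
                    if min(hamming(q, r) for r in ref_descs) <= max_dist)
        if score > best_score:
            best_label, best_score = label, score
    return best_label
```

    Real BRISK descriptors are 512-bit strings; the toy 16-bit descriptors below only illustrate the control flow.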

    Compress-then-analyze vs. analyze-then-compress: Two paradigms for image analysis in visual sensor networks

    We compare two paradigms for image analysis in visual sensor networks (VSNs). In the compress-then-analyze (CTA) paradigm, images acquired from camera nodes are compressed and sent to a central controller for further analysis. Conversely, in the analyze-then-compress (ATC) approach, camera nodes perform visual feature extraction and transmit a compressed version of these features to a central controller. We focus on state-of-the-art binary features, which are particularly suitable for resource-constrained VSNs, and we show that the "winning" paradigm depends primarily on the network conditions. Indeed, while the ATC approach might be the only possible way to perform analysis at low available bitrates, the CTA approach achieves the best results when the available bandwidth enables the transmission of high-quality images.
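    The bandwidth-dependent choice between the two paradigms can be illustrated with a toy decision rule; the payload sizes and deadline below are made-up parameters, not measurements from the paper:

```python
def choose_paradigm(available_kbps, deadline_s, image_kbit, feature_kbit):
    """Toy CTA-vs-ATC rule: prefer sending the full compressed image (CTA)
    when the link can deliver it within the analysis deadline; otherwise
    fall back to sending only the extracted features (ATC). All sizes are
    hypothetical per-frame payloads in kilobits."""
    if image_kbit / available_kbps <= deadline_s:
        return "CTA"   # bandwidth allows a high-quality image
    if feature_kbit / available_kbps <= deadline_s:
        return "ATC"   # only the compact feature payload fits
    return "neither"   # even features cannot meet the deadline
```

    The crossover bandwidth at which CTA overtakes ATC depends on the image codec and the feature detector settings, which is the trade-off the paper measures.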

    Distributed object recognition in Visual Sensor Networks

    This work focuses on Visual Sensor Networks (VSNs) which perform visual analysis tasks such as object recognition. There, the goal is to find the image in a reference database which is the closest match to the image captured by camera sensor nodes. Recognition is performed by relying on visual features extracted from the acquired image, which are matched against a database of labeled features in order to find the closest image match. The matching functionalities are often implemented at a central controller outside the VSN. In contrast, we study the performance trade-offs involved in distributing the matching functionalities inside the VSN by letting sensor nodes perform parts of the matching process. We propose an optimization framework to optimally distribute the matching task to in-network sensor nodes with the goal of minimizing the overall completion time of the recognition task. The proposed optimization framework is then used to assess the performance of distributed matching, comparing it to a traditional, centralized approach in realistic VSN scenarios.
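    A simple instance of distributing the matching workload is a divisible-load split, where each node receives a share of the reference database proportional to its matching rate so that all nodes finish together. This is a hedged sketch of that idea, not the paper's actual optimization framework:

```python
def split_database(total_items, rates):
    """Divisible-load assignment: give node i a share of the reference
    database proportional to its matching rate (items/second), so every
    node finishes at the same time. The common finish time is the
    makespan being minimized."""
    total_rate = sum(rates)
    shares = [total_items * r / total_rate for r in rates]
    makespan = total_items / total_rate
    return shares, makespan
```

    With unequal rates, the fastest node gets the largest share; a centralized baseline corresponds to a single rate entry.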

    Energy Consumption Of Visual Sensor Networks: Impact Of Spatio-Temporal Coverage

    Wireless visual sensor networks (VSNs) are expected to play a major role in future IEEE 802.15.4 personal area networks (PANs) under recently established collision-free medium access control (MAC) protocols, such as the IEEE 802.15.4e-2012 MAC. In such environments, the VSN energy consumption is affected by the number of camera sensors deployed (spatial coverage), as well as the number of captured video frames out of which each node processes and transmits data (temporal coverage). In this paper, we explore this aspect for uniformly-formed VSNs, i.e., networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster-tree topology, with each node producing independent identically-distributed bitstream sizes after processing the video frames captured within each network activation interval. We derive analytic results for the energy-optimal spatio-temporal coverage parameters of such VSNs under a priori known bounds for the number of frames to process per sensor and the number of nodes to deploy within each tier of the VSN. Our results are parametric to the probability density function characterizing the bitstream size produced by each node and the energy consumption rates of the system of interest. Experimental results reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings. In addition, results obtained via a multimedia subsystem show that the optimal spatio-temporal settings derived by the proposed framework allow for substantial reduction of energy consumption in comparison to ad-hoc settings. As such, our analytic modeling is useful for early-stage studies of possible VSN deployments under collision-free MAC protocols prior to costly and time-consuming experiments in the field. Comment: to appear in IEEE Transactions on Circuits and Systems for Video Technology, 201
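    The spatio-temporal trade-off can be illustrated with a toy brute-force search; the energy model below (a per-node activation cost plus a per-frame processing-and-transmission cost) is a simplified assumption and not the paper's closed-form analysis:

```python
def optimal_coverage(n_max, f_max, workload, e_node, e_frame):
    """Search over node counts n (spatial coverage) and frames per node f
    (temporal coverage) subject to n * f >= workload, using the toy model
    energy = n * e_node + n * f * e_frame. Returns (energy, n, f) for the
    cheapest feasible setting, or None if none is feasible."""
    best = None
    for n in range(1, n_max + 1):
        for f in range(1, f_max + 1):
            if n * f < workload:
                continue  # not enough frames captured in total
            energy = n * e_node + n * f * e_frame
            if best is None or energy < best[0]:
                best = (energy, n, f)
    return best
```

    With a nonzero per-node activation cost, the search trades fewer nodes against more frames per node, which is the spatial-vs-temporal tension the paper formalizes.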

    Error Resilient Video Streaming with BCH Code Protection in Wireless Sensor Networks

    Video streaming in Wireless Sensor Networks (WSNs) is a promising and challenging application for enabling high-value services. In such a context, the reduced amount of available bandwidth, as well as the low computational power available for acquiring and processing video frames, imposes the transmission of low-resolution images at a low frame rate. Given these limitations, the information carried by each video frame is of utmost importance and must be preserved, as much as possible, against network losses that could introduce artifacts in the reconstructed dynamics of the scene. In this paper we first evaluate the impact of the bit error rate on the quality of the received video stream in a real scenario; we then propose a forward error correction technique based on BCH codes with the aim of preserving the video quality. Unlike techniques previously proposed in the WSN research field, the proposed technique has been specifically designed to maintain full backward compatibility with the IEEE 802.15.4 standard, making it a suitable solution for accomplishing the Internet of Things (IoT) vision. Performance results evaluated in terms of Peak Signal-to-Noise Ratio (PSNR) show that the proposed solution achieves a PSNR improvement of 4.16 dB with respect to an unprotected transmission, while requiring an additional overhead of 22.51% in the number of transmitted bits, with minimal impact on frame rate and energy consumption. When higher protection levels are imposed, higher PSNR values are obtained at the cost of increased overhead, lower frame rates, and higher energy consumption.
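    For a t-error-correcting BCH(n, k) code, the parity overhead and the probability that a received block remains uncorrectable follow directly from the code parameters. A small sketch, with illustrative parameters that are not necessarily those used in the paper:

```python
from math import comb

def bch_overhead(n, k):
    """Parity overhead of a BCH(n, k) code, as a fraction of payload bits:
    (n - k) parity bits per k payload bits."""
    return (n - k) / k

def block_error_prob(n, t, p):
    """Probability that a t-error-correcting code fails on an n-bit block,
    i.e. that more than t bits are flipped, assuming independent bit
    errors with probability p (binomial tail)."""
    ok = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(t + 1))
    return 1 - ok
```

    Raising t lowers the residual block error probability but increases the parity overhead, which is exactly the PSNR-vs-overhead trade-off reported in the abstract.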

    Energy consumption of visual sensor networks: impact of spatio-temporal coverage

    Wireless visual sensor networks (VSNs) are expected to play a major role in future IEEE 802.15.4 personal area networks (PANs) under recently established collision-free medium access control (MAC) protocols, such as the IEEE 802.15.4e-2012 MAC. In such environments, the VSN energy consumption is affected by the number of camera sensors deployed (spatial coverage), as well as the number of captured video frames out of which each node processes and transmits data (temporal coverage). In this paper we explore this aspect for uniformly formed VSNs, that is, networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster-tree topology, with each node producing independent identically distributed bitstream sizes after processing the video frames captured within each network activation interval. We derive analytic results for the energy-optimal spatio-temporal coverage parameters of such VSNs under a priori known bounds for the number of frames to process per sensor and the number of nodes to deploy within each tier of the VSN. Our results are parametric to the probability density function characterizing the bitstream size produced by each node and the energy consumption rates of the system of interest. Experimental results are derived from a deployment of TelosB motes and reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings. In addition, results obtained via motion JPEG encoding and feature extraction on a multimedia subsystem (BeagleBone Linux Computer) show that the optimal spatio-temporal settings derived by our framework allow for substantial reduction of energy consumption in comparison with ad hoc settings.