
    A Distortion-minimizing Rate Controller for Wireless Multimedia Sensor Networks

    Abstract—The availability of inexpensive CMOS cameras and microphones that can ubiquitously capture multimedia content from the environment is fostering the development of Wireless Multimedia Sensor Networks (WMSNs), i.e., distributed systems of wirelessly networked devices that can retrieve video and audio streams, still images, and scalar sensor data. WMSNs require the sensor network paradigm to be re-thought in view of the need for mechanisms to deliver multimedia content with a pre-defined level of quality of service (QoS). A new rate control scheme for WMSNs is introduced in this paper with a two-fold objective: i) maximize the video quality of each individual video stream; ii) maintain fairness in video quality between different video streams. The rate control scheme is based on both analytical and empirical models and consists of a new cross-layer control algorithm that jointly regulates the end-to-end data rate, the video quality, and the strength of the channel coding at the physical layer. The end-to-end data rate is regulated to avoid congestion while maintaining fairness in the domain of video quality rather than data rate. Once the end-to-end data rate has been determined, the sender adjusts the video encoder rate and the channel encoder rate based on the overall rate and the current channel quality, with the objective of minimizing the distortion of the received video. Simulations show that the proposed algorithm considerably improves the received video quality without sacrificing fairness.
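
    To make the rate-split step concrete, the following Python sketch searches a small set of candidate channel-code rates for the one that minimizes an estimated end-to-end distortion, given a fixed end-to-end rate budget and the current channel bit error rate. The rate-distortion model in source_distortion, the residual-loss model in residual_loss, and all parameter values are illustrative placeholders standing in for the paper's analytical and empirical models, not the models themselves.

        # Illustrative sketch: split a fixed end-to-end budget R_total between the
        # video encoder (source rate R_s = r_c * R_total) and FEC redundancy by
        # picking the channel-code rate r_c with the lowest estimated distortion.
        # Models and constants below are assumed for illustration only.

        def source_distortion(source_rate_kbps, theta=3000.0, r0=20.0, d0=1.0):
            # Parametric rate-distortion curve D(R) = theta / (R - R0) + D0;
            # in practice theta, r0, d0 would be fitted per video sequence.
            return theta / max(source_rate_kbps - r0, 1e-3) + d0

        def residual_loss(channel_ber, code_rate):
            # Toy model of the loss remaining after FEC: more redundancy
            # (lower code rate) suppresses the raw bit error rate more strongly.
            redundancy = 1.0 - code_rate
            return min(1.0, channel_ber * 10.0 ** (-8.0 * redundancy))

        def expected_distortion(total_rate_kbps, code_rate, channel_ber, loss_penalty=200.0):
            source_rate = code_rate * total_rate_kbps
            return source_distortion(source_rate) + loss_penalty * residual_loss(channel_ber, code_rate)

        def best_split(total_rate_kbps, channel_ber, code_rates=(1/2, 2/3, 3/4, 5/6, 1.0)):
            # Exhaustive search over a small candidate set of code rates.
            return min(code_rates, key=lambda rc: expected_distortion(total_rate_kbps, rc, channel_ber))

        if __name__ == "__main__":
            print(best_split(500.0, channel_ber=1e-1))   # noisy channel: a stronger (lower-rate) code is chosen
            print(best_split(500.0, channel_ber=1e-6))   # clean channel: the whole budget goes to the video encoder

    The same search structure would apply with the paper's fitted distortion models in place of these placeholders; only the two model functions would need to change.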

    A Case for Compressive Video Streaming in Wireless Multimedia Sensor Networks (IEEE COMSOC MMTC E-Letter, Focused Technology Advances Series)

    Wireless Multimedia Sensor Networks (WMSNs) [1] are self-organizing wireless systems of embedded devices deployed to retrieve, distributively process in real time, store …

    A Tutorial on Encoding and Wireless Transmission of Compressively Sampled Videos

    Abstract—Compressed sensing (CS) has emerged as a promising technique to jointly sense and compress sparse signals. One of the most promising applications of CS is compressive imaging. Leveraging the fact that images can be represented as approximately sparse signals in a transformed domain, images can be compressed and sampled simultaneously using low-complexity linear operations. Recently, these techniques have been extended beyond imaging to encode video. Much of the compression in traditional video encoding comes from using motion vectors to take advantage of the temporal correlation between adjacent frames. However, calculating motion vectors is a processing-intensive operation that causes significant power consumption. Therefore, any technique appropriate for resource-constrained video sensors must exploit temporal correlation through low-complexity operations. In this tutorial, we first briefly discuss challenges involved in the transmission of video over a wireless multimedia sensor network (WMSN). We then discuss the different techniques available for applying CS encoding first to images, and then to videos for error-resilient transmission over lossy channels. Existing solutions are examined and compared in terms of applicability to wireless multimedia sensor networks (WMSNs). Finally, open issues are discussed and future research trends are outlined. Index Terms—Compressed Sensing, Multimedia communication, Wireless sensor networks, Video coding, Energy-rate-distortion
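
    To make the "low-complexity linear operations" concrete, the sketch below encodes each vectorized frame with a single matrix-vector product against a fixed random Gaussian sensing matrix and, instead of computing motion vectors, transmits measurements of frame differences, which are typically much sparser than the frames themselves. The frame size, the 25% sampling ratio, and the difference-based scheme are assumptions chosen for illustration; the tutorial surveys several concrete designs, and the decoder would additionally need a sparse-recovery solver (e.g., l1 minimization or OMP), which is omitted here.

        # Minimal compressive video encoder sketch (illustrative assumptions,
        # not a specific scheme from the tutorial).
        import numpy as np

        rng = np.random.default_rng(0)

        n = 64 * 64        # pixels per vectorized frame (assumed 64x64 frames)
        m = n // 4         # CS measurements per frame (25% sampling, assumed)
        phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix

        def encode_frame(frame_vec):
            # Low-complexity linear encoding: one matrix-vector product.
            return phi @ frame_vec

        def encode_sequence(frames):
            # Full measurements for the reference frame, then measurements of frame
            # differences; since phi is linear, phi @ (x_t - x_{t-1}) = y_t - y_{t-1},
            # so the decoder can accumulate differences to recover each frame's measurements.
            measurements = [encode_frame(frames[0])]
            for prev, cur in zip(frames, frames[1:]):
                measurements.append(encode_frame(cur - prev))
            return measurements

        if __name__ == "__main__":
            frames = [rng.standard_normal(n) for _ in range(3)]
            ys = encode_sequence(frames)
            print(len(ys), ys[0].shape)   # 3 measurement vectors of length m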

    RA-CVS: Cooperating at Low Power to Stream Compressively Sampled Videos (IEEE ICC 2013, Ad-hoc and Sensor Networking Symposium)

    Abstract—Video streaming applications are becoming increasingly popular as low-priced video-enabled mobile devices (such as smartphones) become more common. However, traditional video streaming systems are not designed for mobile devices, and require both high computational complexity at the video sensor and very high channel quality to achieve good performance. Our recently proposed compressive video sensing (CVS) video streaming system is a low-complexity, low-power compressed-sensing-based encoder designed to address these challenges. However, even using CVS, the energy consumption of multimedia sensors is still much higher than that of traditional scalar sensors. In this article, we present a cooperative relay-assisted compressed video sensing (RA-CVS) system that takes advantage of the error resilience of video encoded using CVS to maintain good video quality at the receiver while significantly reducing the required SNR, and therefore the required transmission power, at the multimedia sensor node. This system uses the natural error resilience of CS-encoded video signals to design a cooperative scheme that directly reduces the mean squared error (MSE) of the reconstructed CS samples representing a video frame, which allows the receiver to correctly reconstruct the video even at very low SNR levels. The proposed system is tested using both simulation and a USRP2 testbed evaluation and is shown to outperform traditional cooperative systems in terms of received video quality as a function of channel SNR.
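
    A minimal sketch of the combining idea, under the assumption that the receiver holds two noisy copies of the same CS measurement vector (one over the weak direct link, one via the relay) with known noise variances: weighting each copy by the inverse of its noise variance lowers the MSE of the measurements handed to the CS decoder, which is what allows reconstruction to succeed at low SNR. This is an MRC-style illustration of the principle, not the exact RA-CVS protocol or its USRP2 implementation.

        # Illustrative combining of two noisy copies of the same CS measurement
        # vector; link noise variances are assumed known at the receiver.
        import numpy as np

        rng = np.random.default_rng(1)

        def combine(y_direct, y_relay, var_direct, var_relay):
            # Inverse-noise-variance (MRC-style) weighting of the two copies.
            w_d, w_r = 1.0 / var_direct, 1.0 / var_relay
            return (w_d * y_direct + w_r * y_relay) / (w_d + w_r)

        if __name__ == "__main__":
            m = 1024
            y = rng.standard_normal(m)            # noiseless CS measurements (ground truth)
            var_d, var_r = 0.5, 0.1               # assumed direct- and relay-link noise variances
            y_d = y + rng.normal(scale=np.sqrt(var_d), size=m)
            y_r = y + rng.normal(scale=np.sqrt(var_r), size=m)
            y_c = combine(y_d, y_r, var_d, var_r)
            mse = lambda a: float(np.mean((a - y) ** 2))
            print(mse(y_d), mse(y_r), mse(y_c))   # combined MSE is below either copy alone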