2,610 research outputs found

    Energy Consumption Of Visual Sensor Networks: Impact Of Spatio-Temporal Coverage

    Wireless visual sensor networks (VSNs) are expected to play a major role in future IEEE 802.15.4 personal area networks (PANs) under recently established collision-free medium access control (MAC) protocols, such as the IEEE 802.15.4e-2012 MAC. In such environments, the VSN energy consumption is affected by the number of camera sensors deployed (spatial coverage), as well as the number of captured video frames out of which each node processes and transmits data (temporal coverage). In this paper, we explore this aspect for uniformly-formed VSNs, i.e., networks comprising identical wireless visual sensor nodes connected to a collection node via a balanced cluster-tree topology, with each node producing independent identically-distributed bitstream sizes after processing the video frames captured within each network activation interval. We derive analytic results for the energy-optimal spatio-temporal coverage parameters of such VSNs under a priori known bounds on the number of frames to process per sensor and the number of nodes to deploy within each tier of the VSN. Our results are parametric to the probability density function characterizing the bitstream size produced by each node and to the energy consumption rates of the system of interest. Experimental results reveal that our analytic results are always within 7% of the energy consumption measurements for a wide range of settings. In addition, results obtained via a multimedia subsystem show that the optimal spatio-temporal settings derived by the proposed framework allow for a substantial reduction of energy consumption in comparison to ad-hoc settings. As such, our analytic modeling is useful for early-stage studies of possible VSN deployments under collision-free MAC protocols prior to costly and time-consuming experiments in the field.
    Comment: to appear in IEEE Transactions on Circuits and Systems for Video Technology, 201
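    To make the optimization concrete, here is a minimal sketch of the search that the analytic results replace: a grid search over node and frame counts within a priori bounds, under a simple additive energy model with the mean bitstream size standing in for the expectation over the bitstream-size PDF. The model, the coverage constraint, and all parameter names (e_proc, e_tx_per_bit, mean_bits) are hypothetical, not the paper's expressions.

```python
def expected_energy(n_nodes, n_frames, e_proc, e_tx_per_bit, mean_bits):
    """Energy per activation interval under a simple additive model:
    per-frame processing plus transmission of the mean bitstream size."""
    return n_nodes * n_frames * (e_proc + e_tx_per_bit * mean_bits)

def optimal_coverage(max_nodes, max_frames, min_total_frames,
                     e_proc, e_tx_per_bit, mean_bits):
    """Exhaustive search over the a priori bounds, keeping only settings
    that capture at least min_total_frames network-wide per interval."""
    best = None
    for n in range(1, max_nodes + 1):
        for f in range(1, max_frames + 1):
            if n * f < min_total_frames:
                continue  # insufficient spatio-temporal coverage
            e = expected_energy(n, f, e_proc, e_tx_per_bit, mean_bits)
            if best is None or e < best[0]:
                best = (e, n, f)
    return best  # (energy, nodes per tier, frames per node), or None

print(optimal_coverage(10, 30, 60, e_proc=2.0, e_tx_per_bit=1e-4, mean_bits=5e4))
```

    The appeal of closed-form optima is precisely that they avoid this enumeration when the bounds are large.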

    Enabling Quality-Driven Scalable Video Transmission over Multi-User NOMA System

    Recently, non-orthogonal multiple access (NOMA) has been proposed to achieve higher spectral efficiency than conventional orthogonal multiple access. Although it has the potential to meet the increasing demands of video services, providing high-performance video streaming over NOMA remains challenging. In this research, we investigate, for the first time, a multi-user NOMA system design for video transmission. Various NOMA systems have been proposed for data transmission in terms of throughput or reliability. However, the perceived quality, or quality-of-experience of users, is more critical for video transmission. Based on this observation, we design a quality-driven scalable video transmission framework with cross-layer support for multi-user NOMA. To enable low-complexity multi-user NOMA operations, a novel user grouping strategy is proposed. The key features of the proposed framework are the integration of the quality model for encoded video with the physical-layer model for NOMA transmission, and the formulation of multi-user NOMA-based video transmission as a quality-driven power allocation problem. As the problem is non-concave, a globally optimal algorithm that exploits the problem's hidden monotonicity and a suboptimal algorithm with polynomial time complexity are developed. Simulation results show that the proposed multi-user NOMA system outperforms existing schemes in various video delivery scenarios.
    Comment: 9 pages, 6 figures. This paper has already been accepted by IEEE INFOCOM 201
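    As a rough illustration of quality-driven power allocation, the sketch below computes two-user downlink NOMA rates under successive interference cancellation and grid-searches the power split that maximizes the sum of per-user video qualities. The logarithmic rate-quality curve, channel gains, and search granularity are assumptions for illustration, not the paper's models or its optimal/suboptimal algorithms.

```python
import numpy as np

def noma_rates(p_total, alpha, g_weak, g_strong, noise=1.0):
    """Two-user downlink NOMA with SIC: the weak user decodes treating the
    strong user's signal as interference; the strong user cancels it first."""
    p_w, p_s = alpha * p_total, (1.0 - alpha) * p_total
    r_weak = np.log2(1.0 + p_w * g_weak / (p_s * g_weak + noise))
    r_strong = np.log2(1.0 + p_s * g_strong / noise)
    return r_weak, r_strong

def video_quality(rate, a=1.0, b=0.5):
    """Hypothetical concave rate-quality curve for a scalable bitstream."""
    return a * np.log1p(b * rate)

# Suboptimal allocation: 1-D grid over the power split (the weak user
# conventionally receives the larger power share).
alphas = np.linspace(0.5, 0.99, 50)
best_alpha = max(alphas, key=lambda a: sum(video_quality(r)
                                           for r in noma_rates(10.0, a, 0.2, 1.0)))
```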

    RBF-Based QP Estimation Model for VBR Control in H.264/SVC

    In this paper we propose a novel variable bit rate (VBR) controller for real-time H.264/scalable video coding (SVC) applications. The proposed VBR controller relies on the fact that consecutive pictures within the same scene often exhibit similar degrees of complexity, and consequently should be encoded using similar quantization parameter (QP) values for the sake of quality consistency. In order to prevent unnecessary QP fluctuations, the proposed VBR controller allows only an incremental variation of QP with respect to that of the previous picture, focusing on the design of an effective method for estimating this QP variation. The implementation in H.264/SVC requires locating a rate controller at each dependency layer (spatial or coarse-grain scalability). In particular, the QP increment estimation at each layer is computed by means of a radial basis function (RBF) network specially designed for this purpose. Furthermore, the RBF network design process was conceived to provide an effective solution for a wide range of practical real-time VBR applications for scalable video content delivery. In order to assess the proposed VBR controller, two real-time application scenarios were simulated: mobile live streaming and IPTV broadcast. The controller was compared to constant-QP encoding and to a recently proposed constant bit rate (CBR) controller for H.264/SVC. The experimental results show that the proposed method achieves remarkably consistent quality, outperforming the reference CBR controller in both scenarios for all the spatio-temporal resolutions considered.
    Funded by project CCG10-UC3M/TIC-5570 of the Comunidad Autónoma de Madrid and Universidad Carlos III de Madrid.
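    The sketch below shows one plausible shape for such an estimator: a Gaussian RBF network mapping picture-complexity features to a QP increment, clamped to the valid H.264 QP range. The features, network size, and the untrained placeholder parameters are hypothetical; in the paper the centers, widths, and weights come from a dedicated design process.

```python
import numpy as np

def rbf_delta_qp(x, centers, widths, weights, bias=0.0):
    """Gaussian RBF network: maps a feature vector describing the next
    picture to an incremental QP variation w.r.t. the previous picture."""
    d2 = ((centers - x) ** 2).sum(axis=1)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return float(phi @ weights + bias)

rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 2))  # placeholder: designed/trained offline
weights = rng.normal(size=8)
x = np.array([0.9, 1.1])           # e.g. buffer fullness, complexity ratio
qp_prev = 30
delta = round(rbf_delta_qp(x, centers, np.ones(8), weights))
qp_next = int(np.clip(qp_prev + delta, 0, 51))  # clamp to H.264 QP range
```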

    Task-Oriented Communication for Edge Video Analytics

    With the development of artificial intelligence (AI) techniques and the increasing popularity of camera-equipped devices, many edge video analytics applications are emerging, calling for the deployment of computation-intensive AI models at the network edge. Edge inference is a promising solution that moves computation-intensive workloads from low-end devices to a powerful edge server for video analytics, but device-server communication remains a bottleneck due to the limited bandwidth. This paper proposes a task-oriented communication framework for edge video analytics, where multiple devices collect visual sensory data and transmit the informative features to an edge server for processing. To enable low-latency inference, this framework removes video redundancy in the spatial and temporal domains and transmits only the minimal information essential for the downstream task, rather than reconstructing the videos at the edge server. Specifically, it extracts compact task-relevant features based on the deterministic information bottleneck (IB) principle, which characterizes a tradeoff between the informativeness of the features and the communication cost. As the features of consecutive frames are temporally correlated, we propose a temporal entropy model (TEM) to reduce the bitrate by taking the previous features as side information in feature encoding. To further improve the inference performance, we build a spatial-temporal fusion module at the server to integrate features of the current and previous frames for joint inference. Extensive experiments on video analytics tasks demonstrate that the proposed framework effectively encodes task-relevant information of video data and achieves a better rate-performance tradeoff than existing methods.
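    A minimal sketch of how an IB-style objective and the temporal entropy model could fit together is given below, assuming a Gaussian conditional model over feature vectors; the architecture and the rate estimate are illustrative stand-ins, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TemporalEntropyModel(nn.Module):
    """Predicts a Gaussian over the current frame's feature from the
    previous frame's feature, used as side information for entropy coding."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, 2 * dim))

    def forward(self, prev_feat):
        mu, log_sigma = self.net(prev_feat).chunk(2, dim=-1)
        return mu, log_sigma.exp()

def ib_objective(task_loss, feat, prev_feat, tem, beta=0.01):
    """Tradeoff between task performance and communication cost: task loss
    plus beta times a rate term, approximated (up to a constant) by the
    Gaussian negative log-likelihood of the feature under the TEM."""
    mu, sigma = tem(prev_feat)
    nll = (0.5 * ((feat - mu) / sigma) ** 2 + sigma.log()).sum(-1).mean()
    return task_loss + beta * nll

tem = TemporalEntropyModel(64)
feat, prev_feat = torch.randn(4, 64), torch.randn(4, 64)
loss = ib_objective(torch.tensor(1.0), feat, prev_feat, tem)
```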

    In-layer multi-buffer framework for rate-controlled scalable video coding

    Temporal scalability is supported in scalable video coding (SVC) by means of hierarchical prediction structures, where the higher layers can be discarded for frame rate reduction. Nevertheless, this kind of scalability is not fully exploited by rate control (RC) algorithms, since the hypothetical reference decoder (HRD) requirement is only satisfied for the highest-frame-rate sub-stream of every dependency (spatial or coarse-grain scalability) layer. In this paper we propose a novel RC approach that aims to deliver several HRD-compliant temporal resolutions within a particular dependency layer. Instead of the common SVC encoder configuration consisting of one dependency layer per temporal resolution, we propose a compact configuration that does not require additional dependency layers to provide different HRD-compliant temporal resolutions. Specifically, the proposed framework for rate-controlled SVC uses a set of virtual buffers within a dependency layer so that their levels can be simultaneously controlled for overflow and underflow prevention while minimizing the reconstructed video distortion of the corresponding sub-streams. This in-layer multi-buffer approach has been built on top of a baseline H.264/SVC RC algorithm for variable bit rate applications. The experimental results show that our proposal achieves good performance in terms of mean quality, quality consistency, and buffer control using a reduced number of layers.
    This work has been partially supported by National Grant TEC2011-26807 of the Spanish Ministry of Science and Innovation.
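    The sketch below illustrates the in-layer multi-buffer bookkeeping under one simplifying assumption: a picture at temporal layer t belongs to every sub-stream at that frame rate or above, so it fills those virtual buffers, while each buffer drains at its own sub-stream's target rate. Names and numbers are hypothetical; the actual controller also steers QP from these levels to prevent overflow and underflow.

```python
def update_buffers(levels, frame_bits, t_layer, rates, dt, capacity):
    """Per-picture update of the virtual buffers, one per HRD-compliant
    temporal resolution within the same dependency layer."""
    new_levels = []
    for k, (level, rate) in enumerate(zip(levels, rates)):
        inflow = frame_bits if t_layer <= k else 0.0  # picture in sub-stream k?
        new_levels.append(min(capacity, max(0.0, level + inflow - rate * dt)))
    return new_levels

# Three temporal resolutions (7.5, 15, 30 fps) within one dependency layer.
levels = update_buffers([4e5, 5e5, 6e5], frame_bits=3e4, t_layer=1,
                        rates=[2.5e5, 4e5, 6e5], dt=1 / 30, capacity=1e6)
```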

    Adaptive sensing and optimal power allocation for wireless video sensors with sigma-delta imager

    We consider optimal power allocation for wireless video sensors (WVSs), including the image sensor subsystem in the system analysis. By assigning a power-rate-distortion (P-R-D) characteristic to the image sensor, we build a comprehensive P-R-D optimization framework for WVSs. For a WVS node operating under a power budget, we propose power allocation among the image sensor, compression, and transmission modules in order to minimize the distortion of the video reconstructed at the receiver. To demonstrate the proposed optimization method, we establish a P-R-D model for an image sensor based upon a pixel-level sigma-delta (ΣΔ) image sensor design that allows investigation of the tradeoff between the bit depth of the captured images and the spatio-temporal characteristics of the video sequence under the power constraint. The optimization results obtained in this setting confirm that including the image sensor in the system optimization procedure can improve the overall video quality under a power constraint and prolong the lifetime of WVSs. In particular, when the available power budget for a WVS node falls below a threshold, adaptive sensing becomes necessary to ensure that the node communicates useful information about the video content while meeting its power budget.
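    As a toy instance of the three-way split, the sketch below minimizes a hypothetical convex distortion surrogate over sensing, compression, and transmission power under a total budget, using an off-the-shelf constrained solver; the surrogate is an assumption, not the paper's P-R-D model.

```python
import numpy as np
from scipy.optimize import minimize

def distortion(p, a=(1.0, 1.0, 1.0)):
    """Surrogate P-R-D surface: distortion falls as more power goes to
    sensing, compression, or transmission (p = [p_sense, p_comp, p_tx])."""
    return sum(ai / (pi + 1e-6) for ai, pi in zip(a, p))

budget = 1.0
res = minimize(distortion, x0=np.full(3, budget / 3),
               bounds=[(0.0, budget)] * 3,
               constraints=[{"type": "ineq", "fun": lambda p: budget - p.sum()}])
p_sense, p_compress, p_transmit = res.x
```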

    Multi-View Video Packet Scheduling

    In multiview applications, multiple cameras acquire the same scene from different viewpoints and generally produce correlated video streams, resulting in large amounts of highly redundant data. In order to save resources, it is critical to properly handle this correlation during encoding and transmission of the multiview data. In this work, we propose a correlation-aware packet scheduling algorithm for multi-camera networks, where information from all cameras is transmitted over a bottleneck channel to clients that reconstruct the multiview images. The scheduling algorithm relies on a new rate-distortion model that captures the importance of each view in the scene reconstruction. We propose a problem formulation for the optimization of packet scheduling policies that adapt to variations in the scene content. Then, we design a low-complexity scheduling algorithm based on a trellis search that selects the subset of candidate packets to be transmitted for effective multiview reconstruction at the clients. Extensive simulation results confirm the gain of our scheduling algorithm when inter-source correlation information is used in the scheduler, compared to scheduling policies with no information about the correlation or non-adaptive scheduling policies. We finally show that increasing the optimization horizon in the packet scheduling algorithm improves the transmission performance, especially in scenarios where the level of correlation varies rapidly with time.
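    A greedy stand-in for the trellis search conveys the core scheduling idea: rank candidate packets by distortion reduction per bit and fill the bottleneck budget. Here each packet's gain is assumed to already discount scene information covered by correlated views; the tuples and numbers are made up for illustration.

```python
def schedule_packets(packets, channel_budget):
    """Pick packets by distortion-reduction per bit until the bottleneck
    channel budget is exhausted. Each packet is (bits, gain)."""
    chosen, used = [], 0.0
    for bits, gain in sorted(packets, key=lambda p: p[1] / p[0], reverse=True):
        if used + bits <= channel_budget:
            chosen.append((bits, gain))
            used += bits
    return chosen

# Four candidate packets from correlated cameras, 60 kbit budget.
print(schedule_packets([(2e4, 9.0), (3e4, 8.0), (2e4, 3.0), (1e4, 2.5)], 6e4))
```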