    Optimal Compression and Transmission Rate Control for Node-Lifetime Maximization

    We consider a system composed of an energy-constrained sensor node and a sink node, and devise optimal data compression and transmission policies with the objective of prolonging the lifetime of the sensor node. While applying compression before transmission reduces the energy consumed in transmitting the sensed data, applying too much compression may cost more energy than transmitting the raw data, defeating its purpose. Hence, it is important to investigate the trade-off between data compression and transmission energy costs. In this paper, we study the joint optimal compression-transmission design in three scenarios which differ in the channel information available at the sensor node and cover a wide range of practical situations. We formulate and solve joint optimization problems aiming to maximize the lifetime of the sensor node whilst satisfying specific delay and bit error rate (BER) constraints. Our results show that a jointly optimized compression-transmission policy achieves significantly longer lifetime (90% to 2000%) than optimizing transmission alone without compression. Importantly, this performance advantage is most pronounced when the delay constraint is stringent, which demonstrates its suitability for low-latency communication in future wireless networks. Comment: accepted for publication in IEEE Transactions on Wireless Communications.
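    A minimal sketch of the compression-transmission energy trade-off discussed above, assuming a toy model in which compression energy grows with the compression ratio while radio energy scales with the number of bits actually sent; the per-bit energy figures and the linear cost model are illustrative assumptions, not the paper's formulation.

```python
# Toy model (assumed, not from the paper): compression energy grows with the
# compression ratio, transmission energy scales with the bits actually sent.

def total_energy(n_bits, ratio, e_tx_per_bit=50e-9, e_comp_per_bit=5e-9):
    """Energy (Joules) to compress n_bits by `ratio` and transmit the result."""
    e_compress = n_bits * e_comp_per_bit * ratio   # more compression -> more CPU energy
    e_transmit = (n_bits / ratio) * e_tx_per_bit   # fewer bits -> less radio energy
    return e_compress + e_transmit

if __name__ == "__main__":
    n = 1_000_000  # one megabit of sensed data
    for r in (1.0, 2.0, 4.0, 8.0, 16.0, 32.0):
        print(f"ratio {r:5.1f}:1 -> {total_energy(n, r) * 1e3:.2f} mJ")
    # The energy is U-shaped in the ratio: beyond the sweet spot, compressing
    # harder costs more than simply transmitting the raw data.
```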

    Mitigation of H.264 and H.265 Video Compression for Reliable PRNU Estimation

    The photo-response non-uniformity (PRNU) is a distinctive image sensor characteristic, and an imaging device inadvertently introduces its sensor's PRNU into all media it captures. The PRNU can therefore be regarded as a camera fingerprint and used for source attribution. The imaging pipeline in a camera, however, involves various processing steps that are detrimental to PRNU estimation. In the context of photographic images, these challenges have been successfully addressed and the method for estimating a sensor's PRNU pattern is well established. However, various additional challenges related to the generation of videos remain largely untackled. With this perspective, this work introduces methods to mitigate the disruptive effects of the widely deployed H.264 and H.265 video compression standards on PRNU estimation. Our approach involves an intervention in the decoding process to eliminate a filtering procedure applied at the decoder to reduce blockiness. It also utilizes decoding parameters to develop a weighting scheme that adjusts the contribution of video frames, at the macroblock level, to the PRNU estimation process. Results obtained on videos captured by 28 cameras show that our approach increases the PRNU matching metric by up to more than five times over the conventional estimation method tailored for photos.
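    A rough sketch of the weighted aggregation idea, assuming noise residuals have already been extracted from the decoded frames with some denoising filter and that a per-pixel weight map (e.g., constant within each macroblock) has been derived from the decoding parameters; the function name and the maximum-likelihood-style aggregation rule are illustrative, not the paper's exact method.

```python
import numpy as np

def weighted_prnu_estimate(frames, residuals, weights):
    """Aggregate per-frame noise residuals into a PRNU fingerprint estimate,
    weighting every pixel (e.g., per macroblock) by a reliability weight.

    frames, residuals, weights: sequences of equally shaped 2-D arrays.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, res, w in zip(frames, residuals, weights):
        num += w * res * frame.astype(np.float64)
        den += w * frame.astype(np.float64) ** 2
    return num / np.maximum(den, 1e-12)  # avoid division by zero in flat regions
```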

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression. Comment: Submitted to Proceedings of the IEEE, 29 pages.
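    A minimal sketch of pairwise randomized gossip for distributed averaging, one of the canonical algorithms such surveys cover; the ring topology, round count, and synchronous simulation loop are illustrative choices.

```python
import random

def randomized_gossip(values, neighbors, num_rounds=2000, seed=0):
    """Pairwise randomized gossip: at each step a random node averages its
    value with a randomly chosen neighbor; all values converge to the mean."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(num_rounds):
        i = rng.randrange(len(x))
        j = rng.choice(neighbors[i])
        x[i] = x[j] = 0.5 * (x[i] + x[j])
    return x

if __name__ == "__main__":
    measurements = [3.0, 7.0, 1.0, 9.0, 5.0]                  # one value per node
    ring = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}  # 5-node ring topology
    print(randomized_gossip(measurements, ring))              # all entries near 5.0
```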

    ITOS VHRR on-board data compression study

    Data compression methods for ITOS VHRR data were studied for a tape-recorder record-and-playback application. A playback period of 9 minutes was assumed with a nominal 18-minute record period, for a 2-to-1 compression ratio. Both analog and digital methods were considered, with the conclusion that digital methods should be used. Two system designs were prepared: one is a PCM system and the other is an entropy-coded predictive-quantization system, often called entropy-coded DPCM or simply DPCM. Both systems use data-management principles to transmit only the necessary data. Both systems use a medium-capacity standard tape recorder, based on specifications provided by the technical officer. The 10^9-bit capacity of the recorder is the basic limitation on the compression ratio. Both systems achieve the minimum desired 2-to-1 compression ratio. Because the DPCM system achieves a higher compression factor, a slower playback rate can be used, giving better link performance at a given CNR in terms of bandwidth utilization and error rate. The report is divided into two parts. The first part summarizes the theoretical conclusions of the second part and presents the system diagrams. The second part is a detailed analysis based upon an empirically derived random-process model obtained from specifications and measured data provided by the technical officer.
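    A minimal first-order DPCM sketch illustrating the predictive-quantization idea behind the second system; the actual predictor, quantizer, entropy coder, and data-management logic of the study are not reproduced, and the uniform step size is an arbitrary assumption.

```python
def dpcm_encode(samples, step=4):
    """First-order DPCM: quantize the difference between each sample and the
    previous *reconstructed* sample, so encoder and decoder stay in sync."""
    indices, prev = [], 0
    for s in samples:
        q = round((s - prev) / step)   # quantized prediction error
        indices.append(q)
        prev += q * step               # reconstruction used as the next prediction
    return indices

def dpcm_decode(indices, step=4):
    out, prev = [], 0
    for q in indices:
        prev += q * step
        out.append(prev)
    return out

if __name__ == "__main__":
    data = [0, 3, 8, 15, 14, 10, 9, 12]
    codes = dpcm_encode(data)   # small integers, well suited to entropy coding
    print(codes, dpcm_decode(codes))
```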

    Compression of spectral meteorological imagery

    Data compression is essential to current low-earth-orbit spectral sensors with global coverage, e.g., meteorological sensors. Such sensors routinely produce in excess of 30 Gb of data per orbit (over 4 Mb/s for about 110 min) while typically limited to less than 10 Gb of downlink capacity per orbit (15 minutes at 10 Mb/s). Astro-Space Division develops spaceborne compression systems for compression ratios from as little as three-to-one to as much as twenty-to-one for high-fidelity reconstructions. Current hardware production and development at Astro-Space Division focuses on discrete cosine transform (DCT) systems implemented with the GE PFFT chip, a 32x32 2-D DCT engine. Spectral relations in the data are exploited through block mean extraction followed by orthonormal transformation. The transformation produces blocks with spatial correlation that are suitable for further compression with any block-oriented spatial compression system, e.g., Astro-Space Division's Laplacian modeler and analytic encoder of DCT coefficients.
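    A toy block compressor in the spirit of the pipeline described above (mean extraction, 2-D DCT, coefficient selection); it stands in for the GE PFFT hardware and the Laplacian modeler/analytic encoder, which are not reproduced, and SciPy's dctn is used purely for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep=64):
    """Remove the block mean, take a 2-D DCT, and keep only the `keep`
    largest-magnitude coefficients (a crude stand-in for real coding)."""
    mean = block.mean()
    coeffs = dctn(block - mean, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-keep]
    coeffs[np.abs(coeffs) < thresh] = 0.0
    return mean, coeffs

def reconstruct_block(mean, coeffs):
    return idctn(coeffs, norm="ortho") + mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=(32, 32)).cumsum(axis=0).cumsum(axis=1)  # smooth test block
    mean, coeffs = compress_block(block)
    err = np.abs(block - reconstruct_block(mean, coeffs)).max()
    print(f"max reconstruction error: {err:.3f}")
```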

    High-resolution distributed sampling of bandlimited fields with low-precision sensors

    The problem of sampling a discrete-time sequence of spatially bandlimited fields with a bounded dynamic range, in a distributed, communication-constrained processing environment, is addressed. A central unit, having access to the data gathered by a dense network of fixed-precision sensors operating under stringent inter-node communication constraints, is required to reconstruct the field snapshots to maximum accuracy. Both deterministic and stochastic field models are considered. For stochastic fields, results are established in the almost-sure sense. The feasibility of a flexible tradeoff between the oversampling rate (sensor density) and the analog-to-digital converter (ADC) precision, while achieving an exponential accuracy in the number of bits per Nyquist-interval per snapshot, is demonstrated. This exposes an underlying "conservation of bits" principle: the bit-budget per Nyquist-interval per snapshot (the rate) can be distributed along the amplitude axis (sensor precision) and space (sensor density) in an almost arbitrary discrete-valued manner, while retaining the same (exponential) distortion-rate characteristics. Achievable information scaling laws for field reconstruction over a bounded region are also derived: with $N$ one-bit sensors per Nyquist-interval, $\Theta(\log N)$ Nyquist-intervals, and total network bitrate $R_{net} = \Theta((\log N)^2)$ (per-sensor bitrate $\Theta((\log N)/N)$), the maximum pointwise distortion goes to zero as $D = O((\log N)^2/N)$ or $D = O(R_{net} 2^{-\beta \sqrt{R_{net}}})$. This is shown to be possible with only nearest-neighbor communication, distributed coding, and appropriate interpolation algorithms. For a fixed, nonzero target distortion, the number of fixed-precision sensors and the network rate needed are always finite. Comment: 17 pages, 6 figures; paper withdrawn from IEEE Transactions on Signal Processing and re-submitted to IEEE Transactions on Information Theory.
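    A toy illustration of trading sensor density for ADC precision: each one-bit sensor compares the field value against its own known dither threshold, and averaging the bits recovers the amplitude. This is only a dither-averaging sketch whose error decays like 1/sqrt(N); it is not the paper's distributed-coding and interpolation construction, which achieves the exponential distortion-rate behavior quoted above.

```python
import numpy as np

def one_bit_estimate(value, n_sensors, rng):
    """Estimate an amplitude in [-1, 1] from n_sensors one-bit comparisons
    against known, uniformly distributed thresholds (dither)."""
    thresholds = rng.uniform(-1.0, 1.0, size=n_sensors)  # assumed known at the fusion center
    bits = (value > thresholds).astype(float)
    return 2.0 * bits.mean() - 1.0                        # unbiased estimate of the value

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    truth = 0.37
    for n in (10, 100, 1_000, 10_000):
        est = one_bit_estimate(truth, n, rng)
        print(f"{n:6d} one-bit sensors -> estimate {est:+.3f} (error {abs(est - truth):.3f})")
```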