
    Optimal Identical Binary Quantizer Design for Distributed Estimation

    We consider the design of identical one-bit probabilistic quantizers for distributed estimation in sensor networks. We assume the parameter range to be finite and known and use the maximum Cramér-Rao lower bound (CRB) over the parameter range as our performance metric. We restrict our theoretical analysis to the class of antisymmetric quantizers and determine a set of conditions under which the probabilistic quantizer function is greatly simplified. We identify a broad class of noise distributions, which includes Gaussian noise in the low-SNR regime, for which the often-used threshold quantizer is found to be minimax-optimal. Aided by these theoretical results, we formulate an optimization problem to obtain the optimal minimax-CRB quantizer. For a wide range of noise distributions, we demonstrate the superior performance of the new quantizer, particularly in the moderate- to high-SNR regime. (6 pages, 3 figures; accepted for publication in IEEE Transactions on Signal Processing.)
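    For reference, the standard form of this metric (the notation below is assumed for illustration, not quoted from the paper): with observations x_i = \theta + n_i and a probabilistic one-bit quantizer that outputs 1 with probability \gamma(x), the response function, per-sensor Fisher information, and CRB over N sensors are

        q(\theta) = \Pr(b = 1) = \int \gamma(x)\, p_n(x - \theta)\, dx, \qquad
        I(\theta) = \frac{(q'(\theta))^2}{q(\theta)\,(1 - q(\theta))}, \qquad
        \mathrm{CRB}(\theta) = \frac{1}{N\, I(\theta)},

    and a minimax design solves \min_{\gamma} \max_{\theta} \mathrm{CRB}(\theta) over the known parameter range; the deterministic threshold quantizer is the special case \gamma(x) = \mathbf{1}\{x > \tau\}.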

    Model for Estimation of Bounds in Digital Coding of Seabed Images

    This paper proposes a novel model for estimating bounds in the digital coding of images. Entropy coding of images is exploited to measure the useful information content of the data. The bit rate achieved by reversible compression, using the rate-distortion theory approach, takes into account the contribution of the observation noise and the intrinsic information of a hypothetical noise-free image. Assuming a Laplacian probability density function for the quantizer input signal, SQNR gains are calculated for an image predictive coding system with a non-adaptive quantizer, for white and correlated noise respectively. The proposed model is evaluated on seabed images; however, the model presented in this paper can be applied to any signal with a Laplacian distribution.
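    As a rough, self-contained illustration of the Laplacian-input assumption (the scale parameter, quantizer step and sample size below are arbitrary choices, not values from the paper), the SQNR of a uniform quantizer driven by Laplacian samples can be estimated as follows:

        import numpy as np

        rng = np.random.default_rng(0)
        b = 1.0                                 # Laplacian scale parameter (assumed)
        x = rng.laplace(loc=0.0, scale=b, size=100_000)   # quantizer input signal
        step = 0.5                              # uniform quantizer step size (assumed)
        x_hat = step * np.round(x / step)       # mid-tread uniform quantizer
        sqnr_db = 10.0 * np.log10(np.mean(x**2) / np.mean((x - x_hat)**2))
        print(f"SQNR = {sqnr_db:.2f} dB")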

    Gossip Algorithms for Distributed Signal Processing

    Gossip algorithms are attractive for in-network processing in sensor networks because they do not require any specialized routing, there is no bottleneck or single point of failure, and they are robust to unreliable wireless network conditions. Recently, there has been a surge of activity in the computer science, control, signal processing, and information theory communities, developing faster and more robust gossip algorithms and deriving theoretical performance guarantees. This article presents an overview of recent work in the area. We describe convergence rate results, which are related to the number of transmitted messages and thus the amount of energy consumed in the network for gossiping. We discuss issues related to gossiping over wireless links, including the effects of quantization and noise, and we illustrate the use of gossip algorithms for canonical signal processing tasks including distributed estimation, source localization, and compression. (Submitted to Proceedings of the IEEE; 29 pages.)
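    A minimal sketch of the canonical randomized pairwise gossip iteration surveyed here (the ring topology, network size and iteration budget are illustrative assumptions): at each step a random pair of neighbors replaces both of their values with the pair's average, which preserves the network sum and drives every node toward the global mean.

        import numpy as np

        rng = np.random.default_rng(1)
        n = 20
        values = rng.normal(size=n)             # initial sensor measurements
        target = values.mean()                  # gossip converges to this average
        edges = [(i, (i + 1) % n) for i in range(n)]   # ring; any connected graph works

        for _ in range(5000):                   # iteration budget (arbitrary)
            i, j = edges[rng.integers(len(edges))]
            values[i] = values[j] = 0.5 * (values[i] + values[j])

        print(np.abs(values - target).max())    # max deviation from the true mean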

    Communication-constrained feedback stability and Multi-agent System consensusability in Networked Control Systems

    With the advances in wireless communication, the topic of Networked Control Systems (NCSs) has become an interesting research subject. Moreover, the advantages they offer have convinced companies to implement and use data networks for remote industrial control and process automation. Data networks prove to be very efficient for controlling distributed systems, which would otherwise require complex wiring over large or inaccessible areas. In addition, they are easier to maintain and more cost efficient. Unfortunately, stability and performance are always affected by network and communication issues, such as band-limited channels, quantization errors, sampling, delays, packet dropouts and system architecture. The first part of this research studies the effects of both input and output quantization on an NCS. Both input and output quantization errors are modeled as sector-bounded multiplicative uncertainties, the main goal being the minimization of the quantization density while maintaining feedback stability. Modeling quantization errors as uncertainties allows robust optimal control strategies to be applied in order to study the acceptable uncertainty levels, which are directly related to the quantization levels. A new feedback law is proposed that improves closed-loop stability by increasing the upper bound of allowed uncertainty, thus allowing the use of a coarser quantizer. Another aspect of NCSs concerns the coordination of independent agents within a Multi-agent System (MAS). This research addresses the consensus problem for a set of discrete-time agents communicating through a network with directed information flow. It examines the combined effect of agent dynamics and network topology on the agents' consensusability. Given a particular consensus protocol, a sufficient condition is derived for the agents to be consensusable. This condition requires the eigenvalues of the digraph modeling the network topology to be outer-bounded by a fan-shaped area determined by the Mahler measure of the agents' dynamics matrix.
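    A common way such a sector-bounded quantization model is written in the quantized-feedback literature (the symbols here are illustrative, not quoted from the thesis): the quantizer output is absorbed into a multiplicative uncertainty,

        Q(v) = (1 + \Delta(v))\, v, \qquad |\Delta(v)| \le \delta,

    where a logarithmic quantizer of density \rho \in (0, 1) gives \delta = (1 - \rho)/(1 + \rho). Enlarging the tolerable bound \delta therefore permits a smaller density \rho, i.e. a coarser quantizer, which is exactly the tradeoff a less conservative feedback law targets.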

    Collaborative Estimation in Distributed Sensor Networks

    Networks of smart ultra-portable devices are already indispensable in our lives, augmenting our senses and connecting our lives through real-time processing and communication of sensory inputs (e.g., audio, video, location). Though usually hidden from the user's sight, the engineering of these devices involves fierce tradeoffs between energy availability (battery size impacts portability) and signal processing / communication capability (which impacts the smartness of the devices). The goal of this dissertation is to provide a fundamental understanding and characterization of these tradeoffs in the context of a sensor network, where the goal is to estimate a common signal by coordinating a multitude of battery-powered sensor nodes. Most research so far has been based on two key assumptions -- distributed processing and temporal independence -- that lend analytical tractability to the problem but are otherwise often found lacking in practice. This dissertation introduces novel techniques to relax these assumptions, leading to vastly more efficient energy usage in typical networks (up to 20% savings) and new insights on the quality of inference. For example, the phenomenon of sensor drift is ubiquitous in applications such as air-quality monitoring, oceanography and bridge monitoring, where calibration is often difficult and costly. This dissertation provides an analytical framework linking the state of calibration to the overall uncertainty of the inferred parameters.

    In distributed estimation, sensor nodes locally process their observed data and send the resulting messages to a sink, which combines the received messages to produce a final estimate of the unknown parameter. In this dissertation, this problem is generalized and called collaborative estimation, where some sensors can potentially have access to the observations of neighboring sensors and use that information to enhance the quality of the messages they send to the sink, while using the same (or lower) energy resources. This is motivated by the fact that inter-sensor communication may be possible when sensors are geographically close. As demonstrated in this dissertation, collaborative estimation is particularly effective in energy-skewed and information-skewed networks, where some nodes may have larger batteries than others and, similarly, some nodes may be more informative (less noisy) than others. Since the node with the largest battery is not necessarily the most informative, the proposed inter-sensor collaboration provides a natural framework to route the relevant information from low-energy-high-quality nodes to high-energy-low-quality nodes in a manner that enhances the overall power-distortion tradeoff.

    This dissertation also analyzes how time-correlated measurement noise affects the uncertainties of inferred parameters. Imperfections such as baseline drift in sensors result in a time-correlated additive component in the measurement noise. Though some models of drift have been reported in the literature, none of those studies considered the effect of drifting sensors on an estimation application. In this dissertation, approximate measures of estimation accuracy (Cramér-Rao bounds) are derived as a function of the physical properties of the sensors, namely the drift strength, the correlation (Markov) factor and the time elapsed since the last calibration.
    For stationary drift (Markov factor less than one), it is demonstrated that the first-order effect of drift is asymptotically equivalent to scaling the measurement noise by an appropriate factor. When the drift is non-stationary (Markov factor equal to one), it is established that the constant part of a signal can only be estimated inconsistently (with non-zero asymptotic variance). These results help quantify the notion that measurements taken sooner after calibration result in more accurate inference.
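    A sketch of the kind of drift model described above (the notation is assumed for illustration): a first-order Gauss-Markov drift d_k added to the measurement of a constant parameter \theta,

        y_k = \theta + d_k + v_k, \qquad d_k = \beta\, d_{k-1} + w_k, \qquad w_k \sim \mathcal{N}(0, \sigma_w^2),

    where \beta is the Markov factor: |\beta| < 1 gives stationary drift, while \beta = 1 makes d_k a random walk, the non-stationary case in which \theta can only be estimated with non-zero asymptotic variance.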

    Graph Spectral Image Processing

    The recent advent of graph signal processing (GSP) has spurred intensive study of signals that live naturally on irregular data kernels described by graphs (e.g., social networks, wireless sensor networks). Though a digital image contains pixels that reside on a regularly sampled 2D grid, if one can design an appropriate underlying graph connecting pixels with weights that reflect the image structure, then one can interpret the image (or image patch) as a signal on a graph and apply GSP tools to process and analyze the signal in the graph spectral domain. In this article, we provide an overview of recent graph spectral techniques in GSP specifically for image and video processing. The topics covered include image compression, image restoration, image filtering and image segmentation.
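    A minimal sketch of the graph-spectral viewpoint (the 4-connected topology, Gaussian weight kernel and patch size are assumptions for illustration): build a graph over the pixels of a patch with edge weights reflecting intensity similarity, then use the eigenvectors of the graph Laplacian as the graph Fourier basis.

        import numpy as np

        def grid_laplacian(patch, sigma=0.1):
            """Combinatorial Laplacian L = D - W of a 4-connected pixel graph."""
            h, w = patch.shape
            n = h * w
            W = np.zeros((n, n))
            for r in range(h):
                for c in range(w):
                    i = r * w + c
                    for dr, dc in ((0, 1), (1, 0)):      # right and down neighbors
                        rr, cc = r + dr, c + dc
                        if rr < h and cc < w:
                            j = rr * w + cc
                            wt = np.exp(-(patch[r, c] - patch[rr, cc]) ** 2
                                        / (2.0 * sigma ** 2))
                            W[i, j] = W[j, i] = wt
            return np.diag(W.sum(axis=1)) - W

        patch = np.random.default_rng(2).random((8, 8))  # stand-in for image data
        evals, U = np.linalg.eigh(grid_laplacian(patch)) # graph frequencies / GFT basis
        gft_coeffs = U.T @ patch.ravel()                 # patch in the graph spectral domain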

    Side information exploitation, quality control and low complexity implementation for distributed video coding

    Distributed video coding (DVC) is a new video coding methodology that shifts the highly complex motion search components from the encoder to the decoder; such a video coder has a great advantage in encoding speed while still achieving rate-distortion (RD) performance similar to conventional coding solutions. Applications include wireless video sensor networks, mobile video cameras and wireless video surveillance. Although much progress has been made in DVC over the past ten years, there is still a gap in RD performance between conventional video coding solutions and DVC, and the latest developments in DVC remain far from standardization and practical use. The key problems lie in areas such as accurate and efficient side information generation and refinement, quality control between Wyner-Ziv frames and key frames, correlation noise modelling and decoder complexity. In this context, this thesis proposes solutions to improve state-of-the-art side information refinement schemes, enable consistent quality control over decoded frames during the coding process, and implement a highly efficient DVC codec. This thesis investigates the impact of reference frames on side information generation and reveals that reference frames have the potential to be better side information than the extensively used interpolated frames. Based on this investigation, we also propose a motion range prediction (MRP) method to exploit reference frames and precisely guide the statistical motion learning process. Extensive simulation results show that choosing reference frames as side information performs competitively with, and sometimes even better than, interpolated frames. Furthermore, the proposed MRP method is shown to significantly reduce the decoding complexity without degrading RD performance. To minimize block artifacts and achieve consistent improvement in both the subjective and objective quality of the side information, we propose a novel side information synthesis framework working at pixel granularity: we synthesize the side information at the pixel level to minimize block artifacts and adaptively change the correlation noise model according to the new side information. Furthermore, we have fully implemented a state-of-the-art DVC decoder with the proposed framework using serial and parallel processing technologies to identify bottlenecks and areas in which to further reduce the decoding complexity, which is another major challenge for future practical DVC system deployments. The performance is evaluated on the latest transform-domain DVC codec and compared with different standard codecs. Extensive experimental results show substantial and consistent rate-distortion gains over standard video codecs and significant speedup over the serial implementation. In order to bring state-of-the-art DVC one step closer to practical use, we address the problem of distortion variation introduced by typical rate control algorithms, especially in a variable bit rate environment. Simulation results show that the proposed quality control algorithm is able to meet a user-defined target distortion and maintain a rather small variation for sequences with slow motion, and performs similarly to fixed quantization for fast-motion sequences at the cost of some RD performance. Finally, we propose the first implementation of a distributed video encoder on a Texas Instruments TMS320DM6437 digital signal processor.
    The Wyner-Ziv (WZ) encoder is efficiently implemented using rate-adaptive low-density parity-check accumulate (LDPCA) codes, exploiting the hardware features and optimization techniques to improve overall performance. Implementation results show that the WZ encoder encodes at 134M instruction cycles per QCIF frame on a TMS320DM6437 DSP running at 700 MHz, making the encoder 29 times faster than a non-optimized encoder implementation. We also implemented a highly efficient DVC decoder using both serial and parallel technologies based on a PC-HPC (high performance cluster) architecture, where the encoder runs on a general-purpose PC and the decoder runs on a multicore HPC. The experimental results show that the parallelized decoder achieves about a 10x speedup over the serial implementation under various bit rates and GOP sizes, and significant RD gains with respect to the state-of-the-art DISCOVER codec.
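    For intuition, a toy sketch of the syndrome-based Slepian-Wolf/Wyner-Ziv encoding step that LDPCA codes implement (the dense random parity-check matrix and code size below are stand-ins for illustration; practical LDPCA codes use large, sparse, rate-adaptive matrices): the encoder transmits only the syndrome of a bit-plane, and the decoder recovers the bits by combining the syndrome with the side information and the correlation noise model.

        import numpy as np

        rng = np.random.default_rng(3)
        n, m = 16, 8                          # toy block and syndrome lengths (assumed)
        H = rng.integers(0, 2, size=(m, n))   # stand-in parity-check matrix
        x = rng.integers(0, 2, size=n)        # one bit-plane of a Wyner-Ziv frame
        s = H @ x % 2                         # syndrome sent to the decoder (m < n bits)
        # The decoder searches for the sequence consistent with s that is closest
        # to the side information, e.g. via belief propagation (not shown here).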

    Implementation issues in source coding

    An edge-preserving image coding scheme that can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data and can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the previous report. Coding algorithms for packet video were also investigated.
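    A minimal sketch of the DPCM idea the scheme modifies (the pixel values and step size are illustrative; this is not the Mars Observer algorithm itself): each pixel is predicted from the previous reconstruction and only the quantized residual is coded, so a step size of 1 on integer inputs makes the loop lossless while larger steps trade fidelity for rate.

        import numpy as np

        def dpcm_encode(row, step):
            """First-order DPCM on one image row: code quantized prediction residuals."""
            pred = 0
            codes, recon = [], []
            for px in row:
                q = int(np.round((px - pred) / step))  # quantized prediction residual
                codes.append(q)
                pred += q * step                       # decoder-matched reconstruction
                recon.append(pred)
            return codes, recon

        row = [52, 55, 61, 66, 70, 61, 64, 73]         # sample pixel values (assumed)
        codes, recon = dpcm_encode(row, step=1)        # step=1: lossless for integers
        assert recon == row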