
    Distributed Representation of Geometrically Correlated Images with Compressed Linear Measurements

    This paper addresses the problem of distributed coding of images whose correlation is driven by the motion of objects or the positioning of the vision sensors. It concentrates on the problem where images are encoded with compressed linear measurements. We propose a geometry-based correlation model to describe the common information in pairs of images. We assume that the constitutive components of natural images can be captured by visual features that undergo local transformations (e.g., translation) in different images. We first identify prominent visual features by computing a sparse approximation of a reference image with a dictionary of geometric basis functions. We then pose a regularized optimization problem to estimate the corresponding features in correlated images given by quantized linear measurements. The estimated features have to comply with the compressed information and represent consistent transformations between images. The correlation model is given by the relative geometric transformations between corresponding features. We then propose an efficient joint decoding algorithm that estimates the compressed images such that they stay consistent with both the quantized measurements and the correlation model. Experimental results show that the proposed algorithm effectively estimates the correlation between images in multi-view datasets. In addition, the proposed algorithm provides effective decoding performance that compares advantageously to independent coding solutions as well as to state-of-the-art distributed coding schemes based on disparity learning.
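    A rough sense of the first stage described above, identifying prominent features via a sparse approximation of the reference image over a dictionary, can be given with a greedy pursuit. The sketch below is only illustrative: it uses a random Gaussian dictionary and plain orthogonal matching pursuit on a toy signal, not the paper's dictionary of geometric basis functions or its regularized joint estimation from quantized measurements.

```python
import numpy as np

def omp(D, x, n_atoms):
    """Greedy sparse approximation of signal x over dictionary D (columns = atoms)."""
    residual = x.copy()
    support, coeffs = [], None
    for _ in range(n_atoms):
        # pick the atom most correlated with the current residual
        idx = int(np.argmax(np.abs(D.T @ residual)))
        support.append(idx)
        # least-squares fit of x on the selected atoms
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    return support, coeffs

# toy data: a flattened reference "image" built from three dictionary atoms
rng = np.random.default_rng(0)
n, k = 256, 512                          # signal length, dictionary size (illustrative)
D = rng.standard_normal((n, k))
D /= np.linalg.norm(D, axis=0)           # unit-norm atoms
x_ref = D[:, [10, 200, 400]] @ np.array([1.0, -0.5, 0.8])

support, coeffs = omp(D, x_ref, n_atoms=3)
print(sorted(support))                   # recovered "features" of the reference image
```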

    Distributed Quantization for Compressed Sensing

    We study distributed coding of compressed sensing (CS) measurements using vector quantization (VQ). We develop a distributed framework for realizing optimized quantizers that enable encoding CS measurements of correlated sparse sources followed by joint decoding at a fusion center. The optimality of VQ encoder-decoder pairs is addressed by minimizing the sum of mean-square errors between the sparse sources and their reconstruction vectors at the fusion center. We derive a lower bound on the end-to-end performance of the studied distributed system, and propose a practical encoder-decoder design through an iterative algorithm.
    Comment: 5 pages, accepted for presentation at ICASSP 201
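    The iterative flavor of such a quantizer design can be sketched with a plain Lloyd (k-means) update applied to CS measurement vectors, minimizing a per-measurement MSE. This is only a stand-in under toy assumptions (random Gaussian sensing matrix, synthetic sparse sources, a single encoder); it is not the paper's distributed optimization or its end-to-end lower bound.

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic sparse sources and their CS measurements: y = A x
n, m, s = 64, 16, 3
A = rng.standard_normal((m, n)) / np.sqrt(m)
X = np.zeros((1000, n))
idx = rng.integers(0, n, size=(1000, s))
for i, j in enumerate(idx):
    X[i, j] = rng.standard_normal(s)
Y = X @ A.T                                   # measurement vectors to be quantized

# plain Lloyd (k-means) iterations as a stand-in for the iterative VQ design
K = 8                                          # codebook size (illustrative)
codebook = Y[rng.choice(len(Y), K, replace=False)]
for _ in range(20):
    d = ((Y[:, None, :] - codebook[None]) ** 2).sum(-1)
    assign = d.argmin(1)                       # nearest-codeword encoding
    for k in range(K):
        if np.any(assign == k):
            codebook[k] = Y[assign == k].mean(0)   # centroid update

mse = ((Y - codebook[assign]) ** 2).mean()
print(f"per-measurement quantization MSE: {mse:.4f}")
```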

    Graded quantization for multiple description coding of compressive measurements

    Compressed sensing (CS) is an emerging paradigm for the acquisition of compressed representations of a sparse signal. Its low complexity is appealing for resource-constrained scenarios like sensor networks. However, such scenarios are often coupled with unreliable communication channels, and providing robust transmission of the acquired data to a receiver is an issue. Multiple description coding (MDC) effectively combats channel losses for systems without feedback, thus raising the interest in developing MDC methods explicitly designed for the CS framework and exploiting its properties. We propose a method called graded quantization (CS-GQ) that leverages the democratic property of compressive measurements to effectively implement MDC, and we provide methods to optimize its performance. A novel decoding algorithm based on the alternating direction method of multipliers is derived to reconstruct signals from a limited number of received descriptions. Simulations are performed to assess the performance of CS-GQ against other methods in the presence of packet losses. The proposed method is successful at providing robust coding of CS measurements and outperforms other schemes for the considered test metrics.
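    The graded-quantization idea, two descriptions that each carry all measurements but quantized at complementary fine/coarse rates (exploiting the democratic property that CS measurements are roughly equally informative), can be illustrated as follows. The step sizes and the 50/50 split below are arbitrary illustrative choices, and the ADMM-based reconstruction stage is omitted.

```python
import numpy as np

def uniform_quantize(v, step):
    """Mid-rise uniform scalar quantizer."""
    return step * (np.floor(v / step) + 0.5)

rng = np.random.default_rng(2)
m = 32
y = rng.standard_normal(m)                    # a CS measurement vector

fine, coarse = 0.05, 0.4                      # two quantization step sizes (illustrative)
half = np.arange(m) < m // 2                  # complementary index split

# description 1: first half fine, second half coarse; description 2: roles swapped
desc1 = np.where(half, uniform_quantize(y, fine), uniform_quantize(y, coarse))
desc2 = np.where(half, uniform_quantize(y, coarse), uniform_quantize(y, fine))

# if both descriptions arrive, keep the finely quantized value of each measurement
central = np.where(half, desc1, desc2)
print("side MSE:", ((desc1 - y) ** 2).mean(),
      "central MSE:", ((central - y) ** 2).mean())
```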

    Rate adaptive binary erasure quantization with dual fountain codes

    In this contribution, duals of fountain codes are introduced and their use for lossy source compression is investigated. It is shown both theoretically and experimentally that the source coding dual of the binary erasure channel coding problem, binary erasure quantization, is solved at a nearly optimal rate by applying duals of LT and Raptor codes with a belief-propagation-like algorithm that amounts to a graph pruning procedure. Furthermore, this quantization scheme is rate adaptive, i.e., its rate can be modified on the fly to adapt to the source distribution, much like LT and Raptor codes are able to adapt their rate to the erasure probability of a channel.
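    A toy view of the binary erasure quantization problem itself: given a source word in which some positions are erased ("don't care"), find a codeword of a linear code that agrees with the specified positions. The brute-force search below over a small (6,3) code is only meant to make the problem concrete; the contribution above solves it at scale with duals of LT/Raptor codes and a belief-propagation-like pruning algorithm rather than exhaustive search.

```python
import numpy as np
from itertools import product

# generator matrix of a tiny (6,3) binary linear code (illustrative choice)
G = np.array([[1, 0, 0, 1, 1, 0],
              [0, 1, 0, 1, 0, 1],
              [0, 0, 1, 0, 1, 1]])
source = np.array([1, -1, 0, -1, 1, -1])      # -1 marks an erased (unconstrained) bit

best = None
for msg in product([0, 1], repeat=G.shape[0]):       # brute force over all codewords
    cw = (np.array(msg) @ G) % 2
    mask = source >= 0
    cost = int(np.sum(cw[mask] != source[mask]))     # mismatches on specified bits
    if best is None or cost < best[0]:
        best = (cost, msg, cw)

print("message bits:", best[1], "codeword:", best[2], "mismatches:", best[0])
```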