
    On feedback in network source coding

    We consider source coding over networks with unlimited feedback from the sinks to the sources. We first show examples of networks where the rate region with feedback is a strict superset of that without feedback. Next, we find an achievable region for multiterminal lossy source coding with feedback. Finally, we evaluate this region for the case when one of the sources is fully known at the decoder and use the result to show that this region is a strict superset of the best known achievable region for the problem without feedback.
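    As background for the comparison above: for two-terminal lossy source coding, the standard benchmark achievable region without feedback is the Berger-Tung inner bound. The sketch below states its usual form; whether this is exactly the region the authors compare against is an assumption, not something stated in the abstract.

% Berger-Tung inner bound (standard form; identification with the paper's
% benchmark region is an assumption). A rate pair (R_1, R_2) is achievable
% with distortions (D_1, D_2) if there exist auxiliary variables U_1, U_2
% satisfying the long Markov chain U_1 - X_1 - X_2 - U_2, together with
% reconstruction functions \hat{X}_i = g_i(U_1, U_2) meeting
% E[d_i(X_i, \hat{X}_i)] \le D_i, such that
\begin{align*}
  R_1       &\ge I(X_1; U_1 \mid U_2), \\
  R_2       &\ge I(X_2; U_2 \mid U_1), \\
  R_1 + R_2 &\ge I(X_1, X_2; U_1, U_2).
\end{align*}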

    DRASIC: Distributed Recurrent Autoencoder for Scalable Image Compression

    We propose a new architecture for distributed image compression from a group of distributed data sources. The work is motivated by practical needs of data-driven codec design, low power consumption, robustness, and data privacy. The proposed architecture, which we refer to as Distributed Recurrent Autoencoder for Scalable Image Compression (DRASIC), is able to train distributed encoders and one joint decoder on correlated data sources. Its compression performance is much better than that obtained by training the codecs separately. Meanwhile, our distributed system with 10 distributed sources comes within 2 dB peak signal-to-noise ratio (PSNR) of a single codec trained on all data sources. We experiment with distributed sources of different correlations and show how well our data-driven methodology matches the Slepian-Wolf Theorem in Distributed Source Coding (DSC). To the best of our knowledge, this is the first data-driven DSC framework for general distributed code design with deep learning.
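    A minimal sketch of the distributed-encoder / joint-decoder idea described above, written in PyTorch under stated assumptions: it uses a simple convolutional autoencoder rather than the paper's recurrent architecture, and the module names, layer sizes, quantization shortcut, and MSE training loop are illustrative choices, not the authors' DRASIC code.

# Illustrative sketch (assumptions noted above): one encoder per data
# source, a single shared decoder, trained jointly on correlated sources.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, channels=32, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, code_dim, 4, stride=2, padding=1),
        )

    def forward(self, x):
        # A soft stand-in for the binarized latent code; a straight-through
        # quantizer would be used in a real learned codec.
        return torch.tanh(self.net(x))

class Decoder(nn.Module):
    def __init__(self, channels=32, code_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(code_dim, channels, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, code):
        return self.net(code)

# One encoder per distributed source, a single joint decoder.
num_sources = 10
encoders = nn.ModuleList([Encoder() for _ in range(num_sources)])
decoder = Decoder()
params = list(decoder.parameters()) + [p for e in encoders for p in e.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

def train_step(batches):
    # batches[i]: images from source i, shape (B, 3, H, W) with H, W divisible by 4
    opt.zero_grad()
    loss = 0.0
    for enc, x in zip(encoders, batches):
        x_hat = decoder(enc(x))      # every source is reconstructed by the shared decoder
        loss = loss + nn.functional.mse_loss(x_hat, x)
    loss.backward()                   # gradients reach all encoders and the joint decoder
    opt.step()
    return loss.item()

    The key design point the sketch tries to capture is that each encoder only ever sees its own source, while the decoder is shared, so correlation between sources is exploited on the decoder side, in the spirit of distributed source coding.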