Deep learning based joint source-channel coding (JSCC) has demonstrated
significant advancements in data reconstruction compared to separate
source-channel coding (SSCC). This superiority arises from the suboptimality of
SSCC when dealing with finite block-length data. Moreover, SSCC falls short in
reconstructing data in a multi-user and/or multi-resolution fashion, as it is
designed only for the worst channel condition and/or the highest-quality data. To overcome
these limitations, we propose a novel deep learning multi-resolution JSCC
framework inspired by the concept of multi-task learning (MTL). The proposed
framework encodes data at different resolutions through hierarchical layers and
decodes it effectively by leveraging both the current and past layers of
encoded data. Moreover, this framework holds great potential for
semantic communication, where the objective extends beyond data reconstruction
to preserving specific semantic attributes throughout the communication
process. These semantic features may include crucial elements such as class
labels, which are essential for classification tasks, or other key attributes
that must be preserved. Within this framework, each level of encoded data can be
carefully designed to retain specific data semantics. As a result, the
precision of a semantic classifier can be progressively enhanced across
successive layers, emphasizing the preservation of targeted semantics
throughout the encoding and decoding stages. We conduct experiments on the
MNIST and CIFAR10 datasets. The experiments on both datasets illustrate that
our proposed method is capable of surpassing the SSCC method in reconstructing
data at different resolutions, enabling the extraction of semantic features
with heightened confidence in successive layers. This capability is
particularly advantageous for prioritizing and preserving the more crucial
semantic features within the datasets.
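The layered encode/decode idea described above can be illustrated with a toy, pyramid-style sketch (this is an illustrative analogy only, not the paper's neural architecture; the function names, the 1-D signal setting, and the nearest-neighbor down/upsampling are all assumptions): a coarse base layer carries a low-resolution version of the data, and each successive layer adds a refinement residual, so a decoder that has received the first k layers can reconstruct the data at the k-th resolution.

```python
import numpy as np

def encode_layers(x, num_layers=3):
    """Encode a 1-D signal into hierarchical layers: a coarse base layer
    plus finer refinement residuals (assumed toy scheme, not the paper's)."""
    layers = []
    current = x.astype(float)
    for _ in range(num_layers - 1):
        coarse = current[::2]                      # crude 2x downsampling
        up = np.repeat(coarse, 2)[:len(current)]   # nearest-neighbor upsampling
        layers.append(current - up)                # residual detail at this scale
        current = coarse
    layers.append(current)                         # coarsest base layer
    return layers[::-1]                            # base first, then finer residuals

def decode_layers(layers, upto):
    """Reconstruct using only the first `upto` layers: more layers received
    means a higher-resolution reconstruction, as in multi-resolution decoding."""
    recon = layers[0].copy()
    for residual in layers[1:upto]:
        recon = np.repeat(recon, 2)[:len(residual)] + residual
    return recon
```

With all layers received, the reconstruction is exact; with fewer layers, the decoder still recovers a coarser version, which is the progressive behavior the abstract attributes to the hierarchical JSCC framework.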