Semantic communication has gained significant attention from researchers as a
promising technique to replace conventional communication in the next
generation of communication systems, primarily due to its ability to reduce
communication costs. However, few studies have examined its effectiveness in
multi-user scenarios, particularly when users differ in model architecture and
computing capacity. To address this
issue, we explore a semantic communication system that caters to multiple users
with different model architectures by using a multi-purpose transmitter at the
base station (BS). Specifically, the BS in the proposed framework employs
semantic and channel encoders to encode the image for transmission, while the
receiver utilizes its local channel and semantic decoder to reconstruct the
original image. The joint source-channel encoder at the BS effectively
extracts and compresses semantic features for each user by accounting for that
user's signal-to-noise ratio (SNR) and computing capacity. Based on the
network status, the joint source-channel encoder at the BS can adaptively
adjust the length of the transmitted signal. A longer signal ensures more
information for high-quality image reconstruction for the user, while a shorter
signal helps avoid network congestion. In addition, we propose a hybrid loss
function for training, which enhances the perceptual details of reconstructed
images. Finally, we conduct a series of extensive evaluations and ablation
studies to validate the effectiveness of the proposed system.

Comment: 14 pages, 10 figures
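The length-adaptation behavior described above (longer signals for better reconstruction, shorter ones to relieve congestion) can be illustrated with a toy heuristic. This is only a sketch: the function names, the assumed 0-20 dB SNR operating range, and the direction of the capacity dependence are illustrative assumptions, not the paper's learned policy.

```python
def select_length(snr_db, capacity, min_len=64, max_len=512):
    """Pick a transmitted-signal length from channel and receiver state.

    Hypothetical heuristic (the paper learns this adaptively): poor channels
    (low SNR) get longer, more redundant signals, and receivers with more
    computing capacity, which can afford heavier decoding, also get longer
    signals. `capacity` is assumed normalized to [0, 1].
    """
    snr = min(max(snr_db / 20.0, 0.0), 1.0)   # assumed 0-20 dB operating range
    need = 0.5 * ((1.0 - snr) + capacity)     # demand for symbols, in [0, 1]
    return int(min_len + need * (max_len - min_len))

def encode(features, length):
    """Truncate or zero-pad a semantic feature vector to the chosen length,
    standing in for the adaptive joint source-channel encoder output."""
    return (features[:length] + [0.0] * (length - len(features)))[:length]
```

For example, a high-SNR link to a lightweight receiver would be assigned the minimum length, while a noisy link to a capable receiver would be assigned the maximum, matching the trade-off stated in the abstract.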