
    Adaptive Semantic Communications: Overfitting the Source and Channel for Profit

    Most semantic communication systems leverage deep learning models to provide end-to-end transmission performance surpassing established source and channel coding approaches. Research has so far focused mainly on architecture and model improvements, but a model trained over a full dataset and ergodic channel responses is unlikely to be optimal for every test instance. Owing to limited model capacity and imperfect optimization and generalization, such learned models are suboptimal, especially when the test data distribution or channel response differs from that seen during training, as is likely to be the case in practice. To tackle this, we propose a novel semantic communication paradigm that exploits the overfitting property of deep learning models: the model can be updated after deployment, which can further lead to substantial gains in transmission rate-distortion (RD) performance. We name this system adaptive semantic communication (ASC). In our ASC system, the wirelessly transmitted stream carries both the semantic representations of the source data and the adapted decoder model parameters. Specifically, we take the overfitting concept to the extreme, proposing a series of ingenious methods to adapt the semantic codec or representations to an individual data or channel-state instance. The whole ASC system design is formulated as an optimization problem whose goal is to minimize a loss function expressing a tripartite tradeoff among the data rate, model rate, and distortion terms. Experiments (including a user study) verify the effectiveness and efficiency of our ASC system. Notably, the substantial gain of this overfitted coding paradigm can catalyze the upgrade of semantic communication to a new era.
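
    As a rough illustration of the tripartite tradeoff described above, the sketch below scores a codec configuration by a weighted sum of data rate, model rate, and distortion; the function name, weights, and numbers are illustrative assumptions, not taken from the paper.

```python
def asc_objective(data_rate, model_rate, distortion, lam_model=0.1, lam_dist=1.0):
    """Tripartite tradeoff: bits spent on the semantic representation (data rate),
    bits spent on sending adapted decoder parameters (model rate), and the
    reconstruction distortion. The weights are illustrative, not from the paper."""
    return data_rate + lam_model * model_rate + lam_dist * distortion

# Toy comparison: a generic codec vs. an instance-overfitted codec that spends a
# few extra bits on decoder updates to cut distortion (numbers are made up).
generic    = asc_objective(data_rate=0.50, model_rate=0.00, distortion=0.040)
overfitted = asc_objective(data_rate=0.45, model_rate=0.05, distortion=0.025)
print(f"generic cost = {generic:.3f}, overfitted cost = {overfitted:.3f}")
```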

    Learning sensor-agent communication with variable quantizations

    In this work, the possibility of training a remote (deep) reinforcement learning system was studied. The thesis focuses on the problem of learning to communicate relevant information from a sensor to a reinforcement learning agent. Different quantization strategies were tested in order to balance the trade-off between the effectiveness of the communicated message and the limited communication rate constraint.
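
    A minimal sketch of the kind of quantization strategy the thesis compares: a uniform scalar quantizer whose bit budget trades message fidelity against the communication rate. The function name, observation range, and bit-width sweep are illustrative assumptions.

```python
import numpy as np

def quantize_uniform(x, n_bits, lo=-1.0, hi=1.0):
    """Uniformly quantize a sensor observation to 2**n_bits levels.
    Fewer bits mean a lower communication rate but a coarser message."""
    levels = 2 ** n_bits
    x = np.clip(x, lo, hi)
    step = (hi - lo) / (levels - 1)
    return lo + np.round((x - lo) / step) * step

rng = np.random.default_rng(0)
obs = rng.uniform(-1, 1, size=1000)          # hypothetical sensor readings
for bits in (2, 4, 8):
    err = np.mean((obs - quantize_uniform(obs, bits)) ** 2)
    print(f"{bits} bits per message -> MSE {err:.5f}")
```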

    Oblivious data hiding: a practical approach

    This dissertation presents an in-depth study of oblivious data hiding with an emphasis on quantization-based schemes. Three main issues are specifically addressed: 1. Theoretical and practical aspects of embedder-detector design. 2. Performance evaluation, and analysis of performance vs. complexity tradeoffs. 3. Some application-specific implementations. A communications framework based on channel-adaptive encoding and channel-independent decoding is proposed and interpreted in terms of the oblivious data hiding problem. The duality between the suggested encoding-decoding scheme and practical embedding-detection schemes is examined. With this perspective, a formal treatment of the processing employed in quantization-based hiding methods is presented. In accordance with these results, the key aspects of the embedder-detector design problem for practical methods are laid out, and various embedding-detection schemes are compared in terms of probability of error, normalized correlation, and hiding rate, assuming AWGN attack scenarios and using the mean squared error distortion measure. The performance-complexity tradeoffs available for large and small embedding signal sizes (availability of high bandwidth and limitation of low bandwidth) are examined and some novel insights are offered. A new codeword generation scheme is proposed to enhance the performance of low-bandwidth applications. Embedding-detection schemes are devised for the watermarking application of data hiding, where robustness against attacks is the main concern rather than the hiding rate or payload. In particular, cropping-resampling and lossy compression types of noninvertible attacks are considered in this dissertation.
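
    The sketch below shows scalar quantization index modulation (QIM), the canonical quantization-based embedding-detection scheme this line of work builds on, under an AWGN attack; the step size, noise level, and helper names are illustrative assumptions rather than the dissertation's exact construction.

```python
import numpy as np

def qim_embed(host, bit, delta=1.0):
    """Scalar QIM: embed one bit by quantizing the host sample onto one of two
    interleaved lattices (shifted by delta/2)."""
    offset = 0.0 if bit == 0 else delta / 2
    return np.round((host - offset) / delta) * delta + offset

def qim_detect(received, delta=1.0):
    """Minimum-distance detection: pick the lattice closest to the received sample."""
    d0 = np.abs(received - qim_embed(received, 0, delta))
    d1 = np.abs(received - qim_embed(received, 1, delta))
    return int(d1 < d0)

rng = np.random.default_rng(1)
bits = rng.integers(0, 2, size=2000)
host = rng.normal(0, 5, size=bits.size)
marked = np.array([qim_embed(h, b) for h, b in zip(host, bits)])
attacked = marked + rng.normal(0, 0.1, size=marked.size)   # AWGN attack
decoded = np.array([qim_detect(r) for r in attacked])
print("bit error rate:", np.mean(decoded != bits))
```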

    Interleaving Channel Estimation and Limited Feedback for Point-to-Point Systems with a Large Number of Transmit Antennas

    We introduce and investigate the opportunities of multi-antenna communication schemes whose training and feedback stages are interleaved and mutually interacting. Specifically, unlike traditional schemes where the transmitter first trains all of its antennas at once and then receives a single feedback message, we consider a scenario where the transmitter instead trains its antennas one by one and receives feedback information immediately after training each of its antennas. The feedback message may ask the transmitter to train another antenna, or it may terminate the feedback/training phase and provide the quantized codeword (e.g., a beamforming vector) to be utilized for data transmission. As a specific application, we consider a multiple-input single-output system with t transmit antennas, a short-term power constraint P, and target data rate ρ. We show that for any t, the same outage probability as a system with perfect transmitter and receiver channel state information can be achieved with a feedback rate of R_1 bits per channel state and via training R_2 transmit antennas on average, where R_1 and R_2 are independent of t and depend only on ρ and P. In addition, we design variable-rate quantizers for the channel coefficients to further minimize the feedback rate of our scheme. Comment: To appear in IEEE Transactions on Wireless Communications.
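
    A toy simulation of the interleaved idea, assuming a simple stopping rule: train one antenna per round and stop once the already-estimated coefficients support the target rate under beamforming. The stopping criterion, unquantized beamformer, and parameter values are illustrative assumptions, not the paper's exact feedback policy.

```python
import numpy as np

def interleaved_training(h, P, rho):
    """Train antennas one by one; after each round, 'feedback' either requests
    another antenna or terminates with a beamformer over the trained antennas.
    Stopping rule here is a plausible reading of the abstract, not the paper's."""
    trained = np.zeros(0, dtype=complex)
    for i, coeff in enumerate(h, start=1):
        trained = np.append(trained, coeff)       # one more training round
        gain = np.sum(np.abs(trained) ** 2)       # beamforming gain so far
        if np.log2(1 + P * gain) >= rho:          # feedback: "stop, here is w"
            w = np.conj(trained) / np.sqrt(gain)  # unquantized for simplicity
            return i, w
    return len(h), None                           # outage: target rate not met

rng = np.random.default_rng(2)
t, P, rho = 64, 1.0, 2.0
h = (rng.normal(size=t) + 1j * rng.normal(size=t)) / np.sqrt(2)  # Rayleigh fading
n_trained, w = interleaved_training(h, P, rho)
print("antennas trained:", n_trained, "| outage:", w is None)
```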

    Distortion Outage Minimization in Rayleigh Fading Using Limited Feedback

    In this paper, we investigate the problem of distortion outage minimization in a clustered sensor network where sensors within each cluster send their noisy measurements of a random Gaussian source to their respective clusterheads (CHs) using analog forwarding and a non-orthogonal multi-access scheme under the assumption of perfect distributed beamforming. The CHs then amplify and forward their measurements to a remote fusion center over orthogonal Rayleigh-distributed block-fading channels. Due to fading, the distortion between the true value of the random source and its reconstructed estimate at the fusion center becomes a random process. Motivated by delay-limited applications, we seek to minimize the probability that the distortion exceeds a certain threshold (called the "distortion outage" probability) by optimally allocating transmit powers to the CHs. In general, the outage-minimizing optimal power allocation for the CH transmitters requires full instantaneous channel state information (CSI) at the transmitters, which is difficult to obtain in practice. The novelty of this paper lies in designing locally optimal and sub-optimal power allocation algorithms that are simple to implement, using limited channel feedback where the fusion center broadcasts only a few bits of feedback to the CHs. Numerical results illustrate that a few bits of feedback provide significant improvement over no CSI, and only 6-8 bits of feedback result in outages that are reasonably close to the full-CSI performance for a 6-cluster sensor network. We also present results using a simultaneous perturbation stochastic approximation (SPSA) based optimization algorithm that provides further improvements in outage performance, but at the cost of much greater computational complexity.
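
    A toy Monte Carlo sketch of the distortion-outage idea for a single forwarding link, comparing constant power (no CSI) with a one-bit feedback scheme that shifts power toward weak channels under the same average power budget; the distortion model, threshold, and power split are illustrative assumptions, not the paper's optimized allocation.

```python
import numpy as np

rng = np.random.default_rng(3)
N = 200_000
g = rng.exponential(1.0, size=N)        # Rayleigh fading -> exponential power gains
sigma2, D_max, P_avg = 1.0, 0.25, 10.0  # source variance, distortion threshold, power budget

def distortion(p, g):
    """Toy end-to-end distortion for analog forwarding: higher received SNR p*g
    means lower distortion (illustrative model, not the paper's)."""
    return sigma2 / (1.0 + p * g)

# No CSI at the transmitter: constant power.
out_no_csi = np.mean(distortion(P_avg, g) > D_max)

# 1 bit of feedback: the fusion center signals whether the channel is below the
# median; power is split across the two regions keeping the same average power.
thr = np.median(g)
p = np.where(g < thr, 1.6 * P_avg, 0.4 * P_avg)   # each region has probability 1/2
out_1bit = np.mean(distortion(p, g) > D_max)

print(f"distortion outage, no CSI   : {out_no_csi:.4f}")
print(f"distortion outage, 1-bit CSI: {out_1bit:.4f}")
```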

    Sparse representation based hyperspectral image compression and classification

    This thesis presents research on applying sparse representation to lossy hyperspectral image compression and hyperspectral image classification. The proposed lossy hyperspectral image compression framework introduces two types of dictionaries, referred to as the sparse representation spectral dictionary (SRSD) and the multi-scale spectral dictionary (MSSD). The former is learnt in the spectral domain to exploit spectral correlations, and the latter in the wavelet multi-scale spectral domain to exploit both spatial and spectral correlations in hyperspectral images. To alleviate the computational demand of dictionary learning, either a base dictionary trained offline or an update of the base dictionary is employed in the compression framework. The proposed compression method is evaluated in terms of different objective metrics and compared to selected state-of-the-art hyperspectral image compression schemes, including JPEG 2000. The numerical results demonstrate the effectiveness and competitiveness of both the SRSD and MSSD approaches. For the proposed hyperspectral image classification method, we utilize the sparse coefficients to train support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. In particular, the discriminative character of the sparse coefficients is enhanced by incorporating contextual information using local mean filters. The classification performance is evaluated and compared to a number of similar or representative methods. The results show that our approach can outperform other approaches based on SVM or sparse representation. This thesis makes the following contributions. It provides a relatively thorough investigation of applying sparse representation to lossy hyperspectral image compression. Specifically, it reveals the effectiveness of sparse representation for exploiting spectral correlations in hyperspectral images. In addition, we show that the discriminative character of sparse coefficients can lead to superior performance in hyperspectral image classification.
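
    A minimal sparse-coding sketch in the spirit of the SRSD approach: a pixel spectrum is approximated by a few atoms of a dictionary via orthogonal matching pursuit. The random dictionary, atom counts, and the small OMP routine are illustrative assumptions standing in for the learned dictionaries and solver used in the thesis.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: approximate y with a sparse
    combination of dictionary atoms (columns of D)."""
    residual, support = y.copy(), []
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        sub = D[:, support]
        sol, *_ = np.linalg.lstsq(sub, y, rcond=None)
        residual = y - sub @ sol
    coeffs[support] = sol
    return coeffs

rng = np.random.default_rng(4)
bands, atoms = 100, 256                       # hypothetical spectral bands / atoms
D = rng.normal(size=(bands, atoms))
D /= np.linalg.norm(D, axis=0)                # normalize dictionary atoms
pixel = D[:, [3, 77, 200]] @ np.array([1.0, -0.5, 0.8])   # a 3-sparse spectrum
alpha = omp(D, pixel, n_nonzero=3)
print("reconstruction error:", np.linalg.norm(pixel - D @ alpha))
# Compression keeps only the few (index, value) pairs; classification feeds
# alpha (optionally spatially smoothed) to an SVM or kNN classifier.
```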

    Introduction to vector quantization and its applications for numerics

    Proceedings of CEMRACS 2013 - Modelling and simulation of complex systems: stochastic and deterministic approaches (T. Lelièvre et al., editors). We present an introductory survey of optimal vector quantization and its first applications to Numerical Probability and, to a lesser extent, to Information Theory and Data Mining. Both theoretical results on the quantization rate of a random vector taking values in ℝ^d (equipped with the canonical Euclidean norm) and the learning procedures that allow one to design optimal quantizers (CLVQ and Lloyd's procedures) are presented. We also introduce and investigate the more recent notion of greedy quantization, which may be seen as a sequential optimal quantization. A rate-optimality result is established. A brief comparison with the Quasi-Monte Carlo method is also carried out.
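
    A compact sketch of the Lloyd procedure mentioned above (the stochastic CLVQ variant is omitted), run on Gaussian samples in ℝ^2 for a few codebook sizes; the sample source and codebook sizes are illustrative.

```python
import numpy as np

def lloyd(samples, n_codewords, n_iter=50, seed=0):
    """Lloyd's fixed-point iteration: alternate nearest-codeword assignment
    (Voronoi cells) with centroid updates, which decreases the quadratic
    quantization error at each step."""
    rng = np.random.default_rng(seed)
    codebook = samples[rng.choice(len(samples), n_codewords, replace=False)].copy()
    for _ in range(n_iter):
        d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
        cells = d2.argmin(axis=1)
        for k in range(n_codewords):
            if np.any(cells == k):
                codebook[k] = samples[cells == k].mean(axis=0)
    d2 = ((samples[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return codebook, np.sqrt(d2.min(axis=1).mean())

rng = np.random.default_rng(5)
X = rng.normal(size=(10_000, 2))               # N(0, I_2) samples in R^2
for N in (16, 64, 256):
    _, err = lloyd(X, N)
    print(f"N = {N:3d} codewords -> RMS quantization error {err:.3f}")
```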