    The Dispersion of the Gauss-Markov Source

    The Gauss-Markov source produces U_i = aU_(i-1) + Z_i for i ≥ 1, where U_0 = 0, |a| < 1, and the Z_i are i.i.d. Gaussian random variables with zero mean and variance σ^2. We consider lossy compression of n samples of this source under the excess-distortion criterion for any distortion d > 0, and we show that the dispersion has a reverse waterfilling representation. This is the first finite blocklength result for lossy compression of sources with memory. We prove that the finite blocklength rate-distortion function R(n, d, ε) approaches the rate-distortion function R(d) as R(n, d, ε) = R(d) + √(V(d)/n) Q^(-1)(ε) + o(1/√n), where V(d) is the dispersion, ε ∈ (0, 1) is the excess-distortion probability, and Q^(-1) is the inverse of the Q-function. We give a reverse waterfilling integral representation for the dispersion V(d), which parallels that of the rate-distortion function for Gaussian processes. Remarkably, for all 0 < d ≤ σ^2/(1+|a|)^2, R(n, d, ε) of the Gauss-Markov source coincides with that of Z_i, the i.i.d. Gaussian noise driving the process, up to the second-order term. Among the novel technical tools developed in this paper are a sharp approximation of the eigenvalues of the covariance matrix of n samples of the Gauss-Markov source, and a construction of a typical set using the maximum likelihood estimate of the parameter a based on n observations.
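
    Both quantities in the abstract can be evaluated numerically: R(d) via classical reverse waterfilling over the power spectral density of the AR(1) process, S(ω) = σ^2/(1 + a^2 - 2a·cos ω), and the second-order approximation R(n, d, ε) ≈ R(d) + √(V(d)/n) Q^(-1)(ε). The sketch below is a minimal illustration, not code from the paper; it works in the low-distortion regime 0 < d ≤ σ^2/(1+|a|)^2, where, per the abstract, the source matches the i.i.d. Gaussian noise Z_i up to second order, and it imports the known memoryless-Gaussian dispersion V(d) = 1/2 nats^2 (a Kostina-Verdú result, assumed here, not stated in this abstract). All function names and parameter values are hypothetical.

    import numpy as np
    from scipy.stats import norm

    def ar1_psd(omega, a, sigma2):
        # Power spectral density of U_i = a*U_(i-1) + Z_i with Z_i ~ N(0, sigma2).
        return sigma2 / (1.0 + a**2 - 2.0 * a * np.cos(omega))

    def rd_reverse_waterfilling(d, a, sigma2, grid=200_000):
        # Classical reverse waterfilling for a stationary Gaussian source:
        # choose the water level theta so that
        #     d = (1/2pi) * integral of min(theta, S(w)) dw,
        # then
        #     R(d) = (1/2pi) * integral of max(0, 0.5*ln(S(w)/theta)) dw   [nats].
        w = np.linspace(-np.pi, np.pi, grid)
        S = ar1_psd(w, a, sigma2)
        dist = lambda theta: np.trapz(np.minimum(theta, S), w) / (2 * np.pi)
        lo, hi = 0.0, S.max()
        for _ in range(100):                       # bisection on the water level
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if dist(mid) < d else (lo, mid)
        theta = 0.5 * (lo + hi)
        return np.trapz(np.maximum(0.0, 0.5 * np.log(S / theta)), w) / (2 * np.pi)

    a, sigma2, n, eps = 0.5, 1.0, 1000, 0.1
    d = sigma2 / (1 + abs(a))**2                   # boundary of the low-distortion regime
    R_d = rd_reverse_waterfilling(d, a, sigma2)
    V_d = 0.5                                      # nats^2: i.i.d. Gaussian dispersion (assumed)
    R_nde = R_d + np.sqrt(V_d / n) * norm.isf(eps) # norm.isf(eps) = Q^(-1)(eps)
    print(f"R(d)       ~ {R_d:.4f} nats/sample")
    print(f"R(n,d,eps) ~ {R_nde:.4f} nats/sample  (n={n}, eps={eps})")

    At this d the computed R(d) agrees with (1/2)·ln(σ^2/d) ≈ 0.41 nats, the rate-distortion function of the i.i.d. noise Z_i, which is exactly the coincidence the abstract highlights; the o(1/√n) remainder is, of course, not captured by this approximation.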

    Side information aware source and channel coding in wireless networks

    Signals in communication networks exhibit significant correlation, which can stem from the physical nature of the underlying sources or can be created within the system. Current layered network architectures, in which, following Shannon's separation theorem, data is compressed and transmitted over independent bit-pipes, are in general unable to exploit such correlation efficiently. Moreover, this strictly layered architecture was developed for wired networks and ignores the broadcast and highly dynamic nature of the wireless medium, creating a bottleneck in wireless network design. Technologies that exploit correlated information and go beyond the layered network architecture can become a key feature of future wireless networks, as information theory promises significant gains. In this thesis, we study, from an information theoretic perspective, three distinct yet fundamental problems involving the availability of correlated information in wireless networks, and we develop novel communication techniques to exploit it efficiently. We first look at two joint source-channel coding problems involving the lossy transmission of Gaussian sources in a multi-terminal and a time-varying setting in which correlated side information is present in the network. In these two problems, the optimality of Shannon's separation breaks down, and separate source and channel coding is shown to perform poorly compared to the proposed joint source-channel coding designs, which achieve the optimal performance in some setups. Then, we characterize the capacity of a class of orthogonal relay channels in the presence of channel side information at the destination, and show that joint decoding and compression of the received signal at the relay is required to optimally exploit the available side information. Our results in these three different scenarios emphasize the benefits of exploiting correlated side information at the destination when designing a communication system, even though the nature of the side information and the performance measure differ across the three scenarios.

    Secure Communication over Wiretap Channels with Multiple Receivers and Active Jammers

    We derive a state-of-the-art strong secrecy coding scheme for the multi-receiver wiretap channel under both the joint and the individual secrecy constraints. We show that individual secrecy can utilize the concept of mutual trust to achieve a larger capacity region than the joint one. Further, we derive a full characterization of the list secrecy capacity of arbitrarily varying wiretap channels and establish some interesting results on the continuity and additivity behaviour of the capacity.