
    Improved Upper Bounds to the Causal Quadratic Rate-Distortion Function for Gaussian Stationary Sources

    We improve the existing achievable rate regions for causal and for zero-delay source coding of stationary Gaussian sources under an average mean squared error (MSE) distortion measure. To begin with, we find a closed-form expression for the information-theoretic causal rate-distortion function (RDF) under such a distortion measure, denoted by R_c^{it}(D), for first-order Gauss-Markov processes. R_c^{it}(D) is a lower bound to the optimal performance theoretically attainable (OPTA) by any causal source code, namely R_c^{op}(D). We show that, for Gaussian sources, the latter can also be upper bounded as R_c^{op}(D) ≤ R_c^{it}(D) + 0.5 log_2(2πe) bits/sample. In order to analyze R_c^{it}(D) for arbitrary zero-mean Gaussian stationary sources, we introduce \bar{R}_c^{it}(D), the information-theoretic causal RDF when the reconstruction error is jointly stationary with the source. Based upon \bar{R}_c^{it}(D), we derive three closed-form upper bounds to the additive rate loss defined as \bar{R}_c^{it}(D) - R(D), where R(D) denotes Shannon's RDF. Two of these bounds are strictly smaller than 0.5 bits/sample at all rates. These bounds differ from one another in their tightness and ease of evaluation; the tighter the bound, the more involved its evaluation. We then show that, for any source spectral density and any positive distortion D ≤ σ_x², \bar{R}_c^{it}(D) can be realized by an AWGN channel surrounded by a unique set of causal pre-, post-, and feedback filters. We show that finding such filters constitutes a convex optimization problem. In order to solve the latter, we propose an iterative optimization procedure that yields the optimal filters and is guaranteed to converge to \bar{R}_c^{it}(D). Finally, by establishing a connection to feedback quantization we design a causal and a zero-delay coding scheme which, for Gaussian sources, achieves... Comment: 47 pages, revised version submitted to IEEE Trans. Information Theory
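    As a hedged illustration of the quantities the abstract compares (not the paper's construction), Shannon's RDF R(D) for a zero-mean Gaussian stationary source can be computed numerically by reverse water-filling over its power spectral density; the abstract's result then caps the causal rate loss \bar{R}_c^{it}(D) - R(D) below 0.5 bits/sample. The AR(1) parameters and grid size below are arbitrary placeholders.

    ```python
    import math

    def ar1_spectrum(w, a=0.9, var_w=1.0):
        # power spectral density of the Gauss-Markov source x_t = a*x_{t-1} + w_t
        return var_w / abs(1 - a * complex(math.cos(w), -math.sin(w))) ** 2

    def shannon_rdf(D, spec, n=4096):
        """Shannon RDF R(D) in bits/sample for a Gaussian stationary source,
        via reverse water-filling: bisect on the water level theta so that
        (1/pi) * integral_0^pi min(theta, S(w)) dw = D."""
        ws = [math.pi * (i + 0.5) / n for i in range(n)]
        S = [spec(w) for w in ws]

        def dist(theta):
            return sum(min(theta, s) for s in S) / n

        lo, hi = 0.0, max(S)
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            if dist(mid) < D:
                lo = mid
            else:
                hi = mid
        theta = 0.5 * (lo + hi)
        return sum(max(0.0, 0.5 * math.log2(s / theta)) for s in S) / n
    ```

    For a flat (white) spectrum this reduces to the familiar 0.5·log2(σ²/D) formula, which makes a convenient sanity check.
    
    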

    Stabilizing Error Correction Codes for Controlling LTI Systems over Erasure Channels

    We propose (k,k') stabilizing codes, a class of delayless error correction codes that are useful for control over networks with erasures. For each input symbol, k output symbols are generated by the stabilizing code, and receiving any k' of these outputs guarantees stability. Thus, the system to be stabilized is taken into account in the design of the erasure codes. Our focus is on LTI systems, and we construct codes based on independent encodings and multiple descriptions. The theoretical efficiency and performance of the codes are assessed, and their practical performance is demonstrated in a simulation study. There is a significant gain over other delayless codes such as repetition codes. Comment: Accepted and presented at the IEEE 60th Conference on Decision and Control (CDC). arXiv admin note: substantial text overlap with arXiv:2112.1171
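    As a toy sketch of the problem setting (this is the repetition-code baseline the paper improves upon, not the (k,k') construction itself), consider an unstable scalar plant controlled over an i.i.d. erasure channel: sending k redundant copies of each control symbol keeps the state bounded as long as bursts that erase all k copies are rare. All numbers below are arbitrary.

    ```python
    import random

    # Toy sketch: unstable plant x_{t+1} = a*x_t + u_t + w_t, controlled over an
    # i.i.d. erasure channel. A (k,1) repetition code sends k copies of the
    # control symbol per step; the step succeeds if at least one copy survives.
    a, k, p_erase = 1.5, 3, 0.4
    rng = random.Random(0)
    x, peak = 1.0, 0.0
    for t in range(200):
        u = -a * x  # deadbeat control, applied only if a copy gets through
        received = any(rng.random() > p_erase for _ in range(k))
        w = rng.uniform(-0.1, 0.1)  # small process disturbance
        x = a * x + (u if received else 0.0) + w
        peak = max(peak, abs(x))
    ```

    With k = 3 the per-step probability that every copy is erased drops to 0.4³ ≈ 0.064, so long unstable excursions are very unlikely; the paper's point is that system-aware codes achieve this kind of guarantee more efficiently than blind repetition.
    
    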

    Design and Analysis of LT Codes with Decreasing Ripple Size

    In this paper we propose a new design of LT codes, which decreases the amount of necessary overhead in comparison to existing designs. The design focuses on a parameter of the LT decoding process called the ripple size. This parameter was also a key element in the design proposed in the original work by Luby. Specifically, Luby argued that an LT code should provide a constant ripple size during decoding. In this work we show that the ripple size should decrease during decoding, in order to reduce the necessary overhead. Initially we motivate this claim by analytical results related to the redundancy within an LT code. We then propose a new design procedure, which can provide any desired achievable decreasing ripple size. The new design procedure is evaluated and compared to the current state of the art through simulations. This reveals a significant increase in performance with respect to both average overhead and error probability at any fixed overhead.
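    To make the "ripple" concrete, here is a minimal sketch of LT encoding and the peeling (belief-propagation) decoder, recording the ripple size, i.e. the number of degree-1 symbols, before each release. The degree distribution is an arbitrary placeholder, not Luby's robust soliton nor the paper's optimized design.

    ```python
    import random

    def lt_encode(data, n_out, weights, rng):
        """Each output symbol is the XOR of a random set of input symbols;
        the set size (degree) is drawn from `weights` (degree = index + 1)."""
        out = []
        for _ in range(n_out):
            d = rng.choices(range(1, len(weights) + 1), weights=weights)[0]
            idx = rng.sample(range(len(data)), d)
            v = 0
            for i in idx:
                v ^= data[i]
            out.append([set(idx), v])
        return out

    def peel_decode(symbols, k):
        """Peeling decoder; returns the decoded symbols (None if unresolved)
        and the ripple size observed before each release."""
        decoded = [None] * k
        ripple_trace = []
        while True:
            ripple = [s for s in symbols if len(s[0]) == 1]
            ripple_trace.append(len(ripple))
            if not ripple:
                break
            i = next(iter(ripple[0][0]))
            v = ripple[0][1]
            decoded[i] = v
            # subtract the now-known input symbol from every encoded symbol
            for s in symbols:
                if i in s[0]:
                    s[1] ^= v
                    s[0].discard(i)
            symbols = [s for s in symbols if s[0]]
        return decoded, ripple_trace
    ```

    Decoding fails the moment the ripple hits zero with symbols still unresolved, which is why shaping how the ripple evolves over the course of decoding directly controls the overhead/error trade-off discussed above.
    
    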

    Colour shifts: On methodologies in research on the polychromy of Greek and Roman sculpture

    The article offers a partial overview of methodologies in research on the polychromy of Greek and Roman sculpture. The character of the evidence requires an interdisciplinary approach. This evidence is briefly presented, after which aspects of the actual investigation are dealt with; the section on analytical methods deals only cursorily with invasive techniques. Attention is drawn to the importance of research-based experimental reconstruction of polychrome sculptures. Finally, some interdisciplinary research scenarios are described. The article is based on work done within the framework of the ‘Tracking Colour’ project of the Ny Carlsberg Glyptotek and the Copenhagen Polychromy Network, 2009–2013, with the support of the Carlsberg Foundation.

    Information Loss in the Human Auditory System

    From the eardrum to the auditory cortex, where acoustic stimuli are decoded, there are several stages of auditory processing and transmission where information may potentially get lost. In this paper, we aim to quantify the information loss in the human auditory system using information-theoretic tools. To do so, we consider a speech communication model, where words are uttered and sent through a noisy channel, and then received and processed by a human listener. We define a notion of information loss that is related to the human word recognition rate. To assess the word recognition rate of humans, we conduct a closed-vocabulary intelligibility test. We derive upper and lower bounds on the information loss. Simulations reveal that the bounds are tight, and we observe that the information loss in the human auditory system increases as the signal-to-noise ratio (SNR) decreases. Our framework also allows us to study whether humans are optimal in terms of speech perception in a noisy environment. Towards that end, we derive optimal classifiers and compare the human and machine performance in terms of information loss and word recognition rate. We observe a higher information loss and lower word recognition rate for humans compared to the optimal classifiers. In fact, depending on the SNR, the machine classifier may outperform humans by as much as 8 dB. This implies that for the speech-in-stationary-noise setup considered here, the human auditory system is sub-optimal for recognizing noisy words.
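    A hedged sketch of how a measured word recognition rate bounds information quantities: by Fano's inequality, the word error probability Pe over an M-word closed vocabulary upper-bounds the equivocation H(W | Ŵ), and hence lower-bounds the information conveyed. Whether this matches the paper's exact definition of information loss is an assumption; the sketch below further assumes a uniform prior over the vocabulary.

    ```python
    import math

    def binary_entropy(p):
        """Binary entropy h(p) in bits."""
        if p in (0.0, 1.0):
            return 0.0
        return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

    def fano_equivocation_bound(pe, m):
        """Fano upper bound on H(W | W_hat) in bits, given word error
        probability pe over an M-word closed vocabulary."""
        return binary_entropy(pe) + pe * math.log2(m - 1)

    def min_transmitted_info(pe, m):
        # lower bound on I(W; W_hat) for a uniform M-word vocabulary
        return math.log2(m) - fano_equivocation_bound(pe, m)
    ```

    For example, perfect recognition (pe = 0) over an 8-word vocabulary certifies that all 3 bits per word reached the listener.
    
    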

    Fixed-Rate Zero-Delay Source Coding for Stationary Vector-Valued Gauss-Markov Sources


    Directed Data-Processing Inequalities for Systems with Feedback

    We present novel data-processing inequalities relating the mutual information and the directed information in systems with feedback. The internal blocks within such systems are restricted only to be causal mappings, but are allowed to be non-linear, stochastic and time-varying. These blocks can, for example, represent source encoders, decoders or even communication channels. Moreover, the involved signals can be arbitrarily distributed. Our first main result relates mutual and directed information and can be interpreted as a law of conservation of information flow. Our second main result is a pair of data-processing inequalities (one the conditional version of the other) between nested pairs of random sequences entirely within the closed loop. Our third main result introduces and characterizes the notion of in-the-loop (ITL) transmission rate for channel-coding scenarios in which the messages are internal to the loop. Interestingly, in this case the conventional notions of transmission rate, associated with the entropy of the messages, and of channel capacity, based on maximizing the mutual information between the messages and the output, turn out to be inadequate. Instead, as we show, the ITL transmission rate is the unique notion of rate for which a channel code attains zero error probability if and only if the ITL rate does not exceed the corresponding directed information rate from messages to decoded messages. We apply our data-processing inequalities to show that the supremum of achievable (in the usual channel-coding sense) ITL transmission rates is upper bounded by the supremum of the directed information rate across the communication channel. Moreover, we present an example in which this upper bound is attained. Finally, ... Comment: Submitted to Entropy. arXiv admin note: substantial text overlap with arXiv:1301.642
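    The conservation-of-information idea can be checked numerically on a toy two-step example. For any joint distribution of (X1, X2, Y1, Y2), Massey's conservation law I(X²; Y²) = I(X² → Y²) + I(DY² → X²) holds, where the reverse term uses a one-step-delayed Y. The sketch below (index conventions and the random distribution are ours, not the paper's) verifies the identity on a random joint pmf.

    ```python
    import math
    import random

    def H(pmf):
        """Shannon entropy in bits of a pmf given as {outcome: prob}."""
        return -sum(p * math.log2(p) for p in pmf.values() if p > 0)

    def marg(pmf, keep):
        """Marginal pmf over the coordinates listed in `keep`."""
        out = {}
        for outcome, p in pmf.items():
            key = tuple(outcome[i] for i in keep)
            out[key] = out.get(key, 0.0) + p
        return out

    def cmi(pmf, a, b, c):
        """Conditional mutual information I(A; B | C); a, b, c are index
        lists into the outcome tuple."""
        return (H(marg(pmf, a + c)) + H(marg(pmf, b + c))
                - H(marg(pmf, a + b + c)) - H(marg(pmf, c)))

    # random joint pmf over (x1, x2, y1, y2) in {0,1}^4
    rng = random.Random(0)
    atoms = [(x1, x2, y1, y2) for x1 in (0, 1) for x2 in (0, 1)
             for y1 in (0, 1) for y2 in (0, 1)]
    w = [rng.random() for _ in atoms]
    z = sum(w)
    pmf = {at: wi / z for at, wi in zip(atoms, w)}

    mi = cmi(pmf, [0, 1], [2, 3], [])                              # I(X^2 ; Y^2)
    directed = cmi(pmf, [0], [2], []) + cmi(pmf, [0, 1], [3], [2])  # I(X^2 -> Y^2)
    reverse = cmi(pmf, [2], [1], [0])                              # I(DY^2 -> X^2)
    ```

    The identity follows from the chain rule: I(X1X2; Y1Y2) = I(X1; Y1) + I(X2; Y1 | X1) + I(X1X2; Y2 | Y1), and the middle term is exactly the delayed reverse directed information.
    
    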