
    Information Loss in the Human Auditory System

    From the eardrum to the auditory cortex, where acoustic stimuli are decoded, there are several stages of auditory processing and transmission where information may potentially be lost. In this paper, we aim to quantify the information loss in the human auditory system using information-theoretic tools. To do so, we consider a speech communication model in which words are uttered, sent through a noisy channel, and then received and processed by a human listener. We define a notion of information loss that is related to the human word recognition rate. To assess the word recognition rate of humans, we conduct a closed-vocabulary intelligibility test. We derive upper and lower bounds on the information loss. Simulations reveal that the bounds are tight, and we observe that the information loss in the human auditory system increases as the signal-to-noise ratio (SNR) decreases. Our framework also allows us to study whether humans are optimal at speech perception in a noisy environment. To that end, we derive optimal classifiers and compare human and machine performance in terms of information loss and word recognition rate. We observe a higher information loss and a lower word recognition rate for humans than for the optimal classifiers. In fact, depending on the SNR, the machine classifier may outperform humans by as much as 8 dB. This implies that, for the speech-in-stationary-noise setup considered here, the human auditory system is sub-optimal at recognizing noisy words.
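    The connection between a word recognition rate and residual uncertainty about the uttered word can be illustrated with Fano's inequality, a classical information-theoretic bound. The function below is a generic textbook sketch, not the paper's specific bounds; the name `fano_bound` and the closed-vocabulary setup are ours.

    ```python
    import math

    def fano_bound(p_e, M):
        """Fano's inequality: H(W | W_hat) <= h(p_e) + p_e * log2(M - 1),
        an upper bound (in bits) on the uncertainty about the uttered word W
        given the recognized word W_hat, for word error rate p_e over an
        M-word closed vocabulary."""
        if p_e <= 0.0:
            return 0.0
        if p_e >= 1.0:
            return math.log2(M - 1)
        # binary entropy of the error event plus confusion among M-1 wrong words
        h = -p_e * math.log2(p_e) - (1.0 - p_e) * math.log2(1.0 - p_e)
        return h + p_e * math.log2(M - 1)
    ```

    For instance, a 20% word error rate over a 50-word vocabulary caps the residual uncertainty at roughly 1.84 bits, against the log2(50) ≈ 5.64 bits carried by a uniformly chosen word.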

    Source Coding in Networks with Covariance Distortion Constraints

    We consider a source coding problem with a network scenario in mind and formulate it as a remote vector Gaussian Wyner-Ziv problem under covariance matrix distortion constraints. We define a notion of a minimum of two positive-definite matrices, based on which we derive an explicit formula for the rate-distortion function (RDF). We then study special cases and applications of this result. We show that two well-studied source coding problems, namely the remote vector Gaussian Wyner-Ziv problems with mean-squared error and with mutual information constraints, are in fact special cases of our result. Finally, we apply our results to a joint source coding and denoising problem. We consider a network with a centralized topology and a given weighted sum-rate constraint, where the received signals at the center are to be fused so as to maximize the output SNR while enforcing no linear distortion. We show that the distortion matrices at the nodes can be designed to maximize the output SNR at the fusion center. We thereby build a bridge between denoising and source coding within this setup.
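    For context, the scalar ancestor of such explicit Gaussian RDF formulas is reverse water-filling for independent Gaussian components under a total mean-squared distortion budget. The sketch below is this textbook special case only, not the paper's covariance-constrained result; the function name and the bisection tolerance are our own choices.

    ```python
    import math

    def reverse_waterfill(variances, D):
        """Classical reverse water-filling for independent Gaussian sources
        with total mean-squared distortion budget D (a textbook special case;
        covariance-constrained vector RDFs generalize this setting).
        Returns the rate in bits and the per-component distortions."""
        assert 0.0 < D <= sum(variances)
        lo, hi = 0.0, max(variances)
        for _ in range(200):  # bisect on the water level theta
            theta = 0.5 * (lo + hi)
            if sum(min(theta, v) for v in variances) > D:
                hi = theta
            else:
                lo = theta
        d = [min(theta, v) for v in variances]  # each component gets at most theta
        rate = sum(0.5 * math.log2(v / di) for v, di in zip(variances, d))
        return rate, d
    ```

    For variances [4, 1] and D = 1, the water level settles at θ = 0.5, each component receives distortion 0.5, and the rate is 0.5·log2(8) + 0.5·log2(2) = 2 bits.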

    Iterated smoothing for accelerated gradient convex minimization in signal processing

    IEA EBC Annex 67 Energy Flexible Buildings

    Real-Time Perceptual Moving-Horizon Multiple-Description Audio Coding

    A novel scheme for perceptual coding of audio for robust and real-time communication is designed and analyzed. As an alternative to PCM, DPCM, and more general noise-shaping converters, we propose to use psychoacoustically optimized noise-shaping quantizers based on the moving-horizon principle. In moving-horizon quantization, a look-ahead of a few samples is allowed at the encoder, which makes it possible to better shape the quantization noise and thereby reduce the resulting distortion below what is possible with conventional noise-shaping techniques. It is first shown that significant gains over linear PCM can be obtained without introducing a delay and without requiring post-processing at the decoder; i.e., the encoded samples can be stored as, e.g., 16-bit linear PCM on CD-ROMs and played out on standards-compliant CD players. We then show that multiple-description coding can be combined with moving-horizon quantization in order to combat possible erasures on the wireless link without introducing additional delay.
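    The moving-horizon idea can be sketched with a toy integer quantizer that searches candidate level sequences over a short look-ahead window and keeps only the first decision. The first-order error filter below is an illustrative stand-in for a psychoacoustic weighting, and all names and parameters are ours, not the paper's design.

    ```python
    import itertools
    import math

    def moving_horizon_quantize(x, horizon=3, a=0.9):
        """Toy moving-horizon quantizer sketch. At each sample, enumerate the
        two nearest integer levels for every sample in the look-ahead window,
        score each candidate sequence by the energy of the shaped error
        f[n] = e[n] - a*e[n-1] (a stand-in for a perceptual weighting), and
        emit the first level of the cheapest sequence."""
        q_out, e_prev = [], 0.0
        for n in range(len(x)):
            window = x[n:n + horizon]
            best_cost, best_q = math.inf, None
            # brute-force search over 2^len(window) candidate level sequences
            for cand in itertools.product(*[(math.floor(v), math.ceil(v))
                                            for v in window]):
                cost, ep = 0.0, e_prev
                for qv, xv in zip(cand, window):
                    e = qv - xv
                    cost += (e - a * ep) ** 2
                    ep = e
                if cost < best_cost:
                    best_cost, best_q = cost, cand[0]
            q_out.append(best_q)       # commit only the first decision ...
            e_prev = best_q - x[n]     # ... then slide the horizon forward
        return q_out
    ```

    With horizon=1 this degenerates to a conventional greedy noise-shaping quantizer; longer horizons let the encoder trade a locally larger error for a smaller shaped-error energy over the window, which is the gain the abstract describes.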

    An efficient first-order method for l1 compression of images

    Multiple-Description l1-Compression
