
    Entropy Density and Mismatch in High-Rate Scalar Quantization with Rényi Entropy Constraint

    Properties of scalar quantization with r-th power distortion and constrained Rényi entropy of order α ∈ (0,1) are investigated. For an asymptotically (high-rate) optimal sequence of quantizers, the contribution to the Rényi entropy due to source values in a fixed interval is identified in terms of the "entropy density" of the quantizer sequence. This extends results related to the well-known point density concept in optimal fixed-rate quantization. A dual of the entropy density result quantifies the distortion contribution of a given interval to the overall distortion. The distortion loss resulting from a mismatch of source densities in the design of an asymptotically optimal sequence of quantizers is also determined. This extends Bucklew's fixed-rate (α = 0) and Gray et al.'s variable-rate (α = 1) mismatch results to general values of the entropy order parameter α
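    For readers less familiar with the order parameter, the Rényi entropy of a quantizer's cell probabilities follows directly from its definition; α → 1 recovers the Shannon (variable-rate) case and α → 0 the fixed-rate case. A minimal sketch (the 4-level cell probabilities are invented for illustration):

```python
import math

def renyi_entropy(probs, alpha):
    """Rényi entropy of order alpha (in nats) of a probability vector.

    H_alpha = log(sum_i p_i**alpha) / (1 - alpha);
    the limit alpha -> 1 recovers the Shannon entropy.
    """
    if alpha == 1.0:  # Shannon limit
        return -sum(p * math.log(p) for p in probs if p > 0)
    s = sum(p ** alpha for p in probs if p > 0)
    return math.log(s) / (1.0 - alpha)

# Cell probabilities of a hypothetical 4-level scalar quantizer.
p = [0.4, 0.3, 0.2, 0.1]

h_half = renyi_entropy(p, 0.5)    # order in (0, 1), as studied in the paper
h_one = renyi_entropy(p, 1.0)     # Shannon entropy (variable-rate case)
h_zero = renyi_entropy(p, 1e-12)  # alpha -> 0: log(#cells), fixed-rate case
```

    Note that H_α is non-increasing in α, so the fixed-rate value log 4 upper-bounds the Shannon entropy here.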

    Study and simulation of low rate video coding schemes

    The semiannual report is included. Topics covered include communication, information science, data compression, remote sensing, color mapped images, robust coding scheme for packet video, recursively indexed differential pulse code modulation, image compression technique for use on token ring networks, and joint source/channel coder design

    Novel Exponential Type Approximations of the Q-Function

    In this paper, we propose several solutions for approximating the Q-function using a single exponential function or the sum of two exponential functions. Because the novel approximations have simple analytical forms, they are well suited to further derivation of closed-form expressions, and a large number of applications are feasible. These exponential-type approximations are especially valuable for overcoming issues in the design of scalar companding quantizers for the Gaussian source, which stem from the non-existence of a closed-form expression for the Q-function. Since our approximations are both analytically simple and more accurate than those previously used for this problem, their application to scalar companding quantization of the Gaussian source is of particular importance
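    The paper's own coefficients are not reproduced in this abstract; as a stand-in, the general idea of a sum-of-two-exponentials closed form can be sketched with the well-known Chiani-Dardari-Simon approximation Q(x) ≈ (1/12)e^(-x²/2) + (1/4)e^(-2x²/3):

```python
import math

def q_exact(x):
    """Gaussian Q-function via the complementary error function."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def q_two_exp(x):
    """Sum-of-two-exponentials approximation (Chiani et al. form).

    Used here only to illustrate the closed-form idea; the paper's
    own, more accurate coefficients differ.
    """
    return (1.0 / 12.0) * math.exp(-x * x / 2.0) \
         + 0.25 * math.exp(-2.0 * x * x / 3.0)

for x in (1.0, 2.0, 3.0):
    exact, approx = q_exact(x), q_two_exp(x)
    print(f"x={x}: Q={exact:.6f}  approx={approx:.6f}")
```

    The appeal is that expressions involving Q(x) inside integrals (as in companding analysis) become sums of Gaussian integrals that integrate in closed form.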

    Quantizer Design for Application in Signal Processing and Neural Networks (Projektovanje kvantizera za primenu u obradi signala i neuronskim mrežama)

    Scalar quantizers are present in many advanced systems for signal processing and transmission, and they are particularly important in realizing the key step of signal digitization: amplitude discretization. Accordingly, there are good reasons to develop innovative solutions, that is, quantizer models that offer reduced complexity and shorter processing time along with performance close to that of standard quantizer models. Designing a quantizer for a certain type of signal is a specific process, and several new methods are proposed in the dissertation that are computationally less intensive than existing ones. Specifically, the design of different types of quantizers with low and high numbers of levels, applying variable- and fixed-length coding, is considered. The dissertation develops coding solutions for standard telecommunication signals (e.g. speech) as well as for other types of signals, such as neural network parameters. Many solutions belonging to the class of waveform coders are proposed for speech coding. The developed solutions are characterized by low complexity and result from implementing new quantizer models in non-predictive and predictive coding techniques. The goal of the proposed solutions is to improve upon standardized solutions, or upon advanced solutions of the same or similar complexity. Testing is performed on speech examples extracted from well-known databases, while the performance of the proposed coding solutions is evaluated using standard objective measures. To verify the correctness of the proposed solutions, the agreement between theoretical and experimental results is examined. In addition to speech coding, the dissertation proposes novel solutions based on scalar quantizers for neural network compression. This is an active research area in which the role of quantization differs somewhat from speech coding: it consists of providing a compromise between compression and the accuracy of the neural network. The dissertation deals strictly with low-level (low-resolution) quantizers intended for post-training quantization, since these matter most for compression. The goal is to improve the performance of the quantized neural network by using the novel quantizer design methods. The proposed quantizers are applied to several neural network models used for image classification (on benchmark datasets), and prediction accuracy along with SQNR is used as the performance measure. In particular, an effort was made to determine the connection between these two measures, which has not been sufficiently investigated so far
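    To make the SQNR measure concrete, here is a minimal sketch of a fixed-rate uniform quantizer and its SQNR on a deterministic test signal; the quantizer model and the signal are illustrative only, not those designed in the dissertation:

```python
import math

def uniform_quantize(x, n_bits, x_max):
    """Mid-rise uniform quantizer with 2**n_bits levels on [-x_max, x_max]."""
    n_levels = 2 ** n_bits
    step = 2.0 * x_max / n_levels
    # Clip to the granular region, then map to the nearest cell midpoint.
    x = max(-x_max, min(x_max - 1e-12, x))
    index = math.floor((x + x_max) / step)
    return -x_max + (index + 0.5) * step

def sqnr_db(signal, n_bits, x_max):
    """Signal-to-quantization-noise ratio in dB over a sample set."""
    sig = sum(s * s for s in signal)
    err = sum((s - uniform_quantize(s, n_bits, x_max)) ** 2 for s in signal)
    return 10.0 * math.log10(sig / err)

# A deterministic test signal standing in for, e.g., neural-network weights.
samples = [math.sin(0.01 * k) for k in range(1000)]
for bits in (2, 4, 8):
    print(bits, "bits:", round(sqnr_db(samples, bits, 1.0), 2), "dB")
```

    The roughly 6 dB-per-bit growth visible here is the baseline that the dissertation's non-uniform, low-resolution designs aim to beat for a given bit budget.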

    Blockwise Transform Image Coding Enhancement and Edge Detection

    The goal of this thesis is high quality image coding, enhancement and edge detection. A unified approach using novel fast transforms is developed to achieve all three objectives. Requirements are low bit rate, low complexity of implementation and parallel processing. The last requirement is achieved by processing the image in small blocks such that all blocks can be processed simultaneously. This is similar to biological vision. A major issue is to minimize the resulting block effects. This is done by using proper transforms and possibly an overlap-save technique. The bit rate in image coding is minimized by developing new results in optimal adaptive multistage transform coding. Newly developed fast trigonometric transforms are also utilized and compared for transform coding, image enhancement and edge detection. Both image enhancement and edge detection involve generalised bandpass filtering with fast transforms. The algorithms have been developed with special attention to the properties of biological vision systems
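    The blockwise transform-coding idea can be sketched in miniature: transform a small block, keep only the lowest-frequency coefficients, and invert. This toy uses a 1-D orthonormal DCT-II on one image row rather than the thesis's novel fast trigonometric transforms:

```python
import math

def dct_ii(block):
    """Orthonormal DCT-II of a 1-D block (e.g. a row of image samples)."""
    n = len(block)
    out = []
    for k in range(n):
        c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(c * sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                           for i, x in enumerate(block)))
    return out

def idct_ii(coeffs):
    """Inverse of dct_ii (DCT-III with matching normalisation)."""
    n = len(coeffs)
    out = []
    for i in range(n):
        s = 0.0
        for k, c_k in enumerate(coeffs):
            c = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
            s += c * c_k * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
        out.append(s)
    return out

def code_block(block, keep):
    """Keep only the `keep` lowest-frequency coefficients (zonal coding)."""
    coeffs = dct_ii(block)
    return idct_ii([c if k < keep else 0.0 for k, c in enumerate(coeffs)])

row = [10, 12, 14, 15, 15, 14, 12, 10]  # one smooth 8-sample block
approx = code_block(row, keep=3)        # 3 of 8 coefficients retained
```

    Because each block is coded independently, all blocks can be processed in parallel, which is exactly the property the thesis exploits; the price is the block effects discussed above.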

    Fractal image compression and the self-affinity assumption : a stochastic signal modelling perspective

    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation. The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second-order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques
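    The block-affine mechanism described above can be illustrated with a toy 1-D "fractal" coder: each range block is approximated by a scaled and shifted, decimated domain block, and decoding iterates the recorded maps from an arbitrary starting signal (contractivity is forced by clamping the scale). This is a sketch of the principle only, not any published codec:

```python
import math

def decimate(block):
    """Average adjacent pairs: a length-2R domain block -> length R."""
    return [(block[i] + block[i + 1]) / 2.0 for i in range(0, len(block), 2)]

def fit_affine(d, r):
    """Least-squares scale s and offset o so that s*d + o approximates r."""
    n = len(d)
    md, mr = sum(d) / n, sum(r) / n
    var = sum((x - md) ** 2 for x in d)
    s = 0.0 if var == 0 else sum((x - md) * (y - mr)
                                 for x, y in zip(d, r)) / var
    s = max(-0.9, min(0.9, s))  # clamp so the decoding iteration contracts
    return s, mr - s * md

def encode(signal, R=4):
    """For each length-R range block, record (domain index, scale, offset)."""
    domains = [decimate(signal[i:i + 2 * R])
               for i in range(0, len(signal) - 2 * R + 1, R)]
    code = []
    for j in range(0, len(signal), R):
        r = signal[j:j + R]
        best = min(((sum((s * x + o - y) ** 2 for x, y in zip(d, r)), i, s, o)
                    for i, d in enumerate(domains)
                    for s, o in [fit_affine(d, r)]),
                   key=lambda t: t[0])
        code.append(best[1:])
    return code

def decode(code, length, R=4, iters=12):
    """Iterate the affine maps from a flat signal; contractivity converges."""
    sig = [0.0] * length
    for _ in range(iters):
        domains = [decimate(sig[i:i + 2 * R])
                   for i in range(0, length - 2 * R + 1, R)]
        sig = [s * x + o for i, s, o in code for x in domains[i]]
    return sig

signal = [math.sin(0.2 * k) for k in range(32)]
rec = decode(encode(signal), len(signal))
```

    The decoder never sees the original signal, only the map parameters; how well `rec` matches `signal` is precisely the "self-affinity" question the dissertation examines.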