
    Projektovanje kvantizera za primenu u obradi signala i neuronskim mrežama (Design of Quantizers for Use in Signal Processing and Neural Networks)

    Scalar quantizers are present in many advanced systems for signal processing and transmission, and they carry out the most important step in digitizing signals: amplitude discretization. Accordingly, there are justified reasons to develop innovative solutions, that is, quantizer models that offer reduced complexity and shorter processing time along with performance close to that of standard quantizer models. Designing a quantizer for a certain type of signal is a specific process, and the dissertation proposes several new methods that are computationally less intensive than existing ones. Specifically, it considers the design of different types of quantizers with low and high numbers of levels that apply variable- and fixed-length coding. The dissertation develops coding solutions both for standard telecommunication signals (e.g. speech) and for other types of data such as neural network parameters. For speech coding, many solutions belonging to the class of waveform encoders are proposed. The developed solutions are characterized by low complexity and result from implementing new quantizer models in non-predictive and predictive coding techniques. The target of the proposed solutions is to enhance the performance of certain standardized or advanced solutions of the same or similar complexity. Testing is performed on speech examples extracted from well-known databases, while the performance of the proposed coding solutions is evaluated with standard objective measures. To verify the correctness of the provided solutions, the agreement between theoretical and experimental results is examined. In addition to speech coding, the dissertation proposes some novel solutions based on scalar quantizers for neural network compression.
This is an active research area in which the role of quantization differs somewhat from its role in speech coding: it consists of providing a compromise between performance and the accuracy of the neural network. The dissertation deals strictly with low-level (low-resolution) quantizers intended for post-training quantization, since these are the most significant for compression. The goal is to improve the performance of the quantized neural network through novel quantizer design methods. The proposed quantizers are applied to several neural network models used for image classification (on some benchmark datasets), with prediction accuracy along with SQNR used as the performance measure. In particular, an effort was made to determine the connection between these two measures, which has not been investigated sufficiently so far.
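The post-training weight quantization and the SQNR measure mentioned above can be sketched in a few lines. The quantizer below is a plain uniform scalar quantizer mapping each weight to the midpoint of its cell; it is only an illustrative baseline, not one of the dissertation's proposed designs, and all names and parameters are assumptions for the sketch.

```python
import numpy as np

def uniform_quantize(w, bits=4):
    """Uniform scalar quantizer over the observed range of w.

    Each weight is mapped to the midpoint of its quantization cell.
    A plain baseline, not one of the dissertation's proposed designs.
    """
    levels = 2 ** bits
    w_min, w_max = w.min(), w.max()
    step = (w_max - w_min) / levels
    # Cell index of each weight, clipped so w_max falls in the last cell.
    idx = np.clip(np.floor((w - w_min) / step), 0, levels - 1)
    return w_min + (idx + 0.5) * step

def sqnr_db(w, w_q):
    """Signal-to-quantization-noise ratio in decibels."""
    return 10 * np.log10(np.sum(w ** 2) / np.sum((w - w_q) ** 2))

rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.02, size=10_000)   # weight-like Gaussian data
w_q = uniform_quantize(weights, bits=4)
print(f"4-bit SQNR: {sqnr_db(weights, w_q):.1f} dB")
```

Raising `bits` increases SQNR at the cost of a larger compressed model, which is exactly the performance/compression compromise discussed above.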

    Развој кодера таласног облика за потребе неуронских мрежа и обраду сигнала (Development of Waveform Coders for Neural Networks and Signal Processing)

    This doctoral thesis aims to design low-bit scalar quantizers and to analyze their application in Neural Networks (NNs) and signal processing. In this thesis, we consider the possibilities and limitations of quantization as a leading technique for data coding and compression. In particular, we examine the inevitable loss of accuracy in signal and data representation due to quantization, both in the signal processing area and in many modern solutions that use quantization. As stated in this thesis, a number of qualitative performance indicators show that appropriate quantizer parameterization can optimize the amount of data transmitted in bits. Quantized Neural Networks (QNNs) are a promising research area, especially important for resource-constrained devices. Relying on a plethora of conclusions about scalar quantizers derived for signal processing tasks, and taking into account the advantages of scalar quantization, we anticipate that by studying the statistical characteristics of neural network parameters this thesis will contribute to determining an efficient weight-compression solution utilizing new, well-designed scalar quantizers for post-training quantization.

    Fractal image compression and the self-affinity assumption: a stochastic signal modelling perspective

    Bibliography: p. 208-225.
    Fractal image compression is a comparatively new technique which has gained considerable attention in the popular technical press, and more recently in the research literature. The most significant advantages claimed are high reconstruction quality at low coding rates, rapid decoding, and "resolution independence" in the sense that an encoded image may be decoded at a higher resolution than the original. While many of the claims published in the popular technical press are clearly extravagant, it appears from the rapidly growing body of published research that fractal image compression is capable of performance comparable with that of other techniques enjoying the benefit of a considerably more robust theoretical foundation. So called because of the similarities between the form of image representation and a mechanism widely used in generating deterministic fractal images, fractal compression represents an image by the parameters of a set of affine transforms on image blocks under which the image is approximately invariant. Although the conditions imposed on these transforms may be shown to be sufficient to guarantee that an approximation of the original image can be reconstructed, there is no obvious theoretical reason to expect this to represent an efficient representation for image coding purposes. The usual analogy with vector quantisation, in which each image is considered to be represented in terms of code vectors extracted from the image itself, is instructive, but transforms the fundamental problem into one of understanding why this construction results in an efficient codebook. The signal property required for such a codebook to be effective, termed "self-affinity", is poorly understood. A stochastic signal model based examination of this property is the primary contribution of this dissertation.
The most significant findings (subject to some important restrictions) are that "self-affinity" is not a natural consequence of common statistical assumptions but requires particular conditions which are inadequately characterised by second order statistics, and that "natural" images are only marginally "self-affine", to the extent that fractal image compression is effective, but not more so than comparable standard vector quantisation techniques.
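The representation described above, an image encoded by affine maps on blocks under which it is approximately invariant, can be illustrated with a minimal, hypothetical sketch: exhaustive domain search, no block isometries, no entropy coding, and block sizes chosen purely for illustration.

```python
import numpy as np

def contract(block):
    """Spatially contract an 8x8 domain block to 4x4 by 2x2 averaging."""
    return block.reshape(4, 2, 4, 2).mean(axis=(1, 3))

def encode(img, rsize=4):
    """For each 4x4 range block R, store the position of a domain block
    and the least-squares affine parameters (s, o) with R ~= s*D + o."""
    h, w = img.shape
    domains = [(y, x, contract(img[y:y + 2*rsize, x:x + 2*rsize]).ravel())
               for y in range(0, h - 2*rsize + 1, rsize)
               for x in range(0, w - 2*rsize + 1, rsize)]
    code = []
    for ry in range(0, h, rsize):
        for rx in range(0, w, rsize):
            R = img[ry:ry + rsize, rx:rx + rsize].ravel()
            best = None
            for dy, dx, d in domains:
                dc = d - d.mean()
                s = (dc @ (R - R.mean())) / (dc @ dc) if dc.any() else 0.0
                s = float(np.clip(s, -0.9, 0.9))   # keep the map contractive
                o = R.mean() - s * d.mean()
                err = np.sum((s * d + o - R) ** 2)
                if best is None or err < best[0]:
                    best = (err, dy, dx, s, o)
            code.append(best[1:])
    return code

def decode(code, shape, rsize=4, iters=10):
    """Iterate the stored contractive transform from a blank image."""
    img = np.zeros(shape)
    for _ in range(iters):
        new = np.empty(shape)
        blocks = iter(code)
        for ry in range(0, shape[0], rsize):
            for rx in range(0, shape[1], rsize):
                dy, dx, s, o = next(blocks)
                D = contract(img[dy:dy + 2*rsize, dx:dx + 2*rsize])
                new[ry:ry + rsize, rx:rx + rsize] = s * D + o
        img = new
    return img
```

Because each stored map is contractive, repeated application from any starting image converges towards the encoded image; applying the same maps on a finer grid is what underlies the "resolution independence" claim mentioned above.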

    The Shallow and the Deep: A biased introduction to neural networks and old school machine learning

    The Shallow and the Deep is a collection of lecture notes that offers an accessible introduction to neural networks and machine learning in general. However, it was clear from the beginning that these notes would not be able to cover this rapidly changing and growing field in its entirety. The focus lies on classical machine learning techniques, with a bias towards classification and regression. Other learning paradigms and many recent developments in, for instance, Deep Learning are not addressed or only briefly touched upon. Biehl argues that having a solid knowledge of the foundations of the field is essential, especially for anyone who wants to explore the world of machine learning with an ambition that goes beyond the application of some software package to some data set. Therefore, The Shallow and the Deep places emphasis on fundamental concepts and theoretical background. This also involves delving into the history and pre-history of neural networks, where the foundations for most of the recent developments were laid. These notes aim to demystify machine learning and neural networks without losing the appreciation for their impressive power and versatility.