
    Complexity Analysis of Reed-Solomon Decoding over GF(2^m) Without Using Syndromes

    For the majority of the applications of Reed-Solomon (RS) codes, hard-decision decoding is based on syndromes. Recently, there has been renewed interest in decoding RS codes without using syndromes. In this paper, we investigate the complexity of syndromeless decoding for RS codes and compare it to that of syndrome-based decoding. Aiming to provide guidelines for practical applications, our complexity analysis differs in several aspects from existing asymptotic complexity analyses, which are typically based on multiplicative fast Fourier transform (FFT) techniques and are usually expressed in big O notation. First, we focus on RS codes over characteristic-2 fields, over which some multiplicative FFT techniques are not applicable. Second, due to the moderate block lengths of RS codes in practice, our analysis is complete since all terms in the complexities are accounted for. Finally, in addition to fast implementations using additive FFT techniques, we also consider direct implementations, which are still relevant for RS codes with moderate lengths. Comparing the complexities of both syndromeless and syndrome-based decoding algorithms under direct and fast implementations, we show that syndromeless decoding algorithms have higher complexities than syndrome-based ones for high-rate RS codes regardless of the implementation. Both errors-only and errors-and-erasures decoding are considered in this paper. We also derive tighter bounds on the complexities of fast polynomial multiplication based on Cantor's approach and the fast extended Euclidean algorithm. Comment: 11 pages, submitted to EURASIP Journal on Wireless Communications and Networking
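    As an illustration of the syndrome-based side of this comparison, the sketch below computes syndromes for a received word over GF(2^4) by direct polynomial evaluation; the field polynomial x^4 + x + 1, the code parameters and the toy received word are assumptions made for the example, not values from the paper.

    # Illustrative sketch only: syndrome computation for an RS code over GF(2^4).
    # The field polynomial x^4 + x + 1 and the toy received word are assumptions.

    def gf16_mul(a, b, prim=0b10011):
        """Multiply two elements of GF(2^4), reducing modulo x^4 + x + 1."""
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            if a & 0x10:
                a ^= prim
            b >>= 1
        return r

    def gf16_pow(a, n):
        """Exponentiation in GF(2^4) by repeated multiplication."""
        r = 1
        for _ in range(n):
            r = gf16_mul(r, a)
        return r

    def syndromes(received, num_parity, alpha=2):
        """Evaluate the received polynomial at alpha^1 .. alpha^(n-k);
        all-zero syndromes indicate no detectable error."""
        out = []
        for j in range(1, num_parity + 1):
            s = 0
            for i, c in enumerate(received):
                s ^= gf16_mul(c, gf16_pow(alpha, i * j))   # addition in GF(2^m) is XOR
            out.append(s)
        return out

    # Toy usage: a length-7 word with 2t = 4 parity symbols.
    print(syndromes([1, 4, 7, 2, 0, 3, 5], 4))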

    A novel computational approach to approximate fuzzy interpolation polynomials

    This paper builds a fuzzy neural network structure that is sufficient to obtain a fuzzy interpolation polynomial of the form y_p = a_n*x_p^n + ... + a_1*x_p + a_0, where each a_j is a crisp number (for j = 0, ..., n), interpolating the fuzzy data (x_j, y_j) (for j = 0, ..., n). A gradient descent algorithm is then constructed to train the neural network in such a way that the unknown coefficients of the fuzzy polynomial are estimated by the network. Numerical experiments show that the proposed interpolation methodology is reliable and efficient
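    To make the underlying idea concrete, here is a minimal sketch of estimating polynomial coefficients by gradient descent on ordinary (crisp) data; it omits the paper's fuzzy neural network, and the learning rate, iteration count and toy data are assumptions.

    # Simplified sketch, crisp data only (not the paper's fuzzy network):
    # estimate a_0..a_n of y = a_n*x^n + ... + a_1*x + a_0 by gradient descent.

    def fit_poly_gd(xs, ys, degree, lr=0.01, iters=20000):
        a = [0.0] * (degree + 1)              # a[j] multiplies x^j
        m = len(xs)
        for _ in range(iters):
            grad = [0.0] * (degree + 1)
            for x, y in zip(xs, ys):
                pred = sum(a[j] * x**j for j in range(degree + 1))
                err = pred - y
                for j in range(degree + 1):
                    grad[j] += 2.0 * err * x**j / m   # gradient of mean squared error
            a = [a[j] - lr * grad[j] for j in range(degree + 1)]
        return a

    # Toy usage: recover y = 1 + 2x - x^2 from three samples.
    print(fit_poly_gd([-1.0, 0.0, 1.0], [-2.0, 1.0, 2.0], degree=2))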

    Number theoretic techniques applied to algorithms and architectures for digital signal processing

    Many of the techniques for the computation of a two-dimensional convolution of a small fixed window with a picture are reviewed. It is demonstrated that Winograd's cyclic convolution and Fourier Transform Algorithms, together with Nussbaumer's two-dimensional cyclic convolution algorithms, have a common general form. Many of these algorithms use the theoretical minimum number of general multiplications. A novel implementation of these algorithms is proposed which is based upon one-bit systolic arrays. These systolic arrays are networks of identical cells, each cell sharing a common control and timing function and connected only to its nearest neighbours. These are all attractive features for implementation using Very Large Scale Integration (VLSI). The throughput rate is limited only by the time to perform a one-bit full addition. In order to assess the usefulness of these systolic arrays, a 'cost function' is developed to compare them with more conventional techniques, such as the Cooley-Tukey radix-2 Fast Fourier Transform (FFT). The cost function shows that these systolic arrays offer a good way of implementing the Discrete Fourier Transform for transforms up to about 30 points in length. The cost function is a general tool and allows comparisons to be made between different implementations of the same algorithm and between dissimilar algorithms. Finally, a technique is developed for the derivation of Discrete Cosine Transform (DCT) algorithms from the Winograd Fourier Transform Algorithm. These DCT algorithms may be implemented by modified versions of the systolic arrays proposed earlier, but requiring half the number of cells
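    For reference, the sketch below spells out the N-point cyclic convolution that the Winograd and Nussbaumer algorithms compute with fewer general multiplications, both directly from the definition and via the DFT; the array values are assumptions chosen for the demo.

    import numpy as np

    def cyclic_convolution_direct(x, h):
        """Textbook definition: y[m] = sum_k x[k] * h[(m - k) mod N]."""
        n = len(x)
        return [sum(x[k] * h[(m - k) % n] for k in range(n)) for m in range(n)]

    def cyclic_convolution_dft(x, h):
        """Same result via the convolution theorem: DFT, pointwise product, inverse DFT."""
        return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

    x = [1.0, 2.0, 3.0, 4.0]
    h = [1.0, 0.0, -1.0, 0.5]
    print(cyclic_convolution_direct(x, h))
    print(np.round(cyclic_convolution_dft(x, h), 6))   # matches the direct result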

    Exploring Hermite interpolation polynomials using recursion

    In this paper we consider the teaching of Hermite interpolation. We propose two nonstandard approaches for exploring Hermite interpolation polynomials in a computer-supported environment. As an extension of the standard construction of the interpolation polynomials, based either on the fundamental polynomials or on the triangular-shaped divided difference table, we first investigate the generalization of the Neville-type recursive scheme, which may be familiar to the reader or to the student from the chapter on Lagrangian interpolation. Second, we propose an interactive demo tool in which, during the step-by-step construction of the interpolation polynomial, the interpolation constraints can be considered in an almost arbitrary order. Thus the same interpolating polynomial can be constructed in several different ways. As a by-product, one can also pose an interesting combinatorial problem to the students about the number of compatible orderings of the constraints, depending on the cardinality of the node system
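    As a reminder of the recursion the paper builds on, here is a minimal sketch of the Neville scheme in its familiar Lagrangian form (the Hermite generalization discussed above additionally folds in derivative constraints); the node values are assumptions for the demo.

    def neville(xs, ys, x):
        """Evaluate the interpolating polynomial through (xs[i], ys[i]) at x using
        P[i,i+j] = ((x - xs[i]) * P[i+1,i+j] - (x - xs[i+j]) * P[i,i+j-1]) / (xs[i+j] - xs[i])."""
        n = len(xs)
        p = list(ys)                       # p[i] holds the value on nodes i..i+j
        for j in range(1, n):
            for i in range(n - j):
                p[i] = ((x - xs[i]) * p[i + 1] - (x - xs[i + j]) * p[i]) / (xs[i + j] - xs[i])
        return p[0]

    # Toy usage: four nodes of y = x^2 reproduce 2.5^2 = 6.25 exactly.
    print(neville([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 4.0, 9.0], 2.5))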

    Hardware Acceleration Technologies in Computer Algebra: Challenges and Impact

    The objective of high performance computing (HPC) is to ensure that the computational power of hardware resources is well utilized to solve a problem. Various techniques are usually employed to achieve this goal. Improving algorithms to reduce the number of arithmetic operations, modifying data access patterns or rearranging data in order to reduce memory traffic, optimizing code at all levels, and designing parallel algorithms to reduce span are some of the attractive areas that HPC researchers are working on. In this thesis, we investigate HPC techniques for the implementation of basic routines in computer algebra targeting hardware acceleration technologies. We start with a sorting algorithm and its application to sparse matrix-vector multiplication, for which we focus on cache complexity issues. Since basic routines in computer algebra often provide a lot of fine-grained parallelism, we then turn our attention to manycore architectures, on which we consider dense polynomial and matrix operations ranging from plain to fast arithmetic. Most of these operations are combined within a bivariate system solver running entirely on a graphics processing unit (GPU)
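    As a concrete example of one of the basic routines mentioned above, the sketch below performs the sparse matrix-vector product in compressed sparse row (CSR) form, the kernel whose cache behaviour is studied; the small matrix is an assumption for the demo and the code is not taken from the thesis.

    def csr_matvec(values, col_idx, row_ptr, x):
        """y = A*x with A in CSR form; row_ptr[i]..row_ptr[i+1] delimit the nonzeros of row i."""
        y = [0.0] * (len(row_ptr) - 1)
        for i in range(len(y)):
            s = 0.0
            for k in range(row_ptr[i], row_ptr[i + 1]):
                s += values[k] * x[col_idx[k]]
            y[i] = s
        return y

    # Toy usage: the 3x3 matrix [[2,0,1],[0,3,0],[4,0,5]] applied to [1,1,1].
    values  = [2.0, 1.0, 3.0, 4.0, 5.0]
    col_idx = [0, 2, 1, 0, 2]
    row_ptr = [0, 2, 3, 5]
    print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))   # [3.0, 3.0, 9.0]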

    Machine learning and fractal-based analysis for the automated diagnosis of cardiovascular diseases using magnetic resonance

    Bachelor's degree final project in Computer Engineering, Facultat de Matemàtiques, Universitat de Barcelona, year: 2023, supervisors: Polyxeni Gkontra and Joan Carles Tatjer i Montaña. Cardiac magnetic resonance (CMR) is the reference imaging modality for the diagnosis of cardiovascular diseases. Traditionally, simple CMR parameters related to the volume and shape of the cardiac structures are calculated by medical professionals by means of manual or semi-automated approaches. This process is time-consuming and prone to human error. Moreover, despite the importance of these traditional CMR indexes, they often fail to fully capture the complexity of the cardiac tissue. In this work, we propose a novel approach for automated cardiovascular disease diagnosis, using ischemic heart disease as an example use case. Towards this aim, we use a state-of-the-art technology, supervised machine learning, and a promising mathematical tool, fractal-based analysis. In order to understand the potential information that can be derived from fractal-based features, we introduce and explore the concepts of Hausdorff dimension, box-counting dimension and lacunarity. We describe the interrelationships among these concepts and present computational algorithms for calculating box-counting dimension and lacunarity. The study is based on data from a large-cohort study, the UK Biobank, from which we extract box-counting dimension and lacunarity from CMR textures focusing on three cardiac structures of medical interest: the left ventricle, the right ventricle and the myocardium. The extraction of these features allows us to obtain quantitative parameters describing the complexity and heterogeneity of the tissue. These fractal features, both individually and in conjunction with other vascular risk factors and traditional CMR indexes, are employed as inputs to state-of-the-art machine learning models, including SVM, XGBoost, and random forests. The objective is to determine whether the inclusion of fractal features enhances the performance of currently employed parameters. The performance evaluation of our models is based on metrics such as balanced accuracy, F1 score, precision, and recall. The results obtained demonstrate the potential of fractal-based features in improving the accuracy and reliability of cardiovascular disease diagnosis
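    To illustrate the fractal features discussed above, the sketch below estimates the box-counting dimension of a binary mask by counting occupied boxes at several scales and fitting a log-log slope; the synthetic mask and the box sizes are assumptions for the demo.

    import numpy as np

    def box_counting_dimension(mask, box_sizes=(1, 2, 4, 8)):
        """Estimate the box-counting dimension of a 2D boolean mask:
        count occupied boxes at each scale, then fit log N(s) against log(1/s)."""
        counts = []
        for s in box_sizes:
            n_boxes = 0
            for i in range(0, mask.shape[0], s):
                for j in range(0, mask.shape[1], s):
                    if mask[i:i + s, j:j + s].any():
                        n_boxes += 1
            counts.append(n_boxes)
        slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
        return slope

    # Toy usage: a filled square is 2-dimensional, so the estimate is about 2.
    mask = np.zeros((64, 64), dtype=bool)
    mask[8:56, 8:56] = True
    print(round(box_counting_dimension(mask), 2))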

    A 2D DWT architecture suitable for the Embedded Zerotree Wavelet Algorithm

    Digital imaging has had an enormous impact on industrial applications such as the Internet and video-phone systems, and demand for such applications is growing enormously; internet application users, in particular, are growing at a near-exponential rate. The sharp increase in applications using digital images has placed much emphasis on the fields of image coding, storage, processing and communications, and image coding in particular is a field of great commercial interest. A digital image requires a large amount of data, which causes many problems when storing, transmitting or processing the image; reducing the amount of data needed to represent an image is therefore the main objective of image coding, and new techniques are continuously developed to increase efficiency. The JPEG image coding standard has enjoyed widespread acceptance, and the industry continues to explore its various implementation issues. However, recent research indicates that multiresolution-based image coding is a far superior alternative. A recent development in the field of image coding is the use of the Embedded Zerotree Wavelet (EZW) algorithm as the technique to achieve image compression. One of the aims of this thesis is to explain how this technique is superior to other current coding standards. It will be seen that an essential part of this method of image coding is the use of multiresolution analysis, a subband system in which the subbands are logarithmically spaced in frequency and represent an octave-band decomposition. The block structure that implements this function is termed the two-dimensional Discrete Wavelet Transform (2D-DWT). The 2D DWT can be realised by several architectures, and these are analysed in order to choose the architecture best suited to the EZW coder. Finally, this architecture is implemented and verified using the Synopsys Behavioural Compiler, and recommendations are made based on experimental findings
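    For orientation, the sketch below computes one level of a separable 2D DWT with the Haar wavelet, producing the LL/LH/HL/HH subbands that an EZW coder scans; it is independent of the hardware architectures analysed in the thesis, and the input image is an assumption for the demo.

    import numpy as np

    def haar_step_1d(signal):
        """One level of the 1D Haar transform along the last axis:
        pairwise averages (low-pass) followed by halved differences (high-pass)."""
        even, odd = signal[..., 0::2], signal[..., 1::2]
        return np.concatenate([(even + odd) / 2.0, (even - odd) / 2.0], axis=-1)

    def dwt2_haar(image):
        """One level of the separable 2D Haar DWT: transform each row, then each column."""
        rows_done = haar_step_1d(image)
        return haar_step_1d(rows_done.T).T

    # Toy usage: the top-left 2x2 block of the result is the LL (coarse) subband.
    image = np.arange(16, dtype=float).reshape(4, 4)
    print(dwt2_haar(image))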