2 research outputs found

    Implementation of 14 bits floating point numbers of calculating units for neural network hardware development

    An important aspect of modern automation is machine learning. Specifically, neural networks are used for environment analysis and decision making based on available data. This article covers the operations most frequently performed on floating-point numbers in artificial neural networks. Based on the architecture of modern integrated circuits, the choice of a 14-bit floating-point format as the optimum bit width for FPGA implementation was justified. The floating-point multiplication (multiplier) algorithm was described, along with the features of the addition (adder) and subtraction (subtractor) operations. Furthermore, the floating-point comparison operations ('less than' and 'greater than or equal to') required by convolutional neural networks were presented. In conclusion, a comparison with Altera's calculating units was made
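    As a rough illustration of the arithmetic described in the abstract, the Python sketch below models multiplication and the 'less than' comparison on a 14-bit format. The field split (1 sign, 5 exponent, 8 fraction bits), the bias of 15, and all function names are assumptions made for illustration; the article's actual format parameters and its FPGA circuits are not reproduced here.

        # Behavioral sketch only, not the article's FPGA design. Assumed 14-bit split:
        # 1 sign bit, 5 exponent bits (bias 15), 8 fraction bits with an implicit leading 1.
        SIGN_BITS, EXP_BITS, FRAC_BITS = 1, 5, 8
        BIAS = (1 << (EXP_BITS - 1)) - 1  # 15

        def unpack(x):
            """Split a 14-bit word into sign, biased exponent and fraction fields."""
            frac = x & ((1 << FRAC_BITS) - 1)
            exp = (x >> FRAC_BITS) & ((1 << EXP_BITS) - 1)
            sign = (x >> (FRAC_BITS + EXP_BITS)) & 1
            return sign, exp, frac

        def pack(sign, exp, frac):
            return (sign << (FRAC_BITS + EXP_BITS)) | (exp << FRAC_BITS) | frac

        def fmul14(a, b):
            """Multiply two normalized 14-bit floats (no rounding, NaN/inf or subnormals)."""
            sa, ea, fa = unpack(a)
            sb, eb, fb = unpack(b)
            sign = sa ^ sb                            # result sign: XOR of operand signs
            ma = (1 << FRAC_BITS) | fa                # restore the implicit leading 1
            mb = (1 << FRAC_BITS) | fb
            prod = ma * mb                            # 18-bit significand product
            exp = ea + eb - BIAS                      # add exponents, remove one bias
            if prod >> (2 * FRAC_BITS + 1):           # product >= 2.0: normalize right
                prod >>= 1
                exp += 1
            frac = (prod >> FRAC_BITS) & ((1 << FRAC_BITS) - 1)  # drop implicit 1, truncate
            return pack(sign, exp, frac)

        def flt14(a, b):
            """'Less than': the exponent|fraction fields order magnitudes monotonically
            (signed zeros and special values are ignored in this sketch)."""
            sa, sb = a >> (FRAC_BITS + EXP_BITS), b >> (FRAC_BITS + EXP_BITS)
            ma = a & ((1 << (FRAC_BITS + EXP_BITS)) - 1)
            mb = b & ((1 << (FRAC_BITS + EXP_BITS)) - 1)
            if sa != sb:
                return sa == 1                        # a is negative, b is not
            return ma < mb if sa == 0 else ma > mb

    For example, fmul14(pack(0, 16, 0), pack(0, 15, 128)) multiplies 2.0 by 1.5 and yields pack(0, 16, 128), i.e. 3.0; 'greater than or equal' follows as the negation of flt14.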

    FPGA design of the fast decoder for burst errors correction

    The paper presents the FPGA design of a fast single-stage decoder for correcting burst errors during data transmission. The decoder corrects burst errors of up to 3 bits in a 15-bit codeword with a 7-bit check part. A search algorithm for the generator polynomial used to build the error-correcting code was described. The modular structure of the decoder was designed for FPGA implementation. Modules such as remainder, check_pattern and decoder2 are implemented as asynchronous combinational circuits without memory elements and process every codeword shift in parallel. The proposed implementation achieves high performance, with a decoding time of about 20 ns
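    A plain-software sketch of the burst-trapping idea behind such a decoder is given below: the syndrome (the role of the remainder module) is recomputed for every cyclic shift of the received word, and a shift whose syndrome fits into the 3 low-order bits (the check_pattern test) exposes the burst for correction. The generator polynomial GEN is only a degree-7 placeholder, since the paper selects its own polynomial with the search algorithm it describes, and the hardware evaluates all shifts in parallel combinational logic rather than in a loop.

        # Behavioral model only; parameters follow the abstract, the polynomial does not.
        N, K, R, BURST = 15, 8, 7, 3          # codeword, data, check bits, burst length
        GEN = 0b10001001                      # placeholder degree-7 generator (x^7 + x^3 + 1)

        def gf2_mod(word, gen=GEN):
            """Remainder of GF(2) polynomial division; bit i holds the coefficient of x^i."""
            deg = gen.bit_length() - 1
            while word.bit_length() - 1 >= deg:
                word ^= gen << (word.bit_length() - 1 - deg)
            return word

        def rotl(word, s, n=N):
            """Cyclic left rotation of an n-bit word (negative s rotates right)."""
            s %= n
            return ((word << s) | (word >> (n - s))) & ((1 << n) - 1)

        def trap_decode(received):
            """Error trapping: find a cyclic shift whose syndrome is confined to the
            BURST low-order bits; that syndrome then equals the error burst itself."""
            for s in range(N):
                shifted = rotl(received, s)
                syndrome = gf2_mod(shifted)           # 'remainder' step
                if syndrome < (1 << BURST):           # 'check_pattern' step: burst trapped
                    return rotl(shifted ^ syndrome, -s)
            return received                           # no correctable burst found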