
    Serial Binary Multiplication with Feed-Forward Neural Networks

    No full text
    In this paper we propose no-learning-based neural networks for serial binary multiplication. We show that for "subarray-wise" generation of the partial product matrix and a data transmission rate of δ bits per cycle, the serial multiplication of two n-bit operands can be computed in ⌈n/δ⌉ serial cycles with an O(nδ)-size neural network, with maximum fan-in and weight values both in the order of O(δ log δ). The minimum delay for this scheme is in the order of ⌈√n⌉ + log n, corresponding to a data transmission rate of ⌈√n⌉ bits per cycle. For "column-wise" generation of the partial product matrix and a data transmission rate of 1 bit per cycle, the serial multiplication can be achieved in 2n − 1 + (k + 1)⌈log_k n⌉ delay with a (k + 1)(n − 1)/(k − 1)-size neural network, a maximum weight of 2^k, and a maximum fan-in of 3k + 1. If a data transmission rate of δ bits per serial cycle is assumed, we prove a delay of ⌈(2n − 1)/δ⌉ + (δ + 1)⌈log n⌉ for a (δ + 1)(n − 1)-size neural network, a maximum weight of 2^δ, and a maximum fan-in of 3δ + 1.
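    The delay and size formulas above all refer to the partial product matrix of binary long multiplication (row i holds the bits of one operand ANDed with bit i of the other, weighted by 2^(i+j)). Since no full text is available, the following is only an illustrative sketch of that background notion and of the ⌈n/δ⌉ cycle count quoted in the abstract; the function names are hypothetical, not the paper's:

    ```python
    import math

    def partial_products(a: int, b: int, n: int):
        """Build the n x n partial product matrix of two n-bit operands.

        Entry [i][j] is a_j AND b_i -- the classic shifted rows of binary
        long multiplication, before any carry-save or column reduction.
        """
        a_bits = [(a >> j) & 1 for j in range(n)]
        b_bits = [(b >> i) & 1 for i in range(n)]
        return [[a_bits[j] & b_bits[i] for j in range(n)] for i in range(n)]

    def product_from_matrix(pp, n: int) -> int:
        """Sum every partial product with its column weight 2^(i+j)."""
        return sum(pp[i][j] << (i + j) for i in range(n) for j in range(n))

    n = 8
    a, b = 0b10110111, 0b01101001
    pp = partial_products(a, b, n)
    assert product_from_matrix(pp, n) == a * b  # matrix reduces to the product

    # With the operands delivered serially at delta bits per cycle, feeding in
    # all n bits takes the ceil(n/delta) serial cycles cited in the abstract.
    delta = 3
    cycles = math.ceil(n / delta)
    ```

    This only reproduces the arithmetic setting; how the paper's fixed-weight threshold networks perform the column reduction within the stated fan-in bounds would require the full text.
    
    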
