
    Multiplierless and Sparse Machine Learning based on Margin Propagation Networks

    The new generation of machine learning processors has evolved from multi-core and parallel architectures designed to efficiently implement matrix-vector multiplications (MVMs). This is because, at the fundamental level, neural network and machine learning operations extensively use MVMs, and hardware compilers exploit the inherent parallelism in MVM operations to achieve hardware acceleration on GPUs and FPGAs. However, many IoT and edge computing platforms require embedded ML devices close to the network edge to reduce communication cost and latency. Hence, a natural question to ask is whether MVM operations are necessary at all to implement ML algorithms, and whether simpler hardware primitives can be used to build an ultra-energy-efficient ML processor/architecture. In this paper we propose an alternative hardware-software codesign of ML and neural network architectures in which, instead of MVM operations and non-linear activation functions, the architecture uses only simple addition and thresholding operations to implement inference and learning. At the core of the proposed approach is margin-propagation (MP) based computation, which maps multiplications into additions and additions into dynamic rectified-linear-unit (ReLU) operations. This mapping yields a significant reduction in computational and hence energy cost. We show how the MP network formulation can be applied to design linear classifiers, shallow multi-layer perceptrons, and support vector networks suitable for IoT platforms and tinyML applications. These MP-based classifiers give results comparable to those of their traditional counterparts on benchmark UCI datasets, with the added advantage of reduced computational complexity, enabling an improvement in energy efficiency.
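    The MP primitive referenced above is commonly formulated as computing a margin z from inputs x_1..x_n and a hyperparameter gamma such that sum_i max(0, x_i - z) = gamma; evaluating that constraint requires only additions and thresholding, which is what lets the architecture avoid multipliers. Below is a minimal Python sketch under that assumption; the function name, the exact scan-based solver, and the non-empty-input assumption are illustrative choices, not the authors' implementation.

        import math

        def margin_propagation(x, gamma):
            # Solve sum_i max(0, x_i - z) = gamma for the margin z.
            # The left-hand side is piecewise linear and decreasing in z,
            # so z can be found exactly by scanning the sorted inputs.
            # Evaluating the constraint itself needs only additions and
            # thresholding -- no multiplications (assumes x is non-empty).
            xs = sorted(x, reverse=True)
            running_sum = 0.0
            z = xs[0] - gamma
            for k, xk in enumerate(xs, start=1):
                running_sum += xk
                z = (running_sum - gamma) / k
                nxt = xs[k] if k < len(xs) else -math.inf
                if z >= nxt:  # active set {x_1..x_k} is consistent
                    break
            return z

    As gamma approaches zero the margin approaches max(x), and for moderate gamma it tracks log-sum-exp up to an offset, which is what allows MP to stand in for the multiply-accumulate/activation pair in log-domain arithmetic.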

    Sparse Decoding of Low Density Parity Check Codes Using Margin Propagation

    One of the key factors underlying the popularity of low-density parity-check (LDPC) codes is their iterative decoding algorithm, which is amenable to efficient hardware implementation. Even though different variants of LDPC iterative decoding algorithms have been studied for their error-correcting properties, an analytical basis for evaluating the energy efficiency of LDPC decoders has not been reported. In this paper, we present a framework for a parameterized LDPC decoding algorithm that can be optimized to produce sparse representations of the messages used in iterative decoding. The sparsity of the messages is measured by their differential entropy, which serves as a theoretical metric for the energy efficiency of an iterative LDPC decoder. At the core of the proposed algorithm is margin propagation (MP), which approximates the log-sum-exp function used in conventional sum-product (SP) decoders by a piecewise linear (PWL) function. Using Monte Carlo simulations, we demonstrate that MP decoding leads to a significant reduction in message entropy compared to a conventional SP decoder, while incurring a negligible performance penalty (less than 0.03 dB). The proposed work therefore lays the foundation for the design of parameterized LDPC decoders whose bit-error-rate performance can be effectively traded off against different energy-efficiency constraints, as required by different sets of applications.
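    To make the PWL approximation concrete, the sketch below compares log-sum-exp against the MP margin (reusing the margin_propagation function sketched above) on a toy set of log-domain messages. The value gamma = 1.0 and the random inputs are assumptions for illustration only; the paper's decoder embeds MP inside the iterative check-node and variable-node updates, which is not reproduced here.

        import math
        import random

        def log_sum_exp(x):
            # Numerically stable log(sum(exp(x_i))).
            m = max(x)
            return m + math.log(sum(math.exp(v - m) for v in x))

        random.seed(0)
        llrs = [random.uniform(-4.0, 4.0) for _ in range(6)]  # toy log-domain messages
        gamma = 1.0  # illustrative approximation parameter (assumption)

        print("log-sum-exp :", round(log_sum_exp(llrs), 3))
        print("MP margin   :", round(margin_propagation(llrs, gamma), 3))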