
    Neural network-based vehicle image classification for IoT devices

    Convolutional neural networks (CNNs) have delivered unprecedented results in automatic image analysis and interpretation, an area with numerous applications in both consumer electronics and industry. However, the signal processing behind CNNs is computationally very demanding, which has prevented their use on the smallest embedded computing platforms, to which many Internet of Things (IoT) devices belong. Fortunately, in recent years researchers have developed many approaches for optimizing the performance and shrinking the memory footprint of CNNs. This paper presents a neural-network-based image classifier trained to classify vehicle images into four classes. The network is optimized by a technique called binarization, and the resulting binarized network is deployed on an IoT-class processor core for execution. Binarization reduces the memory footprint of the CNN by around 95% and increases performance by more than 6×. Furthermore, we show that by utilizing a custom 'popcount' instruction of the processor, the performance of the binarized vehicle classifier can be increased by a further 2×, making CNN-based image classification feasible on the smallest embedded processors. © IEEE. Peer reviewed.
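    The key trick behind the speedups described above can be sketched as follows. In a binarized network, weights and activations take values in {-1, +1} and can be packed into machine words; a dot product then reduces to an XNOR followed by a population count, which is exactly what a custom 'popcount' instruction accelerates. This is a minimal illustrative sketch, not the paper's implementation; the function name and bit encoding are assumptions.

    ```python
    def bin_dot(a_bits: int, b_bits: int, n: int) -> int:
        """Dot product of two {-1, +1} vectors of length n, each packed
        into an integer (bit 1 encodes +1, bit 0 encodes -1).

        XNOR marks positions where the two vectors agree; each agreement
        contributes +1 and each disagreement -1, so the dot product is
        2 * (number of agreements) - n.
        """
        mask = (1 << n) - 1
        xnor = ~(a_bits ^ b_bits) & mask     # 1 wherever the bits agree
        matches = bin(xnor).count("1")       # in hardware: one popcount op
        return 2 * matches - n

    # e.g. (+1,+1,-1,+1) . (+1,-1,-1,+1) = 1 - 1 + 1 + 1 = 2
    print(bin_dot(0b1011, 0b1001, 4))  # → 2
    ```

    On a processor with a native popcount instruction, the `bin(...).count("1")` step becomes a single-cycle operation, which is where the additional 2× speedup reported in the paper comes from.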

    Low-Cost Deep Convolutional Neural Network Acceleration with Stochastic Computing and Quantization

    For about a decade, image classification performance led by deep convolutional neural networks (DCNNs) has advanced dramatically. However, their excessive computational complexity incurs substantial hardware cost and energy consumption. Accelerators built around many-core neural processing units are emerging to compute DCNNs more energy-efficiently than conventional processors (e.g., CPUs and GPUs), but the sheer volume of general-purpose-precision computation remains too demanding for mobile and edge devices. There has therefore been much research into simplifying DCNN computations, especially the multiply-accumulate (MAC) operations that account for most of the processing time. Apart from conventional binary computing, and as a promising alternative, stochastic computing (SC) has been studied steadily for low-cost arithmetic operations. However, previous SC-DCNN approaches have critical limitations, such as a lack of scalability and loss of accuracy. This dissertation first offers solutions to overcome those problems. Furthermore, SC has additional advantages over binary computing, such as error tolerance; these strengths are exploited and assessed in the dissertation. Meanwhile, quantization, which replaces high-precision dataflow with low-bit representations and arithmetic operations, has become popular for reducing DCNN model size and computation cost, with low-bit fixed-point representation currently the most common choice. The dissertation argues that SC and quantization are mutually beneficial: the efficiency of SC-DCNNs can be improved by ordinary quantization, just as in conventional binary computing, and a flexible SC feature can exploit quantization more effectively than binary computing can. In addition, novel SC-MAC structures are devised to benefit from more advanced, emerging quantization methods.
    For each contribution, RTL-implemented SC accelerators are evaluated and compared with conventional binary implementations, and a small FPGA prototype demonstrates the viability of SC-DCNNs. In a rapidly evolving deep learning world led by conventional binary computing, suitably enhanced SC, though not as popular, remains a competitive implementation approach with its own benefits.
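    The core idea of stochastic computing that the dissertation builds on can be shown in a few lines: a value p in [0, 1] is encoded as a random bitstream whose bits are 1 with probability p, and multiplication of two independent streams is then just a bitwise AND. This is a minimal unipolar-SC sketch under assumed names; the dissertation's actual SC-MAC structures are far more elaborate.

    ```python
    import random

    def to_stream(p: float, length: int, rng: random.Random) -> list[int]:
        """Encode p in [0, 1] as a unipolar stochastic bitstream:
        each bit is independently 1 with probability p."""
        return [1 if rng.random() < p else 0 for _ in range(length)]

    def sc_multiply(x: list[int], y: list[int]) -> float:
        """Multiply two unipolar streams: bitwise AND, then decode
        by averaging. Accurate only for independent streams."""
        return sum(a & b for a, b in zip(x, y)) / len(x)

    rng = random.Random(42)
    x = to_stream(0.6, 20_000, rng)
    y = to_stream(0.5, 20_000, rng)
    print(round(sc_multiply(x, y), 2))  # ≈ 0.30
    ```

    The appeal is that the multiplier is a single AND gate instead of a full binary multiplier; the costs are long bitstreams and statistical error, which is why accuracy loss and scalability were the limitations of earlier SC-DCNNs that the dissertation addresses.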