    Inducing and exploiting activation sparsity for fast neural network inference

    Optimizing convolutional neural networks for fast inference has recently become an extremely active area of research. One of the go-to solutions in this context is weight pruning, which aims to reduce computational and memory footprint by removing large subsets of the connections in a neural network. Surprisingly, much less attention has been given to exploiting sparsity in the activation maps, which tend to be naturally sparse in many settings thanks to the structure of rectified linear (ReLU) activation functions. In this paper, we present an in-depth analysis of methods for maximizing the sparsity of the activations in a trained neural network, and show that, when coupled with an efficient sparse-input convolution algorithm, we can leverage this sparsity for significant performance gains. To induce highly sparse activation maps without accuracy loss, we introduce a new regularization technique, coupled with a new threshold-based sparsification method based on a parameterized activation function called Forced-Activation-Threshold Rectified Linear Unit (FATReLU). We examine the impact of our methods on popular image classification models, showing that most architectures can adapt to significantly sparser activation maps without any accuracy loss. Our second contribution is showing that these compression gains can be translated into inference speedups: we provide a new algorithm to enable fast convolution operations over networks with sparse activations, and show that it can enable significant speedups for end-to-end inference on a range of popular models on the large-scale ImageNet image classification task on modern Intel CPUs, with little or no retraining cost.
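Based on the description in the abstract, FATReLU behaves like a ReLU whose cutoff is raised from zero to a (per-layer, tunable) threshold, so that small positive activations are also forced to zero. A minimal sketch, assuming a scalar threshold (the value 0.5 below is a hypothetical example, not from the paper):

```python
import numpy as np

def fatrelu(x, threshold=0.5):
    """Sketch of a Forced-Activation-Threshold ReLU: values at or below
    the threshold are forced to zero, increasing activation sparsity.
    The threshold value is an illustrative placeholder."""
    return np.where(x > threshold, x, 0.0)

x = np.array([-1.0, 0.2, 0.5, 0.8, 2.0])
print(fatrelu(x, threshold=0.5))  # → [0.  0.  0.  0.8 2. ]
```

Compared with plain ReLU (which would keep 0.2, 0.5, 0.8, and 2.0), the raised threshold zeroes two more entries here, which is the extra sparsity a sparse-input convolution kernel can then exploit.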

    EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators

    In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly, and lossless compression scheme for the feature maps present within convolutional neural networks. We present hardware architectures and synthesis results for the compressor and decompressor in 65 nm. With a throughput of one 8-bit word/cycle at 600 MHz, they fit into 2.8 kGE and 3.0 kGE of silicon area, respectively - together the size of less than seven 8-bit multiply-add units at the same throughput. We show that an average compression ratio of 5.1× for AlexNet, 4× for VGG-16, 2.4× for ResNet-34 and 2.2× for MobileNetV2 can be achieved - a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference.
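The bit-plane view underlying schemes like the one above can be illustrated with a short sketch: each 8-bit feature-map word is split into eight planes, one per bit position, and ReLU feature maps full of zeros and small magnitudes leave most high-order planes entirely empty, which is what makes them cheap to encode. This is only an illustration of the bit-plane decomposition, not the actual EBPC coder:

```python
import numpy as np

def bit_planes(words):
    """Split an array of 8-bit words into its 8 bit-planes
    (plane b holds bit b of every word)."""
    words = np.asarray(words, dtype=np.uint8)
    return [((words >> b) & 1) for b in range(8)]

def zero_plane_fraction(words):
    """Fraction of bit-planes that are all zero. Sparse, low-magnitude
    feature maps leave the high-order planes empty, so a bit-plane
    coder can represent them very compactly."""
    planes = bit_planes(words)
    return sum(int(p.sum() == 0) for p in planes) / 8.0

fmap = np.array([0, 0, 3, 1, 0, 2, 0, 0], dtype=np.uint8)  # sparse ReLU output
print(zero_plane_fraction(fmap))  # → 0.75 (bits 2..7 are all empty)
```

A real compressor would additionally encode the non-zero planes (e.g. with run-length or zero-flag coding), but the zero-plane fraction already hints at why small-magnitude activations compress so well.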

    TRAINING NEURAL NETWORKS FOR VISUAL SERVOING

    Visual servoing is a technique that uses feedback from a vision sensor to dynamically manipulate a robot's joints for motion control and posture prediction. Classical visual servoing employs multiple cameras and computer vision techniques to coordinate the robot's motions, and therefore relies heavily on algorithms for feature extraction, tracking of coordinate positions, and processing of visual features of the environment. The first attempts to apply the deep learning and convolutional neural network revolution in computer vision to this problem appeared in 2018 and achieved strong results in predicting a robot's posture from an image. In this thesis project I propose models that can perform visual servoing without the support of direct, classical visual-servoing methods, trained on a synthetic dataset, which could be useful in reducing required robot hours. The results show strong adaptability and resilience to fluctuations in the images. Although training requires protracted time, the final CNN model with a regressor output can accurately predict the robot's pose, both in its output position values and in simulation.
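The core idea of a CNN with a regressor output, as described above, is that the network head emits a continuous pose vector rather than class scores. A minimal forward-pass sketch with random placeholder weights (the layer sizes and the 6-DoF output are illustrative assumptions, not the thesis architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(img, kernel):
    """Valid 2-D convolution over a single-channel image, the basic
    building block of the CNN feature extractor."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def pose_regressor(img, kernel, W, b):
    """Sketch of a CNN regression head: one conv layer, ReLU, global
    average pooling, then a linear layer mapping the pooled feature to
    a pose vector (e.g. joint angles). Weights are untrained placeholders."""
    feat = np.maximum(conv2d(img, kernel), 0.0)  # conv + ReLU
    pooled = feat.mean()                         # global average pool
    return W * pooled + b                        # linear regression head

img = rng.random((8, 8))            # stand-in for a camera frame
kernel = rng.standard_normal((3, 3))
W = rng.standard_normal(6)          # hypothetical 6-DoF pose output
b = np.zeros(6)
pose = pose_regressor(img, kernel, W, b)
print(pose.shape)  # → (6,)
```

Training such a model on a synthetic dataset would then minimize a regression loss (e.g. mean squared error) between the predicted and ground-truth pose vectors.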