13 research outputs found

    Selective deep convolutional neural network for low cost distorted image classification

    Neural networks trained on images with a certain type of distortion should classify test images with the same type of distortion better than generally trained neural networks, other factors being equal. Based on this observation, an ensemble of convolutional neural networks (CNNs) trained with different types and degrees of distortion is used. However, instead of classifying test images of unknown distortion type with the entire ensemble of CNNs, an extra tiny CNN is trained specifically to distinguish between the different types and degrees of distortion. Then only the CNN dedicated to that specific type and degree of distortion, as determined by the tiny CNN, is activated to classify the possibly distorted test image. This proposed architecture, referred to as a selective deep convolutional neural network (DCNN), is implemented and found to deliver high accuracy at low hardware cost. Detailed simulations with realistic image distortion scenarios on three popular datasets show that memory, MAC operations, and energy savings of up to 93.68%, 93.61%, and 91.92%, respectively, can be achieved with almost no reduction in image classification accuracy. The proposed selective DCNN scores up to 2.18x higher than the state-of-the-art DCNN model when evaluated with NetScore, a comprehensive metric that considers both CNN performance and hardware cost. In addition, even greater hardware cost reduction can be achieved when the selective DCNN is combined with previously proposed model compression techniques. Finally, experiments with extended types and degrees of image distortion show that the selective DCNN is highly scalable.
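    A minimal sketch of the selective dispatch flow may make the architecture concrete. Everything named here is illustrative: the distortion categories, the Model interface, and the stand-in tiny CNN are placeholders, not the paper's trained networks.

```python
# Sketch of selective DCNN inference: a tiny selector picks one expert CNN.
from typing import Callable, Dict
import numpy as np

Model = Callable[[np.ndarray], int]  # image -> predicted class label

def make_dummy_expert(label: int) -> Model:
    """Stand-in for a CNN trained on one distortion type/degree."""
    return lambda image: label  # a real expert would run full inference

# One dedicated "expert" CNN per distortion type and degree (illustrative).
experts: Dict[str, Model] = {
    "clean":         make_dummy_expert(0),
    "gaussian_blur": make_dummy_expert(1),
    "awgn_low":      make_dummy_expert(2),
}
distortion_names = list(experts)

def tiny_cnn(image: np.ndarray) -> str:
    """Stand-in for the small distortion classifier; here it keys off a
    simple image statistic instead of learned features."""
    idx = int(image.std() * len(distortion_names)) % len(distortion_names)
    return distortion_names[idx]

def selective_dcnn(image: np.ndarray) -> int:
    # 1) The tiny CNN identifies the (type, degree) of distortion.
    distortion = tiny_cnn(image)
    # 2) Only the matching expert is activated, so the memory, MAC, and
    #    energy cost of the rest of the ensemble is never paid.
    return experts[distortion](image)

print(selective_dcnn(np.random.rand(32, 32, 3)))
```

    The savings come from step 2: each image pays for a single expert CNN plus the tiny selector, rather than the whole ensemble.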

    Administration of Vitamin C in a Patient with Herpes Zoster - A case report -

    Herpes zoster, the result of reactivated varicella-zoster virus, is characterized by vesicular eruptions on the skin and painful neuralgia in the dermatome distribution. Pain during the acute phase of herpes zoster has been associated with a higher risk of developing postherpetic neuralgia. Current therapies for herpes zoster, including analgesics and sympathetic nerve block as well as antiviral agents, are important for alleviating pain and preventing postherpetic neuralgia. In some cases, however, the pain does not respond well to these treatments. We report a case in which a patient with herpes zoster did not respond to conventional therapy, so we administered an intravenous infusion of vitamin C, which resulted in an immediate reduction in pain.

    Rapid design space exploration of near-optimal memory-reduced DCNN architecture using multiple model compression techniques

    Despite their attractive accuracy, deep convolutional neural networks (DCNNs) are hard to deploy directly on resource-limited devices because of their energy-consuming memory overheads, so aggressive compression schemes are used in practice to reduce DCNN model size. Since recent methods have been developed individually, however, finding the optimal combination of different approaches requires an exhaustive search and an enormous amount of search time. In this work, given a complex baseline network, we introduce a rapid and systematic way to find a near-optimal memory-reduced DCNN that applies multiple compression schemes together. We first precisely measure the accuracy-size trade-off of each method and introduce a novel interpolation scheme to estimate the accuracy of an arbitrary combination. We then present an iterative search algorithm that minimizes the number of network evaluations needed to find a memory-efficient DCNN structure satisfying the required accuracy. Experimental results show that our framework reaches a compression level similar to a naive full search over three popular optimization methods while reducing the search time by a factor of 7.35.
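    As a rough illustration of the search idea, the sketch below assumes that each technique's measured (ratio, accuracy) points can be linearly interpolated and that relative accuracy drops compose multiplicatively across techniques; both the composition rule and the profile numbers are assumptions, and the exhaustive grid stands in for the paper's iterative algorithm, which avoids evaluating every combination.

```python
# Sketch: estimate accuracy of combined compression and search the grid.
from itertools import product
import numpy as np

# Measured (compression_ratio, accuracy) points per technique (illustrative).
profiles = {
    "pruning":      [(1.0, 0.920), (2.0, 0.915), (4.0, 0.900), (8.0, 0.860)],
    "quantization": [(1.0, 0.920), (2.0, 0.918), (4.0, 0.905), (8.0, 0.850)],
    "low_rank":     [(1.0, 0.920), (1.5, 0.916), (2.0, 0.908), (3.0, 0.880)],
}
BASELINE_ACC = 0.920

def interp_acc(tech: str, ratio: float) -> float:
    """Linearly interpolate one technique's accuracy at a given ratio."""
    xs, ys = zip(*profiles[tech])
    return float(np.interp(ratio, xs, ys))

def predicted_accuracy(ratios: dict) -> float:
    """Assumed composition rule: relative accuracies multiply."""
    acc = BASELINE_ACC
    for tech, r in ratios.items():
        acc *= interp_acc(tech, r) / BASELINE_ACC
    return acc

def search(min_acc: float):
    """Full grid search; the paper's iterative algorithm cuts the number
    of real network evaluations this would otherwise require."""
    best = None
    grids = [[x for x, _ in pts] for pts in profiles.values()]
    for combo in product(*grids):
        ratios = dict(zip(profiles, combo))
        total_ratio = float(np.prod(combo))  # assume ratios also compose multiplicatively
        if predicted_accuracy(ratios) >= min_acc:
            if best is None or total_ratio > best[0]:
                best = (total_ratio, ratios)
    return best

print(search(min_acc=0.90))
```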

    Design of a 64-bit RISC-V-Based Chip Verification Platform


    Layerwise Buffer Voltage Scaling for Energy-Efficient Convolutional Neural Network

    To effectively reduce buffer energy consumption, which constitutes a significant part of the total energy consumption of a convolutional neural network (CNN), it is useful to apply different amounts of energy conservation effort to the different layers of a CNN, because the ratio of buffer energy to total energy can differ substantially across layers. This article proposes layerwise buffer voltage scaling as an effective technique for reducing buffer access energy. Error-resilience analysis, including interlayer effects, is conducted at design time to determine the buffer supply voltage to be used for each layer of a CNN. These layer-specific buffer supply voltages are then used during image classification inference. Error injection experiments with three different CNN architectures show that, with this technique, buffer access energy and overall system energy can be reduced by up to 68.41% and 33.68%, respectively, without sacrificing image classification accuracy.
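    The design-time voltage selection can be pictured as a simple table lookup. The voltage-to-bit-error-rate pairs and per-layer tolerances below are invented for illustration; in the paper these would come from the error-injection and interlayer resilience analysis.

```python
# Sketch: pick the lowest safe buffer supply voltage per CNN layer.

# Candidate buffer voltages and the bit-error rate each induces
# (illustrative numbers, not measured silicon data).
voltage_ber = [(1.00, 1e-9), (0.90, 1e-7), (0.80, 1e-5), (0.70, 1e-3)]

# Maximum tolerable buffer BER per layer, from hypothetical
# error-injection experiments that include interlayer effects.
layer_tolerance = {"conv1": 1e-6, "conv2": 1e-4, "conv3": 1e-4, "fc": 1e-8}

def pick_voltage(max_ber: float) -> float:
    """Lowest candidate voltage whose BER stays within the tolerance."""
    feasible = [v for v, ber in voltage_ber if ber <= max_ber]
    return min(feasible)

layer_vdd = {layer: pick_voltage(tol) for layer, tol in layer_tolerance.items()}
print(layer_vdd)  # e.g. {'conv1': 0.9, 'conv2': 0.8, 'conv3': 0.8, 'fc': 1.0}
```

    Error-tolerant middle layers run their buffers at lower voltage, while sensitive layers keep a safe margin, which is where the layerwise savings come from.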

    Multi-level weight indexing scheme for memory-reduced convolutional neural network


    Approach to Improve the Performance Using Bit-level Sparsity in Neural Networks

    This paper presents a convolutional neural network (CNN) accelerator that can skip zero weights and handle outliers, which are few but have a significant impact on CNN accuracy, to achieve speedup and increase energy efficiency. We propose an offline weight-scheduling algorithm that skips zero weights and combines two non-outlier weights simultaneously using the bit-level sparsity of CNNs. We use a reconfigurable multiplier-and-accumulator (MAC) unit for two purposes: it usually computes a combined pair of non-outliers and occasionally computes an outlier. We further improve the speedup of our accelerator by clipping some of the outliers with negligible accuracy loss. Compared to the DaDianNao [7] and Bit-Tactical [16] architectures, our accelerator improves speed by 3.34x and 2.31x and reduces energy consumption by 29.3% and 30.2%, respectively.
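    A small sketch of the offline scheduling step, under the assumption that a non-outlier is any weight fitting in 4 bits of magnitude; the actual threshold and packing format are hardware-specific and not specified here.

```python
# Sketch: zero-skipping plus pairing of narrow weights for a dual-mode MAC.

OUTLIER_BITS = 4  # assumed precision budget for a non-outlier weight

def is_outlier(w: int) -> bool:
    """A weight needing more than OUTLIER_BITS bits of magnitude."""
    return abs(w) >= (1 << OUTLIER_BITS)

def schedule(weights):
    """Return MAC slots: pairs of non-outliers, singleton outliers."""
    nonzero = [w for w in weights if w != 0]        # zero-skipping
    small = [w for w in nonzero if not is_outlier(w)]
    outliers = [w for w in nonzero if is_outlier(w)]
    slots = [tuple(small[i:i + 2]) for i in range(0, len(small), 2)]
    slots += [(w,) for w in outliers]               # outliers computed alone
    return slots

weights = [0, 3, -2, 40, 0, 7, -1, 0, 100, 5]
print(schedule(weights))
# 10 weights -> 7 nonzero -> 5 non-outliers in 3 slots + 2 outlier slots
```

    In this toy example, 10 weights collapse to 5 MAC slots; clipping some outliers, as the paper proposes, would shrink the singleton slots further at a small accuracy cost.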