
    ReBNet: Residual Binarized Neural Network

    This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and developing efficient accelerators for execution on FPGA. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices. Binarization reduces the memory footprint and replaces power-hungry matrix multiplication with lightweight XnorPopcount operations. However, binary networks suffer from degraded accuracy compared to their fixed-point counterparts. We show that state-of-the-art methods for improving the accuracy of binary networks significantly increase the implementation cost and complexity. To compensate for the degraded accuracy while adhering to the simplicity of binary networks, we devise the first reconfigurable scheme that can adjust the classification accuracy based on the application. Our proposition improves the classification accuracy by representing features with multiple levels of residual binarization. Unlike previous methods, our approach does not exacerbate the area cost of the hardware accelerator. Instead, it provides a tradeoff between throughput and accuracy, while the area overhead of multi-level binarization is negligible.
    Comment: To appear in the 26th IEEE International Symposium on Field-Programmable Custom Computing Machines
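    A minimal NumPy sketch (not the authors' code) of the two mechanisms the abstract names: multi-level residual binarization of a feature vector, and the XnorPopcount-style dot product that binarization enables. The number of levels, the L1-mean scale heuristic and the bit encoding are illustrative assumptions.

        # Hypothetical sketch: residual binarization + XnorPopcount dot product.
        import numpy as np

        def residual_binarize(x, num_levels=2):
            """Approximate x as a sum of scaled sign vectors, one per level."""
            residual = x.astype(np.float64)
            signs, scales = [], []
            for _ in range(num_levels):
                s = np.sign(residual)
                s[s == 0] = 1                      # treat exact zeros as +1
                gamma = np.mean(np.abs(residual))  # per-level scale (assumed L1 heuristic)
                signs.append(s)
                scales.append(gamma)
                residual = residual - gamma * s    # next level encodes the residue
            return signs, scales

        def xnor_popcount_dot(a_bits, b_bits):
            """Dot product of two {-1,+1} vectors encoded as bits (+1 -> 1, -1 -> 0).

            Matching bits are exactly the positions where XNOR yields 1, so
            dot = 2 * popcount(XNOR(a, b)) - N for length-N vectors.
            """
            n = a_bits.size
            matches = np.count_nonzero(a_bits == b_bits)  # popcount of the XNOR
            return 2 * matches - n

        # Usage: the multi-level binary dot product tracks the real-valued one.
        rng = np.random.default_rng(0)
        x = rng.normal(size=1024)                  # real-valued activations
        w = np.sign(rng.normal(size=1024))         # binary weights in {-1,+1}
        signs, scales = residual_binarize(x, num_levels=2)
        approx = sum(g * xnor_popcount_dot(s > 0, w > 0) for s, g in zip(signs, scales))
        print(round(approx, 2), round(float(x @ w), 2))

    Each extra level refines the residue of the previous one, which is how accuracy can be traded against throughput without enlarging the accelerator datapath.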

    Toolflows for Mapping Convolutional Neural Networks on FPGAs: A Survey and Future Directions

    In the past decade, Convolutional Neural Networks (CNNs) have demonstrated state-of-the-art performance in various Artificial Intelligence tasks. To accelerate the experimentation and development of CNNs, several software frameworks have been released, primarily targeting power-hungry CPUs and GPUs. In this context, reconfigurable hardware in the form of FPGAs constitutes a potential alternative platform that can be integrated in the existing deep learning ecosystem to provide a tunable balance between performance, power consumption and programmability. In this paper, a survey of the existing CNN-to-FPGA toolflows is presented, comprising a comparative study of their key characteristics, which include the supported applications, architectural choices, design space exploration methods and achieved performance. Moreover, major challenges and objectives introduced by the latest trends in CNN algorithmic research are identified and presented. Finally, a uniform evaluation methodology is proposed, aiming at the comprehensive, complete and in-depth evaluation of CNN-to-FPGA toolflows.
    Comment: Accepted for publication at the ACM Computing Surveys (CSUR) journal, 201

    Probabilistic Principle Component Analysis based Feature Extraction of Embedded System Applications with Deep Neural Network based Implementation in FPGA

    The study of hardware and software systems has become increasingly important with the advent of new communication devices and advances in security. The fast-paced adoption of mobile and embedded devices in everyday life opens new research directions in data mining, where techniques based on Probabilistic Principal Component Analysis (PPCA) are in demand for reliable, low-error processing. In embedded systems, PCA is applied primarily to reduce the dimensionality and complexity of the data; PPCA is an updated version of PCA that extends it with a probabilistic formulation assessed through a similarity measure. In this work, extensive experiments are carried out on an FPGA-based lightweight-cryptography benchmark data set to illustrate the viability, efficiency and flexibility of data mining on reconfigurable embedded systems. FPGAs provide reconfigurable computing architectures well suited to hardware neural networks, yet implementing a multilayer Cascade Feed-Forward Neural Network (CFFNN) and a Deep Neural Network (DNN) with a large number of neurons on an FPGA remains a challenging task. To gauge the FPGA capacity required for a particular application, two neural networks, CFFNN and DNN, are implemented and compared. The results show that, for reconfigurable embedded systems, the PPCA-based data mining and machine learning realization yields higher speedup, fewer iterations and greater space savings than the static conventional version
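    A minimal sketch (not the paper's implementation) of closed-form probabilistic PCA in the style of Tipping and Bishop, extracting low-dimensional features that could feed a CFFNN or DNN classifier; the synthetic data, component count and variable names are assumptions standing in for the cryptographic benchmark set.

        # Hypothetical sketch: PPCA feature extraction via eigendecomposition.
        import numpy as np

        def ppca_fit(X, n_components):
            """Fit PPCA from the sample covariance (closed-form solution)."""
            mu = X.mean(axis=0)
            cov = np.cov(X - mu, rowvar=False)
            eigvals, eigvecs = np.linalg.eigh(cov)            # ascending order
            eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
            sigma2 = eigvals[n_components:].mean()            # noise = mean of discarded eigenvalues
            W = eigvecs[:, :n_components] * np.sqrt(
                np.maximum(eigvals[:n_components] - sigma2, 0.0))
            return mu, W, sigma2

        def ppca_transform(X, mu, W, sigma2):
            """Posterior mean of the latent features: E[z|x] = M^-1 W^T (x - mu)."""
            M = W.T @ W + sigma2 * np.eye(W.shape[1])
            return (X - mu) @ W @ np.linalg.inv(M)            # M is symmetric

        # Usage with synthetic data standing in for the benchmark data set.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 32))                        # 500 samples, 32 raw features
        mu, W, sigma2 = ppca_fit(X, n_components=8)
        Z = ppca_transform(X, mu, W, sigma2)                  # 500 x 8 reduced features
        print(Z.shape)

    The reduced feature matrix Z would then be the input layer of the CFFNN or DNN, which is what makes the subsequent FPGA implementation smaller and faster than operating on the raw features.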