8 research outputs found

    Humans Verification by Adopting Deep Recurrent Fingerphotos Network

    Fingerphoto can be considered one of the most recent and interesting biometrics. It is simply a fingerprint image that is acquired by a smartphone in a contactless manner. This paper proposes a new Deep Recurrent Learning (DRL) approach for verifying humans based on their fingerphoto images, called the Deep Recurrent Fingerphotos Network (DRFN). It comprises an input layer, a sequence of hidden layers, an output layer, and an essential feedback connection. The proposed DRFN sequentially accepts fingerphoto images of all of a person's fingers and has the capability to switch between the weights of each individual fingerphoto to provide verification. A large number of fingerphoto images have been acquired, arranged, segmented, and utilized as a dataset in this paper, named the Fingerphoto Images of Ten Fingers (FITF) dataset. An average accuracy of 99.84% is obtained for personal verification by exploiting fingerphotos.
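
    The abstract gives only the high-level structure of the DRFN (per-finger inputs, hidden layers, recurrent feedback, a verification output). The sketch below shows one way such a sequential fingerphoto verifier could be assembled in PyTorch; the small CNN front end, the GRU recurrence, and every layer size are illustrative assumptions, not the authors' exact network.

```python
# Minimal sketch of a recurrent verification network over a sequence of
# fingerphotos, loosely following the DRFN description above. The CNN
# front end, GRU, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class FingerphotoVerifier(nn.Module):
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        # Per-finger feature extractor (assumed): a small CNN.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Recurrent layer plays the role of the feedback path across fingers.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Output layer: genuine / impostor verification score.
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, fingers):  # fingers: (batch, 10, 1, H, W)
        b, n = fingers.shape[:2]
        feats = self.cnn(fingers.flatten(0, 1)).view(b, n, -1)
        _, h = self.rnn(feats)                   # hidden state after all fingers
        return torch.sigmoid(self.head(h[-1]))   # verification score in [0, 1]

# Usage: score a batch of two identities, each with ten 96x96 fingerphotos.
scores = FingerphotoVerifier()(torch.rand(2, 10, 1, 96, 96))
```

    Feeding the ten fingerphotos one at a time lets the recurrent state carry information between fingers, which is the role the abstract attributes to the feedback connection.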

    ADIC: Anomaly Detection Integrated Circuit in 65nm CMOS utilizing Approximate Computing

    In this paper, we present a low-power anomaly detection integrated circuit (ADIC) based on a one-class classifier (OCC) neural network. The ADIC achieves low-power operation through a combination of (a) a careful choice of algorithm for online learning and (b) approximate computing techniques to lower average energy. In particular, the online pseudoinverse update method (OPIUM) is used to train a randomized neural network for quick and resource-efficient learning. An additional 42% energy saving can be achieved when a lighter version of OPIUM is used for training with the same number of data samples, with no significant compromise in the quality of inference. Instead of a single classifier with a large number of neurons, an ensemble of K base learners is chosen to reduce the learning memory by a factor of K. This also enables approximate computing by dynamically varying the neural network size based on anomaly detection. Fabricated in 65nm CMOS, the ADIC has K = 7 Base Learners (BL) with 32 neurons in each BL and dissipates 11.87pJ/OP and 3.35pJ/OP during learning and inference, respectively, at Vdd = 0.75V when all 7 BLs are enabled. Further, evaluated on the NASA bearing dataset, approximately 80% of the chip can be shut down for 99% of the lifetime, leading to an energy efficiency of 0.48pJ/OP, an 18.5 times reduction over full-precision computing running at Vdd = 1.2V throughout the lifetime.
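
    To make the training and inference scheme concrete, the sketch below implements an ensemble of K randomized single-hidden-layer networks whose output weights are updated online with a recursive pseudoinverse (RLS-style) rule in the spirit of OPIUM, with an anomaly score based on reconstruction error. The autoencoder-style target, the sizes, and the way base learners are switched on are assumptions for illustration, not the fabricated chip's exact algorithm.

```python
# Sketch of OPIUM-style online training of randomized base learners and
# ensemble-based anomaly scoring. Sizes and scoring rule are assumptions.
import numpy as np

class BaseLearner:
    def __init__(self, in_dim, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        self.Wi = rng.standard_normal((hidden, in_dim))  # fixed random input weights
        self.Wo = np.zeros((in_dim, hidden))             # trainable output weights
        self.P = np.eye(hidden) * 1e3                    # inverse correlation estimate

    def _hidden(self, x):
        return np.tanh(self.Wi @ x)

    def update(self, x):                                 # online, one sample at a time
        h = self._hidden(x)
        k = self.P @ h / (1.0 + h @ self.P @ h)          # gain vector
        self.Wo += np.outer(x - self.Wo @ h, k)          # correct toward reconstructing x
        self.P -= np.outer(k, h @ self.P)                # recursive pseudoinverse update

    def score(self, x):                                  # reconstruction error = anomaly score
        return np.linalg.norm(x - self.Wo @ self._hidden(x))

class Ensemble:
    def __init__(self, in_dim, K=7):
        self.bls = [BaseLearner(in_dim, seed=s) for s in range(K)]

    def update(self, x):
        for bl in self.bls:
            bl.update(x)

    def score(self, x, active=1):
        # Approximate computing: query only `active` of the K base learners,
        # enabling more of them only when a cheap estimate looks anomalous.
        return np.mean([bl.score(x) for bl in self.bls[:active]])

# Usage: learn on healthy data, then score a new sample with one active BL.
model = Ensemble(in_dim=8, K=7)
for x in np.random.randn(200, 8):
    model.update(x)
print(model.score(np.random.randn(8), active=1))
```

    Because each base learner holds only a 32-neuron network and its own small P matrix, the per-learner memory shrinks roughly by a factor of K, and idle learners can be power-gated, which is the mechanism behind the lifetime energy savings reported above.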

    CARLA: A Convolution Accelerator with a Reconfigurable and Low-Energy Architecture

    Convolutional Neural Networks (CNNs) have proven to be extremely accurate for image recognition, even outperforming human recognition capability. When deployed on battery-powered mobile devices, efficient computer architectures are required to enable fast and energy-efficient computation of costly convolution operations. Despite recent advances in hardware accelerator design for CNNs, two major problems have not yet been addressed effectively, particularly when the convolution layers have highly diverse structures: (1) minimizing energy-hungry off-chip DRAM data movements; (2) maximizing the utilization factor of processing resources to perform convolutions. This work thus proposes an energy-efficient architecture equipped with several optimized dataflows to support the structural diversity of modern CNNs. The proposed approach is evaluated by implementing the convolutional layers of VGGNet-16 and ResNet-50. Results show that the architecture achieves a Processing Element (PE) utilization factor of 98% for the majority of 3x3 and 1x1 convolutional layers, while limiting latency to 396.9 ms and 92.7 ms when performing the convolutional layers of VGGNet-16 and ResNet-50, respectively. In addition, the proposed architecture benefits from the structured sparsity in ResNet-50 to reduce the latency to 42.5 ms when half of the channels are pruned.
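
    The latency and utilization figures above follow from how many multiply-accumulates a layer requires and how many processing elements are kept busy. The back-of-the-envelope model below makes that relationship explicit; the PE-array size, clock frequency, and utilization value are illustrative assumptions, not CARLA's reported configuration.

```python
# Rough model of how PE utilization and structured channel pruning translate
# into latency for a convolutional layer. All hardware parameters here are
# assumed values for illustration only.
def conv_macs(h, w, cin, cout, k, channel_density=1.0):
    """Multiply-accumulates for a k x k convolution with 'same' output size."""
    return h * w * int(cin * channel_density) * cout * k * k

def latency_ms(macs, num_pes=256, utilization=0.98, freq_hz=200e6):
    """Cycles = MACs / (active PEs); assumes one MAC per PE per cycle."""
    cycles = macs / (num_pes * utilization)
    return 1e3 * cycles / freq_hz

# A VGGNet-16-style 3x3 layer: 56x56 feature map, 256 -> 256 channels.
dense = conv_macs(56, 56, 256, 256, 3)
pruned = conv_macs(56, 56, 256, 256, 3, channel_density=0.5)  # half the input channels pruned
print(latency_ms(dense), latency_ms(pruned))  # pruning halves the MACs, hence the latency
```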

    Energy-Efficient, Flexible and Fast Architectures for Deep Convolutional Neural Network Acceleration

    Deep learning-based methods, and specifically Convolutional Neural Networks (CNNs), have revolutionized the field of computer vision. While until 2012 the most accurate traditional image processing methods could reach 26% error in recognizing images on the standardized and well-known ImageNet benchmark, a CNN-based method dramatically reduced the error to 16%. By evolving CNN structures, current CNN-based methods now routinely achieve error rates below 3%, often outperforming human-level accuracy. CNNs consist of many convolutional layers, each performing high-dimensional, complex convolution operations. To achieve high image recognition accuracy, modern CNNs stack many convolutional layers, which dramatically increases the diversity of computation patterns across layers. This high level of complexity in CNNs implies massive numbers of parameters and computations. Since mobile processors are not designed to perform massive computations, deploying CNNs on portable and mobile devices is challenging.
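
    As a quick illustration of the "massive numbers of parameters and computations" mentioned above, the snippet below counts the weights and multiply-accumulates of a single convolutional layer; the layer shape is an assumed VGGNet-16-style example, chosen only for illustration.

```python
# Parameter and multiply-accumulate counts for one convolutional layer.
def conv_layer_cost(h, w, cin, cout, k):
    params = cout * (cin * k * k + 1)   # weights plus one bias per output channel
    macs = h * w * cout * cin * k * k   # one k x k dot product per output pixel
    return params, macs

params, macs = conv_layer_cost(h=56, w=56, cin=256, cout=256, k=3)
print(f"{params/1e6:.2f} M parameters, {macs/1e9:.2f} G MACs")  # ~0.59 M params, ~1.85 G MACs
```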