7 research outputs found

    Towards Robust Design and Training of Deep Neural Networks

    Currently, neural networks run as software, which typically requires expensive GPU resources. As the adoption of deep learning spreads to a more diverse range of applications, directly hardware-implemented neural networks (HNNs) will provide deep learning solutions at far lower hardware cost. However, Gaussian noise along hardware connections degrades model accuracy, an issue this research seeks to resolve using a novel analog error-correcting code (ECC). To aid in developing noise-tolerant deep neural networks (DNNs), this research also investigates the impact of loss functions on training. This involves alternating among multiple loss functions throughout training, aiming to avoid local optima. The effects on training time and final accuracy are then analyzed. Together, the analog ECC and loss-function variation lay groundwork for future noise-tolerant HNNs. ECC results demonstrate three- to five-decibel improvements in model accuracy when correcting Gaussian noise. Loss-variation results demonstrate a correlation between loss-function similarity and training performance. Other correlations are also presented and addressed.
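
    To make the loss-alternation scheme concrete, here is a minimal sketch assuming a small PyTorch classifier. The particular losses (cross-entropy and MSE on softmax outputs), the per-epoch switching schedule, and the synthetic data are illustrative assumptions, not the configuration used in the thesis.

```python
# Illustrative sketch of alternating loss functions during training.
# The losses chosen and the per-epoch schedule are assumptions for
# demonstration only; the thesis does not specify them here.
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

def ce(logits, targets):
    return F.cross_entropy(logits, targets)

def mse_one_hot(logits, targets):
    # MSE against one-hot targets, so both losses share one signature
    return F.mse_loss(F.softmax(logits, dim=1),
                      F.one_hot(targets, num_classes=10).float())

losses = [ce, mse_one_hot]

# Synthetic stand-in data; substitute a real DataLoader in practice
x = torch.randn(256, 784)
y = torch.randint(0, 10, (256,))

for epoch in range(10):
    criterion = losses[epoch % len(losses)]  # alternate each epoch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```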

    Design and application of reconfigurable circuits and systems

    Sampling from the Multivariate Gaussian Distribution using Reconfigurable Hardware

    A Compact and Accurate Gaussian Variate Generator

    Generation of pseudo-random numbers following a non-uniform distribution using programmable integrated circuits

    Generation of the uniform distribution -- Generation of non-uniform distributions -- Hardware architectures of random number generators -- Qualification of a non-uniform generator -- Basic principle of the model -- Development of the mathematical model -- Architectures of the random generators -- Universal architecture -- Application to the exponential distribution -- Application to the normal distribution -- Implementations and experimental results -- Algorithmic simulation -- Hardware acceleration
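
    The outline above proceeds from generating the uniform distribution to deriving non-uniform ones. As a software illustration of that core step, the sketch below uses the classic Box-Muller transform, a standard textbook method for mapping uniform variates to normal variates; it is not necessarily the generator architecture developed in this work.

```python
# Box-Muller transform: a standard way to derive a normal (non-uniform)
# distribution from a uniform source, shown here purely for illustration.
import math
import random

def box_muller():
    """Map two U(0,1) variates to two independent N(0,1) variates."""
    u1 = 1.0 - random.random()  # shift to (0, 1] so log(u1) is finite
    u2 = random.random()
    r = math.sqrt(-2.0 * math.log(u1))
    return (r * math.cos(2.0 * math.pi * u2),
            r * math.sin(2.0 * math.pi * u2))

samples = [z for _ in range(5000) for z in box_muller()]
mean = sum(samples) / len(samples)                 # should be near 0
var = sum(z * z for z in samples) / len(samples)   # should be near 1
print(f"mean={mean:.3f}, variance={var:.3f}")
```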