2 research outputs found

    Restricted Clustered Neural Network for Storing Real Data

    No full text
    International audience

    Architectures matérielles numériques intégrées et réseaux de neurones à codage parcimonieux (Integrated digital hardware architectures and sparse-coding neural networks)

    Nowadays, artificial neural networks are widely used in many applications such as image and signal processing. Recently, a new neural network model was proposed for designing associative memories: the GBNN (Gripon-Berrou Neural Network). This model offers a storage capacity exceeding that of Hopfield networks when the information to be stored is uniformly distributed. Methods improving performance for non-uniform distributions, as well as hardware architectures implementing GBNN networks, have been proposed. However, these solutions are very expensive in terms of hardware resources, and the proposed architectures can only implement fixed-size networks and do not scale. The objectives of this thesis are: (1) to design GBNN-inspired models outperforming the state of the art, (2) to propose architectures cheaper than existing solutions, and (3) to design a generic, configurable architecture implementing the proposed models and able to handle networks of various sizes. The results of this work are presented in several parts. First, the concept of clone-based neural networks and its variants is introduced. These networks offer better performance than the state of the art for the same memory cost when the information to be stored follows a non-uniform distribution. Hardware architecture optimizations are then introduced to significantly reduce the resource cost. Finally, a generic, scalable architecture able to handle networks of various sizes is proposed.
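The GBNN model summarized in the abstract stores each message as a clique of binary connections between neurons spread across clusters, and retrieves erased symbols by iterated winner-take-all within each cluster. Below is a minimal, illustrative NumPy sketch of that clique-based principle; the parameter values, function names, and iteration count are assumptions for the example, not details taken from the thesis.

```python
import numpy as np

C, L = 4, 16                                  # clusters, neurons per cluster (illustrative)
W = np.zeros((C * L, C * L), dtype=np.int8)   # binary connection matrix

def store(message):
    """message: one symbol in range(L) per cluster; adds its clique to W."""
    idx = [c * L + s for c, s in enumerate(message)]
    for i in idx:
        for j in idx:
            if i != j:
                W[i, j] = 1

def retrieve(partial, iterations=4):
    """partial: one symbol per cluster, or None where the symbol is erased."""
    active = np.zeros(C * L, dtype=np.int8)
    for c, s in enumerate(partial):
        if s is None:
            active[c * L:(c + 1) * L] = 1     # every neuron in the cluster is a candidate
        else:
            active[c * L + s] = 1
    for _ in range(iterations):
        scores = W @ active                   # count of active connected neurons
        nxt = np.zeros_like(active)
        for c in range(C):
            seg = scores[c * L:(c + 1) * L]
            nxt[c * L:(c + 1) * L] = (seg == seg.max()).astype(np.int8)  # winner-take-all
        active = nxt
    return [int(np.argmax(active[c * L:(c + 1) * L])) for c in range(C)]

store([3, 7, 1, 12])
print(retrieve([3, 7, None, 12]))             # recovers [3, 7, 1, 12]
```

With a single stored message, one iteration suffices: the neurons of the stored clique each see three active connected neurons, while all others see none, so winner-take-all isolates the clique and the erased symbol is read back.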