8 research outputs found

    Selective Decoding in Associative Memories Based on Sparse-Clustered Networks

    Associative memories are structures that can retrieve previously stored information given a partial input pattern instead of an explicit address, as in indexed memories. A few hardware approaches have recently been introduced for a new family of associative memories based on Sparse-Clustered Networks (SCN) that show attractive features. These architectures are suitable for implementations with low retrieval latency, but are limited to small networks that store a few hundred data entries. In this paper, a new hardware architecture for SCNs is proposed that features a new data-storage technique as well as a method we refer to as Selective Decoding (SD-SCN). The SD-SCN has been implemented on an FPGA similar to the one used in previous efforts and achieves two orders of magnitude higher capacity, with no error-performance penalty but at the cost of a few extra clock cycles per data access. Comment: 4 pages, accepted at the IEEE GlobalSIP 2013 conference.
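    A minimal sketch of the retrieval idea behind such networks, assuming a Gripon-Berrou style sparse-clustered memory in which each message activates one unit per cluster, messages are stored as cliques, and missing symbols are recovered with a per-cluster winner-take-all rule; the cluster count, cluster size, and function names are illustrative and not the paper's hardware architecture.

```python
# Sketch of a sparse-clustered associative memory (illustrative sizes).
import numpy as np

C, L = 4, 16                        # clusters and units per cluster (assumed)
W = np.zeros((C * L, C * L), bool)  # binary connection (clique) matrix

def unit(cluster, symbol):
    return cluster * L + symbol

def store(message):
    """Store a message (one symbol in [0, L) per cluster) as a clique."""
    units = [unit(c, s) for c, s in enumerate(message)]
    for i in units:
        for j in units:
            if i != j:
                W[i, j] = True

def retrieve(partial):
    """Recover missing symbols (None) from the known ones."""
    known = [unit(c, s) for c, s in enumerate(partial) if s is not None]
    result = list(partial)
    for c, s in enumerate(partial):
        if s is None:
            scores = [W[known, unit(c, v)].sum() for v in range(L)]
            result[c] = int(np.argmax(scores))  # winner-take-all per cluster
    return result

store([3, 7, 1, 12])
print(retrieve([3, None, 1, None]))  # -> [3, 7, 1, 12]
```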

    Maximum Likelihood Associative Memories

    Associative memories are structures that store data in such a way that it can later be retrieved given only part of its content, a sort of error/erasure-resilience property. They are used in applications ranging from caches and memory management in CPUs to database engines. In this work we study associative memories built on the maximum-likelihood principle. We first derive minimum residual error rates when the stored data comes from a uniform binary source. Second, we determine the minimum amount of memory required to store the same data. Finally, we bound the computational complexity of message retrieval. We then compare these bounds with two existing associative memory architectures: the celebrated Hopfield neural networks and a neural network architecture introduced more recently by Gripon and Berrou.
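    A minimal sketch of retrieval under the maximum-likelihood principle, assuming the stored messages are kept explicitly and observed bits may be flipped independently, so that ML retrieval reduces to minimum Hamming distance over the non-erased positions; the data and names are illustrative, not the paper's bounds or architecture.

```python
# Maximum-likelihood retrieval sketch: return the stored message most likely
# to have produced the query; erased positions (None) are ignored.

def ml_retrieve(stored, query):
    def mismatches(msg):
        return sum(1 for m, q in zip(msg, query) if q is not None and m != q)
    return min(stored, key=mismatches)

stored = [[0, 1, 1, 0, 1, 0, 0, 1],
          [1, 0, 0, 1, 1, 1, 0, 0]]
# second half erased, one bit flipped in the first half
print(ml_retrieve(stored, [0, 1, 0, 0, None, None, None, None]))
# -> [0, 1, 1, 0, 1, 0, 0, 1]
```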

    Nearly-optimal associative memories based on distributed constant weight codes

    A new family of sparse neural networks achieving nearly optimal performance has recently been introduced. In these networks, messages are stored as cliques in clustered graphs. In this paper, we interpret these networks using the formalism of error-correcting codes. To do so, we introduce two original codes, the thrifty code and the clique code, both sub-families of binary constant-weight codes. We also provide the networks with an enhanced retrieval rule that guarantees answer correctness and improves performance.
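    A minimal sketch of the constant-weight view, assuming the thrifty code maps each symbol to a one-hot block, so that a message of C symbols over an alphabet of size L becomes a binary codeword of length C*L with constant weight C; the sizes are illustrative, and the clique code and the enhanced retrieval rule are not reproduced here.

```python
# Thrifty-code style encoding sketch: one-hot block per symbol,
# yielding constant-weight codewords (illustrative sizes).

C, L = 4, 16  # number of blocks (clusters) and alphabet size per block

def encode(message):
    """Map C symbols in [0, L) to a binary codeword of length C*L, weight C."""
    word = [0] * (C * L)
    for c, s in enumerate(message):
        word[c * L + s] = 1
    return word

def decode(word):
    """Recover the symbols by locating the single 1 in each block."""
    return [word[c * L:(c + 1) * L].index(1) for c in range(C)]

w = encode([3, 7, 1, 12])
assert sum(w) == C               # constant weight
assert decode(w) == [3, 7, 1, 12]
```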

    Integrated digital hardware architectures and sparse-coding neural networks

    Nowadays, artificial neural networks are widely used in many applications such as image and signal processing. Recently, a new neural network model, the GBNN (Gripon-Berrou Neural Network), was proposed for designing associative memories. This model offers a storage capacity exceeding that of Hopfield networks when the information to be stored has a uniform distribution. Methods improving performance for non-uniform distributions, as well as hardware architectures implementing GBNN networks, have been proposed. However, these solutions are very expensive in terms of hardware resources, and the proposed architectures can only implement fixed-size networks and are not scalable. The objectives of this thesis are: (1) to design GBNN-inspired models outperforming the state of the art, (2) to propose architectures cheaper than existing solutions, and (3) to design a generic architecture implementing the proposed models and able to handle networks of various sizes. The results of this work are presented in several parts. First, the concept of clone-based neural networks and its variants is introduced. These networks offer better performance than the state of the art for the same memory cost when a non-uniform distribution of the information to be stored is considered. Optimizations of the hardware architecture are then introduced to significantly reduce the cost in terms of resources. Finally, a generic, scalable architecture able to handle networks of various sizes is proposed.