    A Similarity-based Normative Framework for Bio-plausible Neural Nets

    In the last decade, Artificial Neural Nets (ANNs), rebranded as Deep Learning, have revolutionized the field of Artificial Intelligence. While these neural nets have their origin in analogy with the neural networks in the brain, they are trained in ways that are very different from how real neurons learn. For example, to date there is no satisfactory biologically plausible mechanism for backpropagation, the workhorse for training ANNs. Motivated by this gap, we have looked at alternative normative approaches to neural networks that could give rise to more plausible learning rules. One such approach, which works rather well for representation learning problems, is based on similarity matching or kernel alignment. In this approach, one demands that similar sensory inputs produce similar neural activities. From this rather limited constraint, one can derive interesting neural networks that perform many common unsupervised learning tasks. I will illustrate, in particular, the case of representing continuous manifolds such as spatial information. Here, this approach produces representations very much like place cells in the hippocampus. Consequences of our theory and its relations to some experiments will be discussed. Time permitting, I will touch upon the role of similarity matching in current work on ANNs as well.
    Book of abstract: 4th Belgrade Bioinformatics Conference, June 19-23, 202
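    The abstract does not spell out the objective, but the usual similarity-matching formulation in this line of work asks the neural outputs to reproduce the pairwise similarities of the inputs, i.e. to minimize || X^T X - Y^T Y ||_F^2 over the activities Y. Below is a minimal numerical sketch of that offline objective, assuming this standard formulation; the toy data, variable names, and plain gradient-descent solver are illustrative and not taken from the talk.

        # A minimal sketch (not the authors' implementation) of the offline
        # similarity-matching objective  min_Y || X^T X - Y^T Y ||_F^2,
        # where X (d x T) holds T input vectors and Y (k x T) holds the
        # learned k-dimensional neural activities.
        import numpy as np

        rng = np.random.default_rng(0)
        d, k, T = 20, 3, 300                      # input dim, output dim, samples

        # Toy inputs confined to a k-dimensional subspace of R^d, so that a
        # k-dimensional code can preserve their pairwise similarities closely.
        Z = rng.standard_normal((k, T))           # latent coordinates
        A = rng.standard_normal((d, k))           # fixed embedding into R^d
        X = A @ Z

        Gx = X.T @ X                              # input similarity (Gram) matrix
        Y = 0.01 * rng.standard_normal((k, T))    # neural activities to be learned

        lr = 1e-6
        for _ in range(5000):
            Gy = Y.T @ Y                          # output similarity matrix
            # gradient of ||Gx - Gy||_F^2 with respect to Y is -4 * Y @ (Gx - Gy)
            Y += lr * 4 * Y @ (Gx - Gy)

        mismatch = np.linalg.norm(Gx - Y.T @ Y) / np.linalg.norm(Gx)
        print(f"relative similarity mismatch: {mismatch:.3f}")   # close to 0 at convergence

    In the published similarity-matching literature, the online version of this objective yields networks with Hebbian feedforward and anti-Hebbian lateral updates; the place-cell-like representations mentioned in the abstract presumably involve additional constraints (such as nonnegative activities) that this unconstrained sketch does not capture.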