Exploiting uncertainty during learning by equilibrium propagation in neural networks

Abstract

8 pages, 5 figures

Equilibrium Propagation (EP) is a supervised learning algorithm that trains network parameters using local neuronal activity. This is in stark contrast to backpropagation, where updating the parameters of the network requires significant data movement. Avoiding data movement makes EP particularly compelling as a learning framework for energy-efficient training on neuromorphic systems. In this work, we assess the ability of EP to learn on hardware that contains physical uncertainties. This is particularly important for researchers concerned with hardware implementations of self-learning systems that utilize EP. Our results demonstrate that deep, multi-layer neural network architectures can be trained successfully using EP in the presence of finite uncertainties, up to a critical limit. This limit is independent of the training dataset and can be scaled through sampling the network according to the central limit theorem. Additionally, we demonstrate improved model convergence and performance for finite levels of uncertainty on the MNIST, KMNIST and FashionMNIST datasets. Optimal performance is found for networks trained with uncertainties close to the critical limit. Our research supports future work to build self-learning hardware in situ with EP.
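To make the two-phase, local-update structure of EP concrete, below is a minimal sketch (not the authors' code) of discrete-time Equilibrium Propagation on a single-hidden-layer network. Gaussian noise on the pre-activations stands in for hardware uncertainty, and updates can be averaged over several noisy runs, in the spirit of the central-limit-theorem scaling mentioned in the abstract. All function names, the hard-sigmoid activation, and the hyperparameters are illustrative assumptions.

```python
# Hedged sketch of two-phase Equilibrium Propagation with additive activation noise.
import numpy as np

rng = np.random.default_rng(0)

def rho(s):                        # hard-sigmoid activation, common in EP papers
    return np.clip(s, 0.0, 1.0)

def relax(x, W1, b1, W2, b2, y=None, beta=0.0, steps=30, sigma=0.0):
    """Iterate the network to a fixed point.

    beta = 0 -> free phase; beta > 0 with a target y -> nudged phase.
    sigma models device uncertainty as Gaussian noise on pre-activations.
    """
    h = np.zeros((x.shape[0], W1.shape[1]))
    o = np.zeros((x.shape[0], W2.shape[1]))
    for _ in range(steps):
        h = rho(x @ W1 + o @ W2.T + b1 + sigma * rng.standard_normal(h.shape))
        top = h @ W2 + b2 + sigma * rng.standard_normal(o.shape)
        if beta > 0.0 and y is not None:
            top += beta * (y - o)          # weak clamping toward the target
        o = rho(top)
    return h, o

def ep_update(x, y, W1, b1, W2, b2, beta=0.5, lr=0.05, sigma=0.0, n_samples=1):
    """One EP parameter update, averaged over n_samples noisy runs."""
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    for _ in range(n_samples):
        h0, o0 = relax(x, W1, b1, W2, b2, sigma=sigma)                  # free phase
        hb, ob = relax(x, W1, b1, W2, b2, y=y, beta=beta, sigma=sigma)  # nudged phase
        # Contrastive updates use only local pre-/post-synaptic activities.
        dW1 += (x.T @ hb - x.T @ h0) / beta
        db1 += (hb - h0).sum(axis=0) / beta
        dW2 += (hb.T @ ob - h0.T @ o0) / beta
        db2 += (ob - o0).sum(axis=0) / beta
    scale = lr / (n_samples * x.shape[0])
    return W1 + scale * dW1, b1 + scale * db1, W2 + scale * dW2, b2 + scale * db2
```

The key design point the sketch illustrates is that each weight change depends only on the activities of the two neurons it connects, measured in the free and the nudged phase; averaging over `n_samples` noisy runs reduces the variance of that estimate roughly as 1/sqrt(n_samples), which is the scaling behaviour the abstract refers to.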

This paper was published in HAL-CEA.
