Difference target propagation
Backpropagation has been the workhorse of recent successes of deep learning, but it relies on infinitesimal effects (partial derivatives) to perform credit assignment. This could become a serious issue as one considers deeper and more non-linear functions; consider the extreme case of non-linearity, where the relation between parameters and cost is actually discrete.
Inspired by the biological implausibility of Backpropagation, this thesis proposes a novel approach,
Target Propagation. The main idea is to compute targets rather than gradients at each layer, such that the feedforward and feedback networks between successive layers form Auto-Encoders.
We show that a linear correction for the imperfection of the Auto-Encoders, called Difference Target Propagation, is very effective in making Target Propagation actually work, leading to results comparable to Backpropagation for deep networks with discrete and continuous units and for Denoising Auto-Encoders, and achieving state of the art for stochastic networks.
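To make the mechanism concrete, here is a minimal NumPy sketch of one Difference Target Propagation update for a small multilayer perceptron, written from the description above. The layer sizes, step sizes, and all names (f1, g2, eta, sigma, ...) are illustrative assumptions, not taken from the thesis.

```python
# Minimal sketch of difference target propagation (DTP) for a 2-hidden-layer
# MLP. All sizes and hyperparameters below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def tanh(x):
    return np.tanh(x)

def dtanh(h):            # derivative of tanh, expressed via its output
    return 1.0 - h ** 2

# Forward weights; V2 is the feedback ("decoder") weight trained to invert
# layer 2, so that the pair (f2, g2) behaves like an auto-encoder.
W1 = rng.normal(0.0, 0.1, (784, 256))
W2 = rng.normal(0.0, 0.1, (256, 256))
W3 = rng.normal(0.0, 0.1, (256, 10))
V2 = rng.normal(0.0, 0.1, (256, 256))

f1 = lambda x: tanh(x @ W1)
f2 = lambda h: tanh(h @ W2)
g2 = lambda h: tanh(h @ V2)   # approximate inverse of f2

def dtp_step(x, t, lr=0.01, eta=0.5, sigma=0.1):
    global W1, W2, W3, V2
    h1 = f1(x)
    h2 = f2(h1)
    y = h2 @ W3                       # linear output layer

    # 1) Top target: a small gradient step on the output loss L = ||y - t||^2 / 2.
    h2_hat = h2 - eta * ((y - t) @ W3.T)

    # 2) Difference correction for the imperfect inverse:
    #    h1_hat = h1 + g2(h2_hat) - g2(h2).
    h1_hat = h1 + g2(h2_hat) - g2(h2)

    # 3) Each layer regresses its activation onto its target (local losses).
    W3 -= lr * h2.T @ (y - t)
    W2 -= lr * h1.T @ ((h2 - h2_hat) * dtanh(h2))
    W1 -= lr * x.T @ ((h1 - h1_hat) * dtanh(h1))

    # 4) Feedback weights: denoising reconstruction loss so that g2 inverts f2.
    h1n = h1 + sigma * rng.normal(size=h1.shape)
    rec = g2(f2(h1n))
    V2 -= lr * f2(h1n).T @ ((rec - h1n) * dtanh(rec))
```

Note that each layer only regresses its own activation onto a locally computed target, and the feedback weights are trained with a denoising reconstruction loss; no global gradient is propagated through the stack.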
In Chapter 1, we introduce several classical learning rules for Deep Neural Networks, including Backpropagation and more biologically plausible learning rules. In Chapters 2 and 3, we introduce Target Propagation, a learning rule more biologically plausible than Backpropagation, and show that it is comparable to Backpropagation in Deep Neural Networks.
A Theoretical Framework for Target Propagation
The success of deep learning, a brain-inspired form of AI, has sparked
interest in understanding how the brain could similarly learn across multiple
layers of neurons. However, the majority of biologically-plausible learning
algorithms have not yet reached the performance of backpropagation (BP), nor
are they built on strong theoretical foundations. Here, we analyze target
propagation (TP), a popular but not yet fully understood alternative to BP,
from the standpoint of mathematical optimization. Our theory shows that TP is
closely related to Gauss-Newton optimization and thus substantially differs
from BP. Furthermore, our analysis reveals a fundamental limitation of
difference target propagation (DTP), a well-known variant of TP, in the
realistic scenario of non-invertible neural networks. We provide a first
solution to this problem through a novel reconstruction loss that improves
feedback weight training, while simultaneously introducing architectural
flexibility by allowing for direct feedback connections from the output to each
hidden layer. Our theory is corroborated by experimental results that show
significant improvements in performance and in the alignment of forward weight
updates with loss gradients, compared to DTP.
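As a rough paraphrase of that contrast (our notation, assuming a squared-error output loss with error e = h_L - target and J_i the Jacobian of the output with respect to layer i's activations): backpropagation moves hidden activations along the transposed Jacobian, while target propagation with well-trained inverses moves them along its (pseudo)inverse, which is a Gauss-Newton direction.

```latex
% Hedged paraphrase of the claimed contrast, not a formula quoted from the
% paper: BP vs. TP target steps for hidden layer i, with
% J_i = \partial h_L / \partial h_i and output error e.
\Delta h_i^{\mathrm{BP}} = -\eta\, J_i^{\top} e
\qquad \text{vs.} \qquad
\Delta h_i^{\mathrm{TP}} \approx -\eta\, J_i^{\dagger} e ,
% where J_i^{\dagger} is the Moore--Penrose pseudoinverse; the right-hand
% update is the Gauss-Newton direction for a squared-error loss.
```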
Characterizing the variation of propagation constants in multicore fibre
We demonstrate a numerical technique that can evaluate the core-to-core
variations in propagation constant in multicore fibre. Using a Markov Chain
Monte Carlo process, we replicate the interference patterns of light that has
coupled between the cores during propagation. We describe the algorithm and
verify its operation by successfully reconstructing target propagation
constants in a fictional fibre. Then we carry out a reconstruction of the
propagation constants in a real fibre containing 37 single-mode cores. We find
that the range of fractional propagation constant variation across the cores is
approximately .
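For intuition only, below is a generic Metropolis-Hastings sketch of this kind of reconstruction: propose perturbations to the propagation constants, score them against the measured interference pattern, and accept or reject. The pairwise-beat forward model and every name in it (pattern, log_likelihood, the step sizes) are stand-in assumptions, not the authors' algorithm.

```python
# Generic Metropolis-Hastings sketch: infer per-core propagation constants
# beta_k by matching a toy interference model to a measured pattern.
import numpy as np

rng = np.random.default_rng(1)

def pattern(betas, z):
    """Toy interference pattern: coherent sum of per-core phasors along z."""
    field = np.exp(1j * np.outer(betas, z)).sum(axis=0)
    return np.abs(field) ** 2

def log_likelihood(betas, z, observed, noise=0.05):
    resid = pattern(betas, z) - observed
    return -0.5 * np.sum(resid ** 2) / noise ** 2

def mcmc(observed, z, n_cores=5, n_steps=20000, step=1e-4):
    betas = np.zeros(n_cores)            # constants relative to core 0
    ll = log_likelihood(betas, z, observed)
    samples = []
    for _ in range(n_steps):
        prop = betas + step * rng.normal(size=n_cores)
        prop[0] = 0.0                    # fix the gauge: only differences matter
        ll_prop = log_likelihood(prop, z, observed)
        if np.log(rng.random()) < ll_prop - ll:   # Metropolis acceptance rule
            betas, ll = prop, ll_prop
        samples.append(betas.copy())
    return np.array(samples)
```

Since only beat frequencies between cores enter the pattern, the constants are identifiable only up to a common offset, which is why the sketch pins core 0 and recovers core-to-core variations, the quantity the abstract sets out to characterize.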