LOSSGRAD: automatic learning rate in gradient descent
In this paper, we propose a simple, fast, and easy-to-implement algorithm,
LOSSGRAD (locally optimal step-size in gradient descent), which automatically
adapts the step-size in gradient descent during neural network training.
Given a function $f$, a point $x$, and the gradient $\nabla f(x)$, we aim
to find the step-size $h$ which is (locally) optimal, i.e. satisfies
$$h = \arg\min_{t \geq 0} f(x - t\,\nabla f(x)).$$
Making use of a quadratic approximation, we show that the algorithm satisfies
the above condition. We
experimentally show that our method is insensitive to the choice of initial
learning rate while achieving results comparable to other methods.
Comment: TFML 2019
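
The core of such a scheme is a one-dimensional quadratic fit along the descent direction. Below is a minimal NumPy sketch of that idea, a simplified reading of the abstract rather than the authors' LOSSGRAD implementation (the name quadratic_step_size and the toy objective are illustrative): probe the loss at one trial step, fit a quadratic through the available values, and take its minimiser as the next step-size.

import numpy as np

def quadratic_step_size(f, x, grad, h):
    # Fit q(t) = f(x) - |g|^2 t + a t^2 through f(x), the directional
    # derivative -|g|^2 at t = 0, and the probe value f(x - h*grad);
    # return the minimiser of q as the new step-size.
    g2 = np.dot(grad, grad)              # squared gradient norm
    f0 = f(x)
    f_probe = f(x - h * grad)            # probe along the descent direction
    a = (f_probe - f0 + h * g2) / h**2   # curvature of the 1-D quadratic fit
    if a <= 0:                           # no positive curvature: grow the step
        return 2.0 * h
    return g2 / (2.0 * a)                # argmin of the quadratic model

# Toy usage: minimise a quadratic bowl with the adaptive step-size.
f = lambda x: 0.5 * np.dot(x, x)
grad_f = lambda x: x
x, h = np.array([3.0, -2.0]), 0.1
for _ in range(20):
    g = grad_f(x)
    h = quadratic_step_size(f, x, g, h)
    x = x - h * g
print(x)   # converges to the minimiser [0, 0]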
Visualising Basins of Attraction for the Cross-Entropy and the Squared Error Neural Network Loss Functions
Quantification of the stationary points and the associated basins of
attraction of neural network loss surfaces is an important step towards a
better understanding of neural network loss surfaces at large. This work
proposes a novel method to visualise basins of attraction together with the
associated stationary points via gradient-based random sampling. The proposed
technique is used to perform an empirical study of the loss surfaces generated
by two different error metrics: quadratic loss and entropic loss. The empirical
observations confirm the theoretical hypothesis regarding the nature of neural
network attraction basins. Entropic loss is shown to exhibit stronger gradients
and fewer stationary points than quadratic loss, indicating that entropic loss
has a more searchable landscape. Quadratic loss is shown to be more resilient
to overfitting than entropic loss. Both losses are shown to exhibit local
minima, but the number of local minima is shown to decrease with an increase in
dimensionality. Thus, the proposed visualisation technique successfully
captures the local minima properties exhibited by the neural network loss
surfaces, and can be used for the purpose of fitness landscape analysis of
neural networks.
Comment: Preprint submitted to the Neural Networks journal
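
As a rough illustration of gradient-based random sampling (a minimal sketch under one reading of the abstract, not the paper's implementation; sample_basins and the double-well toy loss are hypothetical), the snippet below starts many gradient-descent walks from random points and records the loss and gradient norm along each walk: stationary points show up as low-gradient endpoints, and the walks sharing an endpoint delineate one basin of attraction.

import numpy as np

def sample_basins(loss, grad, dim, n_walks=50, n_steps=200, lr=0.05,
                  init_scale=1.0, seed=0):
    # Start gradient-descent walks from random points and record
    # (loss, gradient-norm) pairs along each walk.
    rng = np.random.default_rng(seed)
    trajectories = []
    for _ in range(n_walks):
        x = rng.normal(scale=init_scale, size=dim)   # random start point
        path = []
        for _ in range(n_steps):
            g = grad(x)
            path.append((loss(x), np.linalg.norm(g)))
            x = x - lr * g                           # plain gradient descent
        trajectories.append(path)
    return trajectories

# Toy usage on a 2-D double-well surface with two minima.
loss = lambda x: (x[0]**2 - 1.0)**2 + x[1]**2
grad = lambda x: np.array([4.0 * x[0] * (x[0]**2 - 1.0), 2.0 * x[1]])
trajs = sample_basins(loss, grad, dim=2)
final_losses = [t[-1][0] for t in trajs]
print(min(final_losses), max(final_losses))  # walks settle near the two wells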
Layerwise Linear Mode Connectivity
In the federated setup, one aggregates separate local models multiple times
during training to obtain a stronger global model; most often, aggregation is
a simple averaging of the parameters. Understanding when
and why averaging works in a non-convex setup, such as federated deep learning,
is an open challenge that hinders obtaining highly performant global models. On
i.i.d. datasets, federated deep learning with frequent averaging is successful.
The common understanding, however, is that during independent training the
models drift away from each other, and thus averaging may no longer work
after many local parameter updates. The problem can be seen from the
perspective of the loss surface: for points on a non-convex surface the average
can become arbitrarily bad. The assumption of local convexity, often used to
explain the success of federated averaging, contradicts the empirical
evidence showing that high loss barriers exist between models from the very
beginning of learning, even when training on the same data. Based on the
observation that the learning process evolves differently in different layers,
we investigate the barrier between models in a layerwise fashion. Our
conjecture is that the barriers preventing successful federated training are
caused by a particular layer or group of layers.
Comment: HLD 2023: 1st Workshop on High-dimensional Learning Dynamics, ICML 2023, Hawaii, US
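
One way to make the layerwise probe concrete: interpolate a single layer between two models while holding the remaining layers fixed, and measure how far the loss rises above the linear interpolation of the endpoint losses. The sketch below is a minimal NumPy rendering of that idea under stated assumptions, not the paper's protocol (the two-layer toy network and the name layerwise_barrier are illustrative).

import numpy as np

def loss(params, X, y):
    # Mean squared error of a tiny two-layer tanh network.
    h = np.tanh(X @ params["W1"])
    pred = h @ params["W2"]
    return float(np.mean((pred - y) ** 2))

def layerwise_barrier(p1, p2, layer, X, y, n_alphas=11):
    # Barrier when interpolating only `layer` between models p1 and p2,
    # keeping all other layers fixed at p1's weights. Barrier = max over
    # the path of loss minus the linear interpolation of endpoint losses,
    # as in linear mode connectivity studies.
    alphas = np.linspace(0.0, 1.0, n_alphas)
    end0 = loss(p1, X, y)
    mixed_end = dict(p1)
    mixed_end[layer] = p2[layer]
    end1 = loss(mixed_end, X, y)
    barrier = -np.inf
    for a in alphas:
        mixed = dict(p1)
        mixed[layer] = (1 - a) * p1[layer] + a * p2[layer]
        barrier = max(barrier, loss(mixed, X, y) - ((1 - a) * end0 + a * end1))
    return barrier

# Toy usage: two independently initialised models on random data.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(64, 4)), rng.normal(size=(64, 1))
p1 = {"W1": rng.normal(size=(4, 8)), "W2": rng.normal(size=(8, 1))}
p2 = {"W1": rng.normal(size=(4, 8)), "W2": rng.normal(size=(8, 1))}
for layer in ("W1", "W2"):
    print(layer, layerwise_barrier(p1, p2, layer, X, y))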