
The role of weight normalization in competitive learning

Abstract

The effect of different kinds of weight normalization on the outcome of a simple competitive learning rule is analyzed. It is shown that there are important differences in the representation formed depending on whether the constraint is enforced by dividing each weight by the same amount ("divisive enforcement") or subtracting a fixed amount from each weight ("subtractive enforcement"). For the divisive cases, weight vectors spread out over the space so as to evenly represent "typical" inputs, whereas for the subtractive cases, the weight vectors tend to the axes of the space, so as to represent "extreme" inputs. The consequences of these differences are examined.
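As a rough illustration of the two enforcement schemes (not the paper's exact formulation), the sketch below applies a simple winner-take-all update and then restores a fixed weight sum either by rescaling or by an equal subtraction. The function name `competitive_step`, the learning rate, and the non-negativity guard are illustrative assumptions.

```python
import numpy as np

def competitive_step(W, x, lr=0.1, mode="divisive", total=1.0):
    """One competitive-learning update with weight normalization.

    W    : (units, dims) non-negative weights, each row summing to `total`
    x    : (dims,) non-negative input vector
    mode : "divisive"    -- divide the winner's weights by a common factor
           "subtractive" -- subtract the same amount from each weight
    """
    winner = np.argmax(W @ x)                # unit with largest response wins
    w = W[winner] + lr * x                   # move the winner toward the input
    if mode == "divisive":
        w = w * (total / w.sum())            # rescale so the sum is `total`
    else:
        w = w - (w.sum() - total) / len(w)   # equal subtraction restores the sum
        w = np.maximum(w, 0.0)               # practical guard: keep weights >= 0
    W[winner] = w
    return W

# toy run: 3 units, 2-D non-negative inputs
rng = np.random.default_rng(0)
W = rng.random((3, 2))
W /= W.sum(axis=1, keepdims=True)            # start on the constraint surface
for _ in range(1000):
    W = competitive_step(W, rng.random(2), mode="subtractive")
print(W)  # under subtractive enforcement, rows tend toward the axes
```

Running the toy loop with `mode="divisive"` instead leaves the weight vectors spread across the interior of the space, matching the qualitative contrast the abstract describes.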
