Orthogonal and Idempotent Transformations for Learning Deep Neural Networks
Identity transformations, used as skip-connections in residual networks, directly connect convolutional layers close to the input with those close to the output of a deep neural network, improving information flow and thus easing training. In this paper, we introduce two alternative linear transformations: the orthogonal transformation and the idempotent transformation. By the defining properties of orthogonal and idempotent matrices, the product of multiple orthogonal matrices (or of copies of the same idempotent matrix) is itself a single orthogonal (idempotent) matrix, so linear transformations built from such matrices likewise improve information flow and ease training (see the derivation sketched below). Interestingly, this success essentially stems from feature reuse in forward propagation and gradient reuse in backward propagation: the expressway formed through skip-connections maintains information during the flow and eliminates the vanishing-gradient problem (illustrated by the code sketch below). We empirically demonstrate the effectiveness of the two proposed transformations: they achieve performance similar to identity transformations in single-branch networks, and even superior performance in multi-branch networks.
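
The claim about products rests on two standard linear-algebra facts; a minimal derivation of both (my restatement of the facts the abstract relies on, not taken verbatim from the paper):

```latex
% A product of orthogonal matrices is orthogonal:
% each factor satisfies Q_i^\top Q_i = I, so for Q = Q_L Q_{L-1} \cdots Q_1,
Q^\top Q = Q_1^\top \cdots Q_{L-1}^\top \, Q_L^\top Q_L \, Q_{L-1} \cdots Q_1 = I.

% Repeated application of a single idempotent matrix collapses to one application:
% P^2 = P implies, by induction on L,
P^L = P \, P^{L-1} = P \, P = P^2 = P.
```

Hence a stack of such linear layers acts, as a linear map, like a single well-conditioned layer, which is why the information-flow argument for identity skip-connections carries over.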
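To make the information-flow point concrete, here is a minimal NumPy sketch (my illustration, not the paper's code) showing that a signal passed through a stack of orthogonal layers keeps its norm exactly, while a stack of generic random layers lets the norm drift with depth; the same holds for gradients in backward propagation, since the Jacobian of an orthogonal layer is its transpose, which is again orthogonal:

```python
import numpy as np

rng = np.random.default_rng(0)
d, depth = 64, 50

def random_orthogonal(d):
    # Standard construction: the Q factor of the QR decomposition of a
    # Gaussian matrix is orthogonal (the paper's parameterization may differ).
    q, _ = np.linalg.qr(rng.normal(size=(d, d)))
    return q

x = rng.normal(size=d)
x /= np.linalg.norm(x)  # start from a unit-norm signal

h_orth, h_rand = x.copy(), x.copy()
for _ in range(depth):
    # Orthogonal layer: ||Q h|| == ||h||, so the norm is preserved exactly.
    h_orth = random_orthogonal(d) @ h_orth
    # Variance-scaled Gaussian layer: the norm performs a random walk
    # and drifts away from 1 as depth grows.
    h_rand = (rng.normal(size=(d, d)) / np.sqrt(d)) @ h_rand

print(np.linalg.norm(h_orth))  # 1.0 up to floating-point error
print(np.linalg.norm(h_rand))  # noticeably drifted after 50 layers
```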