Capacity Control of ReLU Neural Networks by Basis-path Norm
Recently, the path norm was proposed as a new capacity measure for neural
networks with the Rectified Linear Unit (ReLU) activation function; it takes
the rescaling invariance of ReLU into account. It has been shown that a
generalization error bound in terms of the path norm explains the empirical
generalization behavior of ReLU neural networks better than bounds based on
other capacity measures. Moreover, optimization algorithms that use the path
norm as a regularization term on the loss, such as Path-SGD, have been shown
to achieve better generalization performance.
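For concreteness, a standard form of the path norm from the prior work
referenced here (our paraphrase of the usual definition, not text from this
abstract) is
\[
  \phi(w) \;=\; \Bigg( \sum_{p \in \mathcal{P}} \prod_{e \in p} w_e^{2} \Bigg)^{1/2},
\]
where $\mathcal{P}$ is the set of all directed paths from an input unit to an
output unit and $w_e$ is the weight on edge $e$. Rescaling invariance follows
immediately: multiplying a hidden ReLU unit's incoming weights by $c > 0$ and
dividing its outgoing weights by $c$ leaves every path product
$\prod_{e \in p} w_e$, and hence $\phi(w)$, unchanged.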
However, the path norm counts the values of all paths, and hence a capacity
measure based on it can be unduly influenced by dependencies among different
paths. It is also known that the value of every path in a ReLU network can be
represented, using multiplication and division operations, in terms of a small
group of linearly independent basis paths, which indicates that the
generalization behavior of the network depends on only a few basis paths.
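As a toy illustration of this representation (our own example, not drawn from
the abstract): consider a network with two inputs, one hidden ReLU unit, and
two outputs, with input weights $a_1, a_2$ and output weights $b_1, b_2$. The
four path values $p_{ij} = a_i b_j$ satisfy $p_{11}\,p_{22} = p_{12}\,p_{21}$,
so $p_{22} = p_{12}\,p_{21} / p_{11}$: three basis paths determine the fourth
using only multiplication and division.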
Motivated by this, we propose a new norm, the \emph{Basis-path Norm}, defined
on a group of linearly independent basis paths, to measure the capacity of
neural networks more accurately. We establish a generalization error bound
based on the basis-path norm and show, through extensive experiments, that it
explains the generalization behavior of ReLU networks more accurately than
previous capacity measures. In addition, we develop optimization algorithms
that minimize the empirical risk regularized by the basis-path norm. Our
experiments on benchmark datasets demonstrate that the proposed regularization
method achieves clearly better performance on the test set than previous
regularization approaches.
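To make the regularization scheme concrete, here is a minimal NumPy sketch for
a two-layer fully connected ReLU network. The abstract does not specify how
the basis paths are chosen, so this sketch regularizes the plain $\ell_2$ path
norm, the quantity the basis-path norm refines; the names
(path_norm_two_layer, W1, W2, lam) are our own and purely illustrative.

import numpy as np

def path_norm_two_layer(W1, W2):
    # W1: (hidden, inputs), W2: (outputs, hidden).
    # The squared path norm sum_{i,j,m} (W1[j,i] * W2[m,j])^2 factorizes
    # over hidden units j, so no explicit path enumeration is needed.
    in_sq = (W1 ** 2).sum(axis=1)   # per hidden unit: sum of squared incoming weights
    out_sq = (W2 ** 2).sum(axis=0)  # per hidden unit: sum of squared outgoing weights
    return float(np.sqrt(in_sq @ out_sq))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8))
W2 = rng.normal(size=(4, 16))

# Rescaling invariance: scaling hidden unit j's incoming weights by c[j]
# and its outgoing weights by 1/c[j] leaves every path value unchanged.
c = rng.uniform(0.5, 2.0, size=16)
assert np.isclose(path_norm_two_layer(W1, W2),
                  path_norm_two_layer(W1 * c[:, None], W2 / c[None, :]))

# A regularized objective in the spirit of the abstract (empirical_risk is
# whatever task loss is being minimized; lam is the regularization weight):
# loss = empirical_risk(W1, W2) + lam * path_norm_two_layer(W1, W2)

The per-hidden-unit factorization keeps the cost proportional to the number of
weights, even though the number of paths grows multiplicatively with depth.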