15 research outputs found

    Generalization Error of Linear Neural Networks in Unidentifiable Cases

    No full text
    Statistical asymptotic theory underlies many results in computational and statistical learning theory. It describes the limiting distribution of the maximum likelihood estimator as a normal distribution. However, in layered models such as neural networks, the regularity conditions of the asymptotic theory are not necessarily satisfied. If the true function is realized by a network smaller than the model, the true parameter is not identifiable, because the set of parameters realizing it forms a union of high-dimensional submanifolds. In such cases, the maximum likelihood estimator is not subject to the asymptotic theory, and little is known about the behavior of neural networks in these cases. In this paper, we analyze the expectation of the generalization error of three-layer linear neural networks in asymptotic situations, and elucidate a strange behavior in unidentifiable cases. We show that the expectation of the generalization error in the unidentifiable cases is larger tha..
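    The non-identifiability described in the abstract can be illustrated with a minimal sketch (dimensions and variable names here are illustrative, not taken from the paper): a three-layer linear network computes f(x) = B A x, so the function depends only on the product W = B A, and any invertible matrix acting on the hidden layer yields a distinct parameter pair realizing the same function.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Three-layer linear network: f(x) = B @ A @ x, with hidden width h.
    d_in, h, d_out = 4, 3, 2
    A = rng.normal(size=(h, d_in))
    B = rng.normal(size=(d_out, h))

    # The function depends only on the product W = B @ A.  Any invertible
    # h x h matrix G gives a different parameter pair (G A, B G^{-1})
    # realizing exactly the same function:
    G = rng.normal(size=(h, h))
    A2, B2 = G @ A, B @ np.linalg.inv(G)
    assert np.allclose(B @ A, B2 @ A2)

    # When the true function has smaller rank than the model allows, the
    # set of parameters realizing it is a continuum (a union of
    # submanifolds), which is why the usual regularity conditions for
    # asymptotic normality of the MLE fail.
    W = B @ A
    print(np.linalg.matrix_rank(W))
    ```

    This is only a sketch of the unidentifiability itself; the paper's asymptotic analysis of the generalization error is not reproduced here.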

    Regularized Partial and/or Constrained Redundancy Analysis

    No full text
    Keywords: reduced rank approximations, covariates, linear constraints, least squares estimation, ridge least squares estimation, generalized singular value decomposition (GSVD), G-fold cross validation, bootstrap method.
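    The keywords suggest a ridge-regularized, reduced-rank regression of a response block on a predictor block. A minimal sketch of one common formulation follows; the penalty value and dimensions are illustrative, and the paper's full estimator (with partialling, linear constraints, and the GSVD) is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n, p, q, r = 100, 6, 4, 2   # samples, predictors, responses, target rank

    X = rng.normal(size=(n, p))
    Y = X @ rng.normal(size=(p, q)) + 0.1 * rng.normal(size=(n, q))

    lam = 0.5  # ridge penalty (hypothetical value)

    # Ridge least squares coefficients: (X'X + lam I)^{-1} X'Y
    B_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ Y)

    # Reduced-rank step: truncate the SVD of the fitted values X @ B_ridge
    # to keep only the r leading singular components.
    U, s, Vt = np.linalg.svd(X @ B_ridge, full_matrices=False)
    Y_hat_r = (U[:, :r] * s[:r]) @ Vt[:r]
    ```

    In practice the rank r and the penalty would be chosen by G-fold cross validation, and the bootstrap would be used to assess the stability of the estimates, as the keyword list indicates.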