19 research outputs found
Generalization Error of Linear Neural Networks in Unidentifiable Cases
Statistical asymptotic theory underlies many results in computational and statistical learning theory. It describes the limiting distribution of the maximum likelihood estimator as a normal distribution. However, in layered models such as neural networks, the regularity conditions of the asymptotic theory are not necessarily satisfied. If the true function is realized by a smaller-sized network than the model, the target parameter is not identifiable, because the set of true parameters forms a union of high-dimensional submanifolds. In such cases, the maximum likelihood estimator is not subject to the asymptotic theory, and little is known about its behavior. In this paper, we analyze the expectation of the generalization error of three-layer linear neural networks in asymptotic situations, and elucidate a strange behavior in unidentifiable cases. We show that the expectation of the generalization error in the unidentifiable cases is larger than…
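The unidentifiability described in this abstract can be illustrated with a minimal sketch (all dimensions, names, and the choice of NumPy here are hypothetical, not taken from the paper): when a rank-1 target map is fit by a three-layer linear model y = B·A·x with hidden width 3, infinitely many distinct parameter pairs (A, B) realize exactly the same function.

```python
import numpy as np

rng = np.random.default_rng(0)

# True map: rank 1, realizable by a width-1 hidden layer (smaller than the model).
d_in, d_out, true_rank = 4, 3, 1
U = rng.normal(size=(d_out, true_rank))
V = rng.normal(size=(true_rank, d_in))
W_true = U @ V  # the target linear function

# Over-parameterized three-layer linear model: y = B @ A @ x, hidden width h = 3.
h = 3
A = np.vstack([V, np.zeros((h - true_rank, d_in))])  # pad with zero rows
B = np.hstack([U, np.zeros((d_out, h - true_rank))])  # pad with zero columns
assert np.allclose(B @ A, W_true)

# Unidentifiability: any invertible T gives a different parameterization
# (T @ A, B @ T^{-1}) that realizes the exact same function, so the true
# parameter is a whole manifold, not a single point.
T = rng.normal(size=(h, h))
A2, B2 = T @ A, B @ np.linalg.inv(T)
assert not np.allclose(A, A2)          # different parameters
assert np.allclose(B2 @ A2, W_true)    # identical function
```

Because the zero-padded rows and columns can also be varied freely without changing B·A, the set of true parameters is even larger than this orbit under invertible T, which is why it forms a union of high-dimensional submanifolds and the standard regularity conditions fail.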
Analyzing the Sustainability of India's Current Account Position Following the Reforms of the Early 1990s
Sources of Exchange Rate and Price Level Fluctuations in Two Commodity Exporting Countries: Australia and New Zealand
Mean square prediction error for long-memory processes
Fractional ARIMA(p,d,q) processes, long-range dependence, long-range forecasting, mean square prediction error, misspecification,
On a test of dimensionality in redundancy analysis
Reduced rank regression, PCA of instrumental variables, parallel analysis, permutation tests,