Statistical Intervals for Neural Networks and Their Relationship with Generalized Linear Models

Abstract

Neural networks have seen widespread adoption and have become integral to cutting-edge domains such as computer vision and natural language processing. However, characterizing the statistical properties of neural networks has remained a persistent challenge, with few satisfactory results. In my research, I explored statistical intervals for neural networks, specifically confidence intervals and tolerance intervals. I employed variance estimation methods, including direct estimation and resampling, to construct confidence intervals for neural network predictions and evaluated their performance in the presence of outliers. Notably, when outliers were present, the resampling method with infinitesimal jackknife estimation yielded confidence intervals whose coverage closely matched the nominal level. Treating neural networks as nonparametric regression models, I also constructed tolerance intervals and observed that their coverage approached the nominal level. Additionally, I conducted a comparative study between neural networks and generalized linear models; the results indicated that neural networks did not outperform linear models in low-dimensional settings, whereas in high-dimensional or multitask classification problems they performed significantly better. Lastly, I proposed further research on advanced neural network techniques and on the statistical properties of other deep learning methods. These future studies hold the potential to deepen our understanding of neural networks and enhance their statistical properties.
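For concreteness, the following is a minimal sketch of the resampling idea described above: refit the network on bootstrap resamples and form a percentile confidence interval for a prediction. The synthetic data, network size, and 95% nominal level are illustrative assumptions, and the sketch uses a plain bootstrap percentile interval rather than the infinitesimal jackknife estimator studied in the thesis.

# Minimal sketch (not the thesis code): bootstrap percentile confidence
# interval for a neural network prediction at a single test point.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic regression data (assumed for illustration only).
X = rng.uniform(-2, 2, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.2, size=200)

x_new = np.array([[0.5]])   # point at which we want an interval
n_boot = 100                # number of bootstrap resamples
preds = np.empty(n_boot)

for b in range(n_boot):
    idx = rng.integers(0, len(X), size=len(X))   # resample rows with replacement
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000, random_state=b)
    net.fit(X[idx], y[idx])
    preds[b] = net.predict(x_new)[0]

# 95% percentile interval from the resampled predictions.
lower, upper = np.percentile(preds, [2.5, 97.5])
print(f"95% bootstrap CI for f(0.5): [{lower:.3f}, {upper:.3f}]")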

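The comparison with generalized linear models can likewise be illustrated with a small sketch: fit a logistic regression (a GLM) and a small neural network on synthetic low- and high-dimensional classification data and compare test accuracy. The dataset sizes, dimensions, and architectures below are assumptions for illustration and do not reproduce the thesis experiments.

# Minimal sketch (illustrative only): GLM vs. neural network on synthetic
# low- and high-dimensional classification problems.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

for n_features in (5, 200):   # low- vs. high-dimensional setting
    X, y = make_classification(n_samples=2000, n_features=n_features,
                               n_informative=min(n_features, 20),
                               n_redundant=0, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    glm = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
    nn = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=1000,
                       random_state=0).fit(X_tr, y_tr)

    print(f"p={n_features}: GLM acc={glm.score(X_te, y_te):.3f}, "
          f"NN acc={nn.score(X_te, y_te):.3f}")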