
INFERENCE BASED ON RESAMPLING TECHNIQUES FOR NEURAL NETWORKS IN REGRESSION MODELS

By Michele La Rocca, Francesco Giordano and Cira Perna

Abstract

Let {Yt}, t = 1, ..., T be a time series generated according to the model Yt = f(Xt) + et, t = 1, ..., T, where f is a nonlinear continuous function, Xt = (X1t, X2t, ..., Xdt) is a vector of d non-stochastic explanatory variables defined on a compact subset X of Rd, and {et} are zero-mean random variables with constant variance. The function f in this model can be approximated by a feed-forward neural network. Many authors (Hornik et al., 1989, inter alia) have shown that, under general regularity conditions, a sufficiently complex single-hidden-layer feed-forward network can approximate any member of a broad class of functions to any degree of accuracy. To treat such networks as a statistical technique, however, the parameter estimators must satisfy the usual properties of convergence in probability and in distribution. White (1989), using stochastic approximation, showed that, under some general assumptions, back-propagation (a recursive estimation procedure) yields estimates that are strongly consistent and asymptotically normally distributed. In a previous paper (Giordano and Perna, 1999), using an approach based on the theory of M-estimators (Huber, 1981), we proved the consistency of the neural estimators and derived their limiting Gaussian distribution for regression models both when the error term is iid and when it is fourth-order stationary and phi-mixing. The proposed approach only requires specification of the functional form of the objective function's gradient. This result makes it possible to test hypotheses about the connection strengths. Unfortunately, the complexity of the variance-covariance expression makes the limiting distribution difficult to work with, so the asymptotic result cannot be used for inferential purposes. In this paper we propose resampling techniques to obtain an alternative estimate of the sampling distribution of the neural estimators.
The proposed approach has the advantage that no analytical derivation is required, and it can achieve higher-order accuracy with respect to the asymptotic normal distribution.
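The kind of scheme the abstract describes can be sketched as a residual bootstrap for the regression model Yt = f(Xt) + et: fit a single-hidden-layer network, resample the centred residuals, refit on each pseudo-sample, and use the spread of the refitted parameters as an estimate of their sampling distribution. This is a minimal illustrative sketch, not the authors' implementation; the network size, optimiser, and all function names and hyperparameters below are assumptions.

```python
# Hedged sketch of a residual bootstrap for a single-hidden-layer
# feed-forward network in the model Y_t = f(X_t) + e_t.
# All names and hyperparameters are illustrative assumptions.
import numpy as np

def forward(params, X):
    W1, b1, W2, b2 = params
    H = np.tanh(X @ W1 + b1)            # hidden layer activations
    return H @ W2 + b2                  # linear output unit

def fit(X, y, hidden=3, steps=2000, lr=0.05, seed=0):
    """Least-squares fit by full-batch gradient descent."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W1 = rng.normal(0.0, 0.5, (d, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 0.5, hidden);      b2 = 0.0
    n = len(y)
    for _ in range(steps):
        H = np.tanh(X @ W1 + b1)
        r = (H @ W2 + b2) - y           # residuals of current fit
        gW2 = H.T @ r / n; gb2 = r.mean()
        dH = np.outer(r, W2) * (1.0 - H**2)   # backprop through tanh
        gW1 = X.T @ dH / n; gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return (W1, b1, W2, b2)

def residual_bootstrap(X, y, B=30, seed=1, **fit_kw):
    """Resample centred residuals, refit B times, and return the
    bootstrap draws of the output-layer weights (one possible
    statistic of interest for the connection strengths)."""
    rng = np.random.default_rng(seed)
    params = fit(X, y, **fit_kw)
    fitted = forward(params, X)
    res = y - fitted
    res = res - res.mean()              # centre the residuals
    draws = []
    for _ in range(B):
        ystar = fitted + rng.choice(res, size=len(y), replace=True)
        pstar = fit(X, ystar, **fit_kw)
        draws.append(pstar[2])          # bootstrap copy of W2
    return np.asarray(draws)

# Toy example on synthetic data
rng = np.random.default_rng(42)
X = rng.uniform(-1.0, 1.0, (100, 1))
y = np.sin(2.0 * X[:, 0]) + rng.normal(0.0, 0.1, 100)
draws = residual_bootstrap(X, y, B=30)
se = draws.std(axis=0)                  # bootstrap standard errors
```

The percentiles of `draws` could likewise serve as a bootstrap confidence interval for the connection strengths, which is exactly the kind of inference the intractable asymptotic variance-covariance expression would otherwise block.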

Full text: http://fmwww.bc.edu/cef00/pape... (external link)
