4 research outputs found

    Encoding A Priori Information In Feedforward Networks

    No full text
    Theoretical results and practical experience indicate that feedforward networks are very good at approximating a wide class of functional relationships. Training networks to approximate functions takes place by using exemplars to find interconnect weights that maximize some goodness-of-fit criterion. Given finite data sets, it can be important in the training process to take advantage of any a priori information regarding the underlying functional relationship to improve the approximation and the ability of the network to generalize. This paper describes methods for incorporating a priori information of this type into feedforward networks. Two general approaches, one based upon architectural constraints and a second upon connection weight constraints, form the basis of the methods presented. These two approaches can be used either alone or in combination to help solve specific training problems. Several examples covering a variety of types of a priori information, including information a..
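The abstract does not spell out the constraint mechanics, but one common form of connection-weight constraint encodes a priori monotonicity through sign restrictions. A minimal sketch (a hypothetical network, not the paper's construction): a one-hidden-layer tanh network whose input and output weights are kept nonnegative is necessarily nondecreasing in its input, so the constraint bakes the prior knowledge directly into the weight space.

```python
import numpy as np

rng = np.random.default_rng(1)
h = 6  # hidden units (illustrative choice)

# Weight constraint: nonnegative input weights w and output weights v.
# Since tanh is increasing, each unit tanh(w*x + b) is then nondecreasing
# in x, and a nonnegative combination of nondecreasing units is too.
w = np.abs(rng.normal(size=h))  # constrained input weights
b = rng.normal(size=h)          # biases are unconstrained
v = np.abs(rng.normal(size=h))  # constrained output weights

def net(x):
    """One-hidden-layer feedforward network, monotone by construction."""
    return float(v @ np.tanh(w * x + b))

# Check monotonicity numerically on a grid.
xs = np.linspace(-3.0, 3.0, 201)
ys = np.array([net(x) for x in xs])
diffs = np.diff(ys)
print(float(diffs.min()))
```

Training under such a constraint typically alternates gradient steps with a projection (e.g. clipping `w` and `v` back to the nonnegative orthant), which is one simple way to realize the "connection weight constraints" approach the abstract mentions.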

    Symmetry Constraints for Feedforward Network Models of Gradient Systems

    No full text
    This paper concerns the use of a priori information on the symmetry of cross differentials available for problems that seek to approximate the gradient of a differentiable function. We derive the appropriate network constraints to incorporate the symmetry information, show that the constraints do not reduce the universal approximation capabilities of feedforward networks, and demonstrate how the constraints can improve generalization. Keywords: A priori information, constrained training, feedforward networks. 1. Introduction: Across a variety of fields researchers need to model systems of nonlinear differential equations, say Ψ(·), derived as the gradient of some unknown nonlinear function, f(·). For example, geological detection of mass anomalies depends on the gradient of a gravitational potential function. In this case one can only observe the gradient data, i.e. the data from Ψ(·), from which the gravitational potential function, f(·), must..
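One way to see why symmetry of cross differentials is a natural constraint (a sketch under assumed notation, not the paper's derivation): if the approximator Ψ is parameterized as the exact gradient of a scalar feedforward network f, then the Jacobian of Ψ has the form WᵀDW with D diagonal, which is symmetric by construction, matching the symmetric cross-derivatives of any twice-differentiable potential.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 8  # input dimension and hidden units (illustrative)
W = rng.normal(size=(h, d))
b = rng.normal(size=h)
v = rng.normal(size=h)

def f(x):
    """Scalar potential: one-hidden-layer feedforward network."""
    return v @ np.tanh(W @ x + b)

def psi(x):
    """Gradient network Psi(x) = grad f(x) = W^T (v * (1 - tanh^2))."""
    s = np.tanh(W @ x + b)
    return W.T @ (v * (1.0 - s**2))

def jacobian(x, eps=1e-5):
    """Central finite-difference Jacobian of Psi."""
    J = np.zeros((d, d))
    for j in range(d):
        e = np.zeros(d)
        e[j] = eps
        J[:, j] = (psi(x + e) - psi(x - e)) / (2 * eps)
    return J

x0 = rng.normal(size=d)
J = jacobian(x0)
print(float(np.max(np.abs(J - J.T))))  # asymmetry: finite-difference noise only
```

The same check fails for a generic vector-valued network with independent output heads, which is exactly the gap the symmetry constraints are meant to close.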

    FEEDFORWARD NEURAL NETWORK ESTIMATION OF A CROP YIELD RESPONSE FUNCTION

    No full text
    Feedforward networks have powerful approximation capabilities without the "explosion of parameters" problem faced by Fourier and polynomial expansions. This paper first introduces feedforward networks and describes their approximation capabilities, then addresses several practical issues faced by applications of feedforward networks. First, we demonstrate that networks can provide a reasonable estimate of a Bermudagrass hay fertilizer response function with the relatively sparse data often available from experiments. Second, we demonstrate that the estimated network with a practical number of hidden units provides reasonable flexibility. Third, we show how one can constrain feedforward networks to satisfy a priori information without losing their flexible functional form characteristic. Keywords: Biological process models, Feedforward networks, Production function, Neural networks, Research Methods/Statistical Methods
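To make the sparse-data setting concrete, here is a minimal sketch (hypothetical data and a random-feature shortcut, not the authors' estimation procedure): fixing random hidden weights makes the output layer linear in its parameters, so a small one-hidden-layer network can be fit to a handful of experimental points by least squares.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical sparse experiment: 8 input rates and responses from an
# assumed illustrative curve y = 6 + 4x - x^2 (not data from the paper).
x = np.linspace(0.0, 4.0, 8)
y = 6.0 + 4.0 * x - x**2

# One-hidden-layer network with fixed random hidden weights; the output
# weights are then the solution of an ordinary least-squares problem.
h = 20
W = rng.normal(size=h)
b = rng.normal(size=h)
H = np.tanh(np.outer(x, W) + b)  # hidden activations, shape (8, h)
coef, *_ = np.linalg.lstsq(H, y, rcond=None)

resid = H @ coef - y
print(float(np.max(np.abs(resid))))
```

A full application would instead train all weights jointly and, as the abstract notes, impose constraints so the fitted response respects a priori information (e.g. diminishing marginal returns) rather than relying on the data alone.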
