
    A Theory of Networks for Approximation and Learning

    Learning an input-output mapping from a set of examples, of the type that many neural networks have been constructed to perform, can be regarded as synthesizing an approximation of a multi-dimensional function, that is, solving the problem of hypersurface reconstruction. From this point of view, this form of learning is closely related to classical approximation techniques, such as generalized splines and regularization theory. This paper considers the problems of an exact representation and, in more detail, of the approximation of linear and nonlinear mappings in terms of simpler functions of fewer variables. Kolmogorov's theorem concerning the representation of functions of several variables in terms of functions of one variable turns out to be almost irrelevant in the context of networks for learning. We develop a theoretical framework for approximation based on regularization techniques that leads to a class of three-layer networks that we call Generalized Radial Basis Functions (GRBF), since they are mathematically related to the well-known Radial Basis Functions, mainly used for strict interpolation tasks. GRBF networks are not only equivalent to generalized splines, but are also closely related to pattern recognition methods such as Parzen windows and potential functions and to several neural network algorithms, such as Kanerva's associative memory, backpropagation and Kohonen's topology preserving map. They also have an interesting interpretation in terms of prototypes that are synthesized and optimally combined during the learning stage. The paper introduces several extensions and applications of the technique and discusses intriguing analogies with neurobiological data.
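    The regularization view sketched in this abstract amounts to fitting a weighted sum of radial basis functions to the examples while penalizing non-smooth solutions. The Python sketch below illustrates that idea on a toy 1-D problem; the Gaussian kernel, the placement and number of centers, and the values of sigma and lam are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Minimal sketch of a regularized radial-basis-function approximation in the
# spirit of the GRBF networks described above. Kernel choice, center placement,
# width sigma and regularization strength lam are illustrative assumptions.

def gaussian_design(x, centers, sigma):
    """Design matrix G[i, j] = exp(-(x_i - c_j)^2 / (2 sigma^2))."""
    d2 = (x[:, None] - centers[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_grbf(x, y, centers, sigma=0.3, lam=1e-3):
    """Solve the regularized least-squares system (G^T G + lam I) w = G^T y."""
    G = gaussian_design(x, centers, sigma)
    A = G.T @ G + lam * np.eye(len(centers))
    return np.linalg.solve(A, G.T @ y)

def predict_grbf(x, centers, w, sigma=0.3):
    return gaussian_design(x, centers, sigma) @ w

# Toy example: recover a smooth mapping from noisy samples.
rng = np.random.default_rng(0)
x_train = rng.uniform(0.0, 1.0, 40)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.normal(size=40)
centers = np.linspace(0.0, 1.0, 10)   # fewer "prototypes" than examples
w = fit_grbf(x_train, y_train, centers)
print(predict_grbf(np.array([0.25, 0.75]), centers, w))  # roughly [1.0, -1.0]
```

    The centers here play the role of the synthesized prototypes mentioned in the abstract: each prediction is an optimally weighted combination of radial units placed in the input space.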

    09391 Abstracts Collection -- Algorithms and Complexity for Continuous Problems

    From 20.09.09 to 25.09.09, the Dagstuhl Seminar 09391 "Algorithms and Complexity for Continuous Problems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided where available.

    Solving Multiple Objective Programming Problems Using Feed-Forward Artificial Neural Networks: The Interactive FFANN Procedure

    In this paper, we propose a new interactive procedure for solving multiple objective programming problems. Based upon feed-forward artificial neural networks (FFANNs), the method is called the Interactive FFANN Procedure. In the procedure, the decision maker articulates preference information over representative samples from the nondominated set, either by assigning preference "values" to the sample solutions or by making pairwise comparisons in a fashion similar to that in the Analytic Hierarchy Process. With this information, an FFANN is trained to represent the decision maker's preference structure. Then, using the FFANN, an optimization problem is solved to search for improved solutions. An example is given to illustrate the Interactive FFANN Procedure, and the procedure is compared computationally with the Tchebycheff Method (Steuer and Choo 1983). The computational results show that the Interactive FFANN Procedure produces good solutions and is robust with regard to the neural network architecture.
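    As a rough illustration of the core idea, the sketch below fits a small feed-forward network to preference values that a decision maker has assigned to sample nondominated solutions, then uses the trained network to score new candidates. The sample data, network size, and scoring step are assumptions made for illustration; the paper's actual procedure is interactive and iterates with the decision maker.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: an FFANN as a surrogate for the decision maker's preference structure.
# Solutions are points in a 2-objective space; preference values are assumed
# ratings in [0, 1]. None of these numbers come from the paper.
samples = np.array([[0.9, 0.2], [0.7, 0.5], [0.5, 0.7], [0.2, 0.9]])
preferences = np.array([0.3, 0.8, 0.7, 0.4])

# Train a small feed-forward network on the rated samples.
net = MLPRegressor(hidden_layer_sizes=(8,), solver="lbfgs",
                   max_iter=5000, random_state=0)
net.fit(samples, preferences)

# Use the trained network to rank new candidate solutions.
candidates = np.array([[0.8, 0.4], [0.6, 0.6], [0.4, 0.8]])
scores = net.predict(candidates)
best = candidates[np.argmax(scores)]
print("predicted preferences:", np.round(scores, 3), "-> preferred candidate:", best)
```

    In the full procedure, the scoring step would be embedded in an optimization over the nondominated set rather than a fixed list of candidates, and the decision maker would re-rate solutions between iterations.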