    Artificial Neural Networks

    Artificial neural networks (ANNs) constitute a class of flexible nonlinear models designed to mimic biological neural systems. In this entry, we introduce ANNs using familiar econometric terminology and provide an overview of the ANN modeling approach and its implementation methods.
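
    To make the econometric reading concrete, the following is a minimal sketch, assuming a single-hidden-layer network with logistic activations written as a nonlinear regression; the function name ann_predict and the parameter names are illustrative, not taken from the entry itself.

        # Minimal sketch (illustrative, not the entry's own code): a one-hidden-layer
        # ANN viewed as the nonlinear regression
        #   y = beta_0 + sum_j beta_j * g(x' gamma_j + delta_j),  with g logistic.
        import numpy as np

        def ann_predict(X, gammas, deltas, betas, beta0):
            """Forward pass of a one-hidden-layer network.

            X      : (n, p) matrix of regressors
            gammas : (p, q) input-to-hidden weights
            deltas : (q,)   hidden-unit intercepts
            betas  : (q,)   hidden-to-output weights
            beta0  : scalar output intercept
            """
            hidden = 1.0 / (1.0 + np.exp(-(X @ gammas + deltas)))  # logistic "squasher"
            return beta0 + hidden @ betas

        # Evaluate the model on random regressors (no fitting, just the forward pass).
        rng = np.random.default_rng(0)
        X = rng.normal(size=(100, 3))
        y_hat = ann_predict(X, rng.normal(size=(3, 5)), rng.normal(size=5),
                            rng.normal(size=5), 0.5)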

    A Nonparametric Ensemble Binary Classifier and its Statistical Properties

    In this work, we propose an ensemble of classification trees (CT) and artificial neural networks (ANN). Several statistical properties, including universal consistency and an upper bound on an important parameter of the proposed classifier, are shown. Numerical evidence is also provided using various real-life data sets to assess the performance of the model. Our proposed nonparametric ensemble classifier does not suffer from the 'curse of dimensionality' and can be used in a wide variety of combined feature selection and classification problems. The proposed model performs considerably better than many other state-of-the-art models used in similar settings.
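
    As a rough illustration of how classification trees and a neural network can sit in one ensemble, the sketch below combines the two with scikit-learn's soft-voting classifier; this is an assumed, generic combination scheme, not the authors' actual ensemble algorithm or its consistency construction.

        # Illustrative sketch: soft-voting ensemble of a classification tree and an ANN.
        from sklearn.datasets import make_classification
        from sklearn.ensemble import VotingClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.neural_network import MLPClassifier
        from sklearn.tree import DecisionTreeClassifier

        # Synthetic binary classification data stands in for the real-life data sets.
        X, y = make_classification(n_samples=500, n_features=20, random_state=0)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

        ensemble = VotingClassifier(
            estimators=[
                ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
                ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
            ],
            voting="soft",  # average the predicted class probabilities of both members
        )
        ensemble.fit(X_tr, y_tr)
        print("test accuracy:", ensemble.score(X_te, y_te))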

    Approximation paper, part 1

    In this paper we discuss approximations between neural nets, fuzzy expert systems, fuzzy controllers, and continuous processes.

    Elementary Derivative Tasks and Neural Net Multiscale Analysis of Tasks

    Neural nets are known to be universal approximators. In particular, formal neurons implementing wavelets have been shown to build nets able to approximate any multidimensional task. Such very specialized formal neurons may, however, be difficult to obtain biologically and/or industrially. In this paper we relax the constraint of a strict "Fourier analysis" of tasks. Rather, we use a finite number of more realistic formal neurons implementing elementary tasks such as "window" or "Mexican hat" responses, with adjustable widths. This is shown to provide a reasonably efficient, practical and robust multifrequency analysis. A training algorithm, optimizing the task with respect to the widths of the responses, reveals two distinct training modes. The first mode induces some of the formal neurons to become identical and hence promotes "derivative tasks". The other mode keeps the formal neurons distinct.
    Comment: latex neurondlt.tex, 7 files, 6 figures, 9 pages [SPhT-T01/064], submitted to Phys. Rev.
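
    The two elementary unit responses mentioned above can be sketched as follows; the width parameter sigma and the exact parameterisations are assumptions for illustration and need not match the paper's definitions.

        # Illustrative width-adjustable unit responses: a rectangular "window" and a
        # "Mexican hat" (Ricker wavelet, the negative second derivative of a Gaussian).
        import numpy as np

        def window_response(x, center=0.0, sigma=1.0):
            """1 inside [center - sigma, center + sigma], 0 outside."""
            return (np.abs(x - center) <= sigma).astype(float)

        def mexican_hat_response(x, center=0.0, sigma=1.0):
            """(1 - u**2) * exp(-u**2 / 2) with u = (x - center) / sigma."""
            u = (x - center) / sigma
            return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

        x = np.linspace(-4.0, 4.0, 9)
        print(window_response(x, sigma=2.0))
        print(mexican_hat_response(x, sigma=1.5))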

    Universal Approximation Depth and Errors of Narrow Belief Networks with Discrete Units

    We generalize recent theoretical work on the minimal number of layers of narrow deep belief networks that can approximate any probability distribution on the states of their visible units arbitrarily well. We relax the setting of binary units (Sutskever and Hinton, 2008; Le Roux and Bengio, 2008, 2010; Montúfar and Ay, 2011) to units with arbitrary finite state spaces, and the vanishing approximation error to an arbitrary approximation error tolerance. For example, we show that a $q$-ary deep belief network with $L \geq 2 + \frac{q^{\lceil m-\delta \rceil}-1}{q-1}$ layers of width $n \leq m + \log_q(m) + 1$ for some $m \in \mathbb{N}$ can approximate any probability distribution on $\{0,1,\ldots,q-1\}^n$ without exceeding a Kullback-Leibler divergence of $\delta$. Our analysis covers discrete restricted Boltzmann machines and naïve Bayes models as special cases.
    Comment: 19 pages, 5 figures, 1 table
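
    To get a feel for the size of the quoted bound, the snippet below simply evaluates the stated formulas for one illustrative choice of values (q = 2, m = 4, delta = 0.1); the helper names are ours, and the paper gives the precise conditions under which the bound holds.

        # Evaluate the layer and width bounds quoted in the abstract.
        import math

        def min_layers(q, m, delta):
            """Smallest integer L with L >= 2 + (q**ceil(m - delta) - 1) / (q - 1)."""
            return math.ceil(2 + (q ** math.ceil(m - delta) - 1) / (q - 1))

        def max_width(q, m):
            """Width bound n <= m + log_q(m) + 1."""
            return m + math.log(m, q) + 1

        q, m, delta = 2, 4, 0.1
        print(min_layers(q, m, delta))  # 17 layers satisfy the bound for this (q, m, delta)
        print(max_width(q, m))          # 7.0, i.e. widths up to n = 7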