
    A Winner-Take-All Approach to Emotional Neural Networks with Universal Approximation Property

    Here, we propose a brain-inspired winner-take-all emotional neural network (WTAENN) and prove the universal approximation property for the novel architecture. WTAENN is a single-layered feedforward neural network that benefits from the excitatory, inhibitory, and expandatory neural connections as well as the winner-take-all (WTA) competitions in the human brain's nervous system. The WTA competition increases the information capacity of the model without adding hidden neurons. The universal approximation capability of the proposed architecture is illustrated on two example functions, trained by a genetic algorithm, and then applied to several recent and benchmark problems in curve fitting, pattern recognition, classification, and prediction. In particular, it is tested on twelve UCI classification datasets, a facial recognition problem, three real-world prediction problems (two chaotic time series of geomagnetic activity indices and wind farm power generation data), two synthetic case studies with constant and nonconstant noise variance, as well as k-selector and linear programming problems. Results indicate the general applicability and often superiority of the approach in terms of higher accuracy and lower model complexity, especially where low computational complexity is imperative. Comment: Information Sciences (2015), Elsevier.
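
    As a rough illustration of the winner-take-all idea described above, the sketch below (Python/NumPy) evaluates a single-layer feedforward network and then suppresses all but the k strongest output responses. The weight shapes, the gating rule, and the omission of the genetic-algorithm training loop are simplifying assumptions for illustration, not the WTAENN implementation from the paper.

        import numpy as np

        def wta_forward(x, W, b, k=1):
            """Affine responses of a single-layer network, gated by a winner-take-all step."""
            z = W @ x + b                    # excitatory/inhibitory connections act here
            winners = np.argsort(z)[-k:]     # indices of the k strongest responses
            out = np.zeros_like(z)
            out[winners] = z[winners]        # all other units are suppressed to zero
            return out

        rng = np.random.default_rng(0)
        W = rng.normal(size=(8, 4))          # 8 output units, 4 inputs, no hidden layer
        b = rng.normal(size=8)
        x = rng.normal(size=4)
        print(wta_forward(x, W, b, k=2))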

    Approximation of discontinuous signals by sampling Kantorovich series

    In this paper, the behavior of the sampling Kantorovich operators is studied when discontinuous signals are considered in the above sampling series. Moreover, the rate of approximation for the family of the above operators is estimated when uniformly continuous and bounded signals are considered. Further, the problem of linear prediction from past sample values is analyzed. Finally, the role of duration-limited kernels in these approximation processes is treated, and several examples are provided. Comment: 22 pages.
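
    For orientation, the univariate sampling Kantorovich series is commonly written in the following form (a standard expression from the literature, not quoted from this paper), where $\chi$ is the kernel and $w>0$ the sampling rate:

        $$ (S_w^{\chi} f)(x) \;=\; \sum_{k\in\mathbb{Z}} \chi(wx-k)\,\Big[\, w\int_{k/w}^{(k+1)/w} f(u)\,du \Big], \qquad x\in\mathbb{R}. $$

    Each sample value of the classical generalized sampling series is thus replaced by a local mean of $f$, which is what makes these operators well suited to discontinuous and not necessarily continuous signals.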

    Approximation by Exponential Type Neural Network Operators

    In the present article, we introduce and study the behaviour of a new family of exponential type neural network operators activated by sigmoidal functions. We establish pointwise and uniform approximation theorems for these NN (neural network) operators in $C[a,b]$. Further, quantitative estimates of the order of approximation for the proposed NN operators in $C^{(N)}[a,b]$ are established in terms of the modulus of continuity. We also analyze the behaviour of the family of exponential type quasi-interpolation operators in $C(\mathbb{R}^{+})$. Finally, we discuss the multivariate extension of these NN operators and give some examples of sigmoidal functions.

    On Sharpness of Error Bounds for Single Hidden Layer Feedforward Neural Networks

    A new non-linear variant of a quantitative extension of the uniform boundedness principle is used to show sharpness of error bounds for univariate approximation by sums of sigmoid and ReLU functions. Single hidden layer feedforward neural networks with one input node perform such operations. Errors of best approximation can be expressed using moduli of smoothness of the function to be approximated (i.e., to be learned). In this context, the quantitative extension of the uniform boundedness principle indeed allows the construction of counterexamples showing that the approximation rates are best possible: approximation errors do not belong to the little-o class of the given bounds. By choosing piecewise linear activation functions, the discussed problem becomes free knot spline approximation. Results of the present paper also hold for non-polynomial (and not piecewise defined) activation functions like the inverse tangent. Based on the Vapnik-Chervonenkis dimension, first results are shown for the logistic function. Comment: preprint of paper accepted by Results in Mathematics.
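
    The connection to free knot spline approximation can be made concrete numerically. The Python sketch below (an illustration only, not the paper's construction) fits a sum of N ReLU units with fixed equispaced knots to a smooth target by least squares; with piecewise linear activations this is exactly fixed-knot linear spline approximation, a restricted version of the free-knot problem discussed above.

        import numpy as np

        def relu(t):
            return np.maximum(t, 0.0)

        def sup_error_relu_sum(f, N, grid_size=400):
            """Uniform error (on a grid) of a least-squares fit by an affine part plus N ReLU units."""
            x = np.linspace(0.0, 1.0, grid_size)
            knots = np.linspace(0.0, 1.0, N, endpoint=False)   # fixed, equispaced breakpoints
            A = np.column_stack([np.ones_like(x), x] + [relu(x - b) for b in knots])
            coef, *_ = np.linalg.lstsq(A, f(x), rcond=None)
            return np.max(np.abs(A @ coef - f(x)))

        for N in (4, 8, 16, 32):
            err = sup_error_relu_sum(lambda t: np.sin(2 * np.pi * t), N)
            print(f"N = {N:2d}  sup-error ~ {err:.2e}")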

    Rate of approximation for multivariate sampling Kantorovich operators on some function spaces

    In this paper, the problem of the order of approximation for the multivariate sampling Kantorovich operators is studied. The cases of uniform approximation for uniformly continuous and bounded functions/signals belonging to Lipschitz classes and of modular approximation for functions in Orlicz spaces are considered. In the latter context, Lipschitz classes of Zygmund type which take into account the modular functional involved are introduced. Applications to $L^p(\mathbb{R}^n)$, interpolation, and exponential spaces can be deduced from the general theory formulated in the setting of Orlicz spaces. The special cases of multivariate sampling Kantorovich operators based on kernels of product type and constructed by means of Fejér and B-spline kernels are studied in detail. Comment: 22 pages.

    Approximation results in Orlicz spaces for sequences of Kantorovich max-product neural network operators

    In this paper we study the theory of the so-called Kantorovich max-product neural network operators in the setting of Orlicz spaces $L^{\varphi}$. The results proved here extend those given by Costarelli and Vinti in Results Math., 2016, to a more general context. The main advantage of studying neural network type operators in Orlicz spaces lies in the possibility of approximating not necessarily continuous functions (data) belonging to different function spaces by a single general approach. Further, in order to derive quantitative estimates in this context, we introduce a suitable K-functional in $L^{\varphi}$ and use it to provide an upper bound for the approximation error of the above operators. Finally, examples of sigmoidal activation functions are considered and studied in detail. Comment: 17 pages.
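
    For readers unfamiliar with the max-product construction, the univariate Kantorovich max-product operators studied by Costarelli and Vinti are usually written, up to the exact index range, in a form like the following (a sketch of the standard definition, not quoted from this paper), where $\phi_\sigma$ is the density function generated by the sigmoidal $\sigma$ and $\bigvee$ denotes the maximum over $k$:

        $$ K_n^{(M)}(f,x) \;=\; \frac{\displaystyle\bigvee_{k} \phi_\sigma(nx-k)\, \Big[\, n\int_{k/n}^{(k+1)/n} f(u)\,du \Big]}{\displaystyle\bigvee_{k} \phi_\sigma(nx-k)}. $$

    That is, the sums appearing in the classical Kantorovich neural network operators are replaced by maxima, which is what makes these operators nonlinear.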

    Convergence in Orlicz spaces by means of the multivariate max-product neural network operators of the Kantorovich type and applications

    In this paper, convergence results in a multivariate setting are proved for a family of neural network operators of the max-product type. In particular, the coefficients, expressed by Kantorovich type means, allow us to treat the theory in the general frame of Orlicz spaces, which includes the $L^p$-spaces as a particular case. Examples of sigmoidal activation functions are discussed for the above operators in different cases of Orlicz spaces. Finally, concrete applications to real-world cases are presented in both univariate and multivariate settings. In particular, the case of reconstruction and enhancement of biomedical (vascular) images is discussed in detail. Comment: 19 pages.

    Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions

    A zonal function (ZF) network on the $q$-dimensional sphere $\mathbb{S}^q$ is a network of the form $\mathbf{x}\mapsto \sum_{k=1}^n a_k\phi(\mathbf{x}\cdot\mathbf{x}_k)$, where $\phi:[-1,1]\to\mathbb{R}$ is the activation function, $\mathbf{x}_k\in\mathbb{S}^q$ are the centers, and $a_k\in\mathbb{R}$. While the approximation properties of such networks are well studied in the context of positive definite activation functions, recent interest in deep and shallow networks motivates the study of activation functions of the form $\phi(t)=|t|$, which are not positive definite. In this paper, we define an appropriate smoothness class and establish approximation properties of such networks for functions in this class. The centers can be chosen independently of the target function, and the coefficients are linear combinations of the training data. The constructions preserve rotational symmetries. Comment: 18 pages; title changed from the previous version.
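
    A minimal numerical sketch (Python/NumPy) of evaluating such a ZF network on $\mathbb{S}^2$ with the activation $\phi(t)=|t|$ is given below. The random centers and coefficients are placeholders for illustration; the paper's construction chooses the coefficients as linear combinations of the training data, which is not reproduced here.

        import numpy as np

        rng = np.random.default_rng(1)

        def random_sphere_points(n, q=2):
            """n uniformly random points on the sphere S^q (normalized Gaussian vectors)."""
            v = rng.normal(size=(n, q + 1))
            return v / np.linalg.norm(v, axis=1, keepdims=True)

        def zf_network(x, centers, a, phi=np.abs):
            """Evaluate sum_k a_k * phi(x . x_k) at a single point x on the sphere."""
            return np.sum(a * phi(centers @ x))

        centers = random_sphere_points(50)     # centers x_k in S^2
        a = rng.normal(size=50)                # coefficients a_k
        x = random_sphere_points(1)[0]         # evaluation point
        print(zf_network(x, centers, a))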

    Nonlinear Approximation via Compositions

    Given a function dictionary $\cal D$ and an approximation budget $N\in\mathbb{N}^+$, nonlinear approximation seeks the linear combination of the best $N$ terms $\{T_n\}_{1\le n\le N}\subseteq{\cal D}$ to approximate a given function $f$ with the minimum approximation error $\varepsilon_{L,f}:=\min_{\{g_n\}\subseteq\mathbb{R},\,\{T_n\}\subseteq{\cal D}}\|f(x)-\sum_{n=1}^N g_n T_n(x)\|$. Motivated by the recent success of deep learning, we propose dictionaries with functions in the form of compositions, i.e., $T(x)=T^{(L)}\circ T^{(L-1)}\circ\cdots\circ T^{(1)}(x)$ for all $T\in\cal D$, and implement $T$ using ReLU feed-forward neural networks (FNNs) with $L$ hidden layers. We further quantify the improvement of the best $N$-term approximation rate in terms of $N$ when $L$ is increased from $1$ to $2$ or $3$ to show the power of compositions. In the case $L>3$, our analysis shows that increasing $L$ cannot improve the approximation rate in terms of $N$. In particular, for any function $f$ on $[0,1]$, regardless of its smoothness and even its continuity, if $f$ can be approximated using a dictionary with $L=1$ at the best $N$-term approximation rate $\varepsilon_{L,f}={\cal O}(N^{-\eta})$, we show that dictionaries with $L=2$ can improve the best $N$-term approximation rate to $\varepsilon_{L,f}={\cal O}(N^{-2\eta})$. We also show that for H\"older continuous functions of order $\alpha$ on $[0,1]^d$, the application of a dictionary with $L=3$ in nonlinear approximation can achieve an essentially tight best $N$-term approximation rate $\varepsilon_{L,f}={\cal O}(N^{-2\alpha/d})$. Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating H\"older continuous functions, if the number of computer cores is larger than $N$ in parallel computing.
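
    To make the compositional dictionaries concrete, the Python sketch below builds one dictionary element $T=T^{(L)}\circ\cdots\circ T^{(1)}$ as a small ReLU feed-forward network with two hidden layers ($L=2$) and a linear read-out. Widths and random weights are arbitrary assumptions for illustration; the paper's constructions choose them so as to achieve the stated $N$-term rates.

        import numpy as np

        def relu(z):
            return np.maximum(z, 0.0)

        def make_layer(w_in, w_out, rng, activation=relu):
            """One affine map followed by an activation; these are the pieces T^(l) of the composition."""
            W = rng.normal(size=(w_out, w_in))
            b = rng.normal(size=w_out)
            return lambda z: activation(W @ z + b)

        def compose(layers):
            def T(x):
                z = np.atleast_1d(np.asarray(x, dtype=float))
                for layer in layers:
                    z = layer(z)
                return z
            return T

        rng = np.random.default_rng(2)
        layers = [make_layer(1, 16, rng),                          # T^(1): first hidden layer
                  make_layer(16, 16, rng),                         # T^(2): second hidden layer
                  make_layer(16, 1, rng, activation=lambda z: z)]  # linear read-out
        T = compose(layers)                                        # one element T of the dictionary
        print(T(0.3))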

    A theoretical study of the role of astrocyte activity in neuronal hyperexcitability using a new neuro-glial mass model

    The investigation of the neuronal environment allows us to better understand the activity of a cerebral region as a whole. Recent experimental evidence of the presence of transporters for glutamate and GABA in both neuronal and astrocyte compartments raises the question of the functional importance of astrocytes in the regulation of neuronal activity. We propose a new computational model at the mesoscopic scale embedding recent knowledge of the physiology of coupled neuron and astrocyte activity. The neural compartment is a neural mass model with double excitatory feedback, and the glial compartment focuses on the dynamics of glutamate and GABA concentrations. Using the proposed model, we first study the impact of a deficiency in the reuptake of GABA by astrocytes, which implies an increase in GABA concentration in the extracellular space. A decrease in the frequency of neural activity is observed and explained from the dynamics analysis. Second, we investigate the neuronal response to a deficiency in the reuptake of glutamate by the astrocytes. In this case, we identify three behaviors: the neural activity may be reduced, may be enhanced, or may experience a transient of high activity before stabilizing around a new activity regime with a frequency close to the nominal one. After theoretically characterizing the modulation of neuronal excitability using the bifurcation structure of the neural mass model, we state the conditions on the glial feedback parameters corresponding to each behavior. Comment: 22 pages, 11 figures, article preprint.