A Winner-Take-All Approach to Emotional Neural Networks with Universal Approximation Property
Here, we propose a brain-inspired winner-take-all emotional neural network
(WTAENN) and prove the universal approximation property for the novel
architecture. WTAENN is a single layered feedforward neural network that
benefits from the excitatory, inhibitory, and expandatory neural connections as
well as the winner-take-all (WTA) competitions in the human brain's nervous
system. The WTA competition increases the information capacity of the model
without adding hidden neurons. The universal approximation capability of the
proposed architecture is illustrated on two example functions, trained by a
genetic algorithm, and then applied to several recent competing and benchmark problems in curve fitting, pattern recognition, classification, and prediction. In particular, it is tested on twelve UCI classification datasets, a facial recognition problem, three real-world prediction problems (two chaotic
time series of geomagnetic activity indices and wind farm power generation
data), two synthetic case studies with constant and nonconstant noise variance
as well as k-selector and linear programming problems. Results indicate the
general applicability and often superiority of the approach in terms of higher
accuracy and lower model complexity, especially where low computational
complexity is imperative.
Comment: Information Sciences (2015), Elsevier Publisher
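To make the WTA competition concrete, here is a minimal sketch of a hard winner-take-all pass in a single-layer feedforward network. The layer sizes, the tanh activation, and the hard arg-max rule are illustrative assumptions, not the authors' exact WTAENN formulation (which also models excitatory, inhibitory, and expandatory connections and is trained by a genetic algorithm).

```python
import numpy as np

def wta_layer(x, W, b):
    """Single-layer feedforward pass with hard winner-take-all:
    only the unit with the largest activation propagates its output."""
    a = np.tanh(W @ x + b)       # raw unit activations
    out = np.zeros_like(a)
    winner = np.argmax(a)        # the WTA competition
    out[winner] = a[winner]      # losing units are silenced
    return out

# Example: 4 inputs competing across 3 output units
rng = np.random.default_rng(0)
W, b = rng.normal(size=(3, 4)), rng.normal(size=3)
print(wta_layer(rng.normal(size=4), W, b))
```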
Approximation of discontinuous signals by sampling Kantorovich series
In this paper, the behavior of the sampling Kantorovich operators is studied when discontinuous signals are considered in the above sampling series. Moreover, the rate of approximation for this family of operators is estimated when uniformly continuous and bounded signals are considered. The problem of linear prediction by sampling values from the past is also analyzed. Finally, the role of duration-limited kernels in the above approximation processes is treated, and several examples are provided.
Comment: 22 pages
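For readers new to these operators, here is a small numerical sketch of the univariate sampling Kantorovich series $(S_w f)(x)=\sum_{k} \chi(wx-k)\, w\int_{k/w}^{(k+1)/w} f(u)\,du$. The choice of the Fejér kernel as $\chi$, the truncation of the series, and the quadrature rule are assumptions made for illustration.

```python
import numpy as np

def fejer_kernel(x):
    """Fejer kernel F(x) = (1/2) * sinc(x/2)^2, where np.sinc is the
    normalized sinc sin(pi t) / (pi t)."""
    return 0.5 * np.sinc(x / 2.0) ** 2

def sampling_kantorovich(f, w, x, k_range=200, quad_pts=32):
    """(S_w f)(x) = sum_k F(wx - k) * [w * int_{k/w}^{(k+1)/w} f(u) du].
    The bracketed factor is the local mean of f on [k/w, (k+1)/w],
    approximated by averaging quad_pts samples; the series over k is
    truncated to |k| <= k_range."""
    total = 0.0
    for k in range(-k_range, k_range + 1):
        u = np.linspace(k / w, (k + 1) / w, quad_pts)
        total += fejer_kernel(w * x - k) * f(u).mean()
    return total

# Evaluate near the jump of a discontinuous (step) signal
step = lambda u: np.where(u < 0.0, 0.0, 1.0)
print(sampling_kantorovich(step, w=20.0, x=0.3))   # close to step(0.3) = 1
```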
Approximation by Exponential Type Neural Network Operators
In the present article, we introduce and study the behaviour of a new family of exponential type neural network operators activated by sigmoidal functions. We establish pointwise and uniform approximation theorems for these NN (neural network) operators in $C[a,b]$. Further, quantitative estimates of the order of approximation for the proposed NN operators in $C^{(N)}[a,b]$ are established in terms of the modulus of continuity. We also analyze the behaviour of the family of exponential type quasi-interpolation operators in $C(\mathbb{R}^+)$. Finally, we discuss the multivariate extension of these NN operators and give some examples of sigmoidal functions.
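The exponential type operators above modify a classical construction; as background, here is a sketch of the standard NN operator activated by a sigmoidal function (logistic, on $C[a,b]$), which this family builds on. The exponential type variant itself is not reproduced here.

```python
import numpy as np
from math import ceil, floor

def phi(x):
    """Density generated by the logistic sigmoid:
    phi(x) = [sigma(x + 1) - sigma(x - 1)] / 2."""
    sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def nn_operator(f, n, x, a=0.0, b=1.0):
    """Classical sigmoidal NN operator on C[a, b]:
    F_n(f, x) = sum_k f(k/n) phi(nx - k) / sum_k phi(nx - k),
    with k running from ceil(na) to floor(nb)."""
    ks = np.arange(ceil(n * a), floor(n * b) + 1)
    weights = phi(n * x - ks)
    return np.sum(f(ks / n) * weights) / np.sum(weights)

print(nn_operator(np.cos, n=50, x=0.5))   # close to cos(0.5)
```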
On Sharpness of Error Bounds for Single Hidden Layer Feedforward Neural Networks
A new non-linear variant of a quantitative extension of the uniform
boundedness principle is used to show sharpness of error bounds for univariate
approximation by sums of sigmoid and ReLU functions. Single hidden layer
feedforward neural networks with one input node perform such operations. Errors
of best approximation can be expressed using moduli of smoothness of the
function to be approximated (i.e., to be learned). In this context, the
quantitative extension of the uniform boundedness principle indeed allows one to construct counterexamples showing that the approximation rates are best possible: the approximation errors do not belong to the little-o class of the given bounds. By
choosing piecewise linear activation functions, the discussed problem becomes
free knot spline approximation. Results of the present paper also hold for
non-polynomial (and not piecewise defined) activation functions like inverse
tangent. Based on Vapnik-Chervonenkis dimension, first results are shown for
the logistic function.
Comment: preprint of a paper accepted by Results in Mathematics
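The networks in question are easy to visualize: with one input node and ReLU activation, the network output is a piecewise linear function with free knots at $-b_j/w_j$, i.e. a free-knot linear spline. A small sketch with arbitrary weights:

```python
import numpy as np

def shallow_net(x, w, b, c):
    """One-input single-hidden-layer net g(x) = sum_j c_j relu(w_j x + b_j).
    With ReLU activation, g is piecewise linear with knots at -b_j / w_j."""
    relu = lambda t: np.maximum(t, 0.0)
    return c @ relu(np.outer(w, x) + b[:, None])

w = np.array([1.0, 1.0, -1.0])
b = np.array([0.0, -0.5, 0.25])
c = np.array([2.0, -3.0, 1.0])
print(shallow_net(np.linspace(0.0, 1.0, 5), w, b, c))  # knots at 0, 0.5, 0.25
```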
Rate of approximation for multivariate sampling Kantorovich operators on some functions spaces
In this paper, the problem of the order of approximation for the multivariate
sampling Kantorovich operators is studied. The cases of the uniform
approximation for uniformly continuous and bounded functions/signals belonging
to Lipschitz classes and the case of the modular approximation for functions in
Orlicz spaces are considered. In the latter context, Lipschitz classes of Zygmund type which take into account the modular functional involved are introduced. Applications to $L^p(\mathbb{R}^n)$, interpolation and exponential spaces can be deduced from the general theory formulated in the setting of Orlicz spaces. The special cases of multivariate sampling Kantorovich operators based on kernels of product type and constructed by means of Fejér's and B-spline kernels are studied in detail.
Comment: 22 pages
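For reference, kernels of product type are componentwise products of univariate kernels; here is a minimal sketch with the Fejér kernel $F(x)=\tfrac{1}{2}\,\mathrm{sinc}^2(x/2)$, one of the cases treated above (a B-spline could be substituted componentwise):

```python
import numpy as np

def fejer_1d(x):
    """Univariate Fejer kernel F(x) = (1/2) * sinc(x/2)^2."""
    return 0.5 * np.sinc(x / 2.0) ** 2

def product_kernel(x):
    """Multivariate kernel of product type: Chi(x) = prod_i F(x_i)."""
    return np.prod(fejer_1d(np.asarray(x, dtype=float)))

print(product_kernel([0.1, -0.3]))   # bivariate example
```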
Approximation results in Orlicz spaces for sequences of Kantorovich max-product neural network operators
In this paper we study the theory of the so-called Kantorovich max-product neural network operators in the setting of Orlicz spaces $L^\varphi$. The results proved here extend those given by Costarelli and Vinti in Results Math., 2016, to a more general context. The main advantage of studying neural network type operators in Orlicz spaces lies in the possibility of approximating not necessarily continuous functions (data) belonging to different function spaces by a single general approach. Further, in order to derive quantitative estimates in this context, we introduce a suitable K-functional in $L^\varphi$ and use it to provide an upper bound for the approximation error of the above operators. Finally, examples of sigmoidal activation functions are considered and studied in detail.
Comment: 17 pages
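A minimal sketch of the Kantorovich max-product construction, in which the usual series is replaced by a pointwise maximum and point samples by local averages. The logistic-based kernel, the interval $[0,1]$, and the quadrature are illustrative assumptions (and $f$ is taken nonnegative, as is customary for max-product operators).

```python
import numpy as np
from math import ceil, floor

def phi(x):
    """Density generated by the logistic sigmoid."""
    sigma = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def maxprod_kantorovich(f, n, x, a=0.0, b=1.0, quad_pts=16):
    """K_n(f, x) = max_k [phi(nx - k) * A_k] / max_k phi(nx - k), where
    A_k = n * int_{k/n}^{(k+1)/n} f(u) du is the local mean of f
    (approximated by sampling) and max replaces the usual sum."""
    ks = np.arange(ceil(n * a), floor(n * b))
    avgs = np.array([f(np.linspace(k / n, (k + 1) / n, quad_pts)).mean()
                     for k in ks])
    weights = phi(n * x - ks)
    return np.max(weights * avgs) / np.max(weights)

print(maxprod_kantorovich(np.sin, n=60, x=0.4))   # close to sin(0.4)
```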
Convergence in Orlicz spaces by means of the multivariate max-product neural network operators of the Kantorovich type and applications
In this paper, convergence results in a multivariate setting have been proved
for a family of neural network operators of the max-product type. In
particular, the coefficients, expressed by Kantorovich-type means, allow us to treat the theory in the general frame of Orlicz spaces, which includes the $L^p$-spaces as a particular case. Examples of sigmoidal activation functions are discussed for the above operators in different cases of Orlicz spaces. Finally, concrete applications to real-world cases are presented in both univariate and multivariate settings. In particular, the case of reconstruction and enhancement of biomedical (vascular) images is discussed in detail.
Comment: 19 pages
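To suggest how such operators act on images, here is a toy bivariate sketch in which pixel values play the role of the Kantorovich-type coefficients and the kernel is of product type; it evaluates the reconstructed surface between pixel centres (upsampling). This is purely illustrative and not the authors' implementation.

```python
import numpy as np

def phi(x):
    """Density generated by the logistic sigmoid, as in the univariate case."""
    s = lambda t: 1.0 / (1.0 + np.exp(-t))
    return 0.5 * (s(x + 1.0) - s(x - 1.0))

def maxprod_2d(img, x, y):
    """Bivariate max-product evaluation at (x, y) in [0, 1]^2: pixel
    values stand in for local means, the kernel is a product of
    univariate densities, and maxima replace sums."""
    n1, n2 = img.shape
    i = np.arange(n1)[:, None]                 # row indices
    j = np.arange(n2)[None, :]                 # column indices
    w = phi(n1 * x - i) * phi(n2 * y - j)      # product-type weights
    return np.max(w * img) / np.max(w)

img = np.arange(16.0).reshape(4, 4) / 15.0     # tiny test "image"
print(maxprod_2d(img, 0.55, 0.3))
```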
Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions
A zonal function (ZF) network on the $q$-dimensional sphere $\mathbb{S}^q$ is a network of the form $\mathbf{x}\mapsto\sum_{k=1}^n a_k\,\phi(\mathbf{x}\cdot\mathbf{x}_k)$, where $\phi:[-1,1]\to\mathbb{R}$ is the activation function, $\mathbf{x}_k\in\mathbb{S}^q$ are the centers, and $a_k\in\mathbb{R}$. While the approximation properties of such networks are well studied in the context of positive definite activation functions, recent interest in deep and shallow networks motivates the study of activation functions of the form $\phi(t)=|t|$, which are not positive definite. In this paper, we define an appropriate smoothness class and establish approximation properties of such networks for functions in this class. The centers can be chosen independently of the target function, and the coefficients are linear combinations of the training data. The constructions preserve rotational symmetries.
Comment: 18 pages; title changed from the previous version
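A minimal sketch of evaluating such a ZF network with the non positive definite activation $\phi(t)=|t|$; the random centers and coefficients are placeholders rather than the constructive choices of the paper.

```python
import numpy as np

def zf_network(x, centers, coeffs, phi=np.abs):
    """ZF network on the sphere: x -> sum_k a_k * phi(x . x_k),
    here with the ReLU-like activation phi(t) = |t|."""
    return coeffs @ phi(centers @ x)

rng = np.random.default_rng(1)
centers = rng.normal(size=(10, 3))                         # points on S^2 ...
centers /= np.linalg.norm(centers, axis=1, keepdims=True)  # ... normalized
coeffs = rng.normal(size=10)
x = np.array([0.0, 0.0, 1.0])                              # query point on S^2
print(zf_network(x, centers, coeffs))
```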
Nonlinear Approximation via Compositions
Given a function dictionary $\mathcal{D}$ and an approximation budget $N\in\mathbb{N}$, nonlinear approximation seeks the linear combination of the best $N$ terms $\{T_n\}_{1\le n\le N}\subseteq\mathcal{D}$ to approximate a given function $f$ with the minimum approximation error $\varepsilon_{L,f}$. Motivated by the recent success of deep learning, we propose dictionaries with functions in a form of compositions, i.e., $T=T^{(L)}\circ T^{(L-1)}\circ\cdots\circ T^{(1)}$ for all $T\in\mathcal{D}$, and implement $T$ using ReLU feed-forward neural networks (FNNs) with $L$ hidden layers. We further quantify the improvement of the best $N$-term approximation rate in terms of $N$ when $L$ is increased from $1$ to $2$ or $3$ to show the power of compositions. In the case when $L>3$, our analysis shows that increasing $L$ cannot improve the approximation rate in terms of $N$. In particular, for any function $f$ on $[0,1]$, regardless of its smoothness and even of its continuity, if $f$ can be approximated using a dictionary with $L=1$ with the best $N$-term approximation rate $\varepsilon_{L,f}=\mathcal{O}(N^{-\eta})$, we show that dictionaries with $L=2$ can improve the best $N$-term approximation rate to $\varepsilon_{L,f}=\mathcal{O}(N^{-2\eta})$. We also show that for H\"older continuous functions of order $\alpha$ on $[0,1]^d$, the application of a dictionary with $L=3$ in nonlinear approximation can achieve an essentially tight best $N$-term approximation rate $\varepsilon_{L,f}=\mathcal{O}(N^{-2\alpha/d})$. Finally, we show that dictionaries consisting of wide FNNs with a few hidden layers are more attractive in terms of computational efficiency than dictionaries with narrow and very deep FNNs for approximating H\"older continuous functions, if the number of computer cores in parallel computing is larger than $N$.
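A small sketch of a dictionary element of composition form, each factor realized by a one-hidden-layer ReLU FNN; widths and weights are arbitrary placeholders, and no claim is made about the approximation rate of this particular instance.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def fnn(params):
    """One-hidden-layer ReLU FNN: x -> c @ relu(w x + b)."""
    w, b, c = params
    return lambda x: c @ relu(np.outer(w, np.atleast_1d(x)) + b[:, None])

def compose(*blocks):
    """Dictionary element T = T^(L) o ... o T^(1), innermost block first,
    each factor a ReLU FNN."""
    def T(x):
        y = np.atleast_1d(x)
        for block in blocks:
            y = block(y)
        return y
    return T

rng = np.random.default_rng(2)
make = lambda m: (rng.normal(size=m), rng.normal(size=m), rng.normal(size=m))
T = compose(fnn(make(4)), fnn(make(4)))   # an L = 2 composition
print(T(0.7))
```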
A theoretical study of the role of astrocyte activity in neuronal hyperexcitability using a new neuro-glial mass model
The investigation of the neuronal environment allows us to better understand the activity of a cerebral region as a whole. Recent experimental evidence of the presence of transporters for glutamate and GABA in both neuronal and astrocyte compartments raises the question of the functional importance of astrocytes in the regulation of neuronal activity. We propose a new computational model at the mesoscopic scale embedding recent knowledge on the physiology of coupled neuron and astrocyte activity. The neural compartment is a neural mass model with double excitatory feedback, and the glial compartment focuses on the dynamics of glutamate and GABA concentrations. Using the proposed model, we first study the impact of a deficiency in the reuptake of GABA by astrocytes, which implies an increase in GABA concentration in the extracellular space. A decrease in the frequency of neural activity is observed and explained from the dynamics analysis. Second, we investigate the neuronal response to a deficiency in the reuptake of glutamate by the astrocytes. In this case, we identify three behaviors: the neural activity may be reduced, may be enhanced, or may experience a transient of high activity before stabilizing around a new activity regime with a frequency close to the nominal one. After interpreting this modulation of neuronal excitability theoretically, using the bifurcation structure of the neural mass model, we state the conditions on the glial feedback parameters corresponding to each behavior.
Comment: 22 pages, 11 figures, article preprint
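As a caricature of the mechanism described above, the toy rate model below couples a firing-rate variable to glutamate and GABA concentrations with adjustable astrocytic reuptake rates. It is a drastically simplified stand-in for the authors' neural mass model (no double excitatory feedback, no bifurcation analysis), intended only to illustrate how reduced GABA reuptake can depress activity.

```python
import numpy as np

def simulate(reuptake_glu=1.0, reuptake_gaba=1.0, dt=1e-3, T=10.0):
    """Euler-integrate a toy loop: firing rate r releases glutamate g
    (excitatory feedback) and GABA q (inhibitory feedback); astrocytic
    reuptake clears both. All constants are illustrative."""
    sigm = lambda v: 1.0 / (1.0 + np.exp(-v))
    r, g, q = 0.1, 0.0, 0.0
    for _ in range(int(T / dt)):
        drive = 1.0 + 2.0 * g - 3.0 * q       # Glu excites, GABA inhibits
        r += dt * (-r + sigm(drive))          # rate relaxes toward sigmoid drive
        g += dt * (r - reuptake_glu * g)      # release minus reuptake
        q += dt * (r - reuptake_gaba * q)
    return r

print(simulate())                       # nominal reuptake
print(simulate(reuptake_gaba=0.3))      # deficient GABA reuptake: lower rate
```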