Deep nets for local manifold learning
The problem of extending a function defined on training data
on an unknown manifold to the entire manifold and a
tubular neighborhood of this manifold is considered in this paper. For
a manifold embedded in a high dimensional ambient Euclidean space,
a deep learning algorithm is developed for finding a local
coordinate system for the manifold without eigen-decomposition, which
reduces the problem to the classical problem of function approximation on a low
dimensional cube. Deep nets (or multilayered neural networks) are proposed to
accomplish this approximation scheme by using the training data. Our methods do
not involve optimization techniques such as back-propagation, while assuring
optimal (a priori) error bounds on the output in terms of the number of
derivatives of the target function. In addition, these methods are universal,
in that they do not require a priori knowledge of the smoothness of the target
function, but adjust the accuracy of approximation locally and automatically,
depending only upon the local smoothness of the target function. Our ideas are
easily extended to solve both the pre-image problem and the out-of-sample
extension problem, with a priori bounds on the growth of the function thus
extended.
Comment: Submitted on Sept. 17, 201
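For orientation, here is a minimal numpy sketch of the two-step pipeline the abstract describes: map ambient points into a low-dimensional local coordinate chart, then approximate the target function on that chart. The landmark-distance chart and polynomial fit below are illustrative stand-ins, not the paper's eigen-decomposition-free construction or its deep-net approximation scheme.

```python
import numpy as np

# Toy stand-in for the two-step scheme: (1) chart ambient points into
# low-dimensional local coordinates, (2) approximate the target function
# on the resulting low-dimensional domain. The landmark-distance chart is
# only an illustrative substitute for the paper's construction.

def landmark_chart(X, landmarks):
    """Map points X (n, D) to coordinates given by distances to landmarks."""
    return np.linalg.norm(X[:, None, :] - landmarks[None, :, :], axis=2)

rng = np.random.default_rng(0)
t = rng.uniform(0, 2 * np.pi, 200)             # 1-d manifold (circle) in R^3
X = np.c_[np.cos(t), np.sin(t), np.zeros_like(t)]
y = np.sin(3 * t)                              # target function on the manifold

U = landmark_chart(X, X[:2])                   # chart into R^2 via 2 landmarks

# Approximate y on the chart with least squares over a small polynomial
# feature set (an illustrative substitute for the deep-net scheme).
A = np.c_[np.ones(len(U)), U, U**2, U[:, :1] * U[:, 1:]]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("train RMSE:", np.sqrt(np.mean((A @ coef - y) ** 2)))
```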
A Winner-Take-All Approach to Emotional Neural Networks with Universal Approximation Property
Here, we propose a brain-inspired winner-take-all emotional neural network
(WTAENN) and prove the universal approximation property for the novel
architecture. WTAENN is a single layered feedforward neural network that
benefits from the excitatory, inhibitory, and expandatory neural connections as
well as the winner-take-all (WTA) competitions in the human brain's nervous
system. The WTA competition increases the information capacity of the model
without adding hidden neurons. The universal approximation capability of the
proposed architecture is illustrated on two example functions, trained by a
genetic algorithm, and then applied to several competing recent and benchmark
problems such as in curve fitting, pattern recognition, classification and
prediction. In particular, it is tested on twelve UCI classification datasets,
a facial recognition problem, three real world prediction problems (2 chaotic
time series of geomagnetic activity indices and wind farm power generation
data), two synthetic case studies with constant and nonconstant noise variance
as well as k-selector and linear programming problems. Results indicate the
general applicability and often superiority of the approach in terms of higher
accuracy and lower model complexity, especially where low computational
complexity is imperative.
Comment: Information Sciences (2015), Elsevier Publisher
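As a point of reference, the winner-take-all mechanism itself can be sketched in a few lines: several units compete on the same input and only the strongest one fires. The toy layer below is a generic WTA unit, not the WTAENN architecture of the paper.

```python
import numpy as np

# Generic winner-take-all (WTA) layer: several competing linear units see
# the same input; only the unit with the largest activation fires. This
# enlarges the set of representable functions without adding hidden
# neurons, which is the mechanism the abstract appeals to.

def wta_forward(x, W, b):
    """x: (d,), W: (k, d), b: (k,). Return winning unit's output and index."""
    a = W @ x + b            # activations of the k competing units
    winner = int(np.argmax(a))
    return a[winner], winner

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 2))  # three competing units on a 2-d input
b = rng.normal(size=3)
out, unit = wta_forward(np.array([0.5, -1.0]), W, b)
print(f"unit {unit} wins with output {out:.3f}")
```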
An Explicit Neural Network Construction for Piecewise Constant Function Approximation
We present an explicit construction for a feedforward neural network (FNN),
which provides a piecewise constant approximation for multivariate functions.
The proposed FNN has two hidden layers, where the weights and thresholds are
explicitly defined and do not require numerical optimization for training.
Unlike most of the existing work on explicit FNN construction, the proposed FNN
does not rely on tensor structure in multiple dimensions. Instead, it
automatically creates a Voronoi tessellation of the domain, based on the given
data of the target function, and a piecewise constant approximation of the
function. This makes the construction more practical for applications. We
present both theoretical analysis and numerical examples to demonstrate its
properties.
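The underlying approximation map is easy to state outside the network: each query point inherits the target value of its nearest data site, i.e., a constant per Voronoi cell. A minimal numpy sketch (the paper's contribution, an explicit two-hidden-layer FNN realizing this map, is not reproduced here):

```python
import numpy as np

# Piecewise constant approximation over the Voronoi tessellation of the
# data sites: a query point inherits the target value of its nearest site.
# The paper constructs a two-hidden-layer FNN with explicit weights that
# realizes this map; here we only evaluate the map directly.

def voronoi_pc_approx(sites, values, queries):
    """sites: (n, d), values: (n,), queries: (m, d) -> (m,) approximations."""
    d2 = ((queries[:, None, :] - sites[None, :, :]) ** 2).sum(axis=2)
    return values[np.argmin(d2, axis=1)]

rng = np.random.default_rng(2)
sites = rng.uniform(-1, 1, size=(50, 2))
f = lambda p: np.sin(np.pi * p[:, 0]) * p[:, 1]      # target function
queries = rng.uniform(-1, 1, size=(5, 2))
print(voronoi_pc_approx(sites, f(sites), queries))
print(f(queries))                                    # compare to exact values
```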
Approximation of discontinuous signals by sampling Kantorovich series
In this paper, the behavior of the sampling Kantorovich operators has been
studied when discontinuous signals are considered in the above sampling
series. Moreover, the rate of approximation for the family of the above
operators is estimated when uniformly continuous and bounded signals are
considered. Further, the problem of linear prediction from past sample
values is also analyzed. At the end, the role of duration-limited
kernels in the previous approximation processes has been treated, and several
examples have been provided.
Comment: 22 pages
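For orientation, the univariate sampling Kantorovich series discussed above is usually written as follows (standard notation in this literature, with kernel chi and rate w; stated here as an assumption, not quoted from the paper):

```latex
% Univariate sampling Kantorovich operator at rate w > 0 with kernel \chi:
% the sample values f(k/w) of the classical series are replaced by local
% averages of f, which is what makes discontinuous signals tractable.
(S_w f)(x) \;=\; \sum_{k \in \mathbb{Z}} \chi(wx - k)\,
\Bigl[\, w \int_{k/w}^{(k+1)/w} f(u)\, du \Bigr], \qquad x \in \mathbb{R}.
```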
Learning Topology and Dynamics of Large Recurrent Neural Networks
Large-scale recurrent networks have drawn increasing attention recently
because of their capabilities in modeling a large variety of real-world
phenomena and physical mechanisms. This paper studies how to identify all
authentic connections and estimate system parameters of a recurrent network,
given a sequence of node observations. This task becomes extremely challenging
in modern network applications, because the available observations are usually
very noisy and limited, and the associated dynamical system is strongly
nonlinear. By formulating the problem as multivariate sparse sigmoidal
regression, we develop simple-to-implement network learning algorithms, with
rigorous convergence guarantee in theory, for a variety of sparsity-promoting
penalty forms. A quantile variant of progressive recurrent network screening is
proposed for efficient computation and allows for direct cardinality control of
network topology in estimation. Moreover, we investigate recurrent network
stability conditions in Lyapunov's sense, and integrate such stability
constraints into sparse network learning. Experiments show excellent
performance of the proposed algorithms in network topology identification and
forecasting.
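A compact sketch of the generic ingredient named above, sparse sigmoidal regression with an l1 penalty solved by proximal gradient descent; this illustrates the formulation only, not the paper's quantile screening variant or its convergence theory.

```python
import numpy as np

# Sparse sigmoidal regression for one node of a recurrent network:
# model y_t ~ sigmoid(X_t @ w) and recover a sparse weight vector w via
# proximal gradient (ISTA) with soft-thresholding for the l1 penalty.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_sparse_sigmoid(X, y, lam=0.05, step=0.1, iters=2000):
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)   # logistic-loss gradient
        w = w - step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox step
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))                 # observations of 20 nodes
w_true = np.zeros(20); w_true[[2, 7, 11]] = [1.5, -2.0, 1.0]  # 3 true edges
y = (rng.uniform(size=500) < sigmoid(X @ w_true)).astype(float)
w_hat = fit_sparse_sigmoid(X, y)
print("recovered support:", np.flatnonzero(np.abs(w_hat) > 0.1))
```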
Approximation by Exponential Type Neural Network Operators
In the present article, we introduce and study the behaviour of the new
family of exponential type neural network operators activated by sigmoidal
functions. We establish point-wise and uniform approximation theorems for
these NN (Neural Network) operators in C[a,b]. Further, quantitative
estimates of the order of approximation for the proposed NN operators in C^(N)[a,b]
are established in terms of the modulus of continuity. We also analyze the
behaviour of the family of exponential type quasi-interpolation operators in
C(R^+). Finally, we discuss the multivariate extension of these NN operators and
give some examples of sigmoidal functions.
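As a baseline for comparison (the exponential type modification is the paper's subject and is not reproduced here), the classical univariate NN operators activated by a sigmoidal function sigma are commonly written as:

```latex
% Classical NN operators on C[a,b] activated by a sigmoidal \sigma:
% \phi_\sigma is the centered difference of \sigma, and the nodes k/n are
% equispaced. The exponential type operators of the paper modify this
% construction; the formula below is only the standard baseline.
\phi_\sigma(x) = \tfrac{1}{2}\bigl[\sigma(x+1) - \sigma(x-1)\bigr], \qquad
F_n(f, x) = \frac{\displaystyle\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor}
  f\!\left(\tfrac{k}{n}\right) \phi_\sigma(nx - k)}
 {\displaystyle\sum_{k=\lceil na \rceil}^{\lfloor nb \rfloor}
  \phi_\sigma(nx - k)}, \qquad x \in [a, b].
```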
Convergence in Orlicz spaces by means of the multivariate max-product neural network operators of the Kantorovich type and applications
In this paper, convergence results in a multivariate setting have been proved
for a family of neural network operators of the max-product type. In
particular, the coefficients expressed by Kantorovich type means allow one to
treat the theory in the general frame of the Orlicz spaces, which includes the
Lp-spaces as a particular case. Examples of sigmoidal activation functions
are discussed for the above operators in different cases of Orlicz spaces.
Finally, concrete applications to real world cases have been presented in both
univariate and multivariate settings. In particular, the case of
reconstruction and enhancement of biomedical (vascular) images has been
discussed in detail.
Comment: 19 pages
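The max-product device replaces the sums of linear Kantorovich operators by pointwise suprema over k; in the univariate case the operator is commonly written as follows (stated for orientation, under the usual assumptions on the kernel chi):

```latex
% Univariate max-product Kantorovich operator: the sums of the linear
% operator are replaced by suprema (\vee) over k, so the operator is
% nonlinear but still positive and well suited to Orlicz-space analysis.
K_n^{(M)}(f, x) \;=\;
\frac{\displaystyle\bigvee_{k} \chi(nx - k)\,
      n \int_{k/n}^{(k+1)/n} f(u)\, du}
     {\displaystyle\bigvee_{k} \chi(nx - k)}.
```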
On Sharpness of Error Bounds for Single Hidden Layer Feedforward Neural Networks
A new non-linear variant of a quantitative extension of the uniform
boundedness principle is used to show sharpness of error bounds for univariate
approximation by sums of sigmoid and ReLU functions. Single hidden layer
feedforward neural networks with one input node perform such operations. Errors
of best approximation can be expressed using moduli of smoothness of the
function to be approximated (i.e., to be learned). In this context, the
quantitative extension of the uniform boundedness principle indeed allows one to
construct counterexamples that show the approximation rates to be best possible:
approximation errors do not belong to the little-o class of the given bounds. By
choosing piecewise linear activation functions, the discussed problem becomes
free knot spline approximation. Results of the present paper also hold for
non-polynomial (and not piecewise defined) activation functions like the inverse
tangent. Based on the Vapnik-Chervonenkis dimension, first results are shown for
the logistic function.
Comment: pre-print of paper accepted by Results in Mathematics
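The link drawn above between one-input networks and spline approximation is easy to see numerically: with ReLU activation and fixed inner weights, a single hidden layer spans a piecewise linear spline basis, and fitting the outer coefficients is linear least squares; the free knot problem studied in the paper additionally optimizes the breakpoints. A hedged numpy sketch with fixed knots:

```python
import numpy as np

# A single-hidden-layer, one-input ReLU network sum_i a_i*relu(x - t_i)
# is a piecewise linear spline with knots t_i. With knots fixed, the
# outer coefficients a_i solve a linear least-squares problem; the free
# knot problem studied in the paper additionally optimizes the t_i.

x = np.linspace(0, 1, 200)
f = np.abs(np.sin(2 * np.pi * x))                  # target to be learned

knots = np.linspace(0, 1, 12)[:-1]                 # fixed (not free) knots
B = np.maximum(x[:, None] - knots[None, :], 0.0)   # ReLU basis matrix
B = np.c_[np.ones_like(x), B]                      # allow a constant offset
a, *_ = np.linalg.lstsq(B, f, rcond=None)
print("max error with fixed knots:", np.max(np.abs(B @ a - f)))
```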
Rate of approximation for multivariate sampling Kantorovich operators on some functions spaces
In this paper, the problem of the order of approximation for the multivariate
sampling Kantorovich operators is studied. The cases of the uniform
approximation for uniformly continuous and bounded functions/signals belonging
to Lipschitz classes and the case of the modular approximation for functions in
Orlicz spaces are considered. In the latter context, Lipschitz classes of
Zygmund-type which take into account the modular functional involved are
introduced. Applications to Lp(R^n), interpolation and exponential spaces can
be deduced from the general theory formulated in the setting of Orlicz spaces.
The special cases of multivariate sampling Kantorovich operators based on
kernels of the product type, constructed by means of Fejér and B-spline
kernels, have been studied in detail.
Comment: 22 pages
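For orientation, product-type kernels are built coordinatewise from a univariate kernel; with the Fejér kernel this reads (standard form in this literature, stated as an assumption):

```latex
% Product-type multivariate kernel built from a univariate kernel, here
% the Fejer kernel F; sinc(0) := 1 by continuity.
F(x) = \tfrac{1}{2}\,\mathrm{sinc}^2\!\left(\tfrac{x}{2}\right), \quad
\mathrm{sinc}(x) = \frac{\sin(\pi x)}{\pi x} \ (x \neq 0), \qquad
\chi(x_1, \dots, x_n) = \prod_{i=1}^{n} F(x_i).
```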
Function approximation with zonal function networks with activation functions analogous to the rectified linear unit functions
A zonal function (ZF) network on the q-dimensional sphere $\mathbb{S}^q$ is
a network of the form $x \mapsto \sum_{k=1}^n a_k \phi(x \cdot x_k)$, where
$\phi : [-1,1] \to \mathbb{R}$ is the activation function, $x_k \in \mathbb{S}^q$
are the centers, and $a_k \in \mathbb{R}$. While the approximation properties of such networks are
well studied in the context of positive definite activation functions, recent
interest in deep and shallow networks motivates the study of activation
functions of the form $\phi(t) = |t|$, which are not positive definite. In this
paper, we define an appropriate smoothness class and establish approximation
properties of such networks for functions in this class. The centers can be
chosen independently of the target function, and the coefficients are linear
combinations of the training data. The constructions preserve rotational
symmetries.
Comment: 18 pages, Title changed from the previous version
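Evaluating such a network is straightforward; below is a minimal numpy sketch with the non-positive-definite activation phi(t) = |t| from the abstract, using random placeholder centers and coefficients rather than the paper's constructions.

```python
import numpy as np

# Evaluate a zonal function network x -> sum_k a_k * phi(x . x_k) on the
# sphere S^q with the non-positive-definite activation phi(t) = |t|.
# Centers and coefficients below are random placeholders.

def zf_network(x, centers, coeffs, phi=np.abs):
    """x: (q+1,) unit vector; centers: (n, q+1) unit rows; coeffs: (n,)."""
    return coeffs @ phi(centers @ x)

rng = np.random.default_rng(4)
q = 2                                          # sphere S^2 in R^3
centers = rng.normal(size=(10, q + 1))
centers /= np.linalg.norm(centers, axis=1, keepdims=True)
coeffs = rng.normal(size=10)
x = np.array([0.0, 0.0, 1.0])                  # evaluate at the north pole
print(zf_network(x, centers, coeffs))
```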