Improving Randomized Learning of Feedforward Neural Networks by Appropriate Generation of Random Parameters
In this work, a method for generating the random parameters in randomized
learning of single-hidden-layer feedforward neural networks is proposed. The
method first randomly selects the slope angles of the hidden neurons'
activation functions from an interval adjusted to the target function, then
randomly rotates the activation functions, and finally distributes them
across the input space. For complex target functions the proposed method
gives better results than the approach commonly used in practice, where the
random parameters are selected from a fixed interval. This is because it
introduces the steepest fragments of the activation functions into the input
hypercube and avoids their saturation fragments.
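
The three steps described above can be illustrated with a minimal sketch in
Python. This is an assumption-laden illustration, not the authors' exact
procedure: it assumes sigmoid activations, an input hypercube [0, 1]^n, slope
angles given in radians within (0, pi/2), and the mapping |w| = 4*tan(alpha),
which follows from the fact that the maximum derivative of the logistic
sigmoid is |w|/4. The function names are hypothetical.

    import numpy as np

    def generate_random_parameters(n_hidden, n_inputs,
                                   alpha_min, alpha_max, rng=None):
        """Sketch of the slope-angle generation idea:
        1) draw a slope angle per neuron from [alpha_min, alpha_max]
           (an interval assumed to be adjusted to the target function),
        2) pick a random rotation (unit direction) in input space,
        3) anchor each sigmoid's inflection point at a random point of
           the hypercube [0, 1]^n, so its steepest fragment lies inside
           the input space rather than a saturation fragment.
        """
        rng = np.random.default_rng(rng)
        # 1) slope angles -> weight magnitudes; for sigma(z) = 1/(1+e^-z)
        #    the maximum derivative is |w|/4, so |w| = 4*tan(alpha) yields
        #    a maximal directional slope of tan(alpha).
        alphas = rng.uniform(alpha_min, alpha_max, size=n_hidden)
        slopes = 4.0 * np.tan(alphas)
        # 2) random rotation: one random unit direction per neuron
        directions = rng.normal(size=(n_hidden, n_inputs))
        directions /= np.linalg.norm(directions, axis=1, keepdims=True)
        W = directions * slopes[:, None]
        # 3) distribute the inflection points across the hypercube:
        #    w . c + b = 0  =>  b = -w . c  for a random anchor point c
        C = rng.uniform(0.0, 1.0, size=(n_hidden, n_inputs))
        b = -np.einsum('ij,ij->i', W, C)
        return W, b

    def hidden_outputs(X, W, b):
        """Sigmoid hidden-layer responses for X of shape (n_samples, n_inputs)."""
        return 1.0 / (1.0 + np.exp(-(X @ W.T + b)))

By construction, every neuron's steepest fragment passes through a point of
the input hypercube, which is exactly the property the abstract contrasts
with fixed-interval sampling.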
Deep Randomized Neural Networks
Randomized Neural Networks explore the behavior of neural systems where the
majority of connections are fixed, either in a stochastic or a deterministic
fashion. Typical examples of such systems are multi-layered neural network
architectures in which the connections to the hidden layer(s) are left
untrained after initialization. Limiting the training algorithms to operate
on a reduced set of weights endows the class of Randomized Neural Networks
with a number of intriguing features. Among them, the extreme efficiency of
the resulting learning processes is undoubtedly a striking advantage over
fully trained architectures. Moreover, despite the involved simplifications,
randomized neural systems possess remarkable properties both in practice,
achieving state-of-the-art results in multiple domains, and in theory,
allowing the analysis of intrinsic properties of neural architectures (e.g.,
before training of the hidden layers' connections). In recent years, the
study of Randomized Neural Networks has been extended towards deep
architectures, opening new research directions in the design of effective
yet extremely efficient deep learning models, in vectorial as well as in
more complex data domains. This chapter surveys the major aspects of the
design and analysis of Randomized Neural Networks, along with some of the
key results on their approximation capabilities. In particular, we first
introduce the fundamentals of randomized neural models in the context of
feed-forward networks (i.e., Random Vector Functional Link and equivalent
models) and convolutional filters, before moving to the case of recurrent
systems (i.e., Reservoir Computing networks). For both, we focus
specifically on recent results in the domain of deep randomized systems
and, for recurrent models, on their application to structured domains.
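
The core mechanism behind the training efficiency described above can be
shown in a short sketch: the hidden layer stays fixed after random
initialization and only a linear readout is fit, in closed form. This is a
minimal illustration of the feed-forward case, assuming a tanh hidden layer,
RVFL-style direct input-to-readout links, and a ridge-regression readout;
the class name and hyperparameter choices are illustrative assumptions, not
a specific model from the chapter.

    import numpy as np

    class RandomFeatureNetwork:
        """Minimal randomized network: fixed random hidden layer,
        trained linear readout. Only the output weights are learned,
        via closed-form ridge regression, which is where the extreme
        training efficiency of Randomized Neural Networks comes from.
        """
        def __init__(self, n_inputs, n_hidden, reg=1e-6, seed=0):
            rng = np.random.default_rng(seed)
            self.W = rng.uniform(-1.0, 1.0, size=(n_hidden, n_inputs))  # fixed
            self.b = rng.uniform(-1.0, 1.0, size=n_hidden)              # fixed
            self.reg = reg
            self.beta = None  # the only trained parameters

        def _features(self, X):
            H = np.tanh(X @ self.W.T + self.b)
            # Random Vector Functional Link variants also feed the raw
            # input to the readout through direct links; included here.
            return np.hstack([H, X])

        def fit(self, X, y):
            H = self._features(X)
            # Closed-form ridge readout: beta = (H^T H + reg*I)^-1 H^T y
            A = H.T @ H + self.reg * np.eye(H.shape[1])
            self.beta = np.linalg.solve(A, H.T @ y)
            return self

        def predict(self, X):
            return self._features(X) @ self.beta

Because fitting reduces to one linear solve, training cost is independent of
any gradient-based iteration over the hidden weights; deep and recurrent
(Reservoir Computing) variants apply the same fixed-random/trained-readout
split to stacked or recurrent feature maps.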