Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, the optimization of feedforward neural networks (FNNs) has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it presents interesting research challenges for future work to cope with the present information-processing era.
A Novel Progressive Multi-label Classifier for Class-incremental Data
In this paper, a progressive learning algorithm for multi-label
classification to learn new labels while retaining the knowledge of previous
labels is designed. New output neurons corresponding to new labels are added
and the neural network connections and parameters are automatically
restructured as if the label has been introduced from the beginning. This work
is the first of its kind in multi-label classification for class-incremental
learning. It is useful for real-world applications such as robotics, where
streaming data are available and the number of labels is often unknown. Based
on the Extreme Learning Machine framework, a novel universal classifier with
plug and play capabilities for progressive multi-label classification is
developed. Experimental results on various benchmark synthetic and real
datasets validate the efficiency and effectiveness of our proposed algorithm.
Comment: 5 pages, 3 figures, 4 tables
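A rough sketch of the underlying idea (a hypothetical minimal example, not the paper's restructuring algorithm): because an ELM's hidden layer is random and fixed, accommodating a new label can reduce to a least-squares solve for one extra output column, leaving existing weights untouched. The function names (`elm_fit`, `elm_add_label`) and the toy data below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=50):
    """Basic ELM: random fixed hidden layer, least-squares output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (never trained)
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # output weights via least squares
    return W, b, beta

def elm_add_label(X, y_new, W, b, beta):
    """Add one output neuron for a newly arrived label; hidden layer unchanged."""
    H = np.tanh(X @ W + b)
    beta_new, *_ = np.linalg.lstsq(H, y_new[:, None], rcond=None)
    return np.hstack([beta, beta_new])            # output layer grows by one column

# Toy multi-label data: two initial labels, then a third label arrives.
X = rng.normal(size=(200, 5))
Y = (X[:, :2] > 0).astype(float)
W, b, beta = elm_fit(X, Y)
y3 = (X[:, 2] > 0).astype(float)
beta = elm_add_label(X, y3, W, b, beta)
```

Since the hidden layer is never retrained, adding a label costs only one linear solve, which is what makes this style of classifier attractive for streaming data with an unknown number of labels.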
An Extreme Learning Machine-Based Method for Computational PDEs in Higher Dimensions
We present two effective methods for solving high-dimensional partial
differential equations (PDEs) based on randomized neural networks. Motivated by
the universal approximation property of this type of network, both methods
extend the extreme learning machine (ELM) approach from low to high dimensions.
With the first method, the unknown solution field in high dimensions is
represented by a randomized feed-forward neural network, in which the
hidden-layer parameters are randomly assigned and fixed while the output-layer
parameters are trained. The PDE and the boundary/initial conditions, as well as
the continuity conditions (for the local variant of the method), are enforced
on a set of random interior/boundary collocation points. The resultant linear
or nonlinear algebraic system, through its least squares solution, provides the
trained values for the network parameters. With the second method, the
high-dimensional PDE problem is reformulated through a constrained expression
based on an Approximate variant of the Theory of Functional Connections
(A-TFC), which avoids the exponential growth in the number of terms of TFC as
the dimension increases. The free field function in the A-TFC constrained
expression is represented by a randomized neural network and is trained by a
procedure analogous to the first method. We present ample numerical simulations
for a number of high-dimensional linear/nonlinear stationary/dynamic PDEs to
demonstrate their performance. These methods produce accurate solutions to
high-dimensional PDEs, with errors approaching machine accuracy in relatively
low dimensions. Compared with the physics-informed neural network (PINN)
method, the current methods are both more cost-effective and more accurate for
high-dimensional PDEs.
Comment: 38 pages, 17 tables, 25 figures
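The first method's core recipe can be sketched in one dimension (a hypothetical minimal example, not the paper's high-dimensional method or its A-TFC variant): random tanh features are fixed, the PDE and boundary conditions are enforced at collocation points, and the output weights come from a single least-squares solve. The test problem, weight ranges, and point counts below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_hidden = 100
w = rng.uniform(-5, 5, n_hidden)   # random hidden weights, fixed after assignment
c = rng.uniform(-5, 5, n_hidden)   # random hidden biases, fixed

def feats(x):
    """Hidden-layer features tanh(w*x + c) at points x."""
    return np.tanh(np.outer(x, w) + c)

def feats_dd(x):
    """Second derivative of each feature w.r.t. x: w^2 * tanh''(w*x + c)."""
    t = np.tanh(np.outer(x, w) + c)
    return (w**2) * (-2.0 * t * (1.0 - t**2))

# Collocation for u'' = -pi^2 sin(pi x) on (0,1), u(0) = u(1) = 0
# (exact solution: u(x) = sin(pi x)).
x_in = rng.uniform(0, 1, 200)                       # random interior points
A = np.vstack([feats_dd(x_in),                      # PDE rows
               feats(np.array([0.0, 1.0]))])        # boundary-condition rows
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * x_in), [0.0, 0.0]])
beta, *_ = np.linalg.lstsq(A, rhs, rcond=None)      # trained output weights

x_test = np.linspace(0, 1, 50)
err = np.max(np.abs(feats(x_test) @ beta - np.sin(np.pi * x_test)))
```

Because the problem is linear and only the output layer is trained, the whole "training" step is one linear least-squares solve, which is the source of the cost advantage over iteratively trained PINNs that the abstract refers to.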