38,873 research outputs found
An adaptive and modular framework for evolving deep neural networks
Santos, F. J. J. B., Gonçalves, I., & Castelli, M. (2023). Neuroevolution with box mutation: An adaptive and modular framework for evolving deep neural networks. Applied Soft Computing, 147(November), 1-15. [110767]. https://doi.org/10.1016/j.asoc.2023.110767
Funding: This work is funded by national funds through the FCT - Foundation for Science and Technology, I.P., within the scope of the projects CISUC - UID/CEC/00326/2020, UIDB/04152/2020 - Centro de Investigação em Gestão de Informação (MagIC)/NOVA IMS, and by the European Social Fund, through the Regional Operational Program Centro 2020.
The pursuit of self-evolving neural networks has driven the emerging field of Evolutionary Deep Learning, which combines the strengths of Deep Learning and Evolutionary Computation. This work presents a novel method for evolving deep neural networks by adapting the principles of Geometric Semantic Genetic Programming, a subfield of Genetic Programming, and the Semantic Learning Machine. Our approach seamlessly integrates evolution through natural selection with the optimization power of backpropagation in deep learning, enabling the incremental growth of neural networks' neurons across generations. By evolving neural networks that achieve nearly 89% accuracy on the CIFAR-10 dataset with relatively few parameters, our method demonstrates remarkable efficiency, evolving in GPU minutes rather than the field standard of GPU days.
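A conceptual sketch of the incremental-growth idea the abstract describes (my illustration, not the authors' implementation): in Semantic Learning Machine-style mutation, a new hidden neuron is appended with a zero outgoing weight, so the mutated child initially behaves exactly like its parent before backpropagation tunes the new connection. All names here (`grow`, `predict`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(params, X):
    """One-hidden-layer network: tanh hidden layer, linear output."""
    W1, b1, w2 = params
    return np.tanh(X @ W1 + b1) @ w2

def grow(params, n_inputs):
    """Return a copy of the network with one extra hidden neuron.
    The new neuron's outgoing weight starts at zero, so the child's
    predictions are initially identical to the parent's."""
    W1, b1, w2 = params
    new_W1 = np.hstack([W1, rng.normal(size=(n_inputs, 1))])
    new_b1 = np.append(b1, rng.normal())
    new_w2 = np.append(w2, 0.0)  # zero outgoing weight: semantics preserved
    return (new_W1, new_b1, new_w2)

X = rng.normal(size=(16, 3))
parent = (rng.normal(size=(3, 2)), rng.normal(size=2), rng.normal(size=2))
child = grow(parent, n_inputs=3)

# Child starts semantically equal to the parent; training would then
# adjust the new neuron's outgoing weight to reduce the loss.
print(np.allclose(predict(parent, X), predict(child, X)))
```

Starting the new neuron at zero output weight is what makes the mutation "safe": fitness can only be refined from the parent's behaviour, never destroyed by the structural change.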
Neural-learning-based force sensorless admittance control for robots with input deadzone
This paper presents a neural-network-based admittance control scheme for robotic manipulators interacting with an unknown environment in the presence of actuator deadzone, without requiring force sensing. Admittance control achieves a compliant behaviour of the manipulator in response to external torques from the unknown environment. Inspired by the broad learning system (BLS), a flattened neural network structure using Radial Basis Functions (RBFs) with an incremental learning algorithm is proposed to estimate the external torque, which avoids retraining if the system is modelled insufficiently. To deal with uncertainties in the robot system, an adaptive neural controller with a dynamic learning framework is developed to ensure tracking performance. Experiments on the Baxter robot demonstrate the effectiveness of the proposed method.
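The RBF approximation at the core of such torque estimators can be sketched as follows (a minimal 1-D toy, not the paper's controller; the signal, centers, and width are all assumptions): Gaussian basis activations are computed against fixed centers, and the output weights are fitted by least squares.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian RBF activations for inputs x (n,) against centers (m,)."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Hypothetical 1-D toy: approximate an unknown torque profile with 8 RBFs.
x = np.linspace(-1.0, 1.0, 50)
torque = np.sin(np.pi * x)  # stand-in for the unknown external torque
Phi = rbf_features(x, centers=np.linspace(-1.0, 1.0, 8), width=0.3)
w, *_ = np.linalg.lstsq(Phi, torque, rcond=None)
estimate = Phi @ w
```

In the BLS spirit, more basis nodes could later be appended as extra columns of `Phi` and the weights updated incrementally, rather than retraining the whole approximator from scratch.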
Adaptive Reorganization of Neural Pathways for Continual Learning with Spiking Neural Networks
The human brain can self-organize rich and diverse sparse neural pathways to incrementally master hundreds of cognitive tasks. However, most existing continual learning algorithms for deep artificial and spiking neural networks cannot adequately auto-regulate the limited resources in the network, so performance drops and energy consumption rises as the number of tasks increases. In this paper, we propose a brain-inspired continual learning algorithm with adaptive reorganization of neural pathways, which employs Self-Organizing Regulation networks to reorganize a single, limited Spiking Neural Network (SOR-SNN) into rich sparse neural pathways that efficiently cope with incremental tasks. The proposed model demonstrates consistent superiority in performance, energy consumption, and memory capacity on diverse continual learning tasks ranging from child-like simple to complex tasks, as well as on the generalized CIFAR100 and ImageNet datasets. In particular, the SOR-SNN model excels at learning more complex tasks as well as more tasks, and can integrate past learned knowledge with information from the current task, showing a backward transfer ability that facilitates the old tasks. Meanwhile, the proposed model exhibits a self-repairing ability against irreversible damage: for pruned networks, it can automatically allocate new pathways from the retained network to recover memory of forgotten knowledge.
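The pathway idea can be illustrated with a toy sketch (my illustration under strong simplifying assumptions, not the SOR-SNN method): each task selects its own sparse binary mask over one shared weight matrix, so different tasks route through different sub-networks of the same limited substrate.

```python
import numpy as np

rng = np.random.default_rng(0)
shared_W = rng.normal(size=(8, 8))  # one shared, resource-limited weight matrix

def pathway_mask(task_id, density=0.3, size=(8, 8)):
    """Deterministic sparse mask per task (a stand-in for the learned
    self-organizing regulation that allocates pathways)."""
    task_rng = np.random.default_rng(task_id)
    return (task_rng.random(size) < density).astype(float)

def forward(x, task_id):
    """One masked ReLU layer: only the task's pathway participates."""
    return np.maximum((shared_W * pathway_mask(task_id)) @ x, 0.0)

x = rng.normal(size=8)
out_a = forward(x, task_id=0)
out_b = forward(x, task_id=1)
```

Because each task touches only its own masked subset of weights, learning a new task need not overwrite the weights another task depends on, which is the intuition behind avoiding catastrophic forgetting with sparse pathways.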
A Survey of Adaptive Resonance Theory Neural Network Models for Engineering Applications
This survey samples from the ever-growing family of adaptive resonance theory
(ART) neural network models used to perform the three primary machine learning
modalities, namely, unsupervised, supervised and reinforcement learning. It
comprises a representative list from classic to modern ART models, thereby
painting a general picture of the architectures developed by researchers over
the past 30 years. The learning dynamics of these ART models are briefly
described, and their distinctive characteristics such as code representation,
long-term memory and corresponding geometric interpretation are discussed.
Useful engineering properties of ART (speed, configurability, explainability,
parallelization and hardware implementation) are examined along with current
challenges. Finally, a compilation of online software libraries is provided. It
is expected that this overview will be helpful to new and seasoned ART
researchers.
A Constructive, Incremental-Learning Network for Mixture Modeling and Classification
Gaussian ARTMAP (GAM) is a supervised-learning adaptive resonance theory (ART) network that uses Gaussian-defined receptive fields. Like other ART networks, GAM incrementally learns and constructs a representation of sufficient complexity to solve the problem it is trained on. GAM's representation is a Gaussian mixture model of the input space, with learned mappings from the mixture components to output classes. We show a close relationship between GAM and the well-known Expectation-Maximization (EM) approach to mixture modeling. GAM outperforms an EM classification algorithm on a classification benchmark, demonstrating the advantage of the ART match criterion for regulating learning, and of the ARTMAP match-tracking operation for incorporating environmental feedback in supervised learning situations.
Office of Naval Research (N00014-95-1-0409)
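For context, the EM approach that the abstract relates GAM to can be sketched in a few lines (a minimal two-component 1-D Gaussian mixture, not the GAM algorithm itself; the data and initialization are my assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic data: two well-separated Gaussian clusters.
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(2.0, 0.5, 200)])

mu = np.array([-1.0, 1.0])    # initial means
sigma = np.array([1.0, 1.0])  # initial standard deviations
pi = np.array([0.5, 0.5])     # initial mixing weights

for _ in range(50):
    # E-step: responsibility of each component for each point
    dens = pi * np.exp(-((data[:, None] - mu) ** 2) / (2 * sigma ** 2)) / sigma
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from responsibility-weighted data
    n_k = resp.sum(axis=0)
    mu = (resp * data[:, None]).sum(axis=0) / n_k
    sigma = np.sqrt((resp * (data[:, None] - mu) ** 2).sum(axis=0) / n_k)
    pi = n_k / len(data)
```

Both EM and GAM maintain Gaussian components, but EM fits them in batch passes over all data, whereas GAM grows and updates its components incrementally, one sample at a time, under the ART match criterion.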