795 research outputs found

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners across multiple disciplines. FNN optimization is viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, and so on. Researchers have adopted these different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms and swarm intelligence, are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that have emerged from FNN optimization practice, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
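To make the metaheuristic alternative to backpropagation concrete, here is a minimal sketch of one such approach: a simple evolution strategy with truncation selection and Gaussian mutation, optimizing the flattened weight vector of a tiny 2-2-1 FNN. The network shape, the XOR fitness task, and all hyperparameters are illustrative assumptions, not taken from the review.

```python
import math
import random

random.seed(0)

# Tiny 2-2-1 feedforward network; 9 weights flattened into one vector:
# 4 input->hidden weights, 2 hidden biases, 2 hidden->output weights, 1 output bias.
def forward(w, x):
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[4])
    h1 = math.tanh(w[2] * x[0] + w[3] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

# XOR as an illustrative task; fitness = negative squared error over the dataset.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def fitness(w):
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA)

def evolve(pop_size=30, generations=300, sigma=0.4):
    pop = [[random.gauss(0, 1) for _ in range(9)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                  # truncation selection
        children = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # Gaussian mutation
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
```

Note that, unlike backpropagation, this search uses only fitness evaluations: no gradients are computed, which is exactly why such methods remain attractive when the loss surface is non-differentiable or riddled with poor local optima.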

    Intelligent Voltage Sag Compensation Using an Artificial Neural Network (ANN)-Based Dynamic Voltage Restorer in MATLAB Simulink

    An innovative Dynamic Voltage Restorer (DVR) system based on Artificial Neural Network (ANN) technology, implemented in MATLAB Simulink, accurately detects and dynamically compensates voltage sags, significantly improving power quality and ensuring a reliable supply to critical loads. Voltage sags are a prevalent power quality concern that can have a significant impact on sensitive electrical equipment. This work presents an approach to addressing voltage sags through the operation of a DVR based on ANN technology. The proposed system, developed in MATLAB Simulink, leverages the ANN's capability to accurately detect voltage sags and dynamically restore the voltage at the affected load. The ANN is trained on a comprehensive dataset of voltage sag events, enabling it to learn the intricate relationships between sag characteristics and optimal compensation techniques. By integrating the trained ANN into the DVR control scheme, real-time compensation for voltage sags is achieved. The effectiveness of the proposed system is rigorously evaluated through extensive simulations and performance analysis, and the results demonstrate the superior performance of the ANN-based DVR in terms of voltage sag detection accuracy and restoration precision. The proposed system thus presents an intelligent and adaptive solution for voltage sag compensation, ensuring a reliable and high-quality power supply to critical loads. This research contributes to the advancement of power quality enhancement techniques, facilitating the implementation of intelligent power systems.
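The detection step the ANN automates can be illustrated with the classical rule it learns to approximate: compute the per-cycle RMS voltage and flag a sag when it drops below roughly 0.9 per unit. The nominal frequency, sampling rate, nominal voltage, and threshold below are illustrative assumptions, not values from the paper.

```python
import math

F_NOM = 50.0          # assumed nominal frequency, Hz
FS = 5000.0           # assumed sampling rate, Hz
V_NOM_RMS = 230.0     # assumed nominal RMS voltage, V
SAG_THRESHOLD = 0.9   # per-unit RMS below which a sag is flagged

def rms(samples):
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def detect_sag(waveform):
    """Return per-cycle (rms_pu, sag_flag) pairs for a sampled waveform."""
    cycle = int(FS / F_NOM)  # samples per fundamental cycle
    out = []
    for i in range(0, len(waveform) - cycle + 1, cycle):
        v_pu = rms(waveform[i:i + cycle]) / V_NOM_RMS
        out.append((v_pu, v_pu < SAG_THRESHOLD))
    return out

# Synthetic test signal: nominal voltage for 2 cycles, then a 50% sag for 2 cycles.
def sine(amplitude_rms, cycles):
    n = int(FS / F_NOM) * cycles
    peak = amplitude_rms * math.sqrt(2)
    return [peak * math.sin(2 * math.pi * F_NOM * k / FS) for k in range(n)]

signal = sine(V_NOM_RMS, 2) + sine(0.5 * V_NOM_RMS, 2)
flags = [sag for _, sag in detect_sag(signal)]
```

A trained ANN replaces this fixed threshold rule with a learned mapping from waveform features to sag detection and compensation commands, which is where the adaptivity claimed in the abstract comes from.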

    Efficient Mapping of Neural Network Models on a Class of Parallel Architectures.

    This dissertation develops a formal and systematic methodology for efficiently mapping several contemporary artificial neural network (ANN) models onto k-ary n-cube parallel architectures (KNCs). We apply the general mapping to several important ANN models, including feedforward ANNs trained with the backpropagation algorithm, radial basis function networks, cascade correlation learning, and adaptive resonance theory networks. Our approach utilizes a parallel task graph representing the concurrent operations of the ANN model during training. The mapping of the ANN is performed in two steps. First, the parallel task graph of the ANN is mapped to a virtual KNC of compatible dimensionality; this involves decomposing each operation into its atomic tasks. Second, the dimensionality of the virtual KNC architecture is recursively reduced through a sequence of transformations until a desired metric is optimized. We refer to this process as folding the virtual architecture. The optimization criteria we consider in this dissertation are defined in terms of the iteration time of the algorithm on the folded architecture. If necessary, the mapping scheme may utilize a subset of the processors of a given KNC architecture when this results in the most efficient simulation. A unique feature of our mapping is that it systematically selects an appropriate degree of parallelism, leading to a highly efficient realization of the ANN model on KNC architectures. A novel feature of our work is its ability to efficiently map unit-allocating ANNs: these networks possess a dynamic structure that grows during training. We present a highly efficient scheme for simulating such networks on existing KNC parallel architectures. We assume an upper bound on the size of the neural network and perform the folding such that the iteration time of the largest network is minimized. We show that our mapping leads to near-optimal simulation of smaller instances of the neural network. In addition, with our mapping no data migration or task rescheduling is needed as the size of the network grows.
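The abstract does not spell out the folding transformations, but the underlying bookkeeping can be sketched: a k-ary n-cube has k^n nodes, each addressed by n base-k digits, and one elementary fold maps a virtual cube onto a physical cube of one lower dimension, with each physical processor simulating k virtual nodes. This is an illustrative assumption about what a single fold step could look like, not the dissertation's actual scheme.

```python
from itertools import product

# A k-ary n-cube has k**n nodes, each addressed by n base-k digits.
def knc_nodes(k, n):
    return list(product(range(k), repeat=n))

# One simple notion of "folding": collapse the last dimension, so a
# virtual k-ary n-cube is simulated on a physical k-ary (n-1)-cube.
# Each physical node then hosts k virtual nodes (tasks).
def fold_last_dim(k, n):
    assignment = {}
    for addr in knc_nodes(k, n):
        assignment.setdefault(addr[:-1], []).append(addr)
    return assignment

# Example: fold a 3-ary 2-cube (9 virtual nodes) onto a 3-ary 1-cube (3 processors).
folded = fold_last_dim(3, 2)
```

Applying such folds recursively is what trades parallelism for processor count; the dissertation's contribution is choosing the sequence of folds that minimizes iteration time rather than folding arbitrarily.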

    Adaptive learning in a compartmental model of visual cortex—how feedback enables stable category learning and refinement

    The categorization of real-world objects is often reflected in the similarity of their visual appearances. Such categories of objects do not necessarily form disjoint sets, either semantically or visually. The relationship between categories can often be described in terms of a hierarchical structure. For instance, tigers and leopards form two separate mammalian categories, but both belong to the category of felines; in other words, tigers and leopards are subcategories of the category Felidae. In recent decades, the unsupervised learning of categories of visual input stimuli has been addressed by numerous approaches in machine learning as well as in the computational neurosciences. However, the question of what kinds of mechanisms might be involved in the process of subcategory learning, or category refinement, remains a topic of active investigation. We propose a recurrent computational network architecture for the unsupervised learning of categorical and subcategorical visual input representations. During learning, the connection strengths of bottom-up weights from the input to higher-level category representations are adapted according to the input activity distribution. In a similar manner, top-down weights learn to encode the characteristics of a specific stimulus category. Feedforward and feedback learning in combination realize an associative memory mechanism, enabling the selective top-down propagation of a category's feedback weight distribution. We suggest that the difference between the expected input encoded in the projective field of a category node and the current input pattern controls the amplification of feedforward-driven representations. Sufficiently large differences trigger the recruitment of new representational resources and the establishment of (sub)category representations. We demonstrate the temporal evolution of such learning and show how the approach successfully establishes category and subcategory representations.
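The recruitment rule described above, where a large mismatch between a category's expected input and the current input triggers new representational resources, can be sketched as a minimal prototype-based learner. The similarity measure, vigilance threshold, and learning rate here are illustrative assumptions, not the paper's actual model.

```python
# Minimal mismatch-driven category learner: each category stores a
# prototype (its "expected input"); inputs that match no prototype
# closely enough recruit a new category.
VIGILANCE = 0.75   # assumed mismatch threshold
LEARN_RATE = 0.5   # assumed prototype update rate

def similarity(proto, x):
    # Cosine similarity between prototype and input.
    dot = sum(p * v for p, v in zip(proto, x))
    norm_p = sum(p * p for p in proto) ** 0.5
    norm_x = sum(v * v for v in x) ** 0.5
    return dot / (norm_p * norm_x) if norm_p and norm_x else 0.0

def learn(inputs):
    prototypes = []
    labels = []
    for x in inputs:
        scores = [similarity(p, x) for p in prototypes]
        if scores and max(scores) >= VIGILANCE:
            j = scores.index(max(scores))        # best-matching category
            prototypes[j] = [p + LEARN_RATE * (v - p)
                             for p, v in zip(prototypes[j], x)]  # refine prototype
        else:
            prototypes.append(list(x))           # recruit a new category
            j = len(prototypes) - 1
        labels.append(j)
    return prototypes, labels

# Two visually distinct "categories" of 4-d inputs.
stream = [(1, 1, 0, 0), (0.9, 1, 0.1, 0), (0, 0, 1, 1), (0.1, 0, 1, 0.9)]
protos, labels = learn(stream)
```

Lowering the vigilance threshold merges inputs into coarser categories, while raising it splits them into finer subcategories, which mirrors the category-refinement behavior the paper investigates with its feedforward/feedback learning dynamics.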