
    Multi-learner based recursive supervised training

    In this paper, we propose the Multi-Learner Based Recursive Supervised Training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: the entire dataset is trained, the best-learnt patterns are picked out, and the process is repeated with the remaining patterns. Instead of having a single learner classify all data during each recursion, an appropriate learner is chosen from a set of three learners based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic algorithm learner used in previous approaches. In this way MLRT seeks to identify the inherent characteristics of the dataset and use them to train on the data accurately and efficiently. Empirically, MLRT performs well compared to RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets, the time taken by MLRT is considerably lower than that of other systems of comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency of the system and make it more scalable for future updates. Their performance is similar to that of the original MLRT system.
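
    A minimal sketch of the recursive multi-learner idea described in the abstract (not the authors' MLRT implementation): at each recursion, several candidate learners are fit on the remaining patterns, the best one is kept, and the patterns it classifies correctly are removed before recursing. The particular learner choices, stopping rule, and scoring used here are illustrative assumptions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

def recursive_multi_learner(X, y, max_rounds=5):
    """Fit a chain of learners, each trained on what the previous ones missed."""
    learners = []
    remaining = np.arange(len(y))
    for _ in range(max_rounds):
        # Stop when too few patterns or only one class is left.
        if len(remaining) < 5 or len(np.unique(y[remaining])) < 2:
            break
        candidates = [DecisionTreeClassifier(max_depth=3),
                      LogisticRegression(max_iter=500),
                      KNeighborsClassifier(n_neighbors=3)]
        # Pick the learner that fits the current subset best (each gets fitted here).
        best = max(candidates,
                   key=lambda c: c.fit(X[remaining], y[remaining])
                                  .score(X[remaining], y[remaining]))
        learners.append(best)
        pred = best.predict(X[remaining])
        remaining = remaining[pred != y[remaining]]  # recurse on misclassified patterns
    return learners
```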

    Radial Basis Function Neural Networks: A Review

    Radial Basis Function neural networks (RBFNNs) represent an attractive alternative to other neural network models. One reason is that they form a unifying link between function approximation, regularization, noisy interpolation, classification, and density estimation. Training RBF neural networks is also faster than training multi-layer perceptron networks. RBFNN learning is usually split into an unsupervised part, where the centers and widths of the Gaussian basis functions are set, and a linear supervised part for weight computation. This paper reviews various learning methods for determining the centers, widths, and synaptic weights of RBFNNs. In addition, we point to some applications of RBFNNs in various fields. Finally, we name software that can be used for implementing RBFNNs.
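
    An illustrative two-stage RBFNN training routine in the spirit of the split described above: an unsupervised step fixes Gaussian centers and widths, then a linear supervised step solves for the output weights. The k-means centers and the shared-width heuristic used here are common textbook choices, not the specific methods surveyed in the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def train_rbfnn(X, y, n_centers=10):
    # Unsupervised part: centers via k-means, a shared width from center spacing.
    centers = KMeans(n_clusters=n_centers, n_init=10).fit(X).cluster_centers_
    d_max = np.max(np.linalg.norm(centers[:, None] - centers[None, :], axis=-1))
    sigma = d_max / np.sqrt(2 * n_centers)
    # Supervised part: hidden activations, then linear least squares for the weights.
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centers, sigma, w

def predict_rbfnn(X, centers, sigma, w):
    Phi = np.exp(-np.linalg.norm(X[:, None] - centers[None, :], axis=-1) ** 2
                 / (2 * sigma ** 2))
    return Phi @ w
```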

    Optimization of ANN Structure Using Adaptive PSO & GA and Performance Analysis Based on Boolean Identities

    In this paper, a novel heuristic structure optimization technique is proposed for neural networks using adaptive PSO & GA on Boolean identities to improve the performance of the Artificial Neural Network (ANN). The selection of the number of hidden layers and nodes has a significant impact on the performance of a neural network, yet it is usually decided in an ad hoc manner. The optimization of the architecture and weights of a neural network is a complex task. In this regard, evolutionary techniques based on Adaptive Particle Swarm Optimization (APSO) and an Adaptive Genetic Algorithm (AGA) are used to select an optimal number of hidden layers and nodes of the neural controller, for better performance and low training errors, through Boolean identities. The hidden nodes are adapted through the generations until they reach the optimal number. Boolean operators such as AND, OR, and XOR are used for the performance analysis of this technique.
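
    A minimal sketch of evolutionary architecture search of the kind outlined above: a small genetic algorithm evolves the number of hidden nodes and evaluates each candidate on a Boolean identity (XOR here). The adaptive PSO/GA mechanics of the paper are not reproduced; this only illustrates the evaluate-and-evolve loop, with population size, mutation, and fitness penalty as assumed values.

```python
import random
import numpy as np
from sklearn.neural_network import MLPClassifier

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 0])  # XOR truth table

def fitness(n_hidden):
    clf = MLPClassifier(hidden_layer_sizes=(n_hidden,), solver="lbfgs",
                        max_iter=2000, random_state=0)
    clf.fit(X, y)
    return clf.score(X, y) - 0.01 * n_hidden  # accuracy, lightly penalising size

population = [random.randint(2, 16) for _ in range(6)]
for generation in range(10):
    population.sort(key=fitness, reverse=True)
    parents = population[:3]
    # Mutate the best candidates to form the next generation.
    population = parents + [max(2, p + random.choice([-2, -1, 1, 2]))
                            for p in parents]
best = max(population, key=fitness)
print("selected number of hidden nodes:", best)
```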

    Neural networks in geophysical applications

    Neural networks are increasingly popular in geophysics. Because they are universal approximators, these tools can approximate any continuous function with arbitrary precision. Hence, they may yield important contributions to finding solutions to a variety of geophysical applications. However, knowledge of many methods and techniques recently developed to increase the performance and to facilitate the use of neural networks does not seem to be widespread in the geophysical community. Therefore, the power of these tools has not yet been explored to its full extent. In this paper, techniques are described for faster training, better overall performance (i.e., generalization), and the automatic estimation of network size and architecture.

    State-of-the-art in aerodynamic shape optimisation methods

    Aerodynamic optimisation has become an indispensable component of any aerodynamic design over the past 60 years, with applications to aircraft, cars, trains, bridges, wind turbines, internal pipe flows, and cavities, among others, and is thus relevant to many facets of technology. With advancements in computational power, automated design optimisation procedures have become more capable; however, there is ambiguity and bias throughout the literature with regard to the relative performance of optimisation architectures and the algorithms employed. This paper provides a well-balanced critical review of the dominant optimisation approaches that have been integrated with aerodynamic theory for the purpose of shape optimisation. A total of 229 papers, published in more than 120 journals and conference proceedings, have been classified into 6 different optimisation algorithm approaches. The material cited includes some of the most well-established authors and publications in the field of aerodynamic optimisation. This paper aims to eliminate bias toward certain algorithms by analysing the limitations, drawbacks, and benefits of the most utilised optimisation approaches. The review provides comprehensive but straightforward insight for non-specialists and a reference detailing the current state of the art for specialist practitioners.

    Metaheuristic design of feedforward neural networks: a review of two decades of research

    Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners of multiple disciplines. FNN optimization is often viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, learning environment, etc. Researchers adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect the various research directions that emerged out of FNN optimization practices, such as evolving neural networks (NNs), cooperative coevolution NNs, complex-valued NNs, deep learning, extreme learning machines, quantum NNs, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
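
    A minimal numpy sketch of the general idea of training FNN weights with a metaheuristic instead of gradient descent: a (1+1) evolution strategy perturbs the whole weight vector and keeps mutations that lower the loss. The network size, toy task, and mutation scale are illustrative assumptions, not any particular method surveyed in the article.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # simple toy target

def unpack(theta, n_in=2, n_hidden=8):
    W1 = theta[:n_in * n_hidden].reshape(n_in, n_hidden)
    b1 = theta[n_in * n_hidden:n_in * n_hidden + n_hidden]
    W2 = theta[-n_hidden - 1:-1]
    b2 = theta[-1]
    return W1, b1, W2, b2

def loss(theta):
    W1, b1, W2, b2 = unpack(theta)
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))     # sigmoid output
    return np.mean((p - y) ** 2)

theta = rng.normal(scale=0.5, size=2 * 8 + 8 + 8 + 1)
for step in range(5000):
    candidate = theta + rng.normal(scale=0.1, size=theta.shape)
    if loss(candidate) <= loss(theta):           # keep improving mutations only
        theta = candidate
print("final MSE:", loss(theta))
```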

    Hybridization of neural network models for the prediction of Extreme Significant Wave Height segments

    This work proposes a hybrid methodology for the detection and prediction of Extreme Significant Wave Height (ESWH) periods in oceans. In a first step, the wave height time series is approximated by a labeled sequence of segments, which is obtained using a genetic algorithm in combination with a likelihood-based segmentation (GA+LS). Then, an artificial neural network classifier with hybrid basis functions is trained with a multiobjective evolutionary algorithm (MOEA) in order to predict the occurrence of future ESWH segments based on past values. The methodology is applied to a buoy in the Gulf of Alaska and another in Puerto Rico. The results show that the GA+LS is able to segment and group the ESWH values, and the neural network models obtained by the MOEA make good predictions, maintaining a balance between global accuracy and minimum sensitivity for the detection of ESWH events. Moreover, hybrid neural networks are shown to lead to better results than pure models.
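
    A toy sketch of the prediction stage only: a series is split into segments, each segment is labelled as extreme or not, and a classifier learns to predict the next segment's label from past segment features. The fixed-length, quantile-thresholded segmentation and the standard MLP used here are stand-in assumptions, not the paper's GA + likelihood-based segmentation or its MOEA-trained hybrid network; the data is synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
series = np.abs(rng.normal(2.0, 1.0, size=6000))     # synthetic wave heights
seg_len, n_past = 24, 4
segments = series[:len(series) // seg_len * seg_len].reshape(-1, seg_len)
feats = np.c_[segments.mean(axis=1), segments.max(axis=1)]
labels = (segments.max(axis=1) > np.quantile(series, 0.95)).astype(int)

# Predict the label of segment t from features of the n_past previous segments.
X = np.array([feats[t - n_past:t].ravel() for t in range(n_past, len(labels))])
y = labels[n_past:]
clf = MLPClassifier(hidden_layer_sizes=(16,), solver="lbfgs",
                    max_iter=1000, random_state=0)
clf.fit(X[:-50], y[:-50])
print("held-out accuracy:", clf.score(X[-50:], y[-50:]))
```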

    Multilayer Perceptron: Architecture Optimization and Training

    The multilayer perceptron has a wide range of classification and regression applications in many fields, such as pattern recognition, voice recognition, and classification problems. However, the choice of architecture has a great impact on the convergence of these networks. In the present paper we introduce a new approach to optimize the network architecture; to solve the resulting model we use a genetic algorithm, and we train the network with a back-propagation algorithm. The numerical results assess the effectiveness of the theoretical results shown in this paper and the advantages of the new model compared to previous models in the literature.
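
    A sketch of the training stage mentioned above: plain back-propagation (batch gradient descent) for a one-hidden-layer MLP in numpy. The architecture is fixed here, whereas in the paper it would be the one selected by the genetic algorithm; the layer size, learning rate, and toy task are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 0.5).astype(float).reshape(-1, 1)

n_hidden, lr = 8, 2.0
W1 = rng.normal(scale=0.5, size=(2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.5, size=(n_hidden, 1)); b2 = np.zeros(1)

for epoch in range(5000):
    # Forward pass: tanh hidden layer, sigmoid output.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass for the mean squared error loss.
    dp = (p - y) * p * (1 - p) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(axis=0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("training accuracy:", np.mean((p > 0.5) == y))
```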

    State-of-the-Art Nonprobabilistic Finite Element Analyses

    The finite element analysis of a mechanical system is conventionally performed in the context of deterministic inputs. However, uncertainties associated with material properties, geometric dimensions, subjective experience, boundary conditions, and external loads are ubiquitous in engineering applications. The most popular techniques for handling these uncertain parameters are probabilistic methods, in which uncertainties are modeled as random variables or stochastic processes based on a large amount of statistical information about each uncertain parameter. Nevertheless, subjective results may be obtained when sufficient statistical information is unavailable; in such cases, nonprobabilistic methods can be employed instead, which has led to elegant procedures for nonprobabilistic finite element analysis. In this chapter, each nonprobabilistic finite element analysis method is decomposed into two individual parts, i.e., the core algorithm and the preprocessing procedure. In this context, four types of algorithms and two typical preprocessing procedures, as well as their effectiveness, are described in detail, based on which novel hybrid algorithms can be conceived for specific problems and future work in this research field can be fostered.
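
    A minimal sketch of one of the simplest nonprobabilistic ideas in this area: material parameters are modelled as intervals rather than random variables, and response bounds are estimated by solving the finite element system at every vertex of the parameter box (a brute-force vertex method, not one of the chapter's specific algorithms). The two-spring model, load, and interval widths are illustrative assumptions.

```python
import itertools
import numpy as np

# Two springs in series, fixed at one end, load F applied at the free end.
F = 1000.0                                       # N
k1_int, k2_int = (9e3, 11e3), (4.5e3, 5.5e3)     # interval stiffnesses, N/m

def tip_displacement(k1, k2):
    # Assemble the reduced 2x2 stiffness matrix and solve K u = f.
    K = np.array([[k1 + k2, -k2],
                  [-k2,      k2]])
    f = np.array([0.0, F])
    return np.linalg.solve(K, f)[1]

# Evaluate all vertex combinations of the interval parameters to bound the response.
values = [tip_displacement(k1, k2)
          for k1, k2 in itertools.product(k1_int, k2_int)]
print("tip displacement bounds [m]:", min(values), max(values))
```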