
    Learning Opposites Using Neural Networks

    Many studies have successfully extended algorithms such as evolutionary algorithms, reinforcement learning agents, and neural networks using "opposition-based learning" (OBL). Two types of opposites have been defined in the literature, namely type-I and type-II. The former are linear in nature and applicable in the variable space, and hence easy to calculate. Type-II opposites, in contrast, capture the "oppositeness" in the output space. In fact, type-I opposites are a special case of type-II opposites in which inputs and outputs have a linear relationship. In many real-world problems, however, inputs and outputs exhibit a nonlinear relationship, so type-II opposites are expected to better capture the sense of "opposition" in terms of the input-output relation. In the absence of any knowledge about the problem at hand, there is no intuitive way to calculate type-II opposites. In this paper, we introduce an approach to learn type-II opposites from given inputs and their outputs using artificial neural networks (ANNs). We first perform opposition mining on the sample data, and then use the mined data to learn the relationship between an input x and its opposite x̆. We validate our algorithm on various benchmark functions, comparing it against a recently introduced evolving fuzzy inference approach. The results show that the neural approach learns opposites more accurately. This creates new possibilities for integrating oppositional schemes within existing algorithms, promising a potential increase in convergence speed and/or accuracy.
    Comment: To appear in proceedings of the 23rd International Conference on Pattern Recognition (ICPR 2016), Cancun, Mexico, December 2016
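    A minimal sketch of the two notions of opposition (the opposition-mining rule below is an illustrative reading of the abstract, not the authors' exact algorithm; the function names and the sine benchmark are assumptions):

        import numpy as np

        def type_i_opposite(x, a, b):
            # Type-I opposite: reflect x across the midpoint of [a, b].
            return a + b - x

        def mine_type_ii_pairs(X, y):
            # Opposition mining (assumed rule): the opposite output of y_i is
            # y_min + y_max - y_i; pair each x_i with the sample whose output
            # is closest to that target.
            targets = y.min() + y.max() - y
            idx = np.abs(y[None, :] - targets[:, None]).argmin(axis=1)
            return X, X[idx]

        rng = np.random.default_rng(0)
        X = rng.uniform(0.0, 1.0, size=200)
        y = np.sin(2.5 * X)                    # a nonlinear test function
        inputs, opposites = mine_type_ii_pairs(X, y)
        # An ANN trained on (inputs, opposites) would then approximate x -> x̆.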

    Opening the Neural Network Black Box: An Algorithm for Extracting Rules from Function Approximating Artificial Neural Networks

    Artificial neural networks have been successfully applied to a variety of business applications involving classification and function approximation. In many such applications, it is desirable to extract knowledge from trained neural networks so that users can gain a better understanding of the solution. Existing research has focused primarily on extracting symbolic rules for classification problems, with few methods devised for function approximation problems. To fill this gap, we propose an approach to extract rules from neural networks that have been trained to solve function approximation problems. The extracted rules divide the data samples into groups; for all samples within a group, a linear function of the relevant input attributes approximates the network output. Experimental results show that the proposed approach generates rules that are more accurate than existing methods based on decision trees and regression.
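    A rough sketch of the piecewise-linear idea (grouping by k-means is a stand-in of mine; the paper's own grouping criterion may differ, and net here is any trained model exposed as a callable):

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import LinearRegression

        def extract_linear_rules(net, X, n_groups=4):
            # Group the samples, then fit one linear model per group so that
            # each rule reads: "if x falls in group g, predict w.x + b".
            y = net(X)                          # trained network's outputs
            groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(X)
            rules = []
            for g in range(n_groups):
                mask = groups == g
                lin = LinearRegression().fit(X[mask], y[mask])
                rules.append((g, lin.coef_, lin.intercept_))
            return groups, rules

        # Any callable standing in for the trained network works here.
        X = np.random.default_rng(1).uniform(-1, 1, size=(300, 2))
        net = lambda X: np.tanh(X[:, 0]) + 0.5 * X[:, 1] ** 2
        groups, rules = extract_linear_rules(net, X)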

    Mixed transfer function neural networks for knowledge acquisition

    Modeling helps to understand and predict the outcome of complex systems. Inductive modeling methodologies are beneficial for systems whose uncertainties do not permit an accurate physical model. However, inductive models such as artificial neural networks (ANNs) may suffer from drawbacks including over-fitting and the difficulty of understanding the model itself. This can result in user reluctance to accept the model, or even complete rejection of the modeling results. It is therefore highly desirable to make such inductive models more comprehensible and to automatically determine the model complexity so as to avoid over-fitting. In this paper, we propose a novel type of ANN, the mixed transfer function artificial neural network (MTFANN), which aims to improve the complexity fitting and comprehensibility of the most popular type of ANN, the multilayer perceptron (MLP).
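    A minimal sketch of the defining idea, a hidden layer whose units use different transfer functions (the particular mix and the fixed assignment below are assumptions; the paper also addresses automatic determination of model complexity, which this sketch does not cover):

        import numpy as np

        def mtf_layer(x, W, b, transfers):
            # One hidden layer in which unit j applies its own transfer
            # function, instead of a single activation shared by all units.
            z = x @ W + b
            return np.stack([transfers[j](z[:, j]) for j in range(z.shape[1])],
                            axis=1)

        transfers = [np.tanh, lambda z: z, np.sin,
                     lambda z: 1.0 / (1.0 + np.exp(-z))]   # mixed activations
        rng = np.random.default_rng(2)
        W, b = rng.normal(size=(3, 4)), rng.normal(size=4)
        h = mtf_layer(rng.normal(size=(5, 3)), W, b, transfers)  # shape (5, 4)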