
    Theoretical Interpretations and Applications of Radial Basis Function Networks

    In medical applications, Radial Basis Function Networks (RBFNs) are usually used simply as Artificial Neural Networks. However, RBFNs are Knowledge-Based Networks that can be interpreted in several ways: as Artificial Neural Networks, Regularization Networks, Support Vector Machines, Wavelet Networks, Fuzzy Controllers, Kernel Estimators, or Instance-Based Learners. A survey of these interpretations and of their corresponding learning algorithms is provided, as well as a brief survey of dynamic learning algorithms. These interpretations of RBFNs can suggest applications that are particularly interesting in medical domains.
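    As a rough illustration of the neural-network reading of an RBFN: Gaussian basis functions centred at points chosen by k-means, followed by a linear output layer fit by least squares. This is a minimal sketch under those assumptions; the function names, the use of scikit-learn, and the hyperparameters are illustrative and not taken from the survey.

```python
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centres, width):
    # Gaussian basis: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbfn(X, y, n_centres=10, width=1.0):
    # Hidden layer: centres from k-means; output layer: least squares
    centres = KMeans(n_clusters=n_centres, n_init=10).fit(X).cluster_centers_
    Phi = rbf_design_matrix(X, centres, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return centres, w

def predict_rbfn(X, centres, w, width=1.0):
    return rbf_design_matrix(X, centres, width) @ w
```

    Swapping the Gaussian for another kernel, or regularizing the least-squares step, moves the same code toward the Regularization Network and Kernel Estimator readings mentioned above.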

    Learning Opposites with Evolving Rules

    The idea of opposition-based learning was introduced ten years ago. Since then, a noteworthy group of researchers has used notions of oppositeness to improve existing optimization and learning algorithms. Among others, evolutionary algorithms, reinforcement agents, and neural networks have reportedly been extended into opposition-based versions to become faster and/or more accurate. However, most works still use a simple notion of opposites, namely linear (or type-I) opposition, which for each $x\in[a,b]$ assigns the opposite $\breve{x}_I=a+b-x$. This, of course, is a very naive estimate of the actual or true (non-linear) opposite $\breve{x}_{II}$, which has been called the type-II opposite in the literature. In the absence of any knowledge about the function $y=f(\mathbf{x})$ that we need to approximate, there seems to be no alternative to the naivety of type-I opposition if one intends to utilize oppositional concepts. But the question arises: if, as reported throughout the literature, the naive opposite estimate $\breve{x}_I$ already yields some gain in accuracy and time savings, what could we gain, in terms of even higher accuracy and further reduction in computational complexity, if we could generate and employ true opposites? This work introduces an approach to approximating type-II opposites using evolving fuzzy rules, after first performing opposition mining. We show with multiple examples that learning true opposites is possible when we mine the opposites from the training data and subsequently approximate $\breve{x}_{II}=f(\mathbf{x},y)$.

    Comment: Accepted for publication in The 2015 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE 2015), August 2-5, 2015, Istanbul, Turkey
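    A hedged sketch of the two notions of opposite may help. Type-I opposition is the closed-form mirror $\breve{x}_I=a+b-x$; the opposition-mining step can be approximated, for a scalar output, by pairing each training sample with the sample whose output is closest to the mirrored output. This illustrates the idea only and is not the paper's evolving fuzzy-rule learner:

```python
import numpy as np

def type_i_opposite(x, a, b):
    # Linear (type-I) opposite on the interval [a, b]
    return a + b - x

def mine_type_ii_opposites(X, y):
    # Naive opposition mining for a scalar output y = f(x):
    # for each sample i, mirror its output within the observed output
    # range, then take the training point whose output is closest to
    # that mirror as an estimate of the true (type-II) opposite input.
    y_mirror = y.min() + y.max() - y
    idx = np.abs(y[None, :] - y_mirror[:, None]).argmin(axis=1)
    return X[idx]
```

    The mined pairs (sample, estimated opposite) are exactly the kind of training data from which a rule-based approximator of $\breve{x}_{II}=f(\mathbf{x},y)$ could then be learned.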

    MODELLING EXPECTATIONS WITH GENEFER - AN ARTIFICIAL INTELLIGENCE APPROACH

    Economic modelling of financial markets means modelling highly complex systems in which expectations can be the dominant driving forces. It is therefore necessary to focus on how agents form their expectations. We believe that they look for patterns, hypothesize, try, make mistakes, learn, and adapt. Agents' bounded rationality leads us to a rule-based approach, which we model using Fuzzy Rule-Bases. For example, if a single agent believes the exchange rate is determined by a set of possible inputs and is asked to put their relationship into words, his answer will probably reveal a fuzzy nature, such as: "IF the inflation rate in the EURO-Zone is low and the GDP growth rate is larger than in the US THEN the EURO will rise against the USD". 'Low' and 'larger' are fuzzy terms which give a gradual linguistic meaning to crisp intervals in the respective universes of discourse. In order to learn a Fuzzy Rule-Base from examples, we introduce Genetic Algorithms and Artificial Neural Networks as learning operators. These examples can either be empirical data or originate from an economic simulation model. The software GENEFER (GEnetic NEural Fuzzy ExplorER) has been developed for designing such a Fuzzy Rule-Base. The design process is modular and comprises Input Identification, Fuzzification, Rule-Base Generating, and Rule-Base Tuning. The two latter steps make use of genetic and neural learning algorithms for optimizing the Fuzzy Rule-Base.
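    To make the example rule concrete, the sketch below evaluates it with hand-picked membership functions and the min operator as the fuzzy AND. The membership shapes and breakpoints are invented for illustration and are not taken from GENEFER:

```python
def mu_low_inflation(rate):
    # "low": full membership below 1%, falling linearly to 0 at 3%
    if rate <= 1.0:
        return 1.0
    if rate >= 3.0:
        return 0.0
    return (3.0 - rate) / 2.0

def mu_growth_larger(gdp_euro, gdp_us):
    # "larger than in the US": membership grows with the growth gap,
    # saturating at a 2-percentage-point difference
    gap = gdp_euro - gdp_us
    return max(0.0, min(1.0, gap / 2.0))

def rule_euro_rises(inflation, gdp_euro, gdp_us):
    # IF inflation is low AND growth is larger THEN EURO rises
    # (min operator as the fuzzy AND)
    return min(mu_low_inflation(inflation), mu_growth_larger(gdp_euro, gdp_us))

# e.g. rule_euro_rises(1.5, 2.5, 1.0) -> min(0.75, 0.75) = 0.75
```

    Tuning the breakpoints of such membership functions and selecting the rules themselves are precisely the Rule-Base Generating and Tuning steps that GENEFER delegates to genetic and neural learning.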

    Missing Value Imputation With Unsupervised Backpropagation

    Many data mining and data analysis techniques operate on dense matrices or complete tables of data. Real-world data sets, however, often contain unknown values. Even many classification algorithms that are designed to operate with missing values still exhibit deteriorated accuracy. One approach to handling missing values is to fill in (impute) the missing values. In this paper, we present a technique for unsupervised learning called Unsupervised Backpropagation (UBP), which trains a multi-layer perceptron to fit the manifold sampled by a set of observed point-vectors. We evaluate UBP on the task of imputing missing values in datasets, and show that UBP is able to predict missing values with significantly lower sum-squared error than other collaborative filtering and imputation techniques. We also demonstrate with 24 datasets and 9 supervised learning algorithms that classification accuracy is usually higher when randomly-withheld values are imputed using UBP rather than with other methods.
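    The core idea can be sketched as follows: learn, by gradient descent, both a latent vector per row and the weights of a small MLP, so that the network's output matches each row on its observed entries only; missing entries are then read off the trained network. This is a simplified, single-phase stand-in for UBP with illustrative hyperparameters, not the paper's exact training procedure:

```python
import numpy as np

def ubp_impute(X, mask, latent=3, hidden=16, lr=0.01, epochs=2000, seed=0):
    # X: (n, d) data matrix; mask: (n, d) boolean, True where observed
    rng = np.random.default_rng(seed)
    n, d = X.shape
    V  = rng.normal(0, 0.1, (n, latent))        # one latent vector per row
    W1 = rng.normal(0, 0.1, (latent, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.1, (hidden, d));      b2 = np.zeros(d)
    Xz = np.where(mask, X, 0.0)                 # zero-fill (masked out below)
    for _ in range(epochs):
        H = np.tanh(V @ W1 + b1)                # forward pass
        Y = H @ W2 + b2
        E = (Y - Xz) * mask                     # error on observed cells only
        # backpropagate the masked squared error through weights AND inputs
        dW2 = H.T @ E;  db2 = E.sum(0)
        dH  = E @ W2.T * (1 - H ** 2)
        dW1 = V.T @ dH; db1 = dH.sum(0)
        dV  = dH @ W1.T                         # gradient w.r.t. latent inputs
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1
        V  -= lr * dV
    Y = np.tanh(V @ W1 + b1) @ W2 + b2
    return np.where(mask, X, Y)                 # keep observed, impute missing
```

    Because the gradient also flows into the latent inputs, the network learns a low-dimensional parameterization of the data manifold, which is what lets it generalize to the unobserved cells.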