    Learning in neuro/fuzzy analog chips

    This paper focuses on the design of adaptive mixed-signal fuzzy chips. These chips have a parallel architecture and feature electrically controllable surface maps. The design methodology is based on the use of composite transistors, which are modular and well suited for design automation. This methodology is supported by dedicated, hardware-compatible learning algorithms that combine weight-perturbation and outstar learning.
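Weight-perturbation learning is hardware-friendly because it needs only forward evaluations of the chip's error, never a backpropagated gradient. A minimal sketch of the idea (the toy loss, learning rate, and perturbation size are illustrative, not from the paper):

```python
def weight_perturbation_step(weights, loss_fn, lr=0.1, eps=1e-3):
    """One learning step: estimate each weight's gradient by perturbing
    it and measuring the change in loss (forward passes only)."""
    base = loss_fn(weights)
    grads = []
    for i in range(len(weights)):
        w = list(weights)
        w[i] += eps
        grads.append((loss_fn(w) - base) / eps)  # finite-difference gradient
    return [wi - lr * g for wi, g in zip(weights, grads)]

# Toy problem: fit y = 2*x with a single weight.
data = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]
loss = lambda w: sum((w[0] * x - y) ** 2 for x, y in data)

w = [0.0]
for _ in range(200):
    w = weight_perturbation_step(w, loss, lr=0.01)
```

Each step costs one extra loss evaluation per weight, which suits a parallel analog chip better than computing exact derivatives off-chip.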

    MODELLING EXPECTATIONS WITH GENEFER - AN ARTIFICIAL INTELLIGENCE APPROACH

    Economic modelling of financial markets means modelling highly complex systems in which expectations can be the dominant driving forces. It is therefore necessary to focus on how agents form their expectations. We believe that they look for patterns, hypothesize, try, make mistakes, learn and adapt. Agents' bounded rationality leads us to a rule-based approach, which we model using Fuzzy Rule Bases. For example, if a single agent believes the exchange rate is determined by a set of possible inputs and is asked to put their relationship into words, his answer will probably reveal a fuzzy nature like: "IF the inflation rate in the EURO-Zone is low and the GDP growth rate is larger than in the US THEN the EURO will rise against the USD". 'Low' and 'larger' are fuzzy terms which give a gradual linguistic meaning to crisp intervals in the respective universes of discourse. In order to learn a Fuzzy Rule Base from examples, we introduce Genetic Algorithms and Artificial Neural Networks as learning operators. These examples can either be empirical data or originate from an economic simulation model. The software GENEFER (GEnetic NEural Fuzzy ExplorER) has been developed for designing such a Fuzzy Rule Base. The design process is modular and comprises Input Identification, Fuzzification, Rule-Base Generating and Rule-Base Tuning. The two latter steps make use of genetic and neural learning algorithms for optimizing the Fuzzy Rule Base.
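The quoted rule can be evaluated with standard fuzzy machinery: membership functions turn crisp inputs into degrees of truth, and a t-norm combines the antecedents. A minimal sketch (the trapezoidal term definitions and numeric ranges are hypothetical, not GENEFER's actual fuzzification):

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function: 0 outside [a, d], 1 on [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical linguistic terms for the rule in the abstract:
# IF inflation is low AND the growth gap vs the US is larger THEN EURO rises.
low_inflation = lambda x: trapmf(x, -1.0, 0.0, 1.0, 2.5)   # % per year
larger_growth = lambda x: trapmf(x, 0.0, 1.0, 5.0, 6.0)    # % points vs US

def rule_strength(inflation, growth_gap):
    # min t-norm combines the two fuzzy antecedents into a firing degree
    return min(low_inflation(inflation), larger_growth(growth_gap))
```

The firing degree then weights the rule's conclusion ("EURO will rise") during defuzzification; learning operators such as the genetic and neural algorithms mentioned above would tune the breakpoints a, b, c, d.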

    Multi-learner based recursive supervised training

    In this paper, we propose the Multi-Learner Based Recursive Supervised Training (MLRT) algorithm, which uses the existing framework of recursive task decomposition: training on the entire dataset, picking out the best-learnt patterns, and then repeating the process with the remaining patterns. Instead of having a single learner classify all the data during each recursion, an appropriate learner is chosen from a set of three learners, based on the subset of data being trained, thereby avoiding the time overhead associated with the genetic-algorithm learner used in previous approaches. In this way MLRT seeks to identify the inherent characteristics of the dataset and utilize them to train the data accurately and efficiently. We observed that, empirically, MLRT performs considerably well compared to RPHP and other systems on benchmark data, with an 11% improvement in accuracy on the SPAM dataset and comparable performance on the VOWEL and TWO-SPIRAL problems. In addition, for most datasets the time taken by MLRT is considerably lower than that of other systems with comparable accuracy. Two heuristic versions, MLRT-2 and MLRT-3, are also introduced to improve the efficiency of the system and to make it more scalable for future updates. The performance of these versions is similar to that of the original MLRT system.
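The recursive decomposition loop described above can be sketched generically: train on the remaining patterns, set aside the well-learnt ones, and recurse. This is only the skeleton under stated assumptions; the `majority_learner` stand-in and the `choose` policy are illustrative, not MLRT's three actual learners:

```python
def recursive_train(patterns, learners, choose, max_depth=10):
    """Recursive supervised task-decomposition sketch: each stage learns
    part of the data; unlearnt patterns are passed to the next stage."""
    stages = []
    remaining = list(patterns)
    while remaining and len(stages) < max_depth:
        learner = choose(remaining, learners)     # pick a learner per subset
        model = learner(remaining)                # train on the current subset
        learnt = [p for p in remaining if model(p[0]) == p[1]]
        if not learnt:
            break                                 # no progress: stop recursing
        stages.append(model)
        remaining = [p for p in remaining if model(p[0]) != p[1]]
    return stages

def majority_learner(subset):
    """Trivial stand-in learner: always predicts the subset's majority label."""
    labels = [y for _, y in subset]
    top = max(set(labels), key=labels.count)
    return lambda x: top

patterns = [(1, 'a'), (2, 'a'), (3, 'b')]
stages = recursive_train(patterns, [majority_learner], lambda s, L: L[0])
```

With the toy data, stage 1 absorbs the two 'a' patterns and stage 2 the remaining 'b' pattern, mirroring how each recursion shrinks the problem.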

    Industrial process monitoring by means of recurrent neural networks and Self Organizing Maps

    Industrial manufacturing plants often suffer from reliability problems during their day-to-day operations, which can greatly impact the effectiveness and performance of the overall process and the sub-processes involved. Time-series forecasting of critical industrial signals offers a way to reduce this impact by extracting knowledge about the internal dynamics of the process and warning of process deviations before they affect the productive process. In this paper, a novel industrial condition monitoring approach is proposed, based on the combination of Self Organizing Maps, for operating point codification, and Recurrent Neural Networks, for critical signal modeling. The combination of both methods presents a strong synergy: the operating-condition information given by the interpretation of the maps helps the model to improve generalization, one of the drawbacks of recurrent networks, while assuring high accuracy and precision. Finally, the complete methodology is validated experimentally, in terms of performance and effectiveness, with real data from a copper rod industrial plant.
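The "operating point codification" half of the approach can be illustrated with a tiny Self Organizing Map: each plant state vector is mapped to the index of its best-matching unit, and that discrete code is what a downstream recurrent model would consume. A minimal sketch with made-up data and hyperparameters (not the paper's configuration):

```python
import math

def dist2(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def train_som(data, n_nodes=4, epochs=50, lr0=0.5):
    """Tiny 1-D SOM: learns a codebook of typical operating points."""
    nodes = [list(data[i % len(data)]) for i in range(n_nodes)]
    for e in range(epochs):
        lr = lr0 * (1.0 - e / epochs)             # decaying learning rate
        for x in data:
            bmu = min(range(n_nodes), key=lambda i: dist2(nodes[i], x))
            for i in range(n_nodes):
                h = math.exp(-abs(i - bmu))       # neighbourhood on the 1-D grid
                nodes[i] = [n + lr * h * (xj - n) for n, xj in zip(nodes[i], x)]
    return nodes

def codify(nodes, x):
    """Operating-point codification: index of the best-matching unit."""
    return min(range(len(nodes)), key=lambda i: dist2(nodes[i], x))

# Two synthetic operating regimes; the SOM gives them distinct codes.
data = [[0.0, 0.0], [0.2, 0.1], [10.0, 10.0], [9.8, 10.2]]
nodes = train_som(data)
```

Feeding `codify(nodes, x)` alongside the raw signal history is one plausible way such a code could condition the recurrent forecaster.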

    A NONLINEAR APPROACH TO THE ANALYSIS AND MODELING OF TRAINING AND ADAPTATION IN SWIMMING

    The purpose of the study was to demonstrate that the adaptive behavior of an elite female swimmer (Olympic silver medalist in the 400 m freestyle) can be modeled by means of the nonlinear mathematical method of a neural backpropagation network. To this end, the training process of 107 successive weeks was carefully controlled and documented. For the data analysis, a multilayer perceptron network was trained with the performance output data of 28 competitions within that time period and the training input data of the last four weeks prior to the respective competitions. After the iterative training procedure, the neural network is able to model the resulting competitive performances with high precision on the basis of the training data from the two-week taper phase and also from the earlier two-week overload phase preceding the respective competitions.
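The modelling tool here is a standard multilayer perceptron trained by backpropagation: training-load inputs in, competition performance out. A self-contained sketch of that machinery on toy load/performance pairs (the network size, learning rate, and data are illustrative, not the study's):

```python
import math, random

def train_mlp(data, hidden=3, epochs=2000, lr=0.1):
    """Minimal MLP (one tanh hidden layer, linear output) trained by
    stochastic backpropagation on (input vector, target) pairs."""
    random.seed(1)
    n_in = len(data[0][0])
    W1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(hidden)]
    b1 = [0.0] * hidden
    W2 = [random.uniform(-0.5, 0.5) for _ in range(hidden)]
    b2 = 0.0
    for _ in range(epochs):
        for x, y in data:
            h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
                 for row, b in zip(W1, b1)]
            out = sum(w * hi for w, hi in zip(W2, h)) + b2
            err = out - y
            # backpropagate: output layer first, then hidden layer
            for j in range(hidden):
                dh = err * W2[j] * (1 - h[j] ** 2)
                W2[j] -= lr * err * h[j]
                b1[j] -= lr * dh
                for i in range(n_in):
                    W1[j][i] -= lr * dh * x[i]
            b2 -= lr * err
    def predict(x):
        h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]
        return sum(w * hi for w, hi in zip(W2, h)) + b2
    return predict

# Toy pairs: normalized training load -> normalized performance gain.
data = [([0.0], 0.0), ([0.5], 0.4), ([1.0], 0.6)]
predict = train_mlp(data)
```

In the study the input vector would hold the documented loads of the four weeks before each competition, and the target the competition result.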

    On the role of synaptic stochasticity in training low-precision neural networks

    Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes. Here we show that a neural network model with stochastic binary weights naturally gives prominence to exponentially rare dense regions of solutions with a number of desirable properties, such as robustness and good generalization performance, while typical solutions are isolated and hard to find. Binary solutions of the standard perceptron problem are obtained from a simple gradient descent procedure on a set of real values parametrizing a probability distribution over the binary synapses. Both analytical and numerical results are presented. An algorithmic extension aimed at training discrete deep neural networks is also investigated.
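The core trick, gradient descent on real values that parametrize a distribution over binary synapses, can be sketched in a few lines: take the mean weight tanh(theta) of a ±1 synapse, do margin-driven updates on theta, and read the binary solution off as sign(theta). This is a toy rendering of the idea under stated assumptions, not the paper's exact algorithm or loss:

```python
import math, random

def train_binary_perceptron(data, epochs=200, lr=0.5):
    """Gradient-style descent on real parameters theta; each binary weight
    w in {-1, +1} has mean tanh(theta), and the discrete solution is
    recovered as sign(theta) after training."""
    n = len(data[0][0])
    theta = [0.0] * n
    for _ in range(epochs):
        for x, y in data:
            m = [math.tanh(t) for t in theta]            # mean binary weights
            margin = y * sum(mi * xi for mi, xi in zip(m, x))
            if margin < 1.0:                             # hinge-style update
                for i in range(n):
                    theta[i] += lr * y * x[i] * (1 - m[i] ** 2)
    return [1 if t >= 0 else -1 for t in theta]

# Linearly separable +/-1 data generated by a binary teacher.
random.seed(0)
n = 9
teacher = [random.choice([-1, 1]) for _ in range(n)]
data = []
for _ in range(40):
    x = [random.choice([-1, 1]) for _ in range(n)]
    s = sum(t * xi for t, xi in zip(teacher, x))
    data.append((x, 1 if s > 0 else -1))

w = train_binary_perceptron(data)
errors = sum(1 for x, y in data
             if (1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else -1) != y)
```

The (1 - m²) factor is the derivative of tanh, so updates naturally shrink as a synapse commits to +1 or -1, which is what makes the relaxation compatible with a final discrete readout.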