30 research outputs found

    Lazy training of radial basis neural networks

    Proceedings of the 16th International Conference on Artificial Neural Networks, ICANN 2006, Athens, Greece, September 10-14, 2006. Training data are usually not evenly distributed in the input space, which makes non-local methods such as neural networks less accurate in those regions. Local methods, on the other hand, face the problem of deciding which training examples are best for each test pattern. In this work we present a trade-off between local and non-local methods: a Radial Basis Neural Network (RBNN) is used as the learning algorithm, while a subset of the training patterns is selected for each query. Moreover, the RBNN initialization algorithm has been modified to be deterministic, eliminating any influence of the initial conditions. Finally, the new method has been validated in two time-series domains, one artificial and one real-world. This article has been financed by the Spanish funded research MEC project OPLINK::UC3M, Ref: TIN2005-08818-C04-0
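    The per-query scheme this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes the k nearest patterns are selected for each query, the RBF centres are placed deterministically on those patterns, and the output weights start at zero (a deterministic initialisation) and are fitted by LMS on the selected subset only. The function name and all hyperparameter values are hypothetical.

    ```python
    import math

    def _dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

    def lazy_rbnn_predict(X, y, query, k=5, width=1.0, epochs=200, lr=0.1):
        # Select the k training patterns closest to the query.
        idx = sorted(range(len(X)), key=lambda i: _dist(X[i], query))[:k]
        centres = [X[i] for i in idx]   # deterministic: centres sit on the
        targets = [y[i] for i in idx]   # selected patterns themselves
        phi = lambda x: [math.exp(-_dist(x, c) ** 2 / (2 * width ** 2))
                         for c in centres]
        w = [0.0] * len(centres)        # deterministic zero initialisation
        for _ in range(epochs):         # LMS training on the local subset only
            for xi, ti in zip(centres, targets):
                h = phi(xi)
                err = ti - sum(wj * hj for wj, hj in zip(w, h))
                w = [wj + lr * err * hj for wj, hj in zip(w, h)]
        h = phi(query)
        return sum(wj * hj for wj, hj in zip(w, h))
    ```

    Because a fresh local network is trained per query, there is no global model to store; the cost is paid at prediction time, which is the usual lazy-learning trade-off.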

    Learning radial basis neural networks in a lazy way: A comparative study

    Lazy learning methods have been used to deal with problems in which the learning examples are not evenly distributed in the input space. They are based on selecting a subset of training patterns when a new query is received. Usually that selection is based on the k closest neighbours, and it is static: the number of patterns selected does not depend on the region of input space in which the new query lies. In this paper, a lazy strategy is applied to train radial basis neural networks. The strategy incorporates a dynamic selection of patterns based on two different kernel functions, the Gaussian and the inverse function. This lazy learning method is compared with classical lazy machine learning methods and with eagerly trained radial basis neural networks.
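    The static-versus-dynamic distinction can be made concrete with a short sketch. This is an illustration under stated assumptions, not the paper's algorithm: it assumes a pattern is kept whenever its kernel weight at the query exceeds a threshold, so the number selected varies with local density, and it assumes a simple 1/(1+d) form for the inverse kernel; all names and values are hypothetical.

    ```python
    import math

    def select_patterns(X, query, kernel="gaussian", width=1.0, threshold=0.1):
        # Dynamic selection: keep every pattern whose kernel weight at the
        # query is at least `threshold`, so dense regions contribute more
        # patterns than sparse ones (unlike a fixed-k selection).
        def dist(a, b):
            return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

        def weight(d):
            if kernel == "gaussian":
                return math.exp(-d ** 2 / (2 * width ** 2))
            return 1.0 / (1.0 + d)      # assumed form of the inverse kernel

        pairs = [(i, weight(dist(x, query))) for i, x in enumerate(X)]
        return [(i, w) for i, w in pairs if w >= threshold]
    ```

    The returned weights could then also serve as importance weights when fitting the local radial basis network on the selected subset.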

    Composition Classification of Ultra-High Energy Cosmic Rays

    The study of cosmic rays remains one of the most challenging research fields in physics. Among the many open questions in this area, identifying the type of primary particle for each event remains one of the most important. Cosmic-ray observatories have been trying to answer this question for at least six decades, but have not yet succeeded. The main obstacle is the impossibility of directly detecting high-energy primary events, making it necessary to use Monte Carlo models and simulations to characterize the generated particle cascades. This work presents results obtained using a simulated dataset produced by the Monte Carlo code CORSIKA, which simulates the interaction of high-energy particles with the atmosphere, resulting in a cascade of secondary particles extending a few kilometres in diameter at ground level. Using these simulated data, a set of machine learning classifiers has been designed and trained, and their computational cost and effectiveness compared when classifying the type of primary under ideal measuring conditions. Additionally, a feature selection algorithm has identified the relevance of the considered features. The results confirm the importance of separating the electromagnetic and muonic components of the measured signal for this problem. The results obtained are quite encouraging and open new lines of work for future, more restrictive simulations. Spanish Ministry of Science, Innovation and Universities FPA2017-85197-P, RTI2018-101674-B-I00; European Union (EU); CENAPAD-SP (Centro Nacional de Processamento de Alto Desempenho em Sao Paulo) UNICAMP/FINEP - MCT; Fundacao de Amparo a Pesquisa do Estado de Sao Paulo (FAPESP); National Council for Scientific and Technological Development (CNPq) 2016/19764-9, 404993/2016-
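    One common way to rank feature relevance for a two-class problem, sketched below, is a Fisher-style score: between-class mean separation divided by within-class spread. This is an illustration only, not the feature selection algorithm the abstract reports; the binary labels (e.g. two primary types) and the function name are hypothetical.

    ```python
    def fisher_scores(X, y, eps=1e-12):
        # Score each feature by squared between-class mean separation over
        # the sum of within-class variances (higher = more discriminative).
        scores = []
        for j in range(len(X[0])):
            g0 = [x[j] for x, label in zip(X, y) if label == 0]
            g1 = [x[j] for x, label in zip(X, y) if label == 1]
            m0, m1 = sum(g0) / len(g0), sum(g1) / len(g1)
            v0 = sum((v - m0) ** 2 for v in g0) / len(g0)
            v1 = sum((v - m1) ** 2 for v in g1) / len(g1)
            scores.append((m1 - m0) ** 2 / (v0 + v1 + eps))
        return scores
    ```

    A feature such as muon count, which the abstract suggests separates primary types well, would score high; a feature identical across classes would score near zero.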

    Phonetic Feature Discovery in Speech using Snap-Drift

    This paper presents a new application of the snap-drift algorithm [1]: feature discovery and clustering of speech waveforms from non-stammering and stammering speakers. The learning algorithm is an unsupervised version of snap-drift, which employs the complementary concepts of fast, minimalist learning (snap) and slow drift (towards the input pattern) learning. The Snap-Drift Neural Network (SDNN) is toggled between snap and drift modes on successive epochs. The speech waveforms are drawn from a phonetically annotated corpus, which facilitates phonetic interpretation of the classes of patterns discovered by the SDNN
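    The two update modes can be sketched in a few lines. This is a hedged reading of the usual description of snap-drift, not the paper's exact update rules: it assumes "snap" takes the component-wise minimum of the winning node's weights and the input (fast intersection of common features) and "drift" moves the weights a small step towards the input, vector-quantisation style; the function name and rate are hypothetical.

    ```python
    def snap_drift_update(w, x, mode, drift_rate=0.1):
        # One weight update for the winning node of an SDNN-style network.
        if mode == "snap":
            # Fast, minimalist learning: keep only features common to both
            # the current weights and the input (component-wise minimum).
            return [min(wi, xi) for wi, xi in zip(w, x)]
        # Slow drift: move weights a small step towards the input pattern.
        return [wi + drift_rate * (xi - wi) for wi, xi in zip(w, x)]
    ```

    Toggling the mode on successive epochs, as the abstract describes, alternates aggressive feature intersection with gentle re-centring on the input distribution.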

    Neural networks for variational problems in engineering

    In this work a conceptual theory of neural networks (NNs) from the perspective of functional analysis and variational calculus is presented. Within this formulation, the learning problem for the multilayer perceptron is stated as finding the function that is an extremal of some functional. A variational formulation for NNs therefore provides a direct method for the solution of variational problems. The proposed method is then applied to distinct types of engineering problems: in particular, a shape design, an optimal control and an inverse problem are considered. The selected examples can be solved analytically, which enables a fair comparison with the NN results. Copyright © 2008 John Wiley & Sons, Ltd
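    The direct method described above can be illustrated with a toy problem: minimise the functional F[y] = ∫₀¹ (y′(x)² + y(x)²) dx over functions with y(0) = 0, using a one-hidden-unit network as the trial function. This is a minimal sketch under stated assumptions, not the paper's multilayer-perceptron formulation; the parameterisation, quadrature resolution and step sizes are all hypothetical.

    ```python
    import math

    def y(x, p):
        # Toy trial function: linear ramp plus one tanh hidden unit whose
        # contribution is damped by x*(1-x), so y(0) = 0 is built in.
        a, w, b = p
        return x + x * (1 - x) * a * math.tanh(w * x + b)

    def dy(x, p, h=1e-5):
        return (y(x + h, p) - y(x - h, p)) / (2 * h)

    def functional(p, n=50):
        # F[y] = ∫₀¹ (y'(x)² + y(x)²) dx by the trapezoidal rule.
        xs = [i / n for i in range(n + 1)]
        vals = [dy(x, p) ** 2 + y(x, p) ** 2 for x in xs]
        return (sum(vals) - 0.5 * (vals[0] + vals[-1])) / n

    def minimise(p, steps=100, lr=0.02, h=1e-4):
        # Direct method: gradient descent on the network parameters,
        # with gradients estimated by central finite differences.
        p = list(p)
        for _ in range(steps):
            grad = []
            for i in range(len(p)):
                q, r = list(p), list(p)
                q[i] += h
                r[i] -= h
                grad.append((functional(q) - functional(r)) / (2 * h))
            p = [pi - lr * gi for pi, gi in zip(p, grad)]
        return p
    ```

    The learning problem is thus reduced to ordinary parameter optimisation of the value of the functional evaluated on the network's output, which is the core idea of the variational formulation.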

    Combining neural modes of learning for handwritten digit recognition

    An ADaptive FUnction Neural Network (ADFUNN) is combined with the on-line snap-drift learning method in this paper to perform optical and pen-based recognition of handwritten digits. Snap-Drift employs the complementary concepts of minimalist common feature learning (snap) and vector quantization (drift towards the input patterns), and is a fast unsupervised method suitable for real-time learning and non-stationary environments where new patterns are continually introduced. ADFUNN is based on a linear piecewise neuron activation function that is modified by a gradient-descent supervised learning algorithm. It has previously been applied to the Iris dataset and a natural-language phrase recognition problem, exhibiting impressive generalisation ability without the hidden neurons that are usually required for linearly inseparable data. The unsupervised single-layer Snap-Drift is effective in extracting distinct features from the complex cursive-letter datasets, and the supervised single-layer ADFUNN is capable of solving linearly inseparable problems rapidly. Combined in one network (SADFUNN), these two methods are more powerful and yet simpler than MLPs (a standard neural network), at least on this problem domain. The optical and pen-based handwritten digit data are from the UCI machine learning repository. The classifications are learned rapidly and produce higher generalisation results than an MLP with standard learning methods
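    An adaptable piecewise-linear activation of the kind ADFUNN uses can be sketched as a table of values at evenly spaced points, evaluated by linear interpolation, with supervised learning adjusting the two bracketing table entries. This is an illustrative reading, not the paper's exact update rule; names, ranges and the learning rate are hypothetical.

    ```python
    def pl_activation(fvals, x, lo=-1.0, hi=1.0):
        # Evaluate a piecewise-linear function stored as a table of values
        # at evenly spaced points on [lo, hi]; also return the bracket index
        # and interpolation fraction for use by the learning step.
        n = len(fvals) - 1
        t = (min(max(x, lo), hi) - lo) / (hi - lo) * n
        i = min(int(t), n - 1)
        frac = t - i
        return (1 - frac) * fvals[i] + frac * fvals[i + 1], i, frac

    def pl_update(fvals, x, err, lr=0.1, lo=-1.0, hi=1.0):
        # Gradient-descent-style supervised step: nudge the two table entries
        # bracketing x in proportion to their interpolation weights.
        _, i, frac = pl_activation(fvals, x, lo, hi)
        fvals[i] += lr * err * (1 - frac)
        fvals[i + 1] += lr * err * frac
        return fvals
    ```

    Because the activation itself is learned, a single layer can realise input-output mappings that a fixed-activation single layer cannot, which is how the abstract explains solving linearly inseparable problems without hidden neurons.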

    Diagnostic Feedback by Snap-drift Question Response Grouping

    This work develops a method for incorporation into an on-line system to provide carefully targeted guidance and feedback to students. The student answers on-line multiple-choice questions on a selected topic, and their responses are sent to a Snap-Drift neural network trained with responses from past students. Snap-Drift is able to categorise the learner's responses as having a significant level of similarity with a subset of the students it has previously categorised. Each category is associated with feedback composed by the lecturer on the basis of the level of understanding and prevalent misconceptions of that category-group of students. In this way the feedback addresses the level of knowledge of the individual and guides them towards a greater understanding of particular concepts. The feedback is concept-based rather than tied to any particular question, so the learner is encouraged to retake the same test and receives different feedback depending on their evolving state of knowledge
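    The category-to-feedback mapping can be sketched as follows. This is a simplified stand-in, not the trained Snap-Drift network itself: it assumes each learned category is summarised by a prototype response vector, matches a new response to the most similar prototype, and returns the lecturer-written feedback for that category; all names are hypothetical.

    ```python
    def feedback_for(response, prototypes, feedback):
        # Match a student's multiple-choice response vector to the closest
        # category prototype (a stand-in for the Snap-Drift winning node)
        # and return the lecturer-composed feedback for that category.
        def sim(a, b):
            return sum(1 for ai, bi in zip(a, b) if ai == bi)

        best = max(range(len(prototypes)),
                   key=lambda i: sim(response, prototypes[i]))
        return feedback[best]
    ```

    On a retake, a changed response vector may match a different prototype, which is why the learner can receive different concept-based feedback as their understanding evolves.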

    Question response grouping for online diagnostic feedback

    This work develops a method for incorporation into an online system to provide carefully targeted guidance and feedback to students. The student answers online multiple-choice questions on a selected topic, and their responses are sent to a Snap-Drift neural network trained with responses from past students. Snap-Drift is able to categorise the learner's responses as having a significant level of similarity with a subset of the students it has previously categorised. Each category is associated with feedback composed by the lecturer on the basis of the level of understanding and prevalent misconceptions of that category-group of students. In this way the feedback addresses the level of knowledge of the individual and guides them towards a greater understanding of particular concepts. The feedback is concept-based rather than tied to any particular question, so the learner is encouraged to retake the same test and receives different feedback depending on their evolving state of knowledge. This approach has been applied to two data sets related to topics from an Introduction to Computer System module and a Research Skills module

    Shape optimization in aeronautical applications using neural networks

    An optimization methodology based on neural networks was developed for use in 2D optimal shape design problems. Neural networks were used as a parameterization scheme to represent the shape function, and an edge-based high-resolution scheme for the solution of the compressible Euler equations was used to model the flow around the shape. The global system incorporates the neural networks and the Euler fluid solver into the C++ Flood optimization framework, which contains a library of optimization algorithms. The optimization scheme was applied to a minimal-drag problem in an unconstrained case and a constrained case in hypersonic flow, using evolutionary training algorithms. The results indicate that the minimum drag problem is solved to a high degree of accuracy but at high computational cost. For more complex shapes, parallel computing methods are required to reduce computation time.
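    The overall loop of evolutionary training over a network-parameterised shape can be sketched in miniature. This is a toy illustration only: the "drag" here is a slope-penalty proxy, not the Euler solver the abstract describes, the one-hidden-unit shape parameterisation is hypothetical, and a (1+1) evolution strategy stands in for the framework's evolutionary algorithms.

    ```python
    import math
    import random

    def shape(x, p):
        # Hypothetical parameterisation: section thickness y(x) on [0, 1]
        # from a one-hidden-unit network, pinned to zero at both edges.
        a, w, b = p
        return x * (1 - x) * abs(a * math.tanh(w * x + b))

    def drag_proxy(p, n=40):
        # Toy stand-in for the CFD objective: penalise steep surface slope.
        xs = [i / n for i in range(n + 1)]
        ys = [shape(x, p) for x in xs]
        return sum((ys[i + 1] - ys[i]) ** 2 for i in range(n)) * n

    def evolve(p, gens=100, sigma=0.1, seed=0):
        # (1+1) evolution strategy: mutate the parameters with Gaussian
        # noise and keep the mutant only if it reduces the objective.
        rng = random.Random(seed)
        best, best_f = list(p), drag_proxy(p)
        for _ in range(gens):
            cand = [pi + rng.gauss(0, sigma) for pi in best]
            f = drag_proxy(cand)
            if f < best_f:
                best, best_f = cand, f
        return best, best_f
    ```

    In the real system each objective evaluation requires a full Euler solve, which is why the abstract reports high computational cost and points to parallel evaluation of candidates for more complex shapes.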