
    Lazy learning: a biologically-inspired plasticity rule for fast and energy efficient synaptic plasticity

    When training neural networks for classification tasks with backpropagation, parameters are updated on every trial, even if the sample is classified correctly. In contrast, humans concentrate their learning effort on errors. Inspired by human learning, we introduce lazy learning, which only learns on incorrect samples. Lazy learning can be implemented in a few lines of code and requires no hyperparameter tuning. Lazy learning achieves state-of-the-art performance and is particularly suited to large datasets. For instance, it reaches 99.2% test accuracy on Extended MNIST using a single-layer MLP, and does so 7.6x faster than a matched backpropagation network.
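
    The rule is simple enough to sketch. Below is a minimal PyTorch-style sketch based only on the abstract's description; the function name and the exact masking are illustrative assumptions, not the paper's code:

    import torch

    def lazy_learning_step(model, criterion, optimizer, x, y):
        # Forward pass on the full batch.
        logits = model(x)
        # Keep only the samples the model currently misclassifies.
        wrong = logits.argmax(dim=1) != y
        if wrong.any():  # if everything is correct, learn nothing (be lazy)
            loss = criterion(logits[wrong], y[wrong])
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

    Training then reduces to calling this step per mini-batch; the reported speed-up plausibly comes from skipping backward passes once most samples are already classified correctly.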

    Learning radial basis neural networks in a lazy way: A comparative study

    Lazy learning methods have been used to deal with problems in which the learning examples are not evenly distributed in the input space. They are based on selecting a subset of training patterns each time a new query is received. Usually, that selection takes the k closest neighbors and is static, in the sense that the number of patterns selected does not depend on the region of the input space in which the new query falls. In this paper, a lazy strategy is applied to train radial basis neural networks. That strategy incorporates a dynamic selection of patterns, based on two different kernel functions: the Gaussian and the inverse function. This lazy learning method is compared with classical lazy machine learning methods and with eagerly trained radial basis neural networks.
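
    A minimal NumPy sketch of the dynamic selection the abstract describes; the bandwidth h, threshold tau, and the exact form of the inverse kernel are illustrative assumptions, not values from the paper:

    import numpy as np

    def select_patterns(X_train, query, h=1.0, tau=0.1, kernel="gaussian"):
        # Squared distances from every training pattern to the query.
        d2 = np.sum((X_train - query) ** 2, axis=1)
        if kernel == "gaussian":
            w = np.exp(-d2 / (2.0 * h ** 2))
        else:                       # an assumed "inverse" kernel variant
            w = 1.0 / (1.0 + d2)
        # Dynamic selection: the number of patterns kept depends on how
        # densely populated the query's region of the input space is.
        idx = np.where(w > tau)[0]
        return idx, w[idx]

    The selected (pattern, weight) pairs would then be used to train a radial basis network for that single query, rather than fixing k in advance.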

    The role of the teacher in overcoming lazy students at SMA Negeri 1 Atambua, Indonesia

    This qualitative study aimed to (1) identify the factors that cause students to be lazy about learning at SMA Negeri 1 Atambua and (2) describe the role of the teacher in overcoming those lazy students. The subjects of the study were students, teachers, and vice principals. The data were collected through observation, interviews, and documentation, and were analyzed through data reduction, data display, and conclusion drawing. The study showed that the factors that cause students to be lazy about learning include heavy assignment loads, learning methods, the social environment, cellphones, social media, and poorly organized classrooms. As a result, four students scored only 63, 66, 64, and 68, while the minimum standard criterion (KKM) was 70. To overcome this, teachers provided direct personal guidance and visited the students.

    A lazy learning approach for building classification models

    In this paper, we propose a lazy learning strategy for building classification models. Instead of learning a model from the whole training data set before observing the new instance, a selection of patterns is made depending on the query received, and a classification model is learnt from those selected patterns. The selection of patterns is not homogeneous, in the sense that the number of selected patterns depends on the position of the query instance in the input space. That selection is made using a weighting function that gives more importance to the training patterns most similar to the query instance. Our intention is to provide a lazy learning mechanism suited to any machine learning classification algorithm. For this reason, we study two different methods to avoid fixing any parameter. Experimental results show that the classification rates of traditional machine learning algorithms based on trees, rules, or functions can be improved when they are learnt with the proposed lazy learning approach. This work has been funded by the Spanish Ministry of Science under contract TIN2008-06491-C04-03 (MSTAR project).
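
    The strategy can be sketched as a wrapper around any scikit-learn-style classifier; the class name, Gaussian weighting, and default base learner below are assumptions for illustration, not the paper's implementation:

    import numpy as np
    from sklearn.base import clone
    from sklearn.tree import DecisionTreeClassifier

    class LazyClassifier:
        # Fits a fresh model per query on similarity-weighted patterns.
        def __init__(self, base=None, h=1.0):
            self.base = base if base is not None else DecisionTreeClassifier()
            self.h = h

        def fit(self, X, y):
            # Lazy: just store the training data.
            self.X, self.y = np.asarray(X, float), np.asarray(y)
            return self

        def predict_one(self, query):
            d2 = np.sum((self.X - query) ** 2, axis=1)
            w = np.exp(-d2 / (2.0 * self.h ** 2))  # more weight to similar patterns
            model = clone(self.base).fit(self.X, self.y, sample_weight=w)
            return model.predict(np.asarray(query, float).reshape(1, -1))[0]

    Any base learner that accepts sample_weight (trees, rule inducers, linear models) can be plugged in, which matches the paper's aim of a mechanism suited to any classification algorithm.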

    A Finite State and Data-Oriented Method for Grapheme to Phoneme Conversion

    A finite-state method, based on leftmost longest-match replacement, is presented for segmenting words into graphemes and for converting graphemes into phonemes. A small set of hand-crafted conversion rules for Dutch achieves a phoneme accuracy of over 93%. The accuracy of the system is further improved by using transformation-based learning. The best system (using a large set of rule templates and a 'lazy' variant of Brill's algorithm), trained on only 40K words, reaches 99% phoneme accuracy.
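
    Leftmost longest-match segmentation is easy to illustrate. A sketch with a toy grapheme inventory; the inventory is invented for illustration and is not the paper's Dutch rule set:

    def segment(word, graphemes):
        # At each position, take the longest inventory entry that matches,
        # falling back to a single character if nothing matches.
        out, i, maxlen = [], 0, max(map(len, graphemes))
        while i < len(word):
            for n in range(min(maxlen, len(word) - i), 0, -1):
                if word[i:i + n] in graphemes or n == 1:
                    out.append(word[i:i + n])
                    i += n
                    break
        return out

    segment("schoen", {"sch", "oe", "ij", "ui", "n", "s"})  # -> ['sch', 'oe', 'n']

    Grapheme-to-phoneme conversion then becomes a rule application per segment, which is what the hand-crafted replacement rules and the learned transformations refine.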

    Multifractal analysis of perceptron learning with errors

    Random input patterns induce a partition of the coupling space of a perceptron into cells labeled by their output sequences. Learning some data with a maximal error rate leads to clusters of neighboring cells. By analyzing the internal structure of these clusters with the formalism of multifractals, we can handle different storage and generalization tasks for lazy students and absent-minded teachers within one unified approach. The results also allow some conclusions on the spatial distribution of cells.
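
    As context, a sketch of the standard multifractal formalism invoked here (notation assumed, not taken from the paper): each cell i of the partition carries a weight P_i, and the moments at resolution \varepsilon define a spectrum of scaling exponents via a Legendre transform,

    \begin{aligned}
    Z(q) &= \sum_i P_i^{\,q} \sim \varepsilon^{\tau(q)}, \\
    \alpha(q) &= \frac{d\tau}{dq}, \qquad f(\alpha) = q\,\alpha(q) - \tau(q).
    \end{aligned}

    For a perceptron with N couplings, the analogous statement is that cells of size P \sim 2^{-N\alpha} occur in numbers growing as 2^{N f(\alpha)}, so f(\alpha) characterizes how cell sizes are distributed across the partition.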