
    An algorithm for learning from hints

    To take advantage of prior knowledge (hints) about the function one wants to learn, we introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated. All hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique.
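
    As a concrete illustration of the schedule idea above, here is a minimal sketch in Python. The toy linear model, the evenness hint, the batch sizes, and the 2:1 fixed schedule are illustrative assumptions of my own, not the paper's implementation; the point is only that every hint feeds the same descent loop through examples.

        import numpy as np

        # Each "hint" supplies batches of (virtual) examples; a fixed schedule
        # decides which hint the next descent step processes.
        rng = np.random.default_rng(0)
        w = rng.normal(size=3)                  # toy linear model parameters
        true_w = np.array([1.0, -2.0, 0.0])     # target; even in feature 2

        def function_examples():
            # hint 0: ordinary training examples of the target function
            x = rng.normal(size=(8, 3))
            return x, x @ true_w

        def evenness_hint():
            # hint 1: f is invariant to the sign of feature 2; the current
            # output at x serves as the target at the sign-flipped input
            x = rng.normal(size=(8, 3))
            x_flip = x.copy()
            x_flip[:, 2] *= -1
            return x_flip, x @ w

        hints = [function_examples, evenness_hint]
        schedule = [0, 0, 1]                    # fixed schedule, 2:1 emphasis

        for step in range(300):
            x, y = hints[schedule[step % 3]]()
            w -= 0.01 * 2 * x.T @ (x @ w - y) / len(x)  # any descent technique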

    Hints

    The systematic use of hints in the learning-from-examples paradigm is the subject of this review. Hints are the properties of the target function that are known to us independently of the training examples. The use of hints is tantamount to combining rules and data in learning, and is compatible with different learning models, optimization techniques, and regularization techniques. The hints are represented to the learning process by virtual examples, and the training examples of the target function are treated on equal footing with the rest of the hints. A balance is achieved between the information provided by the different hints through the choice of objective functions and learning schedules. The Adaptive Minimization algorithm achieves this balance by relating the performance on each hint to the overall performance. The application of hints in forecasting the very noisy foreign-exchange markets is illustrated. On the theoretical side, the information value of hints is contrasted with the complexity value and related to the VC dimension.
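
    The review's device of representing a hint by virtual examples can be made concrete. The sketch below, under assumptions of my own (a stand-in predictor g and an assumed shift-invariance hint), shows how an invariance hint yields an error measure from unlabeled inputs alone.

        import numpy as np

        rng = np.random.default_rng(1)
        beta = np.array([1.0, 2.0, 3.0, 4.0])

        def g(x):
            # stand-in predictor; a trained model would be used in practice
            return x @ beta

        def invariance_hint_error(g, n=1000):
            # virtual examples: pairs (x, x') that the hint says must agree;
            # assumed hint: invariance under a cyclic shift of the features
            x = rng.normal(size=(n, 4))
            x_shifted = np.roll(x, shift=1, axis=1)
            return np.mean((g(x) - g(x_shifted)) ** 2)  # no labels needed

        print(invariance_hint_error(g))  # ~0 only if g satisfies the hint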

    Hints and the VC Dimension

    Learning from hints is a generalization of learning from examples that allows for a variety of information about the unknown function to be used in the learning process. In this paper, we use the VC dimension, an established tool for analyzing learning from examples, to analyze learning from hints. In particular, we show how the VC dimension is affected by the introduction of a hint. We also derive a new quantity that defines a VC dimension for the hint itself. This quantity is used to estimate the number of examples needed to "absorb" the hint. We carry out the analysis for two types of hints: invariances and catalysts. We also describe how the same method can be applied to other types of hints.
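
    For orientation, the classical VC generalization bound shows the mechanism the paper builds on: a hint restricts the hypothesis set, and a smaller set cannot have a larger VC dimension. The inequality below is the standard textbook bound, not a formula quoted from this paper.

        % H is the hypothesis set; H' \subseteq H contains only the hypotheses
        % satisfying the hint, so d_{VC}(H') \le d_{VC}(H), and the same
        % confidence \delta is reached with fewer examples N.
        E_{\mathrm{out}}(g) \;\le\; E_{\mathrm{in}}(g)
            + \sqrt{\frac{8}{N}\,\ln\frac{4\bigl((2N)^{d_{VC}} + 1\bigr)}{\delta}}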

    A Method for Learning from Hints

    We address the problem of learning an unknown function by putting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules that specify the relative emphasis of each hint, and adaptive schedules that are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique that we may choose to use.
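
    The adaptive schedule mentioned above can be sketched as follows: estimate each hint's current error on a fresh batch and process the worst-learned hint. The toy model, the monotonicity hint, and the batch sizes are my assumptions for illustration, not the paper's experiments.

        import numpy as np

        rng = np.random.default_rng(2)
        w = rng.normal(size=3)
        true_w = np.array([1.0, 2.0, 0.5])

        def example_error_and_grad():
            # hint 0: ordinary examples of the target function
            x = rng.normal(size=(16, 3))
            r = x @ w - x @ true_w
            return np.mean(r ** 2), 2 * x.T @ r / len(x)

        def monotonicity_error_and_grad():
            # hint 1: f is nondecreasing in feature 0, expressed through
            # virtual pairs (x, x + d) whose outputs must not decrease
            x = rng.normal(size=(16, 3))
            d = np.array([0.5, 0.0, 0.0])
            v = np.maximum(0.0, x @ w - (x + d) @ w)  # violation per pair
            grad = 2 * (v[:, None] * -d).sum(axis=0) / len(x)
            return np.mean(v ** 2), grad

        hints = [example_error_and_grad, monotonicity_error_and_grad]
        for step in range(500):
            evals = [h() for h in hints]
            k = int(np.argmax([e for e, _ in evals]))  # worst-learned hint
            w -= 0.05 * evals[k][1]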

    Financial model calibration using consistency hints

    We introduce a technique for forcing the calibration of a financial model to produce valid parameters. The technique is based on learning from hints. It converts simple curve fitting into genuine calibration, where broad conclusions can be inferred from parameter values. The technique augments the error function of curve fitting with consistency hint error functions based on the Kullback-Leibler distance. We introduce an efficient EM-type optimization algorithm tailored to this technique. We also introduce other consistency hints, and balance their weights using canonical errors. We calibrate the correlated multifactor Vasicek model of interest rates, and apply it successfully to the Japanese Yen swaps market and the US dollar yield market.
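
    In schematic form, the consistency-hint objective described above augments the curve-fitting error with a Kullback-Leibler penalty. The sketch below is a hedged reconstruction using discretized densities and made-up stand-in functions; the paper's actual EM-type calibration of the Vasicek model is not reproduced here.

        import numpy as np

        def kl(p, q, eps=1e-12):
            # Kullback-Leibler distance between two discretized densities
            p = p / p.sum()
            q = q / q.sum()
            return float(np.sum(p * np.log((p + eps) / (q + eps))))

        def total_error(theta, market_prices, model_prices_fn,
                        implied_density_fn, assumed_density_fn, lam=1.0):
            fit = np.mean((model_prices_fn(theta) - market_prices) ** 2)
            hint = kl(implied_density_fn(theta), assumed_density_fn(theta))
            return fit + lam * hint  # lam balances hint against fit error

        # toy usage with made-up stand-ins for pricing and density functions
        grid = np.linspace(-3.0, 3.0, 101)
        err = total_error(
            theta=np.array([0.1]),
            market_prices=np.ones(5),
            model_prices_fn=lambda th: np.full(5, th[0]),
            implied_density_fn=lambda th: np.exp(-(grid - th[0]) ** 2),
            assumed_density_fn=lambda th: np.exp(-grid ** 2),
        )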

    Learning from Hints

    We present a systematic method for incorporating prior knowledge (hints) into the learning-from-examples paradigm. The hints are represented in a canonical form that is compatible with descent techniques for learning. All the hints are fed to the learning process in the form of examples, and examples of the function are treated on equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a fixed or adaptive schedule. Fixed schedules specify the relative emphasis of each hint, and adaptive schedules are based on how well each hint has been learned so far. We discuss adaptive minimization, which is based on estimates of the overall learning error.
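
    A hedged schematic of the adaptive-minimization idea (my reconstruction, not a quoted formula): the overall error is estimated as a weighted combination of the individual hint errors, and each step processes the hint whose current term dominates that estimate.

        % \hat{E} estimates the overall error from the hint errors E_0,...,E_N;
        % the next descent step processes hint n^*.
        \hat{E} \;=\; \sum_{n=0}^{N} \alpha_n E_n,
        \qquad
        n^{*} \;=\; \arg\max_{n}\; \alpha_n E_n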

    What learning analytics based prediction models tell us about feedback preferences of students

    Get PDF
    Learning analytics (LA) seeks to enhance learning processes through systematic measurements of learning-related data and to provide informative feedback to learners and educators (Siemens & Long, 2011). This study examined students' preferred feedback modes using a dispositional learning analytics framework, combining learning disposition data with data extracted from digital systems. We analyzed the feedback use of 1,062 students taking an introductory mathematics and statistics course enhanced with digital tools. Our findings indicated that, compared with hints, fully worked-out solutions had a stronger effect on academic performance and acted as a better mediator between learning dispositions and academic performance. This study demonstrated how data generated by e-learners can be effectively redeployed to provide meaningful insights to both educators and learners.
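
    The mediation claim above can be illustrated with a Baron-Kenny style decomposition. The sketch below runs on synthetic data; the variable roles (disposition X, worked-example use M, exam performance Y) and the effect sizes are assumptions of mine, not the study's data or method.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 1062
        X = rng.normal(size=n)                        # learning disposition
        M = 0.6 * X + rng.normal(scale=0.8, size=n)   # worked-example use
        Y = 0.5 * M + 0.2 * X + rng.normal(size=n)    # exam performance

        def ols(y, *cols):
            # least-squares fit with intercept; return slope coefficients
            A = np.column_stack([np.ones(len(y)), *cols])
            return np.linalg.lstsq(A, y, rcond=None)[0][1:]

        c = ols(Y, X)[0]            # total effect of X on Y
        a = ols(M, X)[0]            # path X -> M
        b, c_prime = ols(Y, M, X)   # path M -> Y and direct effect of X
        print(f"indirect={a * b:.3f}  direct={c_prime:.3f}  total={c:.3f}")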

    BridgeNets: Student-Teacher Transfer Learning Based on Recursive Neural Networks and its Application to Distant Speech Recognition

    Despite the remarkable progress achieved in automatic speech recognition, recognizing far-field speech mixed with various noise sources is still a challenging task. In this paper, we introduce BridgeNet, a novel student-teacher transfer learning framework that provides a solution for improving distant speech recognition. There are two key features in BridgeNet. First, BridgeNet extends traditional student-teacher frameworks by providing multiple hints from a teacher network: hints are not limited to the soft labels of the teacher network, since the teacher's intermediate feature representations can better guide a student network to learn how to denoise or dereverberate noisy input. Second, the proposed recursive architecture in BridgeNet can iteratively improve denoising and recognition performance. The experimental results showed significant improvements in tackling the distant speech recognition problem, where BridgeNet achieved up to 13.24% relative WER reduction on the AMI corpus compared to a baseline neural network without the teacher's hints. Comment: Accepted to the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018).
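
    The multiple-hints idea can be written down as a composite loss. The sketch below is a generic student-teacher objective under my own assumptions (cross-entropy on hard labels, temperature-softened teacher labels, and a mean-squared feature-matching term); it illustrates the hint mechanism, not BridgeNet's actual architecture or its recursion.

        import numpy as np

        def softmax(z, T=1.0):
            e = np.exp(z / T - (z / T).max(axis=-1, keepdims=True))
            return e / e.sum(axis=-1, keepdims=True)

        def hint_loss(student_logits, teacher_logits, student_feat,
                      teacher_feat, labels, alpha=0.5, beta=0.1, T=2.0):
            n = len(labels)
            # hard-label cross-entropy on the student's own predictions
            hard = -np.log(softmax(student_logits)[np.arange(n), labels]).mean()
            # hint 1: soft labels from the teacher, softened by temperature T
            p_t = softmax(teacher_logits, T)
            soft = -(p_t * np.log(softmax(student_logits, T))).sum(axis=1).mean()
            # hint 2: match an intermediate feature map of the teacher
            feat = np.mean((student_feat - teacher_feat) ** 2)
            return hard + alpha * soft + beta * feat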