A Method for Learning from Hints
We address the problem of learning an unknown function by putting together several pieces of information (hints) that we know about the function. We introduce a method that generalizes learning from examples to learning from hints. A canonical representation of hints is defined and illustrated for new types of hints. All the hints are represented to the learning process by examples, and examples of the function are treated on an equal footing with the rest of the hints. During learning, examples from different hints are selected for processing according to a given schedule. We present two types of schedules: fixed schedules, which specify the relative emphasis of each hint, and adaptive schedules, which are based on how well each hint has been learned so far. Our learning method is compatible with any descent technique we may choose to use.
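The two schedule types can be sketched as follows. This is a minimal illustration, not the paper's implementation; the hint names and the per-hint error estimates are made up for the example.

```python
import random

def fixed_schedule(weights):
    """Pick the next hint to process with probability proportional to a
    fixed weight, encoding the relative emphasis of each hint."""
    hints = list(weights)
    r = random.uniform(0, sum(weights.values()))
    acc = 0.0
    for h in hints:
        acc += weights[h]
        if r <= acc:
            return h
    return hints[-1]

def adaptive_schedule(errors):
    """Pick the hint that is currently learned worst, judged by an
    estimate of each hint's error so far."""
    return max(errors, key=errors.get)

# e.g. fixed_schedule({'examples': 0.6, 'symmetry': 0.4})
#      adaptive_schedule({'examples': 0.1, 'symmetry': 0.4})  # -> 'symmetry'
```

Either schedule simply decides whose examples are fed to whatever descent technique is in use.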
BridgeNets: Student-Teacher Transfer Learning Based on Recursive Neural Networks and its Application to Distant Speech Recognition
Despite the remarkable progress achieved in automatic speech recognition, recognizing far-field speech mixed with various noise sources is still a challenging task. In this paper, we introduce BridgeNet, a novel student-teacher transfer learning framework that can improve distant speech recognition. There are two key features in BridgeNet. First, BridgeNet extends traditional student-teacher frameworks by providing multiple hints from a teacher network: hints are not limited to the soft labels of the teacher, and the teacher's intermediate feature representations can better guide a student network to learn how to denoise or dereverberate noisy input. Second, the proposed recursive architecture in BridgeNet can iteratively improve denoising and recognition performance. Experimental results showed that BridgeNet yields significant improvements on the distant speech recognition problem, achieving up to 13.24% relative WER reduction on the AMI corpus compared to a baseline neural network without the teacher's hints.
Comment: Accepted to the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2018).
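The abstract's combination of soft labels and intermediate-feature hints can be illustrated with a toy loss. This is a hedged sketch, not BridgeNet's actual objective: the function name, the weighting `alpha`, and the temperature `T` are assumptions for illustration.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def student_teacher_hint_loss(student_logits, teacher_logits,
                              student_feats, teacher_feats,
                              T=2.0, alpha=0.5):
    """Cross-entropy between the teacher's and student's soft labels,
    plus an L2 penalty pulling the student's intermediate features
    toward the teacher's (the 'hint' beyond soft labels)."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    soft_ce = -np.sum(p_t * np.log(p_s + 1e-12))
    feat_l2 = np.mean((np.asarray(student_feats, dtype=float)
                       - np.asarray(teacher_feats, dtype=float)) ** 2)
    return alpha * soft_ce + (1 - alpha) * feat_l2
```

A training loop would minimize this combined loss with respect to the student's parameters only, keeping the teacher fixed.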
Hints
The systematic use of hints in the learning-from-examples paradigm is the subject of this review. Hints are properties of the target function that are known to us independently of the training examples. The use of hints is tantamount to combining rules and data in learning, and is compatible with different learning models, optimization techniques, and regularization techniques. The hints are represented to the learning process by virtual examples, and the training examples of the target function are treated on an equal footing with the rest of the hints. A balance is achieved between the information provided by the different hints through the choice of objective functions and learning schedules. The Adaptive Minimization algorithm achieves this balance by relating the performance on each hint to the overall performance. The application of hints to forecasting the very noisy foreign-exchange markets is illustrated. On the theoretical side, the information value of hints is contrasted with their complexity value and related to the VC dimension.
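A virtual example makes a known property of the target function checkable without knowing the function's value at any point. As an illustration, consider an evenness hint, f(-x) = f(x), chosen here for simplicity rather than taken from the review:

```python
import random

def virtual_example_error(model, x):
    """Error of one virtual example for the evenness hint f(-x) = f(x).
    The hint is satisfied when the model agrees on x and -x, so no
    target value of f is needed, only the known property."""
    return (model(x) - model(-x)) ** 2

def hint_error(model, n=100, seed=0):
    """Average the hint's virtual-example error over random inputs."""
    rng = random.Random(seed)
    xs = [rng.uniform(-1, 1) for _ in range(n)]
    return sum(virtual_example_error(model, x) for x in xs) / n
```

An even model such as `lambda x: x * x` scores zero on this hint, while `lambda x: x` does not; a learning schedule can then weigh this error against the error on the training examples.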
Financial Applications of Learning from Hints
The basic paradigm for learning in neural networks is 'learning from examples', where a training set of input-output examples is used to teach the network the target function. Learning from hints is a generalization of learning from examples in which additional information about the target function can be incorporated into the same learning process. Such information can come from common-sense rules or special expertise. In financial market applications, where the training data is very noisy, the use of such hints can have a decisive advantage. We demonstrate the use of hints in foreign-exchange trading of the U.S. Dollar versus the British Pound, the German Mark, the Japanese Yen, and the Swiss Franc over a period of 32 months. We explain the general method of learning from hints and how it can be applied to other markets. The learning model for this method is not restricted to neural networks.
A Model of Inductive Bias Learning
A major problem in machine learning is that of inductive bias: how to choose a learner's hypothesis space so that it is large enough to contain a solution to the problem being learnt, yet small enough to ensure reliable generalization from reasonably sized training sets. Typically such bias is supplied by hand through the skill and insights of experts. In this paper a model for automatically learning bias is investigated. The central assumption of the model is that the learner is embedded within an environment of related learning tasks. Within such an environment the learner can sample from multiple tasks, and hence it can search for a hypothesis space that contains good solutions to many of the problems in the environment. Under certain restrictions on the set of all hypothesis spaces available to the learner, we show that a hypothesis space that performs well on a sufficiently large number of training tasks will also perform well when learning novel tasks in the same environment. Explicit bounds are also derived, demonstrating that learning multiple tasks within an environment of related tasks can potentially give much better generalization than learning a single task.
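One way to picture the model: identify each candidate hypothesis space with, say, a polynomial degree, and pick the degree whose average held-out error across all training tasks is smallest. A rough sketch under that assumption (not the paper's formal construction):

```python
import numpy as np

def learn_bias(tasks, degrees=(0, 1, 2, 3)):
    """Pick the hypothesis space (here: a polynomial degree) with the
    smallest average leave-one-out error over a set of related tasks,
    each given as (xs, ys)."""
    def task_error(deg, xs, ys):
        errs = []
        for i in range(len(xs)):
            tr = [j for j in range(len(xs)) if j != i]
            coeffs = np.polyfit(xs[tr], ys[tr], deg)
            errs.append((np.polyval(coeffs, xs[i]) - ys[i]) ** 2)
        return float(np.mean(errs))
    avg = {d: np.mean([task_error(d, np.asarray(x, dtype=float),
                                  np.asarray(y, dtype=float))
                       for x, y in tasks])
           for d in degrees}
    return min(avg, key=avg.get)
```

The chosen degree is then the learned bias: the hypothesis space handed to a standard single-task learner when a novel task from the same environment arrives.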