Curriculum Learning by Transfer Learning: Theory and Experiments with Deep Networks
We provide a theoretical investigation of curriculum learning in the context of
stochastic gradient descent when optimizing the convex linear regression loss.
We prove that the rate of convergence of an ideal curriculum learning method is
monotonically decreasing with the difficulty of the examples. Moreover, among
all equally difficult points, convergence is faster when using points which
incur higher loss with respect to the current hypothesis. We then analyze
curriculum learning in the context of training a CNN. We describe a method
which infers the curriculum by way of transfer learning from another network,
pre-trained on a different task. While this approach can only approximate the
ideal curriculum, we empirically observe behavior similar to that predicted
by the theory, namely a significant boost in convergence speed at the
beginning of training. When the task is made more difficult, an improvement in
generalization performance is also observed. Finally, curriculum learning
exhibits robustness against unfavorable conditions such as excessive
regularization.
Comment: ICML 2018
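The method described in the abstract reduces to a simple recipe: score each training example by the loss a pre-trained model assigns to it, then present the examples to SGD in easy-to-hard order. Below is a minimal sketch of that recipe on the linear-regression setting the theory addresses. It is not the paper's implementation: the stand-in pre-trained hypothesis (a noisy copy of the true weights) and the names w_pretrained and difficulty are assumptions made to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear regression task: y = X @ w_true + noise.
n, d = 1000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Stand-in for the transfer step: a hypothesis "pre-trained" on a related
# task, modeled here as a noisy copy of w_true. In the paper the difficulty
# scores come from a network pre-trained on a different task; this proxy is
# an assumption for illustration only.
w_pretrained = w_true + 0.5 * rng.normal(size=d)

# Difficulty score: per-example squared loss under the pre-trained hypothesis.
difficulty = (X @ w_pretrained - y) ** 2

# Easy-to-hard curriculum ordering; use rng.permutation(n) for a vanilla
# shuffled-SGD baseline instead.
order = np.argsort(difficulty)

# Plain SGD over the chosen ordering.
w, lr = np.zeros(d), 0.01
for i in order:
    grad = 2.0 * (X[i] @ w - y[i]) * X[i]
    w -= lr * grad

print("parameter error after one epoch:", np.linalg.norm(w - w_true))
```

Running the sketch with the curriculum ordering versus the shuffled baseline gives a quick feel for the early-training speedup the abstract describes; the fixed easy-to-hard sort is only one possible schedule under these assumptions.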
A Theory Explains Deep Learning
This is our journal for developing Deduction Theory and for studying deep learning and artificial intelligence. Deduction Theory is a theory of deducing the world's relativity through information coupling and asymmetry. We focus on information processing, and we view intelligence as an information structure that is relatively close to object-oriented, probability-oriented, unsupervised, relativistic, and massively automated information processing. We regard deep learning and machine learning as attempts to bring all types of information processing relatively close to probabilistic information processing. We discuss how to understand deep learning and artificial intelligence, and why deep learning performs better than other methods, through metaphysical logic.
