Bounded Optimal Exploration in MDP
Within the framework of probably approximately correct Markov decision
processes (PAC-MDP), much theoretical work has focused on methods to attain
near optimality after a relatively long period of learning and exploration.
However, practical concerns require the attainment of satisfactory behavior
within a short period of time. In this paper, we relax the PAC-MDP conditions
to reconcile theoretically driven exploration methods and practical needs. We
propose simple algorithms for discrete and continuous state spaces, and
illustrate the benefits of our proposed relaxation via theoretical analyses and
numerical examples. Our algorithms also maintain anytime error bounds and
average loss bounds. Our approach accommodates both Bayesian and non-Bayesian
methods.
Comment: In Proceedings of the 30th AAAI Conference on Artificial Intelligence (AAAI), 2016.
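To illustrate the kind of theoretically driven exploration this line of work builds on (this is not the algorithm proposed in the paper), below is a minimal optimistic, R-max-style agent for a discrete MDP; the "known" threshold m, the optimistic initialization, and all names are assumptions of this sketch.

    import numpy as np

    class OptimisticAgent:
        """Minimal R-max-style exploration sketch for a discrete MDP (illustrative only)."""

        def __init__(self, n_states, n_actions, gamma=0.95, m=10, r_max=1.0):
            self.nS, self.nA, self.gamma, self.m = n_states, n_actions, gamma, m
            self.counts = np.zeros((n_states, n_actions), dtype=int)
            self.trans_counts = np.zeros((n_states, n_actions, n_states), dtype=int)
            self.reward_sums = np.zeros((n_states, n_actions))
            # Optimistic initialization: unknown state-action pairs look maximally rewarding.
            self.q = np.full((n_states, n_actions), r_max / (1.0 - gamma))

        def act(self, s):
            return int(np.argmax(self.q[s]))

        def update(self, s, a, r, s_next):
            self.counts[s, a] += 1
            self.trans_counts[s, a, s_next] += 1
            self.reward_sums[s, a] += r
            if self.counts[s, a] == self.m:  # the pair just became "known"; re-plan
                self._plan()

        def _plan(self, n_iters=200):
            # Value iteration on the empirical model; unknown pairs keep their optimistic
            # values, which is what drives systematic exploration.
            for _ in range(n_iters):
                v = self.q.max(axis=1)
                for s in range(self.nS):
                    for a in range(self.nA):
                        if self.counts[s, a] < self.m:
                            continue
                        p = self.trans_counts[s, a] / self.counts[s, a]
                        r_hat = self.reward_sums[s, a] / self.counts[s, a]
                        self.q[s, a] = r_hat + self.gamma * p @ v

The paper's relaxation of the PAC-MDP conditions concerns the guarantees required of such exploration schemes rather than this basic structure.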
Deep Learning without Poor Local Minima
In this paper, we prove a conjecture published in 1989 and also partially
address an open problem announced at the Conference on Learning Theory (COLT)
2015. With no unrealistic assumption, we first prove the following statements
for the squared loss function of deep linear neural networks with any depth and
any widths: 1) the function is non-convex and non-concave, 2) every local
minimum is a global minimum, 3) every critical point that is not a global
minimum is a saddle point, and 4) there exist "bad" saddle points (where the
Hessian has no negative eigenvalue) for the deeper networks (with more than
three layers), whereas there is no bad saddle point for the shallow networks
(with three layers). Moreover, for deep nonlinear neural networks, we prove the
same four statements via a reduction to a deep linear model under the
independence assumption adopted from recent work. As a result, we present an
instance for which we can answer the following question: how difficult is it
to directly train a deep model in theory? It is more difficult than the
classical machine learning models (because of the non-convexity), but not too
difficult (because of the nonexistence of poor local minima). Furthermore, the
mathematically proven existence of bad saddle points for deeper models would
suggest a possible open problem. We note that even though we have advanced the
theoretical foundations of deep learning and non-convex optimization, there is
still a gap between theory and practice.
Comment: In NIPS 2016. Selected for NIPS oral presentation (top 2% of submissions). The final NIPS 2016 version: the results remain the same.
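For reference, the squared-loss objective to which statements 1)-4) apply is the standard deep linear network loss (the notation here is ours, not necessarily the paper's): with training inputs X, targets Y, and weight matrices W_1, ..., W_H,

    L(W_1, \dots, W_H) = \tfrac{1}{2} \left\| W_H W_{H-1} \cdots W_1 X - Y \right\|_F^2,

which is non-convex in (W_1, ..., W_H) even though the end-to-end map is linear in its input.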
Elimination of All Bad Local Minima in Deep Learning
In this paper, we theoretically prove that adding one special neuron per
output unit eliminates all suboptimal local minima of any deep neural network,
for multi-class classification, binary classification, and regression with an
arbitrary loss function, under practical assumptions. At every local minimum of
any deep neural network with these added neurons, the set of parameters of the
original neural network (without added neurons) is guaranteed to be a global
minimum of the original neural network. The effects of the added neurons are
proven to automatically vanish at every local minimum. Moreover, we provide a
novel theoretical characterization of a failure mode of eliminating suboptimal
local minima via an additional theorem and several examples. This paper also
introduces a novel proof technique based on the perturbable gradient basis
(PGB) necessary condition of local minima, which provides new insight into the
elimination of local minima and is applicable to analyze various models and
transformations of objective functions beyond the elimination of local minima.
Comment: Accepted to appear in AISTATS 2020.
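The flavor of the construction can be illustrated by augmenting each output of a base network with a trainable exponential unit whose contribution is penalized; the sketch below is an illustrative variant under our own assumptions, not the paper's exact neurons or regularizer.

    import torch
    import torch.nn as nn

    class AugmentedOutput(nn.Module):
        """Wraps a base network and adds one auxiliary exponential unit per output
        (illustrative sketch; the paper's exact construction and regularizer may differ)."""

        def __init__(self, base_net, in_dim, out_dim):
            super().__init__()
            self.base_net = base_net
            self.a = nn.Parameter(torch.zeros(out_dim))          # per-output scale
            self.w = nn.Parameter(torch.zeros(out_dim, in_dim))  # per-output direction
            self.b = nn.Parameter(torch.zeros(out_dim))          # per-output bias

        def forward(self, x):
            aux = self.a * torch.exp(x @ self.w.t() + self.b)    # one exponential unit per output
            return self.base_net(x) + aux

        def aux_penalty(self):
            # Penalty intended to make the auxiliary contribution vanish at local minima.
            return (self.a ** 2).sum()

    # Usage sketch (lambda_reg is an assumed hyperparameter):
    # loss = criterion(model(x), y) + lambda_reg * model.aux_penalty()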
Global Continuous Optimization with Error Bound and Fast Convergence
This paper considers global optimization with a black-box unknown objective
function that can be non-convex and non-differentiable. Such a difficult
optimization problem arises in many real-world applications, such as parameter
tuning in machine learning, engineering design problems, and planning with a
complex physics simulator. This paper proposes a new global optimization
algorithm, called Locally Oriented Global Optimization (LOGO), that aims for both
fast convergence in practice and a finite-time error bound in theory. The
advantage and usage of the new algorithm are illustrated via theoretical
analysis and an experiment conducted with 11 benchmark test functions. Further,
we modify the LOGO algorithm to specifically solve a planning problem via
policy search with continuous state/action space and long time horizon while
maintaining its finite-time error bound. We apply the proposed planning method
to accident management of a nuclear power plant. The result of the application
study demonstrates the practical utility of our method.
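To make the setting concrete, the following is a generic hierarchical space-partitioning optimizer from the same family of derivative-free global methods; it is not the LOGO algorithm and carries none of its guarantees, and all names and the trisection rule are assumptions of this sketch.

    import numpy as np

    def partition_optimize(f, lower, upper, n_evals=200):
        """Illustrative hierarchical-partitioning global optimizer (not LOGO)."""
        lower, upper = np.asarray(lower, float), np.asarray(upper, float)
        cells = [(lower.copy(), upper.copy())]
        centers = [(lower + upper) / 2.0]
        values = [f(centers[0])]
        evals = 1
        while evals < n_evals:
            # Greedily expand the most promising cell; a method with finite-time error
            # bounds would also balance cell sizes to retain global coverage.
            i = int(np.argmin(values))
            lo, hi = cells.pop(i)
            centers.pop(i)
            values.pop(i)
            d = int(np.argmax(hi - lo))          # split along the longest dimension
            third = (hi[d] - lo[d]) / 3.0
            for k in range(3):                   # trisect into three child cells
                c_lo, c_hi = lo.copy(), hi.copy()
                c_lo[d] = lo[d] + k * third
                c_hi[d] = lo[d] + (k + 1) * third
                c = (c_lo + c_hi) / 2.0
                cells.append((c_lo, c_hi))
                centers.append(c)
                values.append(f(c))
                evals += 1
        best = int(np.argmin(values))
        return centers[best], values[best]

    # Usage sketch: x_best, f_best = partition_optimize(lambda x: float(np.sum(x**2)), [-5, -5], [5, 5])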
Effect of Depth and Width on Local Minima in Deep Learning
In this paper, we analyze the effects of depth and width on the quality of
local minima, without strong over-parameterization and simplification
assumptions in the literature. Without any simplification assumption, for deep
nonlinear neural networks with the squared loss, we theoretically show that the
quality of local minima tends to improve towards the global minimum value as
depth and width increase. Furthermore, with a locally-induced structure on deep
nonlinear neural networks, the values of local minima of neural networks are
theoretically proven to be no worse than the globally optimal values of
corresponding classical machine learning models. We empirically support our
theoretical observations with a synthetic dataset as well as the MNIST, CIFAR-10,
and SVHN datasets. When compared to previous studies with strong
over-parameterization assumptions, the results in this paper do not require
over-parameterization, and instead show the gradual effects of
over-parameterization as consequences of general results.
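A rough version of the kind of empirical check described above can be scripted as follows; the dataset, widths, and training budget are assumptions of this sketch, not the paper's experimental protocol.

    import torch
    import torch.nn as nn

    def final_training_loss(width, depth, X, Y, steps=5000, lr=1e-2):
        """Train a fully connected ReLU net and return its final training loss,
        used here as a rough proxy for the quality of the local minimum reached."""
        layers, d_in = [], X.shape[1]
        for _ in range(depth):
            layers += [nn.Linear(d_in, width), nn.ReLU()]
            d_in = width
        layers.append(nn.Linear(d_in, Y.shape[1]))
        net = nn.Sequential(*layers)
        opt = torch.optim.SGD(net.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(net(X), Y)
            loss.backward()
            opt.step()
        return loss.item()

    # Synthetic regression data; compare the loss reached as width grows.
    torch.manual_seed(0)
    X = torch.randn(256, 10)
    Y = torch.sin(X.sum(dim=1, keepdim=True))
    for w in (2, 8, 32, 128):
        print(w, final_training_loss(width=w, depth=3, X=X, Y=Y))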
Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning
For nonconvex optimization in machine learning, this article proves that
every local minimum achieves the globally optimal value of the perturbable
gradient basis model at any differentiable point. As a result, nonconvex
machine learning is theoretically as supported as convex machine learning with
a handcrafted basis in terms of the loss at differentiable local minima, except
in the case when a preference is given to the handcrafted basis over the
perturbable gradient basis. The proofs of these results are derived under mild
assumptions. Accordingly, the proven results are directly applicable to many
machine learning models, including practical deep neural networks, without any
modification of practical methods. Furthermore, as special cases of our general
results, this article improves or complements several state-of-the-art
theoretical results on deep neural networks, deep residual networks, and
overparameterized deep neural networks with a unified proof technique and novel
geometric insights. A special case of our results also contributes to the
theoretical foundation of representation learning.
Comment: Neural Computation, MIT Press.
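One schematic way to read the main claim (the notation and the convexity assumption on the per-example loss \ell are ours, not the article's): at a differentiable local minimum \theta^*, the achieved loss matches the optimum of a linear-in-parameters model built from gradient-based basis functions at \theta^*,

    L(\theta^*) \;=\; \min_{\alpha} \; \frac{1}{n} \sum_{i=1}^{n} \ell\!\left( \sum_{k} \alpha_k \, \phi_k(x_i; \theta^*), \; y_i \right),

where each \phi_k is obtained from (perturbed) parameter gradients of the network at \theta^*; the precise definition of the perturbable gradient basis is given in the article.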
Generalization in Deep Learning
This paper provides theoretical insights into why and how deep learning can
generalize well, despite its large capacity, complexity, possible algorithmic
instability, nonrobustness, and sharp minima, responding to an open question in
the literature. We also discuss approaches to provide non-vacuous
generalization guarantees for deep learning. Based on theoretical observations,
we propose new open problems and discuss the limitations of our results.
Comment: To appear in Mathematics of Deep Learning, Cambridge University Press. All previous results remain unchanged.
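The quantity at issue is the generalization gap: for a hypothesis f learned from a sample {(x_i, y_i)}_{i=1}^n drawn from a distribution \mathcal{D},

    \mathrm{gap}(f) \;=\; \mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell(f(x), y)\bigr] \;-\; \frac{1}{n}\sum_{i=1}^{n} \ell(f(x_i), y_i),

and a guarantee is non-vacuous when the resulting bound on this gap is small enough to be informative, for example smaller than the trivial range of the loss.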