Every Local Minimum Value is the Global Minimum Value of Induced Model in Non-convex Machine Learning
For nonconvex optimization in machine learning, this article proves that
every local minimum achieves the globally optimal value of the perturbable
gradient basis model at any differentiable point. As a result, nonconvex
machine learning is theoretically as supported as convex machine learning with
a handcrafted basis in terms of the loss at differentiable local minima, except
in the case when a preference is given to the handcrafted basis over the
perturbable gradient basis. The proofs of these results are derived under mild
assumptions. Accordingly, the proven results are directly applicable to many
machine learning models, including practical deep neural networks, without any
modification of practical methods. Furthermore, as special cases of our general
results, this article improves or complements several state-of-the-art
theoretical results on deep neural networks, deep residual networks, and
overparameterized deep neural networks with a unified proof technique and novel
geometric insights. A special case of our results also contributes to the
theoretical foundation of representation learning.
Comment: Neural Computation, MIT Press
Regret bounds for meta Bayesian optimization with an unknown Gaussian process prior
Bayesian optimization usually assumes that a Bayesian prior is given.
However, the strong theoretical guarantees in Bayesian optimization are often
regrettably compromised in practice because of unknown parameters in the prior.
In this paper, we adopt a variant of empirical Bayes and show that, by
estimating the Gaussian process prior from offline data sampled from the same
prior and constructing unbiased estimators of the posterior, variants of both
GP-UCB and probability of improvement achieve a near-zero regret bound, which
decreases to a constant proportional to the observational noise as the number
of offline data and the number of online evaluations increase. Empirically, we
have verified our approach on challenging simulated robotic problems featuring
task and motion planning.
Comment: Proceedings of the Thirty-second Conference on Neural Information Processing Systems, 2018
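The mechanism the abstract describes, estimating the Gaussian process prior empirically from offline function samples and then running a GP-UCB-style acquisition against the estimated posterior, can be sketched as follows. This is a minimal illustration under assumptions of my own, not the paper's estimator: the RBF kernel, the pointwise empirical prior mean, and every function and parameter name here (`rbf_kernel`, `estimate_prior`, `gp_ucb_step`, `beta`, etc.) are invented for the sketch.

```python
import numpy as np

def rbf_kernel(a, b, lengthscale, variance):
    """Squared-exponential kernel k(a, b) for 1-D inputs (an assumed choice)."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

def estimate_prior(offline_Y):
    """Crude empirical-Bayes estimate of the prior from offline data:
    rows of offline_Y are functions sampled from the unknown prior,
    columns are a shared grid of inputs."""
    mean = offline_Y.mean(axis=0)             # pointwise prior-mean estimate
    variance = offline_Y.var(axis=0).mean()   # averaged signal-variance estimate
    return mean, variance

def gp_ucb_step(X, y, X_cand, prior_mean_fn, lengthscale, variance, noise, beta):
    """One GP-UCB acquisition step: return the candidate input maximizing
    posterior mean + beta * posterior standard deviation."""
    K = rbf_kernel(X, X, lengthscale, variance) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, X_cand, lengthscale, variance)
    alpha = np.linalg.solve(K, y - prior_mean_fn(X))
    mu = prior_mean_fn(X_cand) + Ks.T @ alpha
    # diag(Ks^T K^{-1} Ks) gives the variance reduction at each candidate
    var = variance - np.einsum("ij,ji->i", Ks.T, np.linalg.solve(K, Ks))
    ucb = mu + beta * np.sqrt(np.maximum(var, 0.0))
    return X_cand[np.argmax(ucb)]
```

The paper's point is that when `prior_mean_fn` and the kernel parameters come from offline samples of the same prior (as in `estimate_prior`) rather than being hand-specified, near-zero-regret guarantees can still be recovered; the sketch only shows where those estimates would plug in.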
Provably Safe Robot Navigation with Obstacle Uncertainty
As drones and autonomous cars become more widespread, it is becoming
increasingly important that robots can operate safely under realistic
conditions. The noisy information fed into real systems means that robots must
use estimates of the environment to plan navigation. Efficiently guaranteeing
that the resulting motion plans are safe under these circumstances has proved
difficult. We examine how to guarantee that a trajectory or policy is safe with
only imperfect observations of the environment. We examine the implications of
various mathematical formalisms of safety and arrive at a mathematical notion
of safety of a long-term execution, even when conditioned on observational
information. We present efficient algorithms that can prove that trajectories
or policies are safe with much tighter bounds than in previous work. Notably,
the complexity of the environment does not affect our methods' ability to
evaluate whether a trajectory or policy is safe. We then use these safety checking
methods to design a safe variant of the RRT planning algorithm.
Comment: RSS 201
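To make concrete why safety under imperfect observation is hard, here is a naive Monte Carlo collision check for a point robot and a single obstacle whose position is only known up to a Gaussian estimate. This baseline is assumed for contrast, not taken from the paper: it yields a sampling *estimate* of collision probability, with sampling error, whereas the paper's contribution is an efficient *provable* bound on safety; the function and parameter names are hypothetical.

```python
import numpy as np

def mc_collision_probability(trajectory, obstacle_mean, obstacle_cov,
                             radius, n_samples=10_000, seed=0):
    """Estimate the probability that a 2-D waypoint trajectory passes within
    `radius` of an obstacle with Gaussian-estimated position.
    Note: a Monte Carlo estimate, not a safety certificate."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(obstacle_mean, obstacle_cov, n_samples)
    # distance from every sampled obstacle position to every waypoint
    d = np.linalg.norm(trajectory[None, :, :] - samples[:, None, :], axis=-1)
    collided = d.min(axis=1) < radius  # collision if any waypoint is too close
    return collided.mean()
```

An estimate like this can be driven arbitrarily close to the true probability, but it never *proves* safety; closing that gap efficiently, for long-horizon executions conditioned on observations, is what the abstract's algorithms address.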
PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning
Many planning applications involve complex relationships defined on
high-dimensional, continuous variables. For example, robotic manipulation
requires planning with kinematic, collision, visibility, and motion constraints
involving robot configurations, object poses, and robot trajectories. These
constraints typically require specialized procedures to sample satisfying
values. We extend PDDL to support a generic, declarative specification for
these procedures that treats their implementation as black boxes. We provide
domain-independent algorithms that reduce PDDLStream problems to a sequence of
finite PDDL problems. We also introduce an algorithm that dynamically balances
exploring new candidate plans and exploiting existing ones. This enables the
algorithm to greedily search the space of parameter bindings to more quickly
solve tightly-constrained problems as well as locally optimize to produce
low-cost solutions. We evaluate our algorithms on three simulated robotic
planning domains as well as several real-world robotic tasks.
Comment: International Conference on Automated Planning and Scheduling (ICAPS), 2020
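The reduce-then-sample skeleton the abstract describes, optimistically assume every black-box sampler will succeed, solve the resulting finite problem, then try to bind the optimistic placeholders the candidate plan actually uses, can be sketched as below. All names here (`Stream`, `solve_finite`, `optimistic_loop`) are invented for illustration and are not the PDDLStream API, and the adaptive balancing of exploration and exploitation described in the abstract is omitted.

```python
import random

class Stream:
    """Black-box sampler wrapper: `sampler(rng)` returns a concrete value
    (e.g. a grasp pose) or None when the sampling attempt fails."""
    def __init__(self, name, sampler):
        self.name = name
        self.sampler = sampler

def optimistic_loop(solve_finite, streams, max_iters=20, seed=0):
    """Alternate between (1) solving a finite planning problem that
    optimistically assumes every stream can produce an output, and
    (2) calling the black-box samplers to bind the optimistic
    placeholders the candidate plan actually relies on."""
    rng = random.Random(seed)
    bindings = {}  # stream name -> concrete sampled value
    for _ in range(max_iters):
        plan = solve_finite(bindings, optimistic={s.name for s in streams})
        if plan is None:
            return None  # infeasible even under optimistic assumptions
        missing = [s for s in streams
                   if s.name in plan and s.name not in bindings]
        if not missing:
            return plan, bindings  # every placeholder is bound: done
        for s in missing:
            value = s.sampler(rng)
            if value is not None:
                bindings[s.name] = value
    return None  # sampling budget exhausted
```

A real implementation would, as the abstract notes, dynamically balance exploring new candidate plans against sampling values for existing ones; this sketch simply retries every unbound stream the current plan needs on each round.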