Policy search can in principle acquire complex strategies for control of
robots and other autonomous systems. When the policy is trained to process raw
sensory inputs, such as images and depth maps, it can also acquire a strategy
that combines perception and control. However, effectively processing such
complex inputs requires an expressive policy class, such as a large neural
network. These high-dimensional policies are difficult to train, especially
when learning to control safety-critical systems. We propose PLATO, an
algorithm that trains complex control policies with supervised learning, using
model-predictive control (MPC) to generate the supervision, so that the
partially trained and potentially unsafe policy never needs to be executed. PLATO uses an
adaptive training method to modify the behavior of MPC to gradually match the
learned policy in order to generate training samples at states that are likely
to be visited by the learned policy. PLATO also maintains the MPC cost as an
objective to avoid highly undesirable actions that would result from strictly
following the learned policy before it has been fully trained. We prove that
this type of adaptive MPC expert produces supervision that leads to good
long-horizon performance of the resulting policy. We also empirically
demonstrate that MPC can still avoid dangerous on-policy actions in unexpected
situations during training. Our empirical results on a set of challenging
simulated aerial vehicle tasks demonstrate that, compared to prior methods,
PLATO learns faster, experiences substantially fewer catastrophic failures
(crashes) during training, and often converges to a better policy.
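
To make the training scheme described above concrete, the following Python sketch outlines one way such a loop could be organized. The env, mpc_expert, and policy interfaces, as well as the KL-weight schedule, are illustrative assumptions rather than the paper's implementation; the sketch only mirrors the structure stated in the abstract: an adaptive MPC expert chooses the executed actions (so the partially trained policy is never run on the system), its behavior is gradually pulled toward the learned policy, and the policy is trained by supervised learning on standard MPC actions collected at the visited states.

def plato_style_training(env, mpc_expert, policy, num_iters=50, horizon=100,
                         final_kl_weight=5.0):
    """Sketch of a PLATO-style training loop (all interfaces hypothetical).

    mpc_expert.act(state, policy, kl_weight) is assumed to return
      - exec_action: action from the adaptive MPC expert, which optimizes the
        task cost plus kl_weight times a divergence from the learned policy
      - label_action: the standard MPC action used as the supervision target
    policy.fit(dataset) is assumed to run supervised learning on
    (state, label) pairs, and env follows a minimal reset()/step() interface.
    """
    dataset = []  # aggregated (state, MPC label action) pairs
    for it in range(num_iters):
        # Assumed schedule: increase the KL weight so the MPC expert gradually
        # matches the learned policy, producing training states close to those
        # the learned policy itself would visit.
        kl_weight = final_kl_weight * it / max(num_iters - 1, 1)
        state = env.reset()
        for _ in range(horizon):
            # The adaptive MPC expert, not the partially trained policy,
            # chooses the executed action, so the task cost is always
            # optimized and unsafe on-policy actions are avoided.
            exec_action, label_action = mpc_expert.act(state, policy, kl_weight)
            dataset.append((state, label_action))
            state, done = env.step(exec_action)
            if done:
                break
        policy.fit(dataset)  # supervised learning on all collected data
    return policy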