Replay methods have been shown to successfully mitigate catastrophic
forgetting in continual learning scenarios despite having limited access to
historical data. However, in many real-world applications storing historical
data is cheap, yet replaying all of it is prohibitive due to processing-time
constraints. In such settings, we propose learning the time to learn for a
continual learning system, i.e., learning replay schedules that specify which
tasks to replay at different time steps. To demonstrate the importance of
learning the time to learn, we first use Monte Carlo tree search to find
proper replay schedules and show that they can outperform fixed scheduling
policies in terms of continual learning performance. Moreover, to improve
scheduling efficiency itself, we propose using reinforcement learning to learn
replay scheduling policies that can generalize to new continual learning
scenarios without added computational cost. In our experiments, we demonstrate
the advantages of learning the time to learn, which brings current continual
learning research closer to real-world needs.
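As a minimal illustration of the central object here (the names and the validity check are our own, not taken from any specific implementation), a replay schedule can be represented as a per-time-step assignment of which previously seen tasks to replay:

```python
# Hypothetical sketch: a replay schedule assigns, to each time step t,
# the subset of previously seen tasks whose stored samples are replayed
# while training on task t. Names and interfaces are illustrative only.

def valid_schedule(schedule):
    """Check that at time step t only tasks seen strictly before t are replayed."""
    return all(task < t for t, replayed in enumerate(schedule) for task in replayed)

# Example over 4 tasks: at step 0 there is nothing to replay yet; at step 3
# the scheduler chooses to replay only tasks 0 and 2.
schedule = [set(), {0}, {0, 1}, {0, 2}]
print(valid_schedule(schedule))  # True
```

A scheduler (e.g. a tree search or a learned policy) then amounts to choosing one such subset at each time step under a fixed replay-budget constraint.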