Algorithms for First-order Sparse Reinforcement Learning
This thesis presents a general framework for first-order temporal-difference (TD) learning algorithms, together with an in-depth theoretical analysis. The main contribution is the design of a family of first-order regularized TD algorithms based on stochastic approximation and stochastic optimization. To scale TD algorithms to large problems, we use first-order optimization to develop regularized TD methods with linear value-function approximation. Previous regularized TD methods often rely on matrix inversion, which requires cubic time and quadratic memory. We propose two algorithms, sparse-Q and RO-TD, for on-policy and off-policy learning, respectively. Both have linear per-step computational complexity, and we establish their asymptotic convergence guarantees and error bounds using stochastic optimization and stochastic approximation. The second major contribution is a unified framework for stochastic-gradient-based TD learning algorithms built on proximal gradient methods. We introduce a primal-dual saddle-point formulation and use state-of-the-art stochastic gradient solvers, such as mirror descent and the extragradient method, to design several novel RL algorithms. We provide theoretical analysis, covering regularization, acceleration, and finite-sample guarantees, along with detailed empirical experiments demonstrating the effectiveness of the proposed algorithms.
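The proximal-gradient pattern behind such regularized TD methods can be sketched in a few lines. The code below is a hedged illustration, not the thesis's sparse-Q or RO-TD algorithms: it pairs a plain TD(0) semi-gradient step under linear value-function approximation with an l1 proximal (soft-thresholding) step, which is what yields sparsity at linear per-step cost with no matrix inversion. All names and hyperparameters here are illustrative.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def regularized_td0_step(theta, phi_s, phi_s_next, reward,
                         alpha=0.1, gamma=0.99, lam=0.01):
    """One proximal TD(0) update with linear approximation V(s) ~= theta . phi(s).

    A stochastic semi-gradient step driven by the TD error is followed by an
    l1 proximal step; total cost is O(d) per transition.
    """
    td_error = reward + gamma * phi_s_next @ theta - phi_s @ theta
    theta = theta + alpha * td_error * phi_s   # semi-gradient step
    return soft_threshold(theta, alpha * lam)  # proximal (regularization) step

# Toy usage on random features (purely illustrative transitions).
rng = np.random.default_rng(0)
theta = np.zeros(5)
for _ in range(100):
    phi, phi_next = rng.standard_normal(5), rng.standard_normal(5)
    theta = regularized_td0_step(theta, phi, phi_next,
                                 reward=rng.standard_normal())
```

The soft-thresholding step is what distinguishes this from vanilla TD(0): it shrinks small weights exactly to zero, giving sparse value-function coefficients.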
The importance of better models in stochastic optimization
Standard stochastic optimization methods are brittle: they are sensitive to stepsize choices and other algorithmic parameters, and they exhibit instability outside well-behaved families of objectives. To address these challenges, we investigate models for stochastic minimization and learning problems that are more robust to the problem family and to algorithmic parameters. With appropriately accurate models, which we call the aProx family, stochastic methods can be made stable, provably convergent, and asymptotically optimal; even modeling the fact that the objective is nonnegative is sufficient for this stability. We extend these results beyond convexity to weakly convex objectives, which include compositions of convex losses with smooth functions common in modern machine learning applications. We highlight the importance of robustness and accurate modeling with a careful experimental evaluation of convergence time and algorithm sensitivity.
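As one concrete instance of "even modeling that the objective is nonnegative is sufficient", the truncated-model update below replaces the plain SGD step with the minimizer of the linearized loss truncated below at zero plus a proximal term; in closed form, this simply clips the effective stepsize. This is a sketch under the assumption that each per-sample loss is nonnegative (here, least squares), not the paper's full aProx implementation.

```python
import numpy as np

def truncated_model_step(x, loss, grad, alpha):
    """One truncated-model (aProx-style) step for a nonnegative sample loss.

    Minimizing max(loss + grad.(y - x), 0) + ||y - x||^2 / (2*alpha) gives
    x - min(alpha, loss / ||grad||^2) * grad: the stepsize is clipped so the
    linear model never predicts a negative loss, which keeps the method
    stable even for very large alpha.
    """
    g2 = grad @ grad
    if g2 == 0.0:
        return x
    return x - min(alpha, loss / g2) * grad

# Toy usage: least-squares losses f(x; a, b) = 0.5 * (a.x - b)^2, which are
# nonnegative. An uncllipped SGD step with alpha = 10 would blow up here;
# the truncated model remains stable at the same stepsize.
rng = np.random.default_rng(1)
x_true = rng.standard_normal(3)
x = np.zeros(3)
for _ in range(500):
    a = rng.standard_normal(3)
    b = a @ x_true
    resid = a @ x - b
    x = truncated_model_step(x, loss=0.5 * resid**2, grad=resid * a, alpha=10.0)
```

The clipped step is what the abstract means by a "better model": the extra piece of information (nonnegativity of the loss) is cheap, yet it makes the iteration insensitive to the stepsize choice.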