Sampling Through the Lens of Sequential Decision Making
Sampling is ubiquitous in machine learning methodologies. Due to the growth
of large datasets and model complexity, we want to learn and adapt the sampling
process while training a representation. Towards achieving this grand goal, a
variety of sampling techniques have been proposed. However, most of them either
use a fixed sampling scheme or adjust the sampling scheme based on simple
heuristics; they cannot choose the best samples for model training at different
stages. Inspired by "Thinking, Fast and Slow" (System 1 and System 2) in cognitive
science, we propose a reward-guided sampling strategy called Adaptive Sample
with Reward (ASR) to tackle this challenge. To the best of our knowledge, this
is the first work utilizing reinforcement learning (RL) to address the sampling
problem in representation learning. Our approach adaptively adjusts the sampling
process to achieve optimal performance. We explore geographical relationships
among samples by distance-based sampling to maximize overall cumulative reward.
We apply ASR to the long-standing sampling problems in similarity-based loss
functions. Empirical results in information retrieval and clustering
demonstrate ASR's superb performance across different datasets. We also discuss
an intriguing phenomenon observed in our experiments, which we name the "ASR gravity well".
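
The abstract does not spell out ASR's algorithmic details, so the following is only a minimal sketch of the general idea it describes: a reward-guided, distance-based sampling policy trained with reinforcement learning. The discretization into distance bins, the REINFORCE-style update, the baseline, and the toy reward are all illustrative assumptions, not the paper's actual formulation (where the reward would come from the similarity-based loss or a downstream retrieval/clustering metric).

```python
# Illustrative sketch: a softmax policy over pairwise-distance bins,
# updated with a REINFORCE-style rule to maximize cumulative reward.
import numpy as np

rng = np.random.default_rng(0)

n_bins = 10                  # pairwise-distance range split into bins (assumed)
logits = np.zeros(n_bins)    # policy parameters over distance bins

def sample_bin(logits):
    """Sample a distance bin from the softmax policy."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return rng.choice(n_bins, p=p), p

baseline = 0.0
for step in range(1000):
    action, p = sample_bin(logits)
    # Toy stand-in for the training signal; in ASR this would be computed
    # from model performance after training on the sampled pairs.
    reward = -abs(action - 7) + rng.normal(scale=0.1)
    baseline = 0.9 * baseline + 0.1 * reward       # moving-average baseline
    # Policy-gradient step: grad of log softmax is (one-hot - p).
    grad = -p
    grad[action] += 1.0
    logits += 0.1 * (reward - baseline) * grad
```

Under this toy reward the policy concentrates on the high-reward bin over time, which is the qualitative behavior the abstract attributes to ASR: the sampling distribution shifts as training progresses rather than staying fixed.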
Demystify the Gravity Well in the Optimization Landscape (Student Abstract)
We provide both empirical and theoretical insights to demystify the gravity well phenomenon in the optimization landscape. We start by describing the problem setup and theoretical results (an escape time lower bound) for the Softmax Gravity Well (SGW) in the literature. We then move toward understanding a recently observed phenomenon called the ASR gravity well. We explain, from an energy function point of view, why a normal distribution with high variance can lead to suboptimal plateaus. We also contribute empirical insights into curriculum learning by comparing policy initializations drawn from different normal distributions. Furthermore, we provide an ASR escape time lower bound to understand the ASR gravity well theoretically. Future work includes more specific modeling of the reward as a function of time and quantitative evaluation of the normal distribution's influence on policy initialization.
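
The abstract's claim that high-variance normal initialization leads to suboptimal plateaus can be illustrated with a small numerical check (this is a toy demonstration of softmax saturation, not the paper's experiment or its energy-function argument): drawing logits from a wide normal makes the softmax policy nearly deterministic at initialization, which is exactly the low-gradient "gravity well" regime.

```python
# Toy illustration: high-variance normal initialization saturates the softmax.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for sigma in (0.1, 1.0, 10.0):
    rng = np.random.default_rng(0)            # same draw, different scale
    logits = rng.normal(scale=sigma, size=8)  # policy initialization (assumed size)
    p = softmax(logits)
    # Near-zero entropy means an almost deterministic initial policy,
    # i.e. the plateau regime the abstract describes.
    entropy = -(p * np.log(p + 1e-12)).sum()
    print(f"sigma={sigma:5.1f}  max prob={p.max():.3f}  entropy={entropy:.3f}")
```

As sigma grows, the maximum probability approaches 1 and the entropy collapses, so policy-gradient updates (which scale with the softmax probabilities of unchosen actions) become vanishingly small and escape from the plateau takes correspondingly long.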