Reinforcement Learning from Human Feedback (RLHF) has played a crucial role
in the success of large models such as ChatGPT. RLHF is a reinforcement
learning framework that incorporates human feedback to improve learning
effectiveness and performance. However, collecting preference feedback manually
is quite expensive in commercial applications. Meanwhile, statistical business
indicators are often more valuable, yet they are usually ignored in RLHF,
leaving a gap between commercial objectives and model training. In this work,
we attempt to fill this gap with statistical business feedback instead of human
feedback, using AB testing, a well-established statistical method.
We propose Reinforcement Learning from Statistical Feedback (RLSF) based on
AB testing. Statistical inference methods are used to obtain preferences for
training the reward network, which in turn fine-tunes the pre-trained model
within a reinforcement learning framework, achieving greater business value.
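As a concrete illustration of this step, the following is a minimal Python sketch, assuming conversion-count AB metrics, of how a two-proportion z-test could turn business statistics into a preference label that then enters a Bradley-Terry style reward-model loss; the function names, the 1.96 threshold, and the loss form are illustrative assumptions, not the exact procedure.

import math

def ab_preference(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test on AB conversion counts.
    Returns 1 if variant A is significantly preferred, 0 if B is,
    and None if the test is inconclusive (illustrative assumption)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    z_crit = 1.96  # two-sided critical value at alpha = 0.05
    if z > z_crit:
        return 1       # A preferred
    if z < -z_crit:
        return 0       # B preferred
    return None        # no significant difference; discard the pair

def bt_loss(r_a, r_b, label):
    """Bradley-Terry negative log-likelihood for one labeled pair of
    reward scores; label = 1 means A is preferred."""
    p_a_wins = 1.0 / (1.0 + math.exp(-(r_a - r_b)))
    return -math.log(p_a_wins if label == 1 else 1.0 - p_a_wins)

# Example: A converts 1200/10000 users, B converts 1100/10000.
label = ab_preference(1200, 10000, 1100, 10000)
if label is not None:
    print("preference label:", label, " loss:", bt_loss(0.3, 0.1, label))

Note that inconclusive pairs are discarded rather than labeled, reflecting that a statistically insignificant AB result carries no preference signal for the reward network.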
Furthermore, we extend AB testing, which compares two choices at a single time
point, to ANT testing, which supports multiple choices at different feedback
time points.
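The multi-choice, multi-time-point setting can be sketched in the same way; below is a hedged Python sketch, under the assumption that each feedback time point reports per-variant conversion counts, that pools statistically significant pairwise preferences across variants and time points.

import math
from itertools import combinations

def significant_winner(ca, na, cb, nb, z_crit=1.96):
    """Two-proportion z-test; returns 'a', 'b', or None if inconclusive."""
    p_pool = (ca + cb) / (na + nb)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / na + 1 / nb))
    z = (ca / na - cb / nb) / se
    return "a" if z > z_crit else "b" if z < -z_crit else None

def ant_preferences(rounds):
    """rounds: one dict per feedback time point mapping variant name to
    (conversions, trials) -- an assumed data layout. Returns (winner, loser)
    preference pairs pooled across all time points."""
    pairs = []
    for metrics in rounds:
        for a, b in combinations(metrics, 2):
            win = significant_winner(*metrics[a], *metrics[b])
            if win == "a":
                pairs.append((a, b))
            elif win == "b":
                pairs.append((b, a))
    return pairs

# Example: three variants observed at two feedback time points.
rounds = [
    {"A": (1200, 10000), "B": (1100, 10000), "C": (900, 10000)},
    {"A": (1150, 10000), "B": (1250, 10000), "C": (1000, 10000)},
]
print(ant_preferences(rounds))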
Moreover, we design numerical experiments to validate the effectiveness of our
algorithmic framework.