Two-Dimensional Positioning with Machine Learning in Virtual and Real Environments

Abstract

In this paper, a ball-on-plate control system driven solely by a neural network agent is presented. No control solution or support other than reinforcement learning was applied. The implemented device, actuated by two servo motors, learned by itself, over thousands of iterations, how to keep the ball at the center of a resistive touch sensor. We compared the real-world performance of agents trained in a real-world environment and in a virtual environment. We also examined the efficacy of a virtually pre-trained agent fine-tuned in the real environment. The obtained results were evaluated and compared to determine which approach provides a good basis for a control task implemented purely with a neural network.
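The virtual training environment described above can be sketched as a minimal point-mass simulation of a tilting plate. The dynamics model, time step, plate dimensions, and distance-based reward below are illustrative assumptions for a sketch of this kind of setup, not the authors' actual implementation.

```python
import math
import random

class BallOnPlateSim:
    """Minimal point-mass ball-on-plate environment (all parameters assumed)."""
    G = 9.81      # gravitational acceleration [m/s^2]
    DT = 0.02     # control period [s] (assumed)
    LIMIT = 0.1   # half-width of the sensor plate [m] (assumed)

    def reset(self):
        # Random start position on the plate, zero initial velocity.
        self.x = random.uniform(-self.LIMIT, self.LIMIT)
        self.y = random.uniform(-self.LIMIT, self.LIMIT)
        self.vx = self.vy = 0.0
        return (self.x, self.y, self.vx, self.vy)

    def step(self, tilt_x, tilt_y):
        # Simple model: acceleration = g * sin(tilt) along each plate axis.
        self.vx += self.G * math.sin(tilt_x) * self.DT
        self.vy += self.G * math.sin(tilt_y) * self.DT
        self.x += self.vx * self.DT
        self.y += self.vy * self.DT
        # Reward shaping: negative distance from the plate center,
        # so the agent is rewarded for keeping the ball centered.
        reward = -math.hypot(self.x, self.y)
        # Episode ends when the ball leaves the sensor area.
        done = abs(self.x) > self.LIMIT or abs(self.y) > self.LIMIT
        return (self.x, self.y, self.vx, self.vy), reward, done
```

An RL agent would interact with such an environment by observing the state tuple, choosing the two servo tilt angles, and maximizing the accumulated reward; the same interface could then be backed by the physical device for real-world training or fine-tuning.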
