Human Computation Games (HCGs) aim to engage volunteers in solving information tasks, yet often struggle to sustain engagement themselves. One potential reason is limited difficulty balancing: task difficulty is unknown a priori, and tasks cannot be freely modified. In this paper, we introduce the use of player rating systems for selecting and sequencing tasks as an approach to difficulty balancing in HCGs and in game genres facing similar challenges. We identify the bipartite structure of user-task graphs as a potential issue for our approach: users are never matched directly against other users, and tasks never against other tasks. We therefore test how well common rating systems predict outcomes in bipartite versus non-bipartite chess data sets and in log data from the HCG Paradox. Results indicate that bipartiteness does not negatively impact prediction accuracy: common rating systems outperform baseline predictions on HCG data, supporting the viability of our approach. We conclude by outlining limitations of our approach and directions for future work.
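To make the idea of rating both sides of a bipartite user-task graph concrete, the following is a minimal, hypothetical sketch using Elo-style updates; the paper evaluates several common rating systems rather than prescribing this particular implementation. Each user and each task holds a rating, a play attempt is treated as a match between a user and a task, and bipartiteness simply means every matchup crosses the partition.

```python
def expected_score(r_a: float, r_b: float) -> float:
    """Elo expected score of side A against side B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))


def elo_update(r_user: float, r_task: float, user_won: bool, k: float = 32.0):
    """Update both ratings after one user-task encounter.

    user_won=True means the user solved the task; the task 'loses'.
    The symmetric update is identical to standard Elo, so bipartiteness
    changes only which pairs can meet, not the update rule itself.
    """
    e_user = expected_score(r_user, r_task)
    s_user = 1.0 if user_won else 0.0
    delta = k * (s_user - e_user)
    return r_user + delta, r_task - delta


# Example: an evenly matched user and task; the user solves the task,
# so the user's rating rises and the task's rating falls by the same amount.
u, t = elo_update(1500.0, 1500.0, user_won=True)
```

Task ratings learned this way can then drive selection and sequencing, e.g. by serving each user tasks whose rating predicts a success probability near a target value.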