We use AI agents to play successive design iterations of an analogue board game to understand the sorts of questions a designer asks of a game, and how AI play-testing approaches can help answer these questions and reduce the need for time-consuming human play-testing. Our case study supports the view that AI play-testing can complement human testing, but certainly cannot replace it. A core issue to be addressed is the extent to which the designer trusts the results of AI play-testing as sufficiently human-like. The majority of design changes were inspired by human play-testing, but AI play-testing helpfully complemented these and often gave the designer the confidence to make changes faster where the AI and human testers 'agreed'.