It is hard to test autonomous robot (AR) software because of the range and
diversity of external situations (terrain, obstacles, humans, peer robots) that
an AR must deal with. Common measures of testing adequacy may not address this
diversity. Explicit situation coverage has been proposed as a solution, but
there has been little empirical study of its effectiveness. In this paper, we
describe an implementation of situation coverage for testing a simple simulated
autonomous road vehicle, and evaluate its ability to find seeded faults
relative to that of a random test generation approach. In our experiments, the
performance of the two methods is similar, with situation coverage having a
very slight advantage. We conclude that situation coverage probably does not
have a significant benefit over random generation for the type of simple,
research-grade AR software used here. We expect, however, that it will prove
valuable when applied to more complex and mature software.