Automatically predicting the outcome of subjective listening tests is a
challenging task. Ratings may vary from person to person even if preferences
are consistent across listeners. While previous work has focused on predicting
listeners' ratings (mean opinion scores) of individual stimuli, we focus on the
simpler task of predicting subjective preference given two speech stimuli for
the same text. We propose a model based on anti-symmetric twin neural networks,
trained on pairs of waveforms and their corresponding preference scores. We
explore both attention-based and recurrent neural networks to account for the
fact that the two stimuli in a pair are not time aligned.
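
A minimal sketch of how such an anti-symmetric twin network could look in
PyTorch follows; the GRU encoder, feature dimensions, and class name are
illustrative assumptions, not the paper's exact architecture. The point it
demonstrates is that the score is anti-symmetric by construction (swapping the
inputs flips the sign), and that a shared recurrent encoder summarises each
stimulus independently, so the two waveforms never need to be time aligned.

    import torch
    import torch.nn as nn

    class TwinPreferenceNet(nn.Module):
        """Illustrative anti-symmetric twin network (not the authors' exact
        model). Both stimuli share one encoder; the pairwise score satisfies
        score(a, b) == -score(b, a) by construction."""

        def __init__(self, n_mels: int = 80, hidden: int = 128):
            super().__init__()
            # Shared recurrent encoder: handles inputs of different lengths,
            # which matters because the paired stimuli are not time aligned.
            self.encoder = nn.GRU(n_mels, hidden, batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def embed(self, x: torch.Tensor) -> torch.Tensor:
            # x: (batch, frames, n_mels); the final hidden state acts as a
            # fixed-size summary of the variable-length stimulus.
            _, h = self.encoder(x)
            return h[-1]                      # (batch, hidden)

        def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
            # Anti-symmetry: swapping a and b flips the sign of the score.
            return self.head(self.embed(a)) - self.head(self.embed(b))

    # Usage: two stimuli of different lengths for the same text.
    net = TwinPreferenceNet()
    a = torch.randn(1, 220, 80)   # e.g. 220 frames of log-mel features
    b = torch.randn(1, 198, 80)   # different length: no alignment needed
    assert torch.allclose(net(a, b), -net(b, a))
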
To obtain a large training set, we convert listeners' ratings from MUSHRA tests
into values that reflect how often one stimulus in the pair was rated higher
than the other.
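
One plausible reading of this conversion is sketched below, under assumptions
the abstract does not spell out: every listener rates every system within a
MUSHRA screen, and ties count as half a win. The function name and data layout
are hypothetical, chosen only for illustration.

    from itertools import combinations

    def mushra_to_preferences(ratings: dict[str, list[float]]) -> dict:
        """Convert per-system MUSHRA ratings (one score per listener, same
        listeners for every system) into pairwise preference values in
        [0, 1]: the fraction of listeners who rated system x strictly
        higher than system y, with ties counted as half. This is one
        plausible scheme, not necessarily the authors' exact one."""
        prefs = {}
        for x, y in combinations(ratings, 2):
            wins = sum(
                1.0 if rx > ry else 0.5 if rx == ry else 0.0
                for rx, ry in zip(ratings[x], ratings[y])
            )
            prefs[(x, y)] = wins / len(ratings[x])
        return prefs

    # Example: three TTS systems rated by four listeners in one screen.
    scores = {
        "sys_A": [78, 85, 60, 90],
        "sys_B": [70, 85, 65, 80],
        "sys_C": [55, 60, 50, 70],
    }
    print(mushra_to_preferences(scores))
    # {('sys_A', 'sys_B'): 0.625, ('sys_A', 'sys_C'): 1.0,
    #  ('sys_B', 'sys_C'): 1.0}
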
We evaluate performance on data obtained from twelve MUSHRA evaluations
conducted over five years, covering different TTS systems built from the data
of different speakers. Our results compare favourably to a state-of-the-art
model trained to predict MOS.