As a result of massive digitization efforts and the World Wide
Web, there is a rapidly growing amount of available digital data describing
and representing music at various semantic levels and in diverse formats.
For example, in the case of Beatles songs, there are numerous recordings,
including an increasing number of cover versions and arrangements, as well
as MIDI data and other symbolic music representations. The general
goal of music synchronization is to align the multiple information sources
related to a given piece of music. This becomes a difficult problem when
the various representations reveal significant differences in structure and
polyphony, while exhibiting various types of artifacts. In this paper, we
address the question of how music synchronization techniques can be used
to automatically reveal critical passages with significant differences
between the two versions to be aligned. Using the corpus of Beatles
songs as a test bed, we analyze the kinds of differences occurring in the audio
and MIDI versions available for the song
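Music synchronization of this kind is commonly computed with dynamic time warping (DTW). The following is a minimal illustrative sketch, not the paper's actual method: it aligns two toy 1-D feature sequences (real systems would use chroma or similar audio features) and flags alignment cells with high local cost as candidate critical passages.

```python
def dtw(x, y):
    """Return the accumulated-cost matrix and an optimal warping path
    for two 1-D sequences under absolute-difference local cost."""
    n, m = len(x), len(y)
    INF = float("inf")
    # D[i][j]: minimal cost of aligning x[:i] with y[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    # Backtrack an optimal path from (n, m) to (1, 1)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((D[i - 1][j - 1], i - 1, j - 1),
                      (D[i - 1][j], i - 1, j),
                      (D[i][j - 1], i, j - 1))
    path.reverse()
    return D, path

# Toy example: the second version deviates in the middle,
# mimicking a structurally different ("critical") passage.
x = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
y = [0.0, 1.0, 9.0, 9.0, 4.0, 5.0]
D, path = dtw(x, y)

# Flag aligned frame pairs whose local cost exceeds a threshold.
critical = [(i, j) for i, j in path if abs(x[i] - y[j]) > 2.0]
print(critical)
```

In a full system, the flagged cells would be merged into contiguous time intervals and mapped back to positions in the audio and MIDI versions; the threshold here is an arbitrary illustrative choice.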