Perceptual adaptation to a talker allows listeners to efficiently resolve ambiguities in the speech signal that arise from the lack of a one-to-one mapping between acoustic signals and intended phonemic categories across talkers. In ideal listening environments, preceding speech context has been found to enhance perceptual adaptation to a talker. However, little is known about how perceptual adaptation to speech occurs in more realistic listening environments with background noise. The current investigation explored how talker variability and preceding speech context affect the identification of phonetically confusable words in adverse listening conditions. Our results showed that listeners were less accurate and slower at identifying mixed-talker speech than single-talker speech when target words were presented in multi-talker babble, and that preceding speech context enhanced word identification in noise in both single- and mixed-talker conditions. These results extend previous findings of perceptual adaptation to talker-specific speech in quiet environments, suggesting that the same underlying mechanisms may support perceptual adaptation to speech both in quiet and in noise. We propose that cognitive and attentional mechanisms jointly underlie perceptual adaptation to speech, including an active control process that preallocates cognitive resources to processing talker variability and auditory streaming processes that support successful feedforward allocation of attention to salient talker-specific features.