Speech comprehension in a noisy environment requires active cognitive control mechanisms to select the relevant speech signal while filtering out irrelevant distractions. When processing speech in a multitask scenario, neural resources underlying cognitive control are considerably burdened, and interfering information becomes more difficult to ignore. The present study utilized magnetoencephalography (MEG) to investigate the impact of multitasking on selective attention to speech. Twenty healthy adults performed a multitask paradigm with varying levels of both competing auditory distraction and concurrent visual working memory load. While increased visual working memory load was associated with reduced selective attention to speech in both the presence and absence of competing distraction, auditory distraction alone did not hamper accurate detection of the target speech. Under lower visual working memory load, left temporal regions entrained more strongly to attended speech in the absence of auditory distraction, while entrainment of right frontal structures emerged when selectively attending amid competing speech. At these lower levels of cognitive demand, entrainment to the attended speech occurred at latencies corresponding to temporal variations of the speech envelope (300-400 ms). In contrast, neural entrainment to ignored speech occurred at earlier latencies (~50 ms) and was evident in left parietal regions under lower working memory demands, as well as in left temporal cortex when compared to the attended speech stream. Taken together, the present findings provide evidence for both top-down neural enhancement and suppression mechanisms subserving selective attention to speech while multitasking. The results further demonstrate that both enhancement and suppression mechanisms are modulated by concurrent task load. Such findings provide a foundation for investigating impairments in cognitive control and selective attention to speech associated with normal aging and neurological disease.