We present a new scheme for cleaning a sample corrupted by labeling noise. Our scheme is universal in the sense that we make only general assumptions on the dual learning problem, and it is therefore completely detached from the specifics of the primal problem itself. In a nutshell, we turn to the dual learning problem to exploit valuable information about the underlying structure of the primal one, which in turn provides the means to devise a simple "noise cleaning" mechanism using Membership Queries. We demonstrate the strength and applicability of the suggested method on a few learning problems of different nature. Of particular interest is the problem of learning in the restricted class of parity functions, where only k out of the n bits are active. We show that in the MQ model we can outperform the recent result by Blum et al. and handle k = O(n - c log(n) log log(n)). This also provides a sharp separation between our method and the SQ model. The suggested..
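To make the setting concrete, the following is a minimal sketch (not the paper's scheme) of the parity learning problem under noisy membership queries. It assumes the simplest possible noise model, where each query answer is flipped independently with probability eta, so a label can be "cleaned" by repeating the same query and taking a majority vote; the unit-vector queries then reveal the k active bits, since a parity chi_S satisfies chi_S(e_i) = 1 exactly when i is in S. All names (`make_noisy_mq`, `clean_label`, `learn_parity`) are hypothetical.

```python
import random

def make_noisy_mq(secret, eta, rng):
    """Membership-query oracle for the parity chi_S(x) = XOR of x[i] for i in S.
    Each answer is flipped independently with probability eta (non-persistent
    noise -- a simplifying assumption, not the noise model of the paper)."""
    def oracle(x):
        true_label = sum(x[i] for i in secret) % 2
        flip = 1 if rng.random() < eta else 0
        return true_label ^ flip
    return oracle

def clean_label(oracle, x, repeats=25):
    """Denoise one label by re-asking the same membership query and
    taking a majority vote over the answers."""
    ones = sum(oracle(x) for _ in range(repeats))
    return 1 if 2 * ones > repeats else 0

def learn_parity(oracle, n, repeats=25):
    """Recover the active set S by querying cleaned unit vectors:
    for a parity function, the label of e_i is 1 iff bit i is active."""
    recovered = set()
    for i in range(n):
        e_i = [0] * n
        e_i[i] = 1
        if clean_label(oracle, e_i, repeats):
            recovered.add(i)
    return recovered
```

For example, with n = 16, a hidden active set {2, 5, 11}, and noise rate eta = 0.2, `learn_parity` recovers the active set with overwhelming probability. Note that under *persistent* classification noise, repeating the same query returns the same corrupted answer, which is precisely why a less naive cleaning mechanism is needed.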