We propose a general and versatile framework that significantly speeds up graphical model optimization while maintaining excellent solution accuracy. The proposed approach, referred to as Inference by Learning or, in short, IbyL, relies on a multi-scale pruning scheme that progressively reduces the solution space through a coarse-to-fine cascade of learnt classifiers. We thoroughly evaluate our framework on classic computer vision MRF problems, where it consistently yields a significant speed-up (with respect to the most efficient inference methods) while obtaining a more accurate solution than directly optimizing the MRF. We make our code available online [4].