The goal of image oversegmentation is to divide an image into several pieces,
each of which should ideally be part of an object. One of the simplest and yet
most effective oversegmentation algorithms is known as local variation (LV)
(Felzenszwalb and Huttenlocher 2004). In this work, we study this algorithm and
show that algorithms similar to LV can be devised by applying different
statistical models and decisions, thus providing further theoretical
justification and a well-founded explanation for the unexpectedly high
performance of the LV approach. Some of these algorithms are based on
statistics of natural images and on a hypothesis-testing decision; we denote
these algorithms probabilistic local variation (pLV). The best pLV algorithm,
which relies on censored estimation, achieves state-of-the-art results while
retaining the computational complexity of the LV algorithm.
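For concreteness, the sketch below illustrates the greedy graph-merging rule underlying LV as described by Felzenszwalb and Huttenlocher (2004): edges are processed in order of increasing weight, and two components are merged when the connecting edge weight does not exceed the internal variation of either component plus a size-dependent threshold. The function and parameter names here (e.g. `local_variation_segment`, `k`) are illustrative choices, not taken from the original implementation, and edge weights are assumed to be precomputed pixel dissimilarities.

```python
# Illustrative sketch of the local variation (LV) merge rule
# (Felzenszwalb and Huttenlocher 2004). Names such as
# `local_variation_segment` and the default value of `k` are
# our own illustrative choices.

class DisjointSet:
    """Union-find tracking, per component, its size and internal
    difference Int(C) (largest edge weight merged into it so far)."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n  # Int(C)

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        ra, rb = self.find(a), self.find(b)
        if self.size[ra] < self.size[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        self.size[ra] += self.size[rb]
        self.internal[ra] = w  # valid: edges arrive in increasing order


def local_variation_segment(n_pixels, edges, k=300.0):
    """Oversegment a graph with `n_pixels` nodes.

    `edges` is a list of (weight, u, v) tuples, e.g. absolute intensity
    differences between 4- or 8-connected pixels. `k` sets the
    size-dependent threshold tau(C) = k / |C| that favors merging
    small components.
    """
    ds = DisjointSet(n_pixels)
    for w, u, v in sorted(edges):
        ru, rv = ds.find(u), ds.find(v)
        if ru == rv:
            continue
        # Merge when the connecting edge is no larger than the
        # "local variation" of both components.
        tau_u = ds.internal[ru] + k / ds.size[ru]
        tau_v = ds.internal[rv] + k / ds.size[rv]
        if w <= min(tau_u, tau_v):
            ds.union(ru, rv, w)
    return ds  # component label of a pixel: ds.find(pixel_index)
```

The pLV variants studied in this work replace the fixed threshold `tau(C) = k/|C|` in this decision with criteria derived from statistical models of natural images, while leaving the overall edge-sorting and union-find structure, and hence the computational complexity, unchanged.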