Distortion widely exists in images captured by popular wide-angle and fisheye cameras. Despite the long history of distortion rectification, accurately estimating distortion parameters from a single distorted image remains challenging. The main reason is that these parameters have an implicit relationship with image features, which hinders networks from fully learning the distortion information. In this work, we propose a novel distortion
rectification approach that can obtain more accurate parameters with higher
efficiency. Our key insight is that distortion rectification can be cast as a
problem of learning an ordinal distortion from a single distorted image. To
solve this problem, we design a local-global associated estimation network that
learns the ordinal distortion to approximate the realistic distortion
distribution. In contrast to the implicit distortion parameters, the proposed ordinal distortion has a more explicit relationship with image features and thus significantly boosts the distortion perception of neural networks. Considering the redundancy of distortion information, our approach uses only part of the distorted image for ordinal distortion estimation, showing promising applications in efficient distortion rectification. To our knowledge, we are the first to unify heterogeneous distortion parameters into a learning-friendly intermediate representation through the ordinal distortion, bridging the gap between image features and distortion rectification. Experimental results demonstrate that our approach outperforms state-of-the-art methods by a significant margin, with an approximately 23% improvement in quantitative evaluation, while achieving the best performance in visual appearance.
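
To make the notion of an ordinal distortion concrete, the following minimal sketch assumes a common polynomial radial distortion model, r_d = r(1 + k1 r^2 + k2 r^4 + ...); the coefficient values, the sampled radii, and the helper distortion_level are illustrative assumptions, not details taken from this work.

```python
import numpy as np

def distortion_level(r, ks):
    """Distortion level at normalized radius r: the ratio between the
    distorted and undistorted radius under a polynomial radial model."""
    return 1.0 + sum(k * r ** (2 * (i + 1)) for i, k in enumerate(ks))

# Hypothetical distortion coefficients (k1..k4) and sampled radii; these
# values are illustrative only, not taken from the paper.
ks = [-0.3, 0.08, -0.01, 0.001]
radii = np.linspace(0.25, 1.0, 4)  # ordered distances from the principal point

# The ordinal distortion: the sequence of distortion levels at increasing
# radii. For typical barrel distortion this sequence changes monotonically
# with the radius, so it is ordered, giving the network an explicit,
# image-aligned target rather than the raw coefficients ks.
ordinal = [distortion_level(r, ks) for r in radii]
print(ordinal)  # e.g. [0.981..., 0.929..., 0.854..., 0.771]
```

Under this assumed model, the ordered sequence of levels is what the estimation network would predict, and the underlying coefficients can then be recovered from it by fitting the polynomial.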