The influence of atmospheric turbulence on acquired surveillance imagery
poses significant challenges in image interpretation and scene analysis.
Conventional approaches to target classification and tracking are less
effective under such conditions. While deep-learning-based object detection
methods have achieved great success under normal conditions, they cannot be
directly applied to sequences degraded by atmospheric turbulence. In this paper, we propose a novel
framework that learns distorted features to detect and classify object types in
turbulent environments. Specifically, we utilise deformable convolutions to
handle spatial turbulent displacement. Features are extracted using a feature
pyramid network, and Faster R-CNN is employed as the object detector.
Experimental results on a synthetic VOC dataset demonstrate that the proposed
framework outperforms the benchmark, achieving a mean Average Precision (mAP)
score exceeding 30%. Additionally, subjective results on real data show a
significant improvement in performance.
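
The core mechanism the framework relies on, deformable convolution, can be sketched as follows. This is a minimal single-channel NumPy illustration of how learned per-tap offsets let a 3×3 kernel sample away from its regular grid (useful when turbulence displaces pixels spatially); the function names and the single-location, single-channel setting are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bilinear_sample(img, y, x):
    """Bilinearly sample img at continuous coordinates (y, x), zero-padded."""
    h, w = img.shape
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    val = 0.0
    for dy in (0, 1):
        for dx in (0, 1):
            yy, xx = y0 + dy, x0 + dx
            if 0 <= yy < h and 0 <= xx < w:
                # Weight each neighbour by its proximity to (y, x).
                val += (1 - abs(y - yy)) * (1 - abs(x - xx)) * img[yy, xx]
    return val

def deformable_conv_point(img, weights, offsets, cy, cx):
    """One output value of a 3x3 deformable convolution centred at (cy, cx).

    weights: (3, 3) kernel; offsets: (3, 3, 2) learned (dy, dx) shifts that
    move each tap off its regular grid position before sampling.
    """
    out = 0.0
    for i in range(3):
        for j in range(3):
            dy, dx = offsets[i, j]
            out += weights[i, j] * bilinear_sample(
                img, cy + i - 1 + dy, cx + j - 1 + dx)
    return out
```

With all offsets set to zero this reduces exactly to an ordinary 3×3 convolution; in the deformable case the offsets are themselves predicted by a convolutional layer from the input features, which is what allows the detector to compensate for turbulence-induced displacement.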