A comparison of feature extractors for panorama stitching in an autonomous car architecture.

Abstract

Panorama stitching consists of combining multiple frames to create a 360° view. We propose this technique for implementation in autonomous vehicles as an alternative to an external 360° camera, mainly due to its reduced cost and improved aerodynamics. This strategy requires a fast and robust set of features to be extracted from the images captured by the cameras located around the car, so that the panoramic view can be computed in real time and road hazards avoided. In this paper, we compare and discuss three feature extraction methods (SIFT, BRISK, and SURF) in order to determine which is most suitable for a panorama stitching application in an autonomous car architecture. Experimental validation shows that SURF exhibits improved performance under a variety of image transformations, and thus appears to be the most suitable of the three methods, given its accuracy when matching features between image pairs while maintaining low time consumption. Furthermore, comparing our results with those of similar work increases the reliability of our methodology and the reach of our conclusions.
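Regardless of which extractor is chosen (SIFT, BRISK, or SURF), the stitching pipeline must match descriptors between adjacent camera frames before a homography can be estimated. A minimal sketch of the matching stage, using Lowe's ratio test over plain NumPy arrays (the function name and the synthetic descriptors are illustrative, not from this paper), looks as follows:

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.75):
    """Match two descriptor sets with Lowe's ratio test.

    desc_a, desc_b: (N, D) arrays of feature descriptors from two frames.
    Returns a list of (i, j) index pairs deemed reliable matches.
    """
    matches = []
    for i, d in enumerate(desc_a):
        # Euclidean distance from this descriptor to every descriptor
        # in the second frame
        dists = np.linalg.norm(desc_b - d, axis=1)
        nearest = np.argsort(dists)[:2]
        # Keep the match only if the best candidate is clearly better
        # than the second best (filters ambiguous correspondences)
        if dists[nearest[0]] < ratio * dists[nearest[1]]:
            matches.append((i, int(nearest[0])))
    return matches

# Illustrative check with synthetic descriptors: the first three rows of
# desc_b, slightly perturbed, should match back to themselves
rng = np.random.default_rng(0)
desc_b = rng.normal(size=(10, 8))
desc_a = desc_b[:3] + 0.01
print(match_descriptors(desc_a, desc_b))
```

The surviving pairs would then feed a RANSAC-based homography estimation to warp and blend the frames into the panorama; the ratio threshold (0.75 here) trades match count against reliability.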

Similar works