Safe Driving using Vision-based Hand Gesture Recognition System in Non-uniform Illumination Conditions

Abstract

Nowadays, in-car interfaces for driver safety and comfort are growing rapidly, but controlling these devices while driving demands the driver's attention. One solution to reduce the number of glances at these interfaces is to design an advanced driver assistance system (ADAS). A vision-based, touch-less hand gesture recognition system is proposed here for in-car human-machine interfaces (HMI). The performance of such systems is unreliable under ambient illumination, which changes over the course of the day. Thus, the main focus of this work was to design a system that is robust to changing lighting conditions. For this purpose, a homomorphic filter with adaptive-thresholding binarization is used, and gray-level edge-based segmentation ensures that the system generalizes across users of different skin tones and background colors. The work was validated on selected gestures from the Cambridge Hand Gesture Database, captured under five sets of non-uniform illumination conditions that closely resemble in-car illumination, yielding an overall system accuracy of 91%, an average frame-by-frame accuracy of 81.38%, and a latency of 3.78 milliseconds. A prototype of the proposed system was implemented on a Raspberry Pi 3 together with an Android application, demonstrating its suitability for non-critical in-car interfaces such as infotainment systems.
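The illumination-normalization stage named in the abstract, a homomorphic filter followed by adaptive-thresholding binarization, can be sketched as below. This is a minimal NumPy-only illustration of the general technique, not the authors' implementation; the filter gains (`gamma_l`, `gamma_h`), cutoff `d0`, window size `block`, and `offset` are assumed parameters chosen for demonstration.

```python
import numpy as np

def homomorphic_filter(img, gamma_l=0.5, gamma_h=2.0, c=1.0, d0=30.0):
    """Suppress illumination (low frequencies) and boost reflectance
    (high frequencies) with a Gaussian high-emphasis filter in the log domain."""
    img = img.astype(np.float64)
    log_img = np.log1p(img)  # log turns i(x,y) * r(x,y) into a sum
    spectrum = np.fft.fftshift(np.fft.fft2(log_img))
    rows, cols = img.shape
    u = np.arange(rows) - rows // 2
    v = np.arange(cols) - cols // 2
    dist2 = u[:, None] ** 2 + v[None, :] ** 2  # squared distance from DC
    # Transfer function: gamma_l at DC, rising to gamma_h at high frequency.
    H = (gamma_h - gamma_l) * (1.0 - np.exp(-c * dist2 / d0 ** 2)) + gamma_l
    filtered = np.real(np.fft.ifft2(np.fft.ifftshift(H * spectrum)))
    return np.expm1(filtered)  # back out of the log domain

def adaptive_threshold(img, block=15, offset=2.0):
    """Binarize each pixel against the local mean of a block x block
    neighborhood, computed quickly with an integral image."""
    pad = block // 2
    padded = np.pad(img, pad, mode="edge")
    ii = np.pad(padded.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    r, c = img.shape
    window_sum = (ii[block:block + r, block:block + c]
                  - ii[:r, block:block + c]
                  - ii[block:block + r, :c]
                  + ii[:r, :c])
    local_mean = window_sum / (block * block)
    return (img > local_mean - offset).astype(np.uint8)

# Example: a frame with a strong left-to-right illumination gradient,
# as might occur with sunlight entering from one side of the car.
frame = np.tile(np.linspace(10.0, 200.0, 64), (64, 1))
normalized = homomorphic_filter(frame)
mask = adaptive_threshold(normalized)
```

Because local-mean thresholding adapts to each neighborhood, the binarization tolerates the residual brightness gradient left after filtering, which is the property the abstract relies on for non-uniform in-car lighting.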
