2 research outputs found

    ALL IN ONE NETWORK FOR DRIVER ATTENTION MONITORING

    Nowadays, driver drowsiness and driver distraction are considered major risks for fatal road accidents around the world. As a result, driver monitoring is emerging as an essential function of automotive safety systems. Its basic features include head pose, gaze direction, yawning, and eye state analysis. However, existing work has investigated algorithms that detect these tasks separately and was usually conducted under laboratory conditions. To address this problem, we propose a multi-task learning CNN framework which solves these tasks simultaneously. The network is implemented by sharing common features and parameters across highly related tasks. Moreover, we propose a Dual-Loss Block to decompose the pose estimation task into pose classification and coarse-to-fine regression, and an Object-centric Aware Block to reduce orientation estimation errors. With such novel designs, our model not only achieves state-of-the-art results but also reduces the complexity of integration into automotive safety systems. It runs at 10 fps on vehicle embedded systems, which marks a momentous step for this field. More importantly, to facilitate other researchers, we publish our dataset FDUDrivers, which contains 20000 images of 100 different drivers and covers various real driving environments. FDUDrivers might be the first comprehensive dataset regarding driver attention monitoring.

    Can adas distract driver’s attention? An rgb-d camera and deep learning-based analysis

    Driver inattention is the primary cause of vehicle accidents; hence, manufacturers have introduced systems to support the driver and improve safety. Nonetheless, advanced driver assistance systems (ADAS) must be properly designed so that the feedback they provide does not become a potential source of distraction for the driver. In the present study, an experiment involving auditory and haptic ADAS was conducted with 11 participants, whose attention was monitored during their driving experience. An RGB-D camera was used to acquire the drivers' face data. Subsequently, these images were analyzed using a deep learning-based approach, i.e., a convolutional neural network (CNN) specifically trained to perform facial expression recognition (FER). Analyses were carried out to assess possible relationships between these results and both ADAS activations and event occurrences, i.e., accidents. A correlation between attention and accidents emerged, whilst facial expressions and ADAS activations were found to be uncorrelated; thus, no evidence was found that the designed ADAS are a possible source of distraction. In addition to the experimental results, the proposed approach proved to be an effective tool for monitoring the driver through non-invasive techniques.
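The correlation analysis described above (attention vs. accidents, facial expressions vs. ADAS activations) can be sketched with a plain Pearson coefficient. This is a generic illustration with made-up data, not the study's actual method or measurements:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Illustrative per-participant series (hypothetical numbers):
attention_scores = [0.9, 0.7, 0.8, 0.4, 0.6]   # higher = more attentive
accident_counts  = [0,   2,   1,   3,   2]      # events per session

r = pearson(attention_scores, accident_counts)  # negative: less attention,
                                                # more accidents
```

Values of r near +1 or -1 indicate a strong linear relationship, while values near 0 (as reported for facial expressions vs. ADAS activations) indicate no linear correlation.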