A refined non-driving activity classification using a two-stream convolutional neural network
It is of great importance to monitor the driver's status to achieve an intelligent and safe take-over transition in a Level 3 automated vehicle. We present a camera-based system that recognises the non-driving activities (NDAs), which may lead to different cognitive capabilities for take-over, based on a fusion of spatial and temporal information. The region of interest (ROI) is automatically selected based on the extracted masks of the driver and the object/device the driver is interacting with. Then, the RGB image of the ROI (the spatial stream) and its associated current and historical optical-flow frames (the temporal stream) are fed into a two-stream convolutional neural network (CNN) for the classification of NDAs. Such an approach is able to identify not only the object/device but also the interaction mode between the object and the driver, which enables a refined NDA classification. In this paper, we evaluated the performance of classifying 10 NDAs with two types of devices (tablet and phone) and 5 types of tasks (emailing, reading, watching videos, web-browsing and gaming) for 10 participants. Results show that the proposed system improves the average classification accuracy from 61.0% when using a single spatial stream to 90.5%.
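The abstract does not specify how the two streams are combined, but a common approach for two-stream CNNs is late (score-level) fusion: each stream produces per-class scores, and the fused prediction is a weighted average of the two softmax distributions. The sketch below illustrates that idea only; the logit values and the equal stream weighting are hypothetical, not taken from the paper.

```python
import math

def softmax(logits):
    """Convert raw class scores into a probability distribution."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def fuse_streams(spatial_logits, temporal_logits, w_spatial=0.5):
    """Late fusion: weighted average of the per-stream class probabilities.

    Returns (predicted_class_index, fused_probabilities).
    The 0.5/0.5 weighting is an illustrative assumption.
    """
    p_s = softmax(spatial_logits)
    p_t = softmax(temporal_logits)
    fused = [w_spatial * a + (1.0 - w_spatial) * b for a, b in zip(p_s, p_t)]
    return fused.index(max(fused)), fused

# Hypothetical 3-class example: the spatial stream weakly prefers class 0,
# the temporal stream strongly prefers class 1, so fusion picks class 1.
spatial = [1.0, 0.8, 0.0]
temporal = [0.0, 2.0, 0.1]
pred, probs = fuse_streams(spatial, temporal)
```

Score-level fusion of this kind lets the motion cue (optical flow) override an ambiguous appearance cue, which is one plausible reason the fused system outperforms the single spatial stream in the reported results.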
Driver behaviour characterization using artificial intelligence techniques in level 3 automated vehicle.
Brighton, James L. - Associate Supervisor

Autonomous vehicles free drivers from the driving task and allow them to engage in non-driving related activities. However, engagement in such activities can reduce drivers' awareness of the driving environment, which poses a potential risk to the takeover process at the current automation level of the intelligent vehicle. Therefore, it is of great importance to monitor the driver's behaviour when the vehicle is in automated driving mode.
This research aims to develop a computer vision-based driver monitoring system for autonomous vehicles, which characterises driver behaviour inside the vehicle cabin through visual attention and hand movement, and demonstrates the feasibility of using such features to identify the driver's non-driving related activities. This research further proposes a system that employs both sources of information to identify driving related and non-driving related activities. A novel deep learning-based model has been developed for the classification of such activities. A lightweight model has also been developed for edge computing devices, which sacrifices some recognition accuracy but is more suitable for future in-vehicle applications. The developed models outperform state-of-the-art methods in terms of classification accuracy.

This research also investigates the impact of engagement in non-driving related activities on the takeover process and proposes a categorisation method that groups activities to improve the extensibility of the driver monitoring system to unevaluated activities. The findings of this research are important for the design of takeover strategies to improve driving safety during the control transition in Level 3 automated vehicles.

PhD in Manufacturing