    End-to-End Multiview Gesture Recognition for Autonomous Car Parking System

    The use of hand gestures can be the most intuitive human-machine interaction medium. Early approaches to hand gesture recognition used device-based methods, which rely on mechanical or optical sensors attached to a glove or markers and hinder natural human-machine communication. Vision-based methods, on the other hand, are not restrictive and allow for more spontaneous communication without the need for an intermediary between human and machine. Vision-based gesture recognition has therefore been a popular area of research for the past thirty years. Hand gesture recognition finds application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety and into convenience and comfort. In this context, one of America’s big three automakers has partnered with the Centre for Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding its product segment through machine learning, providing increased driver convenience and comfort with the particular application of hand gesture recognition for autonomous car parking. In this thesis, we leverage state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3DCNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning methods to fine-tune the pre-trained gesture model on a custom-made dataset, which significantly improves the proposed system's performance in real-world environments. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single-view input (fed by a monocular camera) to a multiview 360° feed provided by a six-camera module.
Finally, we optimize the proposed solution to run on a resource-limited embedded platform (NVIDIA Jetson TX2) used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, or real-time functionality of the system.
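
    The abstract does not reproduce the 3DCNN architecture itself. As a minimal sketch of the spatiotemporal convolution such a model is built from (plain NumPy, a single channel, 'valid' padding, stride 1 — all simplifying assumptions, not the thesis's actual layer configuration), one layer's core operation over a video clip looks like:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Slide a 3D kernel over a video clip of shape (time, height, width).
    This joint convolution over time and space is what lets a 3DCNN
    capture motion, unlike a 2D CNN applied frame by frame."""
    T, H, W = clip.shape
    t, h, w = kernel.shape
    out = np.zeros((T - t + 1, H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(clip[i:i+t, j:j+h, k:k+w] * kernel)
    return out

# a 16-frame 32x32 grayscale clip and a 3x3x3 spatiotemporal filter
clip = np.random.rand(16, 32, 32)
kernel = np.random.rand(3, 3, 3)
features = conv3d_valid(clip, kernel)
print(features.shape)  # (14, 30, 30)
```

A real gesture model stacks many such layers (with multiple channels and learned kernels) before a classifier head; for a multiview setup, one natural extension is to run the feature extractor per camera and fuse the resulting feature maps.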

    Deep Learning-based Driver Behavior Modeling and Analysis

    Driving safety continues to receive widespread attention from car designers, safety regulators, and the automotive research community, as driving accidents due to driver distraction or fatigue have increased drastically over the years. In the past decades, there has been a remarkable push towards designing and developing new driver assistance systems with much better recognition and prediction capabilities. Equipped with various sensory systems, these Advanced Driver Assistance Systems (ADAS) are able to accurately perceive information on road conditions, predict traffic situations, estimate driving risks, and provide drivers with imminent warnings and visual assistance. In this thesis, we focus on two main aspects of driver behavior modeling in the design of a new generation of ADAS. We first aim at improving the generalization ability of driver distraction recognition systems to diverse driving scenarios using the latest tools of machine learning and connectionist modeling, namely deep learning. To this end, we collect a large dataset of images of drivers in various driving situations from the Internet. We then introduce Generative Adversarial Networks (GANs) as a data augmentation tool to enhance detection accuracy. A novel driver monitoring system is also introduced; it combines multiple information sources, including a driver distraction recognition system, to assess the danger level of driving situations. Moreover, this thesis proposes a multi-modal system for distraction recognition under various lighting conditions and presents a new Convolutional Neural Network (CNN) architecture that can operate in real time on a resource-limited computational platform. The new CNN is built upon a novel network bottleneck of Depthwise Separable Convolution layers.
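
    The parameter savings that motivate a Depthwise Separable Convolution bottleneck can be illustrated with a quick count (generic arithmetic with illustrative channel sizes, not the thesis's actual layer dimensions):

```python
def standard_conv_params(k, c_in, c_out):
    # a k x k kernel spans all input channels, one such kernel per output channel
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # depthwise: one k x k filter per input channel (spatial filtering only)
    # pointwise: a 1x1 convolution that mixes channels
    return k * k * c_in + c_in * c_out

print(standard_conv_params(3, 128, 128))        # 147456
print(depthwise_separable_params(3, 128, 128))  # 17536 - roughly 8.4x fewer
```

Factoring spatial filtering from channel mixing is what makes such bottlenecks attractive for real-time inference on embedded hardware.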
The second part of this thesis focuses on driver maneuver prediction, which infers the direction a driver will turn before a traffic light turns green and accurately predicts whether or not he/she will change the current driving lane. Here, a new method to label driving maneuver records is proposed, by which the driving feature sequences used to train prediction systems are more closely related to their labels. Building on this, a new prediction system based on Quasi-Recurrent Neural Networks is introduced. In addition, as an application of maneuver prediction, a novel driving proficiency assessment method is proposed. This method exploits the generalization abilities of different maneuver prediction systems to estimate drivers' driving abilities, and it demonstrates several advantages over existing assessment methods. In conjunction with the theoretical contributions, a series of comprehensive experiments is conducted, and the proposed methods are assessed against state-of-the-art works. The analysis of the experimental results shows improvements over existing techniques.
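
    Quasi-Recurrent Neural Networks replace the dense matrix recurrence of an LSTM with gate sequences computed in parallel by convolutions, leaving only a cheap elementwise recurrence ("fo-pooling"). A minimal sketch of that pooling step, assuming the gate sequences have already been computed (random values stand in for the convolution outputs here):

```python
import numpy as np

def fo_pool(z, f, o):
    """QRNN fo-pooling: z (candidate), f (forget) and o (output) gates,
    each of shape (timesteps, hidden), come from parallel convolutions;
    only this elementwise loop over time is sequential."""
    c = np.zeros_like(z[0])
    h = np.empty_like(z)
    for t in range(z.shape[0]):
        c = f[t] * c + (1.0 - f[t]) * z[t]  # forget-gated running state
        h[t] = o[t] * c                      # gated output at step t
    return h

T, H = 10, 8
rng = np.random.default_rng(0)
z = np.tanh(rng.normal(size=(T, H)))             # tanh candidate values
f = 1 / (1 + np.exp(-rng.normal(size=(T, H))))   # sigmoid forget gate
o = 1 / (1 + np.exp(-rng.normal(size=(T, H))))   # sigmoid output gate
h = fo_pool(z, f, o)
print(h.shape)  # (10, 8)
```

Because the expensive operations are convolutions rather than per-step matrix multiplies, such models train and run markedly faster than LSTMs on long maneuver feature sequences.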

    Context-aware intelligent decisions: online assessment of heavy goods vehicle driving risk

    There is a growing interest in assessing the impact of drivers' actions and behaviours on road safety due to the numerous road fatalities and costs attributed to them. For Heavy Goods Vehicle (HGV) drivers, assessing the road safety risks of their behaviours is a subject of interest for researchers, governments and transport companies, as nations rely on HGVs for the delivery of goods and services. However, HGV driving is a complex, dynamic, uncertain and multifaceted task, mostly influenced by individual traits and external contextual factors. Advanced computational and artificial intelligence (AI) methods have provided promising solutions to automatically characterise the manner by which drivers operate vehicle controls and assess their impact on road safety. However, several challenges and limitations are faced by the current intelligence-supported driving risk assessment approaches proposed by researchers, such as: (1) the lack of comprehensive driving risk datasets; (2) information about the impact of inevitable contextual factors on HGV drivers' responses is not considered, such as drivers' physical and mental states, weather conditions, traffic conditions, road geometry, road types, and work schedules; (3) ambiguity in the definition of driving behaviours is not considered; and (4) imprecision of AI models, and variability in experts' subjective views are not considered. To overcome the aforementioned challenges and limitations, this multidisciplinary research aims at exploring multiple sources of data including information about the impact of contextual factors captured from crucial stakeholders in the HGV sector to develop a reliable context-aware driving risk assessment framework. To achieve this aim, AI methods are explored to accurately detect drivers' driving styles, affective states and driving postures using telematics data, facial images, and driver posture images respectively. 
Subsequently, due to the lack of comprehensive driving risk datasets, fuzzy expert systems (FESs) are explored to fuse detected driving behaviours and perceived external factors using knowledge from domain experts. The key findings of this research are: (1) recurrent neural networks are effective in capturing the temporal dynamics and differences between the different types of driver distraction postures and affective states; (2) there is a trade-off between efficiency and privacy in processing facial images using AI approaches; (3) the fusion of driver behaviours and external factors using FESs produces realistic, reliable and fair driving risk assessments; and (4) a hierarchical representation of a decision-making process simplifies reasoning compared to flat representations.
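
    A fuzzy expert system of the kind described fuzzifies its inputs, fires expert rules, and defuzzifies the aggregate into a crisp risk score. A toy sketch of Mamdani-style inference with invented membership functions and rules (illustrative only, not the thesis's actual rule base):

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def driving_risk(speeding, rain):
    """Hypothetical rules on normalised 0-1 inputs:
      R1: IF speeding is high AND rain is heavy THEN risk is high (1.0)
      R2: IF speeding is high AND rain is light THEN risk is medium (0.6)
      R3: IF speeding is low                    THEN risk is low (0.2)"""
    speeding_high = tri(speeding, 0.3, 1.0, 1.7)
    speeding_low  = tri(speeding, -0.7, 0.0, 0.7)
    rain_heavy    = tri(rain, 0.3, 1.0, 1.7)
    rain_light    = tri(rain, -0.7, 0.0, 0.7)
    # rule firing strengths: fuzzy AND as min
    rules = [(min(speeding_high, rain_heavy), 1.0),
             (min(speeding_high, rain_light), 0.6),
             (speeding_low, 0.2)]
    # weighted-average defuzzification over rule consequents
    num = sum(w * r for w, r in rules)
    den = sum(w for w, r in rules)
    return num / den if den else 0.0

print(round(driving_risk(0.9, 0.4), 3))  # 0.7
```

The appeal for this application is that the rule base encodes domain experts' knowledge directly, so no labelled driving risk dataset is required, and graded memberships absorb the ambiguity in behaviour definitions noted above.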

    A Context Aware Classification System for Monitoring Driver’s Distraction Levels

    Understanding the safety measures needed in developing futuristic self-driving cars is a concern for decision-makers, civil society, consumer groups, and manufacturers. Researchers are trying to thoroughly test and simulate various driving contexts to make these cars fully secure for road users. Including the vehicle’s surroundings offers an ideal way to monitor context-aware situations and incorporate the various hazards. In this regard, different studies have analysed drivers’ behaviour under different scenarios and scrutinised the external environment to obtain a holistic view of vehicles and the environment. Studies have shown that the primary cause of road accidents is driver distraction, and that a thin line separates the transition from careless to dangerous driving. While there has been significant improvement in advanced driver assistance systems, current measures detect neither the severity of distraction nor the context-aware situations that could aid in preventing accidents. Also, no compact study provides a complete model for transitioning control from the driver to the vehicle when a high degree of distraction is detected. The current study proposes a context-aware severity model to detect safety issues related to driver distraction, considering physiological attributes, activities, and context-aware situations such as the environment and the vehicle. First, a novel three-phase Fast Recurrent Convolutional Neural Network (Fast-RCNN) architecture addresses the physiological attributes. Secondly, a novel two-tier FRCNN-LSTM framework is devised to classify the severity of driver distraction. Thirdly, a Dynamic Bayesian Network (DBN) is developed for the prediction of driver distraction. The study further proposes the Multiclass Driver Distraction Risk Assessment (MDDRA) model, which can be adopted in context-aware driving distraction scenarios.
Finally, a three-way hybrid CNN-DBN-LSTM model that classifies the degree of driver distraction according to severity level is developed. In addition, a Hidden Markov Driver Distraction Severity Model (HMDDSM) is proposed for transitioning control from the driver to the vehicle when a high degree of distraction is detected. This work tests and evaluates the proposed models using the multi-view TeleFOT naturalistic driving study data and the American University of Cairo dataset (AUCD). The developed models were evaluated using cross-correlation, hybrid cross-correlation, and K-fold validation. The results show that the technique effectively learns and adopts safety measures related to the severity of driver distraction. The results also show that while a driver is in a dangerously distracted state, control can be shifted from driver to vehicle in a systematic manner.
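
    A hidden Markov model like the HMDDSM maintains a belief over unobserved severity states given a stream of noisy observations; the forward algorithm yields the filtered belief that a hand-over rule can act on. A sketch with hypothetical transition, emission, and prior probabilities (all values invented for illustration):

```python
import numpy as np

# hidden states: 0 = attentive, 1 = distracted, 2 = dangerously distracted
A = np.array([[0.90, 0.08, 0.02],   # state transition probabilities
              [0.20, 0.65, 0.15],
              [0.05, 0.25, 0.70]])
B = np.array([[0.80, 0.15, 0.05],   # emission: P(observed cue | state)
              [0.20, 0.60, 0.20],
              [0.05, 0.30, 0.65]])
pi = np.array([0.80, 0.15, 0.05])   # initial state distribution

def forward(obs):
    """Forward algorithm: normalised filtered belief over hidden states."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # propagate, then weight by evidence
    return alpha / alpha.sum()

belief = forward([0, 1, 2, 2, 2])   # increasingly 'distracted' observations
print(belief.argmax())              # most likely current severity state
hand_over = belief[2] > 0.5         # example rule: shift control to vehicle
```

In a deployed system the hand-over threshold and probability tables would be estimated from the naturalistic driving data rather than set by hand.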
