
    Developing Predictive Models of Driver Behaviour for the Design of Advanced Driving Assistance Systems

    Worldwide, injuries from vehicle accidents have been on the rise in recent years, mainly due to driver error. The main objective of this research is to develop a predictive system for driving maneuvers by analyzing the driver's cognitive (cephalo-ocular) behavior and driving behavior (how the vehicle is being driven). Advanced Driving Assistance Systems (ADAS) include different driving functions, such as vehicle parking, lane departure warning, and blind spot detection. While much research has been performed on developing automated co-driver systems, little attention has been paid to the fact that the driver plays an important role in driving events. It is therefore crucial to monitor the events and factors that directly concern the driver. To this end, we perform a quantitative and qualitative analysis of driver behavior to find its relationship with driver intentionality and driving-related actions. We have designed and developed an instrumented vehicle (RoadLAB) that can record several synchronized streams of data, including the driver's surrounding environment, vehicle functions, and the driver's cephalo-ocular behavior, such as gaze and head information. We subsequently analyze the behavior of several drivers to determine whether there is a meaningful relation between driver behavior and the next driving maneuver.
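    As an illustration of the kind of relation this work looks for, the sketch below maps a few hand-picked cephalo-ocular and vehicle features to a next-maneuver label with a nearest-centroid classifier. The features, values, and class names are hypothetical; this is not the RoadLAB model, only the feature-to-maneuver mapping idea.

```python
# Sketch only: nearest-centroid prediction of the next maneuver from
# hypothetical (gaze_yaw_deg, steering_deg, speed_kmh) feature windows.
import math

TRAIN = [
    ((-35.0, -2.0, 50.0), "lane_change_left"),
    ((-40.0, -5.0, 55.0), "lane_change_left"),
    ((0.0, 0.0, 60.0), "keep_lane"),
    ((2.0, 1.0, 58.0), "keep_lane"),
    ((30.0, 15.0, 20.0), "turn_right"),
    ((38.0, 20.0, 15.0), "turn_right"),
]

def centroids(samples):
    """Average the feature vectors of each maneuver class."""
    sums, counts = {}, {}
    for x, y in samples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: tuple(v / counts[y] for v in acc) for y, acc in sums.items()}

def predict(model, x):
    """Return the class whose centroid is nearest in Euclidean distance."""
    return min(model, key=lambda y: math.dist(model[y], x))

model = centroids(TRAIN)
# Gaze has already shifted left before the steering input grows
print(predict(model, (-33.0, -3.0, 52.0)))
```

    In the actual vehicle, such features would come from the synchronized RoadLAB streams and feed far richer models.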

    Advances in Automated Driving Systems

    Electrification, automation of vehicle control, digitalization, and new mobility are the mega-trends in automotive engineering, and they are strongly connected. While many demonstrations of highly automated vehicles have been made worldwide, many challenges remain in bringing automated vehicles to market for private and commercial use. The main challenges are: reliable machine perception; accepted standards for vehicle-type approval and homologation; verification and validation of functional safety, especially for SAE Level 3+ systems; legal and ethical implications; acceptance of vehicle automation by occupants and society; interaction between automated and human-controlled vehicles in mixed traffic; human–machine interaction and usability; manipulation, misuse, and cyber-security; and the costs of hardware, software, and development effort. This Special Issue was prepared in 2021 and 2022 and includes 15 papers with original research related to recent advances in the aforementioned challenges. The topics covered include: machine perception for SAE L3+ driving automation; trajectory planning and decision-making in complex traffic situations; X-by-Wire system components; verification and validation of SAE L3+ systems; misuse, manipulation, and cybersecurity; human–machine interaction, driver monitoring, and driver-intention recognition; road infrastructure measures for the introduction of SAE L3+ systems; and solutions for interactions between human- and machine-controlled vehicles in mixed traffic.

    A Context Aware Classification System for Monitoring Driver’s Distraction Levels

    Understanding the safety measures involved in developing self-driving cars is a concern for decision-makers, civil society, consumer groups, and manufacturers. Researchers are trying to thoroughly test and simulate various driving contexts to make these cars fully safe for road users. Including the vehicle's surroundings offers an ideal way to monitor context-aware situations and incorporate the various hazards. In this regard, different studies have analysed drivers' behaviour under different scenarios and scrutinised the external environment to obtain a holistic view of the vehicle and its environment. Studies show that the primary cause of road accidents is driver distraction, and there is a thin line separating the transition from careless to dangerous driving. While there has been significant improvement in advanced driver assistance systems, current measures detect neither the severity of distraction nor the context-aware situation, both of which can aid in preventing accidents. Also, no compact study provides a complete model for transitioning control from the driver to the vehicle when a high degree of distraction is detected. The current study proposes a context-aware severity model to detect safety issues related to driver distraction, considering physiological attributes, activities, and context-aware situations such as the environment and the vehicle. First, a novel three-phase Fast Recurrent Convolutional Neural Network (Fast-RCNN) architecture addresses the physiological attributes. Second, a novel two-tier FRCNN-LSTM framework is devised to classify the severity of driver distraction. Third, a Dynamic Bayesian Network (DBN) is used to predict driver distraction. The study further proposes the Multiclass Driver Distraction Risk Assessment (MDDRA) model, which can be adopted in a context-aware driving distraction scenario. Finally, a three-way hybrid CNN-DBN-LSTM model that classifies the degree of driver distraction by severity level is developed, together with a Hidden Markov Driver Distraction Severity Model (HMDDSM) for transitioning control from the driver to the vehicle when a high degree of distraction is detected. This work tests and evaluates the proposed models using the multi-view TeleFOT naturalistic driving study data and the American University of Cairo dataset (AUCD). The developed models were evaluated using cross-correlation, hybrid cross-correlation, and K-fold validation. The results show that the technique effectively learns and adopts safety measures related to the severity of driver distraction. They also show that while a driver is in a dangerously distracted state, control can be shifted from driver to vehicle in a systematic manner.
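    The handover idea behind a hidden-Markov severity model can be illustrated with a toy HMM over distraction states, decoded with the standard Viterbi algorithm. All states, probabilities, and observation symbols below are invented for illustration and are not the HMDDSM's actual parameters.

```python
# Toy HMM: hidden distraction states observed through coarse gaze events.
STATES = ["attentive", "careless", "dangerous"]
START = {"attentive": 0.8, "careless": 0.15, "dangerous": 0.05}
TRANS = {
    "attentive": {"attentive": 0.8, "careless": 0.15, "dangerous": 0.05},
    "careless": {"attentive": 0.3, "careless": 0.5, "dangerous": 0.2},
    "dangerous": {"attentive": 0.1, "careless": 0.3, "dangerous": 0.6},
}
EMIT = {  # P(observation | state): eyes on road, glance away, long off-road gaze
    "attentive": {"on_road": 0.85, "glance": 0.13, "off_road": 0.02},
    "careless": {"on_road": 0.4, "glance": 0.45, "off_road": 0.15},
    "dangerous": {"on_road": 0.05, "glance": 0.25, "off_road": 0.7},
}

def viterbi(obs):
    """Most likely hidden state sequence for an observation sequence."""
    v = [{s: START[s] * EMIT[s][obs[0]] for s in STATES}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: v[-1][p] * TRANS[p][s])
            col[s] = v[-1][prev] * TRANS[prev][s] * EMIT[s][o]
            ptr[s] = prev
        v.append(col)
        back.append(ptr)
    last = max(STATES, key=lambda s: v[-1][s])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

path = viterbi(["on_road", "glance", "off_road", "off_road"])
print(path)
# Hand control to the vehicle once the decoded state reaches "dangerous"
print("handover" if "dangerous" in path else "driver keeps control")
```

    Once the decoded sequence enters the "dangerous" state, control can be shifted from driver to vehicle in the systematic manner the abstract describes.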

    Automated taxiing for unmanned aircraft systems

    Over the last few years, the concept of civil Unmanned Aircraft Systems (UAS) has been realised, with small UAS commonly used in industries such as law enforcement, agriculture, and mapping. With increased development in other areas, such as logistics and advertising, the size and range of civil UAS are likely to grow. Taken to the logical conclusion, it is likely that large-scale UAS will be operating in civil airspace within the next decade. Although the airborne operations of civil UAS have already gathered much research attention, work is also required to determine how UAS will function on the ground. Motivated by the assumption that large UAS will share ground facilities with manned aircraft, this thesis describes the preliminary development of an Automated Taxiing System (ATS) for UAS operating at civil aerodromes. To allow the ATS to function on the majority of UAS without the need for additional hardware, a visual sensing approach has been chosen, with the majority of the work focusing on monocular image processing techniques. The purpose of the computer vision system is to provide direct sensor data which can be used to validate the vehicle's position, in addition to detecting potential collision risks. As aerospace regulations require the most robust and reliable algorithms for control, any methods which are not fully definable or explainable are not suitable for real-world use. Therefore, non-deterministic methods and algorithms with hidden components (such as Artificial Neural Networks (ANNs)) have not been used. Instead, visual sensing is achieved through semantic segmentation, with separate segmentation and classification stages. Segmentation is performed using superpixels and reachability clustering to divide the image into single-content clusters. Each cluster is then classified using multiple types of image data, probabilistically fused within a Bayesian network. The data set for testing has been provided by BAE Systems, allowing the system to be trained and tested on real-world aerodrome data. The system has demonstrated good performance on this limited dataset, accurately detecting both collision risks and terrain features for use in navigation.
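    The probabilistic fusion step can be sketched as a naive-Bayes combination of independent per-cluster cues. The classes, cue discretizations, and probabilities below are illustrative stand-ins; the thesis's Bayesian network is richer than this.

```python
# Sketch: fuse two discretized cues (colour, texture) per image cluster
# into a posterior over terrain classes, assuming cue independence.
CLASSES = ["runway", "grass", "obstacle"]
PRIOR = {"runway": 0.5, "grass": 0.3, "obstacle": 0.2}
P_COLOUR = {  # P(colour cue | class) -- illustrative values
    "runway": {"grey": 0.8, "green": 0.05, "other": 0.15},
    "grass": {"grey": 0.1, "green": 0.8, "other": 0.1},
    "obstacle": {"grey": 0.3, "green": 0.1, "other": 0.6},
}
P_TEXTURE = {  # P(texture cue | class) -- illustrative values
    "runway": {"smooth": 0.7, "rough": 0.3},
    "grass": {"smooth": 0.2, "rough": 0.8},
    "obstacle": {"smooth": 0.3, "rough": 0.7},
}

def fuse(colour, texture):
    """Posterior over classes given both cues (Bayes rule, then normalise)."""
    joint = {c: PRIOR[c] * P_COLOUR[c][colour] * P_TEXTURE[c][texture]
             for c in CLASSES}
    z = sum(joint.values())
    return {c: p / z for c, p in joint.items()}

post = fuse("grey", "smooth")
print(max(post, key=post.get))  # most probable class for this cluster
```

    Because each cue enters as a separate conditional table, adding another image feature only adds one more factor to the product, which is what makes the network-based fusion attractive.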

    Fuzzy Logic

    Fuzzy logic is becoming an essential method of solving problems in all domains. It has a tremendous impact on the design of autonomous intelligent systems. The purpose of this book is to introduce hybrid algorithms, techniques, and implementations of fuzzy logic. The book consists of thirteen chapters highlighting models and principles of fuzzy logic and issues concerning its techniques and implementations. The intended readers of this book are engineers, researchers, and graduate students interested in fuzzy logic systems.
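    A minimal fuzzy inference step, in the spirit of the book's topic, looks like this: triangular membership functions, a small rule base, and a weighted-average defuzzification. The variables, ranges, and rules are invented for illustration.

```python
# Sketch: two-rule fuzzy controller for braking force (illustrative only).
def tri(x, a, b, c):
    """Triangular membership function with feet at a and c, peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def brake_force(speed, distance):
    """Rule 1: fast AND close -> brake hard. Rule 2: slow OR far -> brake lightly."""
    fast = tri(speed, 40.0, 80.0, 120.0)      # membership of "fast" (km/h)
    close = tri(distance, 0.0, 5.0, 30.0)     # membership of "close" (m)
    hard = min(fast, close)                   # AND = min (rule 1 strength)
    light = max(1.0 - fast, 1.0 - close)      # OR of negations (rule 2 strength)
    # Defuzzify: weighted average of rule output centroids (90 and 20)
    return (hard * 90.0 + light * 20.0) / (hard + light)

print(brake_force(80.0, 5.0))   # fully "fast" and "close" -> hard braking
print(brake_force(40.0, 30.0))  # neither -> light braking
```

    The min/max operators and the weighted-average defuzzifier are one common choice among several the fuzzy-logic literature covers.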

    Machine learning applied to radar data: classification and semantic instance segmentation of moving road users

    Classification and semantic instance segmentation applications are rarely considered for automotive radar sensors. In current implementations, objects have to be tracked over time before any semantic information can be extracted. In this thesis, data from a network of 77 GHz automotive radar sensors is used to construct, train, and evaluate machine learning algorithms for the classification of moving road users. The classification step is deliberately performed early in the process chain so that a subsequent tracking algorithm can benefit from this extra information. For this purpose, a large data set with real-world scenarios from about 5 h of driving was recorded and annotated. Given that the point clouds measured by the radar sensors are both sparse and noisy, the proposed methods have to be sensitive to those features that discern the individual classes from each other and, at the same time, robust to outliers and measurement errors. Two groups of applications are considered: classification of clustered data and semantic (instance) segmentation of whole scenes. In the first category, specifically designed density-based clustering algorithms are used to group individual measurements into objects. These objects are then used either as input to a manual feature extraction step or as input to a neural network, which operates directly on the bare input points. Different classifiers are trained and evaluated on these input data. For the algorithms of the second category, the measurements of a whole scene are used as input, so that the clustering step becomes obsolete. A newly designed recurrent neural network for instance segmentation of point clouds is utilized. This approach outperforms all of the other proposed methods and exceeds the baseline score by about ten percentage points. In additional experiments, the performance of human test candidates on the same task is analyzed. This study shows that temporal correlations in the data are of great use to the test candidates, who are nevertheless outperformed by the recurrent network.
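    The clustering stage of the first application group can be sketched with a plain DBSCAN over 2-D radar detections; the thesis uses specifically designed variants of such density-based algorithms, and the eps/min_pts values below are illustrative.

```python
# Sketch: plain DBSCAN on 2-D radar detections (x, y in metres).
import math

def dbscan(points, eps=1.5, min_pts=3):
    """Label each point with a cluster id, or -1 for noise."""
    labels = [None] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        neigh = [j for j in range(len(points))
                 if math.dist(points[i], points[j]) <= eps]
        if len(neigh) < min_pts:
            labels[i] = -1          # provisionally noise
            continue
        labels[i] = cid             # i is a core point: start a cluster
        queue = [j for j in neigh if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid     # noise point reachable from a core point
            if labels[j] is not None:
                continue
            labels[j] = cid
            jn = [k for k in range(len(points))
                  if math.dist(points[j], points[k]) <= eps]
            if len(jn) >= min_pts:  # only core points expand the cluster
                queue.extend(k for k in jn if labels[k] is None)
        cid += 1
    return labels

# Two compact groups of detections plus one stray measurement
pts = [(0, 0), (0.5, 0.2), (0.4, -0.3),
       (10, 10), (10.3, 9.8), (10.1, 10.4), (25, 0)]
print(dbscan(pts))  # two clusters and one noise label
```

    Each resulting cluster would then feed either the manual feature extraction or the point-based neural network described above.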

    Drone-based Computer Vision-Enabled Vehicle Dynamic Mobility and Safety Performance Monitoring

    This report documents the research activities to develop a drone-based, computer vision-enabled vehicle dynamic safety performance monitoring system for Rural, Isolated, Tribal, or Indigenous (RITI) communities. The acquisition of traffic system information, especially vehicle speed and trajectory information, is of great significance to the study of the characteristics and management of the traffic system in RITI communities. The traditional method of relying on video analysis to obtain vehicle counts and trajectory information has its application scenarios, but the common video source is often a camera fixed on a roadside device. In videos obtained this way, vehicles are likely to occlude each other, which seriously affects the accuracy of vehicle detection and speed estimation. Although there are methods to obtain high-view road video by means of aircraft and satellites, the corresponding cost is high. Therefore, considering that drones can obtain high-definition video from a higher viewing angle at relatively low cost, we decided to use drones to obtain road videos for vehicle detection. To overcome the shortcomings of traditional object detection methods when facing a large number of targets and the complex scenes of RITI communities, our proposed method uses convolutional neural network (CNN) technology. We modified the YOLO v3 network structure and used a vehicle data set captured by drones for transfer learning, finally training a network that can detect and classify vehicles in videos captured by drones. A self-calibrated road boundary extraction method based on image sequences was used to extract road boundaries and filter vehicles, improving the detection accuracy of cars on the road. Using the results of neural network detection as input, we use video-based object tracking to extract vehicle trajectory information for traffic safety improvements. Finally, the number, speed, and trajectory of vehicles were calculated, and the average speed and density of the traffic flow were estimated on this basis. By analyzing the acquired data, we can estimate the traffic condition of the monitored area to predict possible crashes on the highways.
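    The step from tracked centroids to speed can be sketched as below, assuming a known frame rate and a pixel-to-metre scale derived from the drone's altitude. Both constants are hypothetical placeholders, not values from the report.

```python
# Sketch: convert a tracked vehicle centroid trajectory (pixels per frame)
# into per-frame speed estimates.
FPS = 30.0        # assumed video frame rate
M_PER_PX = 0.05   # assumed ground distance per pixel at the drone's altitude

def speeds_kmh(track):
    """Per-interval speed estimates in km/h from (x, y) pixel centroids."""
    out = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        px = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5  # pixels moved
        out.append(px * M_PER_PX * FPS * 3.6)          # m/frame -> km/h
    return out

# A vehicle moving 10 px per frame along x: 10 * 0.05 m * 30 /s = 15 m/s
track = [(100, 200), (110, 200), (120, 200)]
print(speeds_kmh(track))
```

    Averaging such estimates over all tracked vehicles in a window gives the mean speed, and dividing the vehicle count by the observed road length gives the density used for the traffic-flow estimate.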

    Methods and techniques for analyzing human factors facets on drivers

    International Mention in the doctoral degree.
    With millions of cars moving daily, driving is the most performed activity worldwide. Unfortunately, according to the World Health Organization (WHO), around 1.35 million people worldwide die in road traffic accidents every year and, in addition, between 20 and 50 million are injured, making road traffic accidents the second leading cause of death among people between the ages of 5 and 29. According to the WHO, human errors, such as speeding, driving under the influence of drugs, fatigue, or distraction at the wheel, are the underlying cause of most road accidents. Global reports on road safety, such as "Road safety in the European Union. Trends, statistics, and main challenges", prepared by the European Commission in 2018, presented a statistical analysis relating road accident mortality rates to periods segmented by hours and days of the week. This report revealed that the highest incidence of mortality regularly occurs in the afternoons of working days, coinciding with the period when the volume of traffic increases and when any human error is much more likely to cause a traffic accident. Accordingly, mitigating human error in driving is a challenge, and there is currently a growing trend toward technological solutions that integrate driver information into advanced driving systems to improve driver performance and ergonomics. The study of human factors in the field of driving is a multidisciplinary field in which several areas of knowledge converge, among which stand out psychology, physiology, instrumentation, signal processing, machine learning, the integration of information and communication technologies (ICTs), and the design of human-machine communication interfaces. The main objective of this thesis is to exploit knowledge related to the different facets of human factors in the field of driving.
    Specific objectives include identifying driving-related tasks, detecting unfavorable cognitive states in the driver, such as stress, and, transversely, proposing an architecture for the integration and coordination of driver monitoring systems with other active safety systems. It should be noted that the specific objectives address the critical aspects of each of the issues to be tackled. Identifying driving-related tasks is one of the primary aspects of the conceptual framework of driver modeling. Identifying the maneuvers a driver performs requires first training a model with examples of each maneuver to be identified. To this end, a methodology was established to build a data set in which a relationship is drawn between the handling of the driving controls (steering wheel, pedals, gear lever, and turn indicators) and a series of adequately identified maneuvers. This methodology consisted of designing different driving scenarios in a realistic driving simulator for each type of maneuver, including stops, overtaking, turns, and specific maneuvers such as the U-turn and the three-point turn. From the perspective of detecting unfavorable cognitive states in the driver, stress can impair cognitive faculties, causing failures in the decision-making process. Physiological signals, such as measurements derived from heart rhythm or changes in the electrical properties of the skin, are reliable indicators when assessing whether a person is going through an episode of acute stress. However, the detection of stress patterns is still an open problem. Despite advances in sensor design for the non-invasive collection of physiological signals, certain factors prevent reaching models capable of detecting stress patterns in any subject.
    This thesis addresses two aspects of stress detection: the collection of physiological values during stress elicitation, both through laboratory techniques such as the Stroop effect and through driving tests; and the detection of stress by designing a process flow based on unsupervised learning techniques, delving into the problems associated with the intra- and inter-individual variability of physiological measures that prevents the achievement of generalist models. Finally, in addition to developing models that address the different aspects of monitoring, the orchestration of monitoring systems and active safety systems is a transversal and essential aspect of improving safety, ergonomics, and the driving experience. Both from the perspective of integration into test platforms and of integration into final systems, the problem of deploying multiple active safety systems lies in the adoption of monolithic models where system-specific functionality runs in isolation, without considering aspects such as cooperation and interoperability with other safety systems. This thesis addresses the development of more complex systems in which monitoring systems condition the operability of multiple active safety systems. To this end, a mediation architecture is proposed to coordinate the reception and delivery of the data flows generated by the various systems involved, including external sensors (lasers, external cameras), cabin sensors (cameras, smartwatches), detection models, deliberative models, delivery systems, and human-machine communication interfaces.
    Ontology-based data modeling plays a crucial role in structuring all this information and consolidating the semantic representation of the driving scene, thus allowing the development of models based on data fusion.
    I would like to thank the Ministry of Economy and Competitiveness for granting me the predoctoral fellowship BES-2016-078143, corresponding to project TRA2015-63708-R, which gave me the opportunity to conduct all my Ph.D. activities, including completing an international internship.
    Doctoral Programme in Computer Science and Technology, Universidad Carlos III de Madrid. Committee: President: José María Armingol Moreno; Secretary: Felipe Jiménez Alonso; Member: Luis Mart
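    The unsupervised stress-detection idea can be sketched as per-subject z-score normalisation of a physiological feature, which softens the inter-individual differences the thesis highlights, followed by a two-cluster split into baseline versus aroused samples. The heart-rate trace and the simple 1-D k-means below are illustrative, not the thesis's process flow.

```python
# Sketch: per-subject normalisation + 2-means split of a heart-rate trace.
import statistics

def zscores(values):
    """Normalise per subject so different baselines become comparable."""
    mu, sd = statistics.fmean(values), statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def two_means(xs, iters=20):
    """1-D k-means with k=2; returns a 0 (low) / 1 (high) label per sample."""
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        labels = [0 if abs(x - lo) <= abs(x - hi) else 1 for x in xs]
        lo = statistics.fmean([x for x, l in zip(xs, labels) if l == 0])
        hi = statistics.fmean([x for x, l in zip(xs, labels) if l == 1])
    return labels

# Hypothetical trace: rest, then a stress episode, then rest again (bpm)
hr = [62, 64, 63, 61, 90, 95, 93, 97, 65, 63]
print(two_means(zscores(hr)))  # samples labelled 1 mark the candidate episode
```

    Because normalisation happens per subject before clustering, the same pipeline can be applied to drivers with very different resting heart rates, which is the point of attacking the inter-individual variability problem with unsupervised methods.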


    Classification of time series patterns from complex dynamic systems
