99 research outputs found

    Driver lane change intention inference for intelligent vehicles: framework, survey, and challenges

    Get PDF
    Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of both the traffic context and the driver's status, since ADAS share vehicle control authority with the human driver. This study provides an overview of ego-vehicle driver intention inference (DII), focusing mainly on lane change intention on highways. First, the human intention mechanism is discussed to give an overall understanding of driver intention. Next, ego-vehicle driver intention is classified into different categories based on various criteria. A complete DII system can be separated into distinct modules: traffic context awareness, driver state monitoring, and vehicle dynamic measurement. The relationships between these modules and their corresponding impacts on DII are analyzed. Then, the lane change intention inference (LCII) system is reviewed from the perspective of input signals, algorithms, and evaluation. Finally, future concerns and emerging trends in this area are highlighted.
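    As a rough illustration of how the three modules described in this survey could feed a lane change intention classifier, the sketch below fuses placeholder traffic-context, driver-state, and vehicle-dynamics features into a three-class prediction (keep, left, right). The feature choices, window contents, and random-forest model are illustrative assumptions, not the survey's reference design.

```python
# Minimal fusion sketch for lane change intention inference (LCII).
# Feature names, dimensions, and the classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def build_feature_vector(traffic_context, driver_state, vehicle_dynamics):
    """Concatenate per-module features into a single sample.

    traffic_context  : e.g. gaps to surrounding vehicles, lane occupancy
    driver_state     : e.g. gaze direction histogram, head pose
    vehicle_dynamics : e.g. steering angle, lateral acceleration, turn signal
    """
    return np.concatenate([traffic_context, driver_state, vehicle_dynamics])

# Labels: 0 = lane keep, 1 = left lane change, 2 = right lane change
X_train = np.random.rand(500, 12)          # placeholder training windows
y_train = np.random.randint(0, 3, 500)     # placeholder intention labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

sample = build_feature_vector(np.random.rand(4), np.random.rand(4), np.random.rand(4))
print("Inferred intention class:", clf.predict(sample.reshape(1, -1))[0])
```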

    Reinforcement Learning and Advanced Reinforcement Learning to Improve Autonomous Vehicle Planning

    Get PDF
    Planning for autonomous vehicles is a challenging process that involves navigating through dynamic and unpredictable surroundings while making judgments in real time. Traditional planning methods often rely on predetermined rules or customized heuristics, which may not generalize well to the variety of driving conditions encountered. In this article, we present a framework to enhance autonomous vehicle planning by fusing conventional RL methods with advanced reinforcement learning techniques. To handle the many elements of the planning problem, our system integrates algorithms including deep reinforcement learning, hierarchical reinforcement learning, and meta-learning. By utilizing the strengths of these techniques, our framework helps autonomous vehicles make decisions that are more reliable and effective. With the RLTT technique, an autonomous vehicle can learn about the intentions and preferences of human drivers by inferring the underlying reward function from observed expert behaviour. By learning the fundamental goals and constraints of driving from expert demonstrations, the autonomous car can make safer and more human-like decisions. Large-scale simulations and practical experiments can be carried out to gauge the effectiveness of the suggested approach, with the planning system's performance assessed on criteria such as safety, effectiveness, and human likeness. The outcomes of these assessments can inform future developments and offer insight into the strengths and weaknesses of the strategy.
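    The abstract describes inferring the underlying reward function from observed expert behaviour. The RLTT technique itself is not detailed here, so the sketch below only illustrates the generic feature-matching idea behind such reward inference: linear reward weights are nudged toward features the expert visits more often than the current learner policy. All trajectories and dimensions are placeholders.

```python
# Generic feature-matching sketch for inferring a linear reward function
# from expert demonstrations (not the paper's RLTT implementation).
import numpy as np

def feature_expectations(trajectories, gamma=0.99):
    """Average discounted feature counts over a set of trajectories.
    Each trajectory is a list of per-step feature vectors."""
    mu = np.zeros(len(trajectories[0][0]))
    for traj in trajectories:
        for t, phi in enumerate(traj):
            mu += (gamma ** t) * np.asarray(phi)
    return mu / len(trajectories)

def update_reward_weights(w, expert_trajs, learner_trajs, lr=0.1):
    """Move reward weights toward features the expert visits more often
    than the current learner policy (one projection-style step)."""
    grad = feature_expectations(expert_trajs) - feature_expectations(learner_trajs)
    w = w + lr * grad
    return w / (np.linalg.norm(w) + 1e-8)   # keep weights bounded

# Toy usage with random placeholder trajectories of 3-dimensional features
rng = np.random.default_rng(0)
expert  = [[rng.random(3) for _ in range(20)] for _ in range(10)]
learner = [[rng.random(3) for _ in range(20)] for _ in range(10)]
w = np.zeros(3)
for _ in range(50):
    w = update_reward_weights(w, expert, learner)
print("Estimated reward weights:", w)
```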

    Adapting Regenerative Braking Strength to Driver Preference

    Get PDF
    The modern automotive industry has witnessed a growing emphasis on adapting the driving experience to individual drivers. With the rising popularity of electrified vehicles, the implementation of regenerative braking systems, specifically lift-off regenerative braking, has become a focal point. However, research indicates that drivers often find the predefined deceleration response during lift-off regenerative braking to be undesirable. This thesis addresses this issue by developing an adaptive regenerative braking controller that learns driver preferences, thereby enhancing the driving experience of lift-off regenerative braking systems and reducing driver fatigue by minimizing pedal interventions. The research focuses on three critical aspects: accurate identification of driving conditions, acquisition of driver preferences for lift-off regenerative braking, and compatibility with real-time automotive hardware. By leveraging techniques such as HDBSCAN clustering, fuzzy logic inference, and online Q-learning, the research achieves accurate driving condition identification and adaptation to individual driver preferences in a control scheme that can be practically deployed in-vehicle. Real-world testing demonstrates the controller's 80.9% accuracy in identifying driving conditions as well as its successful learning of the driver's preferred deceleration to within 1.9%. Subsequently, the adaptive regenerative braking controller results in a 23.2% reduction in pedal interventions during deceleration compared to a baseline representative of an industry-standard implementation of lift-off regenerative braking. This outcome underscores the controller's potential to alleviate driver fatigue and enhance the overall driving experience. This research contributes to the advancement of electrified vehicle powertrain control, focusing on improving driver acceptance of and satisfaction with regenerative braking systems.
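    A minimal, hypothetical sketch of the online Q-learning component mentioned above: for each identified driving condition the agent selects a regenerative braking strength and is penalised whenever the driver intervenes on a pedal. The condition set, regen levels, and reward values are illustrative assumptions, not the thesis' calibrated controller.

```python
# Online tabular Q-learning over (driving condition, regen strength) pairs.
# Reward shaping and the driver-preference simulation are assumptions.
import random
from collections import defaultdict

CONDITIONS = ["urban", "rural", "highway"]          # from the clustering stage
REGEN_LEVELS = [0.1, 0.2, 0.3]                      # deceleration in g (assumed)

Q = defaultdict(float)                              # Q[(condition, level)]
alpha, gamma, epsilon = 0.1, 0.9, 0.1

def choose_level(condition):
    if random.random() < epsilon:                   # explore occasionally
        return random.choice(REGEN_LEVELS)
    return max(REGEN_LEVELS, key=lambda a: Q[(condition, a)])

def update(condition, level, driver_intervened, next_condition):
    reward = -1.0 if driver_intervened else 0.1     # penalise pedal interventions
    best_next = max(Q[(next_condition, a)] for a in REGEN_LEVELS)
    Q[(condition, level)] += alpha * (reward + gamma * best_next - Q[(condition, level)])

# Simulated deceleration events: this driver "prefers" roughly 0.2 g
cond = "urban"
for _ in range(1000):
    level = choose_level(cond)
    intervened = random.random() < abs(level - 0.2) * 2
    nxt = random.choice(CONDITIONS)
    update(cond, level, intervened, nxt)
    cond = nxt

print({c: max(REGEN_LEVELS, key=lambda a: Q[(c, a)]) for c in CONDITIONS})
```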

    On the Road with GPT-4V(ision): Early Explorations of Visual-Language Model on Autonomous Driving

    Full text link
    The pursuit of autonomous driving technology hinges on the sophisticated integration of perception, decision-making, and control systems. Traditional approaches, both data-driven and rule-based, have been hindered by their inability to grasp the nuances of complex driving environments and the intentions of other road users. This has been a significant bottleneck, particularly in the development of the common sense reasoning and nuanced scene understanding necessary for safe and reliable autonomous driving. The advent of Visual Language Models (VLMs) represents a novel frontier in realizing fully autonomous vehicle driving. This report provides an exhaustive evaluation of the latest state-of-the-art VLM, GPT-4V(ision), and its application in autonomous driving scenarios. We explore the model's abilities to understand and reason about driving scenes, make decisions, and ultimately act in the capacity of a driver. Our comprehensive tests span from basic scene recognition to complex causal reasoning and real-time decision-making under varying conditions. Our findings reveal that GPT-4V demonstrates superior performance in scene understanding and causal reasoning compared to existing autonomous systems. It showcases the potential to handle out-of-distribution scenarios, recognize intentions, and make informed decisions in real driving contexts. However, challenges remain, particularly in direction discernment, traffic light recognition, vision grounding, and spatial reasoning tasks. These limitations underscore the need for further research and development. The project is available on GitHub for interested parties to access and utilize: https://github.com/PJLab-ADG/GPT4V-AD-Exploration.
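    For readers who want to probe a GPT-4V-class model on a driving scene themselves, the snippet below shows one way to pose such a query through the OpenAI Python SDK. It is not the evaluation harness used in this report; the model name, prompt, and image path are assumptions.

```python
# Illustrative only: send a driving-scene image plus a driver-role prompt
# to a vision-capable GPT-4 model via the OpenAI Python SDK.
import base64
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

with open("driving_scene.jpg", "rb") as f:   # assumed local test image
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whichever GPT-4V(ision)-capable model you have access to
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "You are the driver. Describe the scene, note traffic "
                     "lights and other road users, and state your next action."},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
        ],
    }],
)
print(response.choices[0].message.content)
```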

    Human Automotive Interaction: Affect Recognition for Motor Trend Magazine's Best Driver Car of the Year

    Get PDF
    Observation analysis of vehicle operators has the potential to address the growing trend of motor vehicle accidents. Methods are needed to automatically detect heavy cognitive load and distraction in order to warn drivers in a poor psychophysiological state. Existing methods to monitor a driver have included prediction from steering behavior, smartphone warning systems, gaze detection, and electroencephalography. We build upon these approaches by detecting cues that indicate inattention and stress from video. The system is tested and developed on data from Motor Trend Magazine's Best Driver Car of the Year 2014 and 2015. It was found that face detection and facial feature encoding posed the most difficult challenges to automatic facial emotion recognition in practice. The chapter focuses on two important parts of the facial emotion recognition pipeline: (1) face detection and (2) facial appearance features. We propose a face detector that unifies state-of-the-art approaches and provides quality control for face detection results, called reference-based face detection. We also propose a novel method for facial feature extraction that compactly encodes the spatiotemporal behavior of the face and removes background texture, called local anisotropic-inhibited binary patterns in three orthogonal planes. Real-world results show promise for the automatic observation of driver inattention and stress.
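    The proposed anisotropic-inhibited descriptor is not reproduced here, but the sketch below illustrates the standard LBP-TOP baseline it builds on: basic 8-neighbour local binary pattern codes are computed on the XY, XT, and YT planes of a face video cube and histogrammed. Plane selection and clip size are simplifying assumptions.

```python
# Simplified LBP-TOP: one histogram per orthogonal plane of a (T, H, W) clip.
import numpy as np

def lbp_codes(plane):
    """8-neighbour LBP codes for one 2-D plane (no interpolation)."""
    c = plane[1:-1, 1:-1]
    neighbours = [plane[:-2, :-2], plane[:-2, 1:-1], plane[:-2, 2:],
                  plane[1:-1, 2:], plane[2:, 2:],   plane[2:, 1:-1],
                  plane[2:, :-2],  plane[1:-1, :-2]]
    codes = np.zeros_like(c, dtype=np.int32)
    for bit, n in enumerate(neighbours):
        codes += (n >= c).astype(np.int32) << bit   # one bit per neighbour
    return codes

def lbp_top_histogram(video):
    """video: (T, H, W) grayscale face cube -> concatenated XY/XT/YT histograms."""
    t, h, w = video.shape
    planes = [video[t // 2],            # XY plane at the centre frame
              video[:, h // 2, :],      # XT plane at the centre row
              video[:, :, w // 2]]      # YT plane at the centre column
    hists = [np.bincount(lbp_codes(p).ravel(), minlength=256) for p in planes]
    feat = np.concatenate(hists).astype(float)
    return feat / (feat.sum() + 1e-8)   # normalised 768-dim descriptor

clip = np.random.randint(0, 256, (30, 64, 64))   # placeholder face clip
print(lbp_top_histogram(clip).shape)             # (768,)
```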

    A Context Aware Classification System for Monitoring Driver’s Distraction Levels

    Get PDF
    Understanding the safety measures involved in developing self-driving cars is a concern for decision-makers, civil society, consumer groups, and manufacturers. Researchers are trying to thoroughly test and simulate various driving contexts to make these cars fully secure for road users. Including the vehicle's surroundings offers an ideal way to monitor context-aware situations and incorporate the various hazards. In this regard, different studies have analysed drivers' behaviour under different scenarios and scrutinised the external environment to obtain a holistic view of the vehicle and its environment. Studies show that the primary cause of road accidents is driver distraction, and there is a thin line separating the transition from careless to dangerous driving. While there has been significant improvement in advanced driver assistance systems, current measures detect neither the severity of distraction levels nor the context-aware situation, both of which can aid in preventing accidents. Moreover, no comprehensive study provides a complete model for transitioning control from the driver to the vehicle when a high degree of distraction is detected. The current study proposes a context-aware severity model to detect safety issues related to driver distraction, considering physiological attributes, activities, and context-aware factors such as the environment and the vehicle. First, a novel three-phase Fast Recurrent Convolutional Neural Network (Fast-RCNN) architecture addresses the physiological attributes. Secondly, a novel two-tier FRCNN-LSTM framework is devised to classify the severity of driver distraction. Thirdly, a Dynamic Bayesian Network (DBN) is used for the prediction of driver distraction. The study further proposes the Multiclass Driver Distraction Risk Assessment (MDDRA) model, which can be adopted in a context-aware driving distraction scenario. Then, a three-way hybrid CNN-DBN-LSTM model is developed to classify the multiclass degree of driver distraction according to severity level. Finally, a Hidden Markov Driver Distraction Severity Model (HMDDSM) is developed for transitioning control from the driver to the vehicle when a high degree of distraction is detected. This work tests and evaluates the proposed models using the multi-view TeleFOT naturalistic driving study data and the American University of Cairo dataset (AUCD). The developed models were evaluated using cross-correlation, hybrid cross-correlations, and K-fold validation. The results show that the technique effectively learns and adopts safety measures related to the severity of driver distraction. They also show that, while a driver is in a dangerously distracted state, control can be shifted from driver to vehicle in a systematic manner.
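    As a structural illustration of the two-tier CNN-LSTM idea (per-frame visual features aggregated over time into a severity class), a hedged PyTorch sketch follows. Layer sizes, the backbone, and the class count are assumptions, not the thesis' exact FRCNN-LSTM architecture.

```python
# Two-tier sketch: a small CNN encodes each frame, an LSTM aggregates the
# sequence, and a linear head outputs a distraction-severity class.
import torch
import torch.nn as nn

class DistractionSeverityNet(nn.Module):
    def __init__(self, num_classes=4, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(                       # tier 1: frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)  # tier 2: temporal
        self.head = nn.Linear(hidden, num_classes)      # severity logits

    def forward(self, clips):                           # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        _, (h, _) = self.lstm(feats)
        return self.head(h[-1])                         # one prediction per clip

model = DistractionSeverityNet()
logits = model(torch.randn(2, 8, 3, 64, 64))            # two 8-frame clips
print(logits.shape)                                      # torch.Size([2, 4])
```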

    Towards Learning Feasible Hierarchical Decision-Making Policies in Urban Autonomous Driving

    Get PDF
    Modern learning-based algorithms, powered by advanced deep neural networks, have facilitated many facets of automated driving platforms, spanning from scene characterization and perception to low-level control and state estimation. Nonetheless, urban autonomous driving is regarded as a challenging application for machine learning (ML) and artificial intelligence (AI), since the learnt driving policies must handle complex multi-agent driving scenarios in which the intentions of road participants are uncertain. In the case of unsignalized intersections, automating the decision-making process in these safety-critical environments entails handling numerous layers of abstraction associated with learning robust driving behaviors that allow the vehicle to drive safely and efficiently. Based on our in-depth investigation, we find that an efficient yet safe decision-making scheme for navigating real-world unsignalized intersections does not yet exist. State-of-the-art schemes lack the practicality to handle complex real-life scenarios, as they rely on low-fidelity vehicle dynamic models that are incapable of reproducing real dynamic motion in real-life driving applications. In addition, the conservative behavior of autonomous vehicles, which often overreact to low-likelihood threats, degrades overall driving quality and jeopardizes safety. Hence, enhancing driving behavior is essential to attain agile yet safe traversing maneuvers in such multi-agent environments. Therefore, the main goal of this PhD research is to develop high-fidelity learning-based frameworks that enhance the autonomous decision-making process in these safety-critical environments. We focus this PhD dissertation on three correlated and complementary research challenges. In our first research challenge, we conduct an in-depth and comprehensive survey of state-of-the-art learning-based decision-making schemes with the objective of identifying their main shortcomings and potential research avenues. Based on the research directions concluded, we propose, in Problem II and Problem III, novel learning-based frameworks aimed at enhancing safety and efficiency at different decision-making levels. In Problem II, we develop a novel sensor-independent state estimation scheme for a safety-critical system in urban driving using deep learning techniques. A neural inference model is developed and trained to obtain accurate state estimates from indirect measurements of vehicle dynamic and powertrain states. In Problem III, we propose a novel hierarchical reinforcement learning-based decision-making architecture for learning left-turn policies at four-way unsignalized intersections with feasibility guarantees. The proposed technique integrates two main decision-making layers: a high-level learning-based behavioral planning layer that adopts soft actor-critic principles to learn non-conservative yet safe driving behaviors, and a motion planning layer that uses low-level Model Predictive Control (MPC) principles to ensure feasibility of the two-dimensional left-turn maneuver. The high-level layer generates velocity and yaw-angle reference signals for the ego vehicle, taking into account safety and collision avoidance with the intersection vehicles, whereas the low-level planning layer solves an optimization problem to track these reference commands while respecting several vehicle dynamic constraints and ride comfort.
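    The two-layer interaction described above can be sketched structurally as follows: a stub stands in for the trained soft actor-critic behaviour layer and returns velocity and yaw references, while a simplified finite-horizon tracker stands in for the MPC layer and chooses acceleration and yaw-rate commands for a kinematic model. All dynamics, horizons, and cost weights are assumptions.

```python
# Hierarchical sketch: high-level reference generation + low-level tracking.
import numpy as np
from scipy.optimize import minimize

DT, HORIZON = 0.1, 5

def high_level_policy(observation):
    """Stub for the learned SAC behaviour layer: returns (v_ref, yaw_ref)."""
    return 5.0, np.deg2rad(45.0)          # e.g. slow down and begin the left turn

def rollout(state, controls):
    """Kinematic rollout: state = [x, y, yaw, v], controls = rows of (accel, yaw_rate)."""
    x, y, yaw, v = state
    traj = []
    for a, r in controls:
        x += v * np.cos(yaw) * DT
        y += v * np.sin(yaw) * DT
        yaw += r * DT
        v += a * DT
        traj.append((x, y, yaw, v))
    return traj

def low_level_tracker(state, v_ref, yaw_ref):
    """Pick a control sequence minimising reference-tracking and effort costs."""
    def cost(u):
        controls = u.reshape(HORIZON, 2)
        traj = rollout(state, controls)
        track = sum((v - v_ref) ** 2 + (yaw - yaw_ref) ** 2 for _, _, yaw, v in traj)
        effort = 0.1 * np.sum(controls ** 2)
        return track + effort
    res = minimize(cost, np.zeros(HORIZON * 2), method="Powell")
    return res.x.reshape(HORIZON, 2)[0]    # apply only the first command

state = np.array([0.0, 0.0, 0.0, 8.0])     # approaching the intersection
v_ref, yaw_ref = high_level_policy(state)
print("first (accel, yaw_rate) command:", low_level_tracker(state, v_ref, yaw_ref))
```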

    International overview on the legal framework for highly automated vehicles

    Get PDF
    The evolution of autonomous and automated driving technologies over the last decades has been constant and sustained. Many of us can remember watching an old film featuring a driverless car and thinking it was just an unreal object born of filmmakers' imagination. Nowadays, however, Highly Automated Vehicles are a reality, even if not yet part of our daily lives. Hardly a day goes by without news of Tesla launching a new model or Google showing new features of its autonomous car. Nor do we have to travel far beyond our borders: here in Europe we can also find companies trying, with more or less success, not to be left behind in this race. Today, however, their biggest problem is not only the liability of their innovative technology but also the legal framework for Highly Automated Vehicles. In brief, only a few countries grant testing licences, which do not allow these vehicles to drive freely, and most other countries come close to banning their use. The next milestone in autonomous driving is to build a homogeneous, safe and global legal framework. With this in mind, this paper presents an international overview of the legal framework for Highly Automated Vehicles. We also present the different issues that such technologies have to face and overcome in the coming years to become a real, everyday technology.