    A Review of Model Predictive Controls Applied to Advanced Driver-Assistance Systems

    Advanced Driver-Assistance Systems (ADASs) are currently gaining particular attention in the automotive field as enablers of reduced vehicle energy consumption and improved safety and comfort, as evidenced by the variety of related studies in the literature. Moreover, given the current technology readiness, larger opportunities might stem from combining ADASs with vehicle connectivity. Nevertheless, defining a suitable control system is often non-trivial, especially when dealing with multi-objective problems and complex dynamics. In this scenario, although diverse strategies are possible (e.g., the Equivalent Consumption Minimization Strategy, rule-based strategies, etc.), Model Predictive Control (MPC) has proven to be among the most effective in fulfilling these tasks. Hence, this study provides a comprehensive review of MPC applied to scenarios where ADASs are exploited and aims to provide guidelines for selecting an appropriate strategy. In particular, attention is paid to the prediction phase, the objective function formulation, and the constraints. The focus then shifts to the combination of ADASs and vehicle connectivity, assessing how such information is handled by the MPC. The main results from the literature are presented and discussed, along with the integration of MPC into the optimal management of higher levels of connectivity and automation. Finally, current gaps and challenges are addressed so as to provide hints on future developments.
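
    Since the review centers on the prediction model, the cost, and the constraints, a generic finite-horizon MPC problem is sketched below for reference; the notation (state \(x_k\), input \(u_k\), stage cost \(\ell\), terminal cost \(V_f\)) is illustrative and not drawn from any single reviewed paper.

```latex
% Generic finite-horizon MPC problem solved at each sampling instant (illustrative notation)
\begin{aligned}
\min_{u_0,\dots,u_{N-1}} \quad & \sum_{k=0}^{N-1} \ell(x_k, u_k) \; + \; V_f(x_N) \\
\text{s.t.} \quad & x_{k+1} = f(x_k, u_k), \qquad x_0 = x(t), \\
& x_k \in \mathcal{X}, \quad u_k \in \mathcal{U}, \quad x_N \in \mathcal{X}_f .
\end{aligned}
```

    Here the predicted states come from the prediction model \(f\), the stage and terminal costs encode the (possibly multi-objective) goals such as energy, safety, and comfort, and \(\mathcal{X}\), \(\mathcal{U}\), \(\mathcal{X}_f\) collect the state, input, and terminal constraints; only the first optimal input is applied before the problem is re-solved at the next step.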

    Safe Reinforcement Learning-Based Eco-Driving Control for Mixed Traffic Flows With Disturbances

    This paper presents a safe learning-based eco-driving framework tailored to mixed traffic flows, which aims to optimize energy efficiency while guaranteeing safety during real-system operation. Although reinforcement learning (RL) can optimize energy efficiency in intricate environments, it struggles to satisfy safety requirements during training, and the lack of safety guarantees is a further concern when deploying a trained policy in real-world applications. Compared with RL, model predictive control (MPC) can handle constrained dynamical systems and thus ensure safe driving. However, complicated eco-driving tasks and the presence of disturbances challenge the MPC design and the satisfaction of constraints, respectively. To address these limitations, the proposed framework incorporates tube-based robust MPC (RMPC) to ensure safe execution of the RL policy under disturbances, thereby improving control robustness. RL not only optimizes the energy efficiency of the connected and automated vehicle in mixed traffic but also handles more uncertain scenarios, in which the energy consumption of the human-driven vehicle and its diverse, stochastic driving behaviors are considered in the optimization framework. Simulation results show that, compared with the RMPC technique, the proposed algorithm achieves an average improvement of 10.88% in holistic energy efficiency, while, compared with the RL algorithm, it effectively prevents inter-vehicle collisions.
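
    For intuition, the sketch below shows one common way such a safety layer can be realized: a nominal MPC tracks the RL-proposed acceleration as closely as possible while enforcing a gap constraint tightened by a disturbance margin (the "tube"), so the executed command stays safe even if the lead vehicle deviates from its nominal behavior. The dynamics, gap limits, tightening margin, and function names are hypothetical illustrations, not the formulation used in the paper.

```python
# Minimal sketch (hypothetical model and parameters): a tube-style MPC "safety filter"
# that follows an RL-proposed acceleration as closely as possible while keeping a
# tightened inter-vehicle gap constraint, so bounded disturbances cannot cause collisions.
import numpy as np
import cvxpy as cp

DT, N = 0.5, 10                           # sampling step [s], prediction horizon
A = np.array([[1.0, DT], [0.0, 1.0]])     # state: [gap to lead vehicle, relative speed]
B = np.array([[0.0], [-DT]])              # ego acceleration reduces the relative speed
GAP_MIN = 5.0                             # hard minimum gap [m] (assumed)
TIGHTENING = 2.0                          # margin covering the disturbance tube (assumed)

def safety_filter(x0, a_rl):
    """Solve a nominal MPC that stays close to the RL action a_rl but respects the
    tightened gap constraint; return the first control of the nominal plan."""
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost = 0
    constr = [x[:, 0] == x0]
    for k in range(N):
        # Penalize deviation from the RL proposal plus a small control-effort term.
        cost += cp.square(u[0, k] - a_rl) + 0.1 * cp.square(u[0, k])
        constr += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],   # nominal dynamics
                   x[0, k + 1] >= GAP_MIN + TIGHTENING,        # tightened gap constraint
                   cp.abs(u[0, k]) <= 3.0]                     # acceleration limits [m/s^2]
    prob = cp.Problem(cp.Minimize(cost), constr)
    prob.solve()
    return float(u.value[0, 0])

# Example: 20 m gap, closing at 1 m/s; the RL policy requests +1.5 m/s^2.
print(safety_filter(np.array([20.0, -1.0]), a_rl=1.5))
```

    In a full tube-based scheme, an additional ancillary feedback keeps the true state inside a tube around this nominal trajectory, which is what the constraint tightening accounts for.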