
    Eco-driving for Electric Connected Vehicles at Signalized Intersections: A Parameterized Reinforcement Learning approach

    This paper proposes an eco-driving framework for electric connected vehicles (CVs) based on reinforcement learning (RL) to improve vehicle energy efficiency at signalized intersections. The vehicle agent integrates a model-based car-following policy, a lane-changing policy, and the RL policy to ensure safe operation of the CV. A Markov Decision Process (MDP) is then formulated that enables the vehicle to perform longitudinal control and lateral decisions, jointly optimizing the car-following and lane-changing behaviors of CVs in the vicinity of intersections. The hybrid action space is parameterized as a hierarchical structure, allowing agents with two-dimensional motion patterns to be trained in a dynamic traffic environment. Finally, our proposed methods are evaluated in SUMO from both a single-vehicle perspective and a flow-based perspective. The results show that our strategy can significantly reduce energy consumption by learning proper action schemes without interrupting other human-driven vehicles (HDVs).
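
    A minimal sketch of the hybrid (parameterized) action idea described above follows: a discrete lateral decision is paired with a continuous acceleration parameter, and exploration samples over both. The action names, acceleration bounds, and epsilon-greedy scheme are illustrative assumptions, not details taken from the paper.

        # Illustrative hybrid action for eco-driving: a discrete lateral decision
        # carries a continuous longitudinal acceleration parameter. Names and
        # bounds are assumed for this sketch.
        from dataclasses import dataclass
        import random

        LATERAL_CHOICES = ("keep_lane", "change_left", "change_right")
        ACCEL_BOUNDS = (-3.0, 2.0)  # m/s^2, assumed comfort/safety limits

        @dataclass
        class HybridAction:
            lateral: str   # discrete decision
            accel: float   # continuous parameter attached to that decision

        def sample_action(epsilon: float, greedy: HybridAction) -> HybridAction:
            """Epsilon-greedy exploration over the parameterized action space."""
            if random.random() < epsilon:
                return HybridAction(lateral=random.choice(LATERAL_CHOICES),
                                    accel=random.uniform(*ACCEL_BOUNDS))
            return greedy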

    Distributed Control and Learning of Connected and Autonomous Vehicles Approaching and Departing Signalized Intersections

    This thesis outlines methods for achieving energy-optimal control policies for autonomous vehicles approaching and departing a signalized traffic intersection. Connected and autonomous vehicle technology has gained wide interest from both research institutions and government agencies because it offers immense promise in advancing efficient energy usage and abating hazards that beset the current transportation system. Energy minimization is itself crucial in reducing greenhouse gas emissions from fossil-fuel-powered vehicles and extending the battery life of electric vehicles, which are presently the major alternative to fossil-fuel-powered vehicles. Two major forms of fuel minimization are studied. First, the eco-driving problem is solved for a vehicle approaching a signalized intersection using a deep reinforcement learning approach. The task is to find the optimal control input for the vehicle given the traffic signal pattern, assuming the vehicle is made aware of the signal timing through vehicle-to-vehicle and vehicle-to-infrastructure communication. A microscopic fuel-consumption model is considered, and the system model, system constraints, and fuel-consumption model are translated into the reinforcement learning framework. The model is then trained, simulations are presented, and practical deployment considerations are discussed. Next, multi-agent vehicle platooning control is considered. Vehicle platooning exploits the aerodynamics of vehicles that follow each other closely in a line to reduce the total energy consumption of the fleet. Graph-theoretic methods that characterize the interactions among multiple agents are studied using matrix-weighted graphs. In particular, the roles of the matrix weight elements in matrix-weighted consensus are examined, and the results are demonstrated on a network of three agents. The results are applied to vehicle platoon splitting and merging for a vehicle approaching a traffic stop.
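
    The matrix-weighted consensus dynamics mentioned above can be sketched in a few lines. The numpy snippet below runs a three-agent example with assumed positive-definite matrix weights on a path graph; the weights, initial states, and step size are placeholders for illustration and are not taken from the thesis.

        # Matrix-weighted consensus on three agents with 2-D states:
        # x_i' = -sum_j W_ij (x_i - x_j). Weight matrices are assumed.
        import numpy as np

        W = {(0, 1): np.array([[2.0, 0.0], [0.0, 1.0]]),
             (1, 2): np.array([[1.0, 0.5], [0.5, 1.0]])}
        x = np.array([[0.0, 1.0], [2.0, -1.0], [4.0, 3.0]])  # initial states
        dt = 0.01

        for _ in range(5000):                    # forward-Euler integration
            dx = np.zeros_like(x)
            for (i, j), Wij in W.items():
                diff = x[i] - x[j]
                dx[i] -= Wij @ diff              # agent i is pulled toward j
                dx[j] += Wij @ diff              # and j toward i
            x += dt * dx

        print(x)   # with positive-definite weights all three states agree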

    Machine Learning Tools for Optimization of Fuel Consumption at Signalized Intersections in Connected/Automated Vehicles Environment

    Researchers continue to seek techniques for making the transportation sector more sustainable in terms of fuel consumption and greenhouse gas emissions. Among the most effective techniques is eco-driving at signalized intersections. Eco-driving is a complex control problem in which drivers approaching an intersection are guided, over a period of time, to optimize fuel consumption. Eco-driving control systems reduce fuel consumption by optimizing vehicle trajectories near signalized intersections based on Signal Phase and Timing (SPaT) information. Developing eco-driving applications for semi-actuated signals is more challenging than for pre-timed signals because the cycle length varies with fluctuations in traffic demand. Reinforcement learning (RL) is a machine learning paradigm that mimics human learning: an agent attempts to solve a given control problem by interacting with the environment and developing an optimal policy. Unlike the methods implemented in previous eco-driving studies, RL does not require prior knowledge of the environment being learned. The aim of this study is therefore twofold: (1) develop a novel brute-force eco-driving algorithm (ECO-SEMI-Q) for connected/autonomous vehicles (CAVs) passing through semi-actuated signalized intersections; and (2) develop a novel deep reinforcement learning (DRL) eco-driving algorithm for CAVs passing through fixed-time signalized intersections. The developed algorithms are tested at both the microscopic and macroscopic levels. At the microscopic level, results indicate that fuel consumption for vehicles controlled by the ECO-SEMI-Q and DRL models is 29.2% and 23% lower, respectively, than in the uncontrolled case. At the macroscopic level, a sensitivity analysis of the impact of market penetration rate (MPR) shows that fuel savings increase with higher MPR. Furthermore, when the MPR is greater than 50%, the ECO-SEMI-Q algorithm provides appreciable savings in travel times. The sensitivity analysis indicates savings in network fuel consumption when the MPR of the DRL algorithm exceeds 35%; at MPRs below 35%, the DRL algorithm has an adverse impact on fuel consumption due to aggressive lane-change and passing maneuvers. These reductions in fuel consumption demonstrate the ability of the algorithms to provide more environmentally sustainable signalized intersections.
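
    The SPaT-based idea underlying both algorithms can be illustrated with a back-of-the-envelope speed advisory: pick a cruising speed that lets the vehicle reach the stop line inside a green window rather than stopping. The sketch below is a generic illustration under assumed speed limits; it is not the ECO-SEMI-Q or DRL algorithm itself.

        # Generic SPaT-based speed advisory: choose a constant speed that puts
        # the arrival time inside the green window. Speed bounds are assumed.
        def green_window_speed(distance_m, green_start_s, green_end_s,
                               v_min=5.0, v_max=16.7):
            """Return a feasible speed (m/s) arriving during green, or None."""
            lo = distance_m / green_end_s                 # slowest useful speed
            hi = distance_m / max(green_start_s, 1e-6)    # fastest useful speed
            lo, hi = max(lo, v_min), min(hi, v_max)
            return None if lo > hi else 0.5 * (lo + hi)

        # Example: 200 m to the stop line, green from t = 12 s to t = 35 s.
        print(green_window_speed(200.0, 12.0, 35.0))      # ~11.2 m/s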

    Traffic-Aware Ecological Cruising Control for Connected Electric Vehicle

    The advent of intelligent connected technology has greatly expanded vehicles' ability to acquire information. Integrating short-term information from the limited sensing range with long-term information from cloud-based systems in vehicle motion planning and control has become a vital means of exploiting the energy-saving potential of vehicles. In this study, a traffic-aware ecological cruising control (T-ECC) strategy based on a hierarchical framework is proposed for connected electric vehicles in uncertain traffic environments, leveraging these two distinct temporal scales of information. In the upper layer, which is dedicated to speed planning, a sustainable energy consumption strategy (SECS) is introduced for the first time. It finds the optimal economic speed by converting variations in kinetic energy into equivalent battery energy consumption based on long-term road information. In the lower layer, a synthetic rolling-horizon optimization control (SROC) is developed to handle real-time traffic uncertainties; it jointly optimizes energy efficiency, battery life, driving safety, and comfort under dynamically changing traffic conditions. Notably, a stochastic preceding-vehicle model is presented to capture the uncertainties in traffic during driving. Finally, the proposed T-ECC is validated through simulations in both virtual and real-world driving conditions. The results demonstrate that the proposed strategy significantly improves the energy efficiency of the vehicle.
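
    The kinetic-to-battery energy conversion at the heart of the upper-layer SECS can be sketched as follows. The vehicle mass and drivetrain/regeneration efficiencies below are assumed round numbers for illustration, not parameters reported in the paper.

        # Express a change in kinetic energy as equivalent battery energy, so
        # speed variations can be traded off against electrical consumption.
        VEHICLE_MASS = 1600.0   # kg, assumed
        ETA_DRIVE = 0.90        # battery-to-wheel efficiency, assumed
        ETA_REGEN = 0.65        # wheel-to-battery efficiency when braking, assumed

        def equivalent_battery_energy(v_from: float, v_to: float) -> float:
            """Battery energy (J) tied to changing speed v_from -> v_to (m/s)."""
            d_kinetic = 0.5 * VEHICLE_MASS * (v_to**2 - v_from**2)
            if d_kinetic >= 0.0:
                return d_kinetic / ETA_DRIVE    # accelerating costs more than dE
            return d_kinetic * ETA_REGEN        # braking recovers only part of dE

        # Accelerating 10 -> 15 m/s and braking back do not cancel out:
        print(equivalent_battery_energy(10.0, 15.0),
              equivalent_battery_energy(15.0, 10.0))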

    COOR-PLT: A hierarchical control model for coordinating adaptive platoons of connected and autonomous vehicles at signal-free intersections based on deep reinforcement learning

    Platooning and coordination are two implementation strategies frequently proposed for controlling connected and autonomous vehicles (CAVs) at signal-free intersections in place of conventional traffic signals. However, few studies have attempted to integrate both strategies to better facilitate CAV control at signal-free intersections. To this end, this study proposes a hierarchical control model, named COOR-PLT, to coordinate adaptive CAV platoons at a signal-free intersection based on deep reinforcement learning (DRL). COOR-PLT has a two-layer framework. The first layer uses a centralized control strategy to form adaptive platoons; the optimal size of each platoon is determined by considering multiple objectives (i.e., efficiency, fairness, and energy saving). The second layer employs a decentralized control strategy to coordinate multiple platoons passing through the intersection: each platoon is labeled as coordinated or independent, and its passing priority is determined accordingly. A Deep Q-Network (DQN), an efficient DRL algorithm, is adopted to determine platoon sizes and passing priorities in the two layers, respectively. The model is validated and examined in the Simulation of Urban Mobility (SUMO) simulator. The simulation results demonstrate that the model is able to: (1) achieve satisfactory convergence performance; (2) adaptively determine platoon size in response to varying traffic conditions; and (3) completely avoid deadlocks at the intersection. Compared with other control methods, the model demonstrates the superiority of adaptive platooning and DRL-based coordination, and it outperforms several state-of-the-art methods in reducing travel time and fuel consumption under different traffic conditions.
    Comment: This paper has been submitted to Transportation Research Part C: Emerging Technologies and is currently under review.
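
    The first-layer decision can be pictured as a DQN that maps an approach-lane state vector to Q-values over candidate platoon sizes, as in the PyTorch sketch below. The state features, candidate sizes, and network architecture are illustrative assumptions rather than the paper's specification.

        # DQN-style network: approach-lane state -> Q-value per platoon size.
        import torch
        import torch.nn as nn

        PLATOON_SIZES = [1, 2, 3, 4, 5, 6]        # assumed candidate actions

        class PlatoonSizeDQN(nn.Module):
            def __init__(self, state_dim: int = 8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(state_dim, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(),
                    nn.Linear(64, len(PLATOON_SIZES)),
                )

            def forward(self, state: torch.Tensor) -> torch.Tensor:
                return self.net(state)

        q_net = PlatoonSizeDQN()
        state = torch.randn(1, 8)                 # e.g. queue length, speeds, gaps
        best = PLATOON_SIZES[int(q_net(state).argmax(dim=1))]
        print("selected platoon size:", best)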

    Enhanced Eco-Approach Control of Connected Electric Vehicles at Signalized Intersection with Queue Discharge Prediction

    Long queues of vehicles often form at signalized intersections, increasing the energy consumption of all the vehicles involved. This paper proposes an enhanced eco-approach control (EEAC) strategy for connected electric vehicles (EVs) at a signalized intersection that takes the queue ahead into account. The discharge movement of the vehicle queue is predicted by an improved queue discharge prediction (IQDP) method, which takes both vehicle and driver dynamics into account. Based on the queue prediction, the EEAC strategy is designed with a hierarchical framework: the upper stage uses dynamic programming to find the general trend of the energy-efficient speed profile, and the lower-stage model predictive controller then computes the explicit solution over a short horizon with a guaranteed safe inter-vehicle distance. Finally, numerical simulations are conducted to demonstrate the energy-efficiency improvement of the EEAC strategy. In addition, the effects of queue prediction accuracy on the performance of the EEAC strategy are investigated.
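
    The upper-stage idea of finding the general trend of an energy-efficient speed profile with dynamic programming can be sketched over a coarse speed/distance grid, as below. The grid, energy-proxy cost, and weights are assumptions for illustration and are not the EEAC design.

        # Backward dynamic programming over a speed grid: at each distance step
        # pick the next speed that minimizes an assumed energy-proxy cost.
        import numpy as np

        SPEEDS = np.arange(4.0, 16.1, 1.0)   # candidate speeds (m/s), assumed
        STEP_M = 50.0                         # distance step (m)
        N_STEPS = 8                           # 400 m horizon to the stop line

        def step_cost(v_prev, v_next):
            """Assumed proxy: drag-like work + time penalty + accel penalty."""
            drag_work = (0.05 * v_next**2 + 1.0) * STEP_M
            time_penalty = 20.0 * STEP_M / v_next
            return drag_work + time_penalty + 5.0 * (v_next - v_prev)**2

        cost_to_go = np.zeros(len(SPEEDS))
        policy = np.zeros((N_STEPS, len(SPEEDS)), dtype=int)
        for k in reversed(range(N_STEPS)):
            new_cost = np.empty(len(SPEEDS))
            for i, v in enumerate(SPEEDS):
                totals = [step_cost(v, vn) + cost_to_go[j]
                          for j, vn in enumerate(SPEEDS)]
                policy[k, i] = int(np.argmin(totals))
                new_cost[i] = min(totals)
            cost_to_go = new_cost

        # Roll the coarse speed profile forward from 10 m/s.
        i = int(np.argmin(np.abs(SPEEDS - 10.0)))
        profile = [SPEEDS[i]]
        for k in range(N_STEPS):
            i = policy[k, i]
            profile.append(SPEEDS[i])
        print(profile)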