
    Tackling Occlusions & Limited Sensor Range with Set-based Safety Verification

    Provable safety is one of the most critical challenges in automated driving. The behavior of the numerous traffic participants in a scene cannot be predicted reliably due to complex interdependencies and the indiscriminate behavior of humans. In addition, we face high uncertainty and only incomplete knowledge of the environment. Recent approaches minimize risk with probabilistic and machine learning methods, even under occlusions. These generate comfortable behavior with good traffic flow, but cannot guarantee the safety of their maneuvers. We therefore contribute a safety verification method for trajectories under occlusions. The field of view of the ego vehicle and a map are used to identify critical sensing field edges, each representing a potentially hidden obstacle. The state of an occluded obstacle is unknown, but can be over-approximated by intervals over all possible states. Set-based methods are then extended to provide occupancy predictions for obstacles with state intervals. The proposed method can verify the safety of given trajectories (e.g., whether they ensure collision-free fail-safe maneuver options) with respect to arbitrary safe-state formulations. The potential for provably safe trajectory planning is shown in three evaluation scenarios.
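
    The abstract above describes over-approximating an occluded obstacle's unknown state by intervals and propagating set-based occupancy predictions. The sketch below only illustrates that idea; the Interval type, the kinematic envelope, and all numeric values are assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch (not the paper's code): over-approximating the occupancy of a
# hidden obstacle whose longitudinal state is only known as an interval.
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, other: "Interval") -> "Interval":
        return Interval(self.lo + other.lo, self.hi + other.hi)

def predict_occupancy(pos: Interval, vel: Interval, a_max: float,
                      horizon: float, dt: float):
    """Propagate a position interval along a lane, assuming the hidden vehicle
    accelerates or brakes with at most |a_max|. Each returned interval
    over-approximates every reachable longitudinal position at that step."""
    occupancies = []
    steps = int(round(horizon / dt))
    for k in range(1, steps + 1):
        t = k * dt
        # Worst-case displacement bounds after time t (conservative envelope).
        d_min = max(0.0, vel.lo * t - 0.5 * a_max * t ** 2)  # hard braking
        d_max = vel.hi * t + 0.5 * a_max * t ** 2            # full acceleration
        occupancies.append(pos + Interval(d_min, d_max))
    return occupancies

# Example: a sensing-field edge 30-35 m ahead may hide a vehicle moving at 0-14 m/s.
for occ in predict_occupancy(Interval(30.0, 35.0), Interval(0.0, 14.0),
                             a_max=3.0, horizon=2.0, dt=0.5):
    print(f"occupied: [{occ.lo:.1f} m, {occ.hi:.1f} m]")
```
    A candidate trajectory would then be checked for intersection with such occupancy sets at every time step, in the spirit of the verification described above.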

    ๋„์‹ฌ ๊ต์ฐจ๋กœ์—์„œ์˜ ์ž์œจ์ฃผํ–‰์„ ์œ„ํ•œ ์ฃผ๋ณ€ ์ฐจ๋Ÿ‰ ๊ฒฝ๋กœ ์˜ˆ์ธก ๋ฐ ๊ฑฐ๋™ ๊ณ„ํš ์•Œ๊ณ ๋ฆฌ์ฆ˜

    Ph.D. dissertation, Seoul National University, Department of Mechanical and Aerospace Engineering, February 2020. The focus of automotive research has been expanding from passive safety systems to active safety systems with advances in sensing and processing technologies. Major automotive manufacturers have already commercialized active safety systems such as adaptive cruise control (ACC), lane keeping assistance (LKA), and autonomous emergency braking (AEB). Such advances have extended the research field beyond active safety systems to automated driving systems in pursuit of zero fatalities. In particular, automated driving on urban roads has become a key issue because urban roads contain numerous risk factors for traffic accidents, such as sidewalks, blind spots, on-street parking, motorcycles, and pedestrians, which cause higher accident and fatality rates than motorways. Several projects have been conducted, and many others are still underway, to evaluate the effects of automated driving in environmental, demographic, social, and economic terms. For example, the European project AdaptIVe developed various automated driving functions and defined specific evaluation methodologies. In addition, CityMobil2 successfully integrated driverless intelligent vehicles in nine different environments across Europe. In Japan, the Automated Driving System Research Project, launched in May 2014, focuses on the development and verification of automated driving systems and next-generation urban transportation. A careful review of the literature shows that automated driving systems increase the safety of road users, reduce traffic congestion, and improve driver convenience. Various methodologies have been employed to develop the core technologies of automated vehicles on urban roads, such as perception, motion planning, and control. However, state-of-the-art automated driving research has largely developed each technology separately; consequently, the design of automated driving systems from an integrated perspective has not yet been sufficiently considered. Therefore, this dissertation focuses on developing a fully autonomous driving algorithm for complex urban scenarios using LiDAR, vision, GPS, and a simple path map. The proposed algorithm covers urban road scenarios, including uncontrolled intersections, based on vehicle motion prediction and a model predictive control approach. Four main research issues are considered: dynamic and static environment representation, and longitudinal and lateral motion planning. The thesis provides an overview of the proposed motion planning algorithm for urban autonomous driving together with experimental results in real traffic, which show the effectiveness and human-like behavior of the proposed algorithm. The algorithm has been tested and evaluated in both simulation and vehicle tests, and the results show robust performance in urban scenarios, including uncontrolled intersections.
    Contents: Chapter 1 Introduction; Chapter 2 Overview of Motion Planning for Automated Driving Systems; Chapter 3 Dynamic Environment Representation with Motion Prediction; Chapter 4 Static Environment Representation; Chapter 5 Longitudinal Motion Planning; Chapter 6 Lateral Motion Planning; Chapter 7 Performance Evaluation; Chapter 8 Conclusion and Future Works.
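
    The thesis builds its dynamic environment representation on an encoder-decoder recurrent motion predictor for surrounding vehicles. The following is a hedged sketch of that general kind of architecture; the feature set, layer sizes, and prediction horizon are illustrative assumptions, not the thesis configuration.

```python
# Illustrative sketch only: an LSTM encoder-decoder of the kind used for
# vehicle-state-based motion prediction. Not the thesis implementation.
import torch
import torch.nn as nn

class Seq2SeqPredictor(nn.Module):
    def __init__(self, n_features: int = 4, hidden: int = 64, horizon: int = 20):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(2, hidden, batch_first=True)  # fed back (x, y)
        self.head = nn.Linear(hidden, 2)                      # predicts (x, y)

    def forward(self, history: torch.Tensor) -> torch.Tensor:
        """history: (batch, T_past, n_features) past states, e.g. x, y, speed, yaw.
        Returns (batch, horizon, 2) predicted future positions."""
        _, (h, c) = self.encoder(history)
        step = history[:, -1:, :2]      # last observed position seeds the decoder
        outputs = []
        for _ in range(self.horizon):
            out, (h, c) = self.decoder(step, (h, c))
            step = self.head(out)       # next predicted position
            outputs.append(step)
        return torch.cat(outputs, dim=1)

# Toy usage: 8 vehicles, 30 past steps, 4 features each.
model = Seq2SeqPredictor()
future = model(torch.randn(8, 30, 4))
print(future.shape)  # torch.Size([8, 20, 2])
```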

    ๊ต์ฐจ๋กœ์—์„œ ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์˜ ์ œํ•œ๋œ ๊ฐ€์‹œ์„ฑ๊ณผ ๋ถˆํ™•์‹ค์„ฑ์„ ๊ณ ๋ คํ•œ ์ข…๋ฐฉํ–ฅ ๊ฑฐ๋™๊ณ„ํš

    Ph.D. dissertation, Seoul National University, Department of Mechanical Engineering, February 2023. This dissertation presents a novel longitudinal motion planning method for autonomous vehicles at urban intersections that overcomes the limited visibility caused by complicated road structures and sensor specifications, guaranteeing safety against potential collisions with vehicles appearing from occluded regions. Intersection autonomous driving requires a high level of safety due to congested traffic and environmental complexity. Because of complicated road structures and the limited detection range of perception sensors, occluded regions arise in urban autonomous driving. The virtual target is a motion planning device for reacting to the sudden appearance of vehicles from blind spots. Gaussian process regression (GPR) is used to train the virtual target model, which generates various future driving trajectories that interact with the motion of the ego vehicle. The GPR model provides not only the predicted trajectories of the virtual target but also the uncertainty of its future motion. The GPR predictions are therefore used as a position constraint in a model predictive control (MPC) formulation, and the prediction uncertainties are taken into account through a chance constraint. To comprehend the surrounding environment, including dynamic objects, a region of interest (ROI) is defined to select the targets of interest. Using the predetermined driving route of the ego vehicle and the route information of the intersection, the lanes that cross the ego lane are identified and defined as the ROI, which reduces the computational load by eliminating irrelevant targets. The future motion of each selected target is then predicted by a long short-term memory recurrent neural network (LSTM-RNN). Driving data for training are obtained directly from two autonomous vehicles, which provide their odometry information regardless of the limited field of view (FOV). In widely used autonomous driving datasets such as Waymo and nuScenes, vehicle odometry information is collected from the perception sensors mounted on the test vehicle, so information about targets outside the FOV of the test vehicle cannot be obtained. The training data are organized in target-centered coordinates for better input-domain adaptation and generalization. Mean squared error and negative log-likelihood loss functions are adopted for training, the latter providing the uncertainty of the target vehicle's motion for use in the motion planning of the autonomous vehicle. An MPC with a chance constraint is formulated to optimize the longitudinal motion of the autonomous vehicle. Dynamic and actuator constraints are designed to provide ride comfort and safety. The position constraint, combined with the chance constraint, guarantees safety and prevents potential collisions with target vehicles. The position constraint on the travel distance over the prediction horizon is determined from the clearance between the predicted trajectories of the target and the ego vehicle at every prediction sample time. The performance and feasibility of the proposed algorithm are evaluated via computer simulation and test-data-based simulation. The offline simulations validate the safety of the proposed algorithm, and the motion planner has been implemented on an autonomous vehicle and tested on real roads. Through its implementation on an actual vehicle, the algorithm is confirmed to be applicable to real-world autonomous driving.
    Contents: Chapter 1 Introduction; Chapter 2 Overall Architecture of Intersection Autonomous Driving System; Chapter 3 Virtual Target Modelling for Intersection Motion Planning; Chapter 4 Surrounding Vehicle Motion Prediction at Intersection; Chapter 5 Intersection Longitudinal Motion Planning; Chapter 6 Performance Evaluation of Intersection Longitudinal Motion Planning; Chapter 7 Conclusion and Future Work.
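
    The dissertation turns GPR mean and variance predictions into a chance-constrained position bound for the longitudinal MPC. The snippet below sketches one common way such a constraint is tightened under a Gaussian assumption; the function name, clearance value, and risk level are illustrative and not taken from the dissertation.

```python
# Hedged sketch: tighten the per-step travel-distance bound by a quantile of the
# GPR predictive standard deviation (Gaussian assumption). Illustrative only.
import numpy as np
from scipy.stats import norm

def tightened_position_bounds(target_mean: np.ndarray,
                              target_std: np.ndarray,
                              safety_clearance: float = 8.0,
                              epsilon: float = 0.05) -> np.ndarray:
    """For each prediction step k, return the maximum ego travel distance such
    that the collision chance with the predicted target stays below epsilon,
    assuming the target's along-path position is Gaussian."""
    z = norm.ppf(1.0 - epsilon)  # ~1.645 for epsilon = 0.05
    return target_mean - safety_clearance - z * target_std

# Toy usage over a 2 s horizon at 0.2 s steps: the virtual target approaches the
# conflict point while its predictive uncertainty grows with the lookahead.
k = np.arange(10)
mean = 40.0 - 6.0 * 0.2 * k   # predicted target position along the ego path [m]
std = 0.5 + 0.3 * k           # GPR predictive standard deviation [m]
print(tightened_position_bounds(mean, std))
```
    Each bound would then enter the MPC as an upper limit on the ego vehicle's predicted travel distance at the corresponding step.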

    Belief State Planning for Autonomous Driving: Planning with Interaction, Uncertain Prediction and Uncertain Perception

    This thesis presents a behavior planning algorithm for automated driving in urban environments with an uncertain and dynamic nature. The uncertainty in the environment arises by the fact that the intentions as well as the future trajectories of the surrounding drivers cannot be measured directly but can only be estimated in a probabilistic fashion. Even the perception of objects is uncertain due to sensor noise or possible occlusions. When driving in such environments, the autonomous car must predict the behavior of the other drivers and plan safe, comfortable and legal trajectories. Planning such trajectories requires robust decision making when several high-level options are available for the autonomous car. Current planning algorithms for automated driving split the problem into different subproblems, ranging from discrete, high-level decision making to prediction and continuous trajectory planning. This separation of one problem into several subproblems, combined with rule-based decision making, leads to sub-optimal behavior. This thesis presents a global, closed-loop formulation for the motion planning problem which intertwines action selection and corresponding prediction of the other agents in one optimization problem. The global formulation allows the planning algorithm to make the decision for certain high-level options implicitly. Furthermore, the closed-loop manner of the algorithm optimizes the solution for various, future scenarios concerning the future behavior of the other agents. Formulating prediction and planning as an intertwined problem allows for modeling interaction, i.e. the future reaction of the other drivers to the behavior of the autonomous car. The problem is modeled as a partially observable Markov decision process (POMDP) with a discrete action and a continuous state and observation space. The solution to the POMDP is a policy over belief states, which contains different reactive plans for possible future scenarios. Surrounding drivers are modeled with interactive, probabilistic agent models to account for their prediction uncertainty. The field of view of the autonomous car is simulated ahead over the whole planning horizon during the optimization of the policy. Simulating the possible, corresponding, future observations allows the algorithm to select actions that actively reduce the uncertainty of the world state. Depending on the scenario, the behavior of the autonomous car is optimized in (combined lateral and) longitudinal direction. The algorithm is formulated in a generic way and solved online, which allows for applying the algorithm on various road layouts and scenarios. While such a generic problem formulation is intractable to solve exactly, this thesis demonstrates how a sufficiently good approximation to the optimal policy can be found online. The problem is solved by combining state of the art Monte Carlo tree search algorithms with near-optimal, domain specific roll-outs. The algorithm is evaluated in scenarios such as the crossing of intersections under unknown intentions of other crossing vehicles, interactive lane changes in narrow gaps and decision making at intersections with large occluded areas. It is shown that the behavior of the closed-loop planner is less conservative than comparable open-loop planners. More precisely, it is even demonstrated that the policy enables the autonomous car to drive in a similar way as an omniscient planner with full knowledge of the scene. 
    It is also demonstrated how the autonomous car executes actions to actively gather more information about its surroundings and to reduce the uncertainty of its belief state.
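
    The thesis plans over belief states in which the intentions of other drivers are only known probabilistically. The toy example below shows only the belief-tracking ingredient, a Bayes update over discrete intentions from observed decelerations; the observation model and numbers are invented for illustration, and the actual work solves a full POMDP with Monte Carlo tree search on top of such beliefs.

```python
# Minimal illustration (not the thesis implementation): maintaining a belief over
# a surrounding driver's intention with a Bayes update.
import numpy as np

INTENTIONS = ("yield", "go")

def likelihood(observed_accel: float, intention: str) -> float:
    """Toy observation model: yielding drivers tend to brake, others keep speed."""
    mean = -1.5 if intention == "yield" else 0.0
    sigma = 0.8
    return float(np.exp(-0.5 * ((observed_accel - mean) / sigma) ** 2))

def update_belief(belief: np.ndarray, observed_accel: float) -> np.ndarray:
    weights = np.array([likelihood(observed_accel, i) for i in INTENTIONS])
    posterior = belief * weights
    return posterior / posterior.sum()

belief = np.array([0.5, 0.5])        # uniform prior over (yield, go)
for accel in (-0.2, -1.0, -1.8):     # the other car brakes harder and harder
    belief = update_belief(belief, accel)
    print(dict(zip(INTENTIONS, np.round(belief, 3))))
```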

    Belief State Planning for Autonomous Driving: Planning with Interaction, Uncertain Prediction and Uncertain Perception

    This work presents a behavior planning algorithm for automated driving in urban environments of an uncertain and dynamic nature. The algorithm explicitly considers prediction uncertainty (e.g., different intentions), perception uncertainty (e.g., occlusions), and the uncertain interactive behavior of the other agents. Simulating the most likely future scenarios makes it possible to find an optimal policy online that enables non-conservative planning under uncertainty.

    Dynamic Lambda-Field: A Counterpart of the Bayesian Occupancy Grid for Risk Assessment in Dynamic Environments

    In the context of autonomous vehicles, one of the most crucial tasks is to estimate the risk of the undertaken action. While navigating in complex urban environments, the Bayesian occupancy grid is one of the most popular types of maps, where occupancy information is stored as a probability of collision. Although widely used, this kind of representation is not well suited for risk assessment: because of its discrete nature, the probability of collision becomes dependent on the tessellation size. Therefore, risk assessments on Bayesian occupancy grids cannot yield risks with meaningful physical units. In this article, we propose an alternative framework called the Dynamic Lambda-Field that is able to assess generic physical risks in dynamic environments without depending on the tessellation size. Using our framework, we are able to plan safe trajectories where the risk function can be adjusted depending on the scenario. We validate our approach with quantitative experiments, showing the convergence speed of the grid and that the framework is suitable for real-world scenarios.
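
    The abstract argues that per-cell collision probabilities depend on the tessellation size, whereas an intensity (lambda) field yields a physically meaningful, resolution-independent risk. The snippet below sketches that property under a Poisson-style collision model, in which the probability of at least one collision along a path is one minus the exponential of the integrated intensity; the field values and swept areas are made up, and this is not the authors' code.

```python
# Sketch of a tessellation-independent path risk from an intensity (lambda) field.
import numpy as np

def path_collision_probability(lambdas: np.ndarray, crossed_areas: np.ndarray) -> float:
    """lambdas: collision intensity [1/m^2] of each traversed cell.
    crossed_areas: area [m^2] the vehicle footprint sweeps in each cell."""
    return 1.0 - float(np.exp(-np.sum(lambdas * crossed_areas)))

# Halving the cell size (twice as many cells, half the swept area each) leaves the
# risk unchanged, which is the property a per-cell collision probability lacks.
coarse = path_collision_probability(np.array([0.02, 0.05]), np.array([1.0, 1.0]))
fine = path_collision_probability(np.array([0.02, 0.02, 0.05, 0.05]),
                                  np.array([0.5, 0.5, 0.5, 0.5]))
print(round(coarse, 6), round(fine, 6))  # identical values
```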

    Development, Validation, and Integration of AI-Driven Computer Vision System and Digital-Twin System for Traffic Safety Diagnostics

    The use of data and deep learning algorithms in transportation research has become increasingly popular in recent years. Many studies rely on real-world data, and collecting accurate traffic data is crucial for analyzing traffic safety. However, traditional traffic data collection methods that rely on loop detectors and radar sensors are limited to macro-level data and may fail to capture complex driver behaviors such as lane changing and interactions between road users. With the development of new technologies such as in-vehicle cameras, unmanned aerial vehicles (UAVs), and surveillance cameras, vehicle trajectory data can be extracted from recorded videos for more comprehensive and microscopic traffic safety analysis. This research presents the development, validation, and integration of three AI-driven computer vision systems for vehicle trajectory extraction and traffic safety research: 1) A.R.C.I.S., an automated framework for safety diagnosis utilizing multi-object detection and tracking algorithms for UAV videos; 2) N.M.E.D.S., a new framework that detects and predicts the key points of vehicles to provide more precise vehicle-occupying locations for traffic safety analysis; and 3) D.V.E.D.S., which applies deep learning models to extract information related to drivers' visual environment from Google Street View (GSV) images. Based on drone video collected and processed by A.R.C.I.S. at various locations, CitySim, a new drone-recorded vehicle trajectory dataset that aims to facilitate safety research, was introduced. CitySim contains vehicle interaction trajectories extracted from 1,140 minutes of video recordings, providing large-scale naturalistic vehicle trajectories that cover a variety of locations, including basic freeway segments, freeway weaving segments, expressway segments, signalized intersections, stop-controlled intersections, and unique intersections without sign/signal control. The advantage of CitySim over other datasets is that it contains more critical safety events in both quantity and severity and provides supporting scenarios for safety-oriented research. In addition, CitySim provides digital twin features, including 3D base maps and signal timings, which enable a more comprehensive testing environment for safety research, such as autonomous vehicle safety. Building on these digital twin features, we propose a digital twin framework for CV and pedestrian in-the-loop simulation based on CARLA-SUMO co-simulation and the Cave Automatic Virtual Environment (CAVE). The proposed framework is expected to guide future digital twin research, and the architecture we built can serve as a testbed for further research and development.
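
    Trajectory datasets such as CitySim are typically mined for surrogate safety measures. The sketch below computes a simple time-to-collision series for a car-following pair from trajectory samples; the column names, vehicle length, and sample data are assumptions for illustration and do not reflect the actual CitySim schema.

```python
# Illustrative sketch: a basic surrogate safety measure (time-to-collision for a
# car-following pair) from trajectory samples like those a drone-video pipeline produces.
import pandas as pd

def time_to_collision(leader: pd.DataFrame, follower: pd.DataFrame,
                      length_m: float = 4.5) -> pd.Series:
    """Both frames share a 'frame' index and have 'x' [m] and 'speed' [m/s] columns,
    with x measured along the same lane direction."""
    gap = leader["x"] - follower["x"] - length_m      # bumper-to-bumper gap
    closing = follower["speed"] - leader["speed"]     # > 0 means the gap is closing
    ttc = gap / closing
    return ttc.where((closing > 0) & (gap > 0))       # undefined otherwise

frames = range(5)
leader = pd.DataFrame({"x": [50, 52, 54, 56, 58], "speed": [8, 8, 8, 8, 8]}, index=frames)
follower = pd.DataFrame({"x": [20, 23, 26, 29, 32], "speed": [12, 12, 12, 12, 12]}, index=frames)
print(time_to_collision(leader, follower).round(2))   # TTC shrinks as the gap closes
```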