2,590 research outputs found

    DeepNav: Learning to Navigate Large Cities

    Full text link
    We present DeepNav, a Convolutional Neural Network (CNN) based algorithm for navigating large cities using locally visible street-view images. The DeepNav agent learns to reach its destination quickly by making the correct navigation decisions at intersections. We collect a large-scale dataset of street-view images organized in a graph where nodes are connected by roads. This dataset contains 10 city graphs and more than 1 million street-view images. We propose 3 supervised learning approaches for the navigation task and show how A* search in the city graph can be used to generate supervision for the learning. Our annotation process is fully automated using publicly available mapping services and requires no human input. We evaluate the proposed DeepNav models on 4 held-out cities for navigating to 5 different types of destinations. Our algorithms outperform previous work that uses hand-crafted features and Support Vector Regression (SVR) [19]. Comment: CVPR 2017 camera-ready version
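The abstract states that A* search in the city graph is used to generate supervision for the learner. The paper's exact labeling pipeline is not shown in this listing; the sketch below (using networkx, with hypothetical node and edge attributes) only illustrates one way such labels could be produced: for every node, run A* to the destination and record the first step of the optimal route as the "correct" navigation decision at that intersection.

```python
# Hypothetical sketch: using A* over a city graph to label the best outgoing
# road at every intersection, as supervision for a navigation policy.
# Node names, the "length" edge attribute, and the Euclidean heuristic are
# illustrative assumptions, not the paper's actual data format.
import networkx as nx

def label_best_actions(graph: nx.DiGraph, destination, positions):
    """For each node, return the neighbor that lies on an A*-optimal
    path to `destination` (the supervised 'correct turn')."""
    def heuristic(u, v):
        # straight-line distance between node coordinates (treated as planar)
        (x1, y1), (x2, y2) = positions[u], positions[v]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    labels = {}
    for node in graph.nodes:
        if node == destination:
            continue
        try:
            path = nx.astar_path(graph, node, destination,
                                 heuristic=heuristic, weight="length")
        except nx.NetworkXNoPath:
            continue  # unreachable nodes get no label
        labels[node] = path[1]  # first step on the optimal route
    return labels
```

Given a graph whose edges carry a "length" attribute and a dict of node coordinates, `labels[node]` would be the neighbor the learned model should choose at that intersection.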

    Augmented Reality and GPS-Based Resource Efficient Navigation System for Outdoor Environments: Integrating Device Camera, Sensors, and Storage

    Get PDF
    Contemporary navigation systems rely on accurate localisation and very large volumes of spatial data for navigational assistance. Such spatial-data sources may have access restrictions or quality issues and require massive storage space. Affordable, high-performance mobile consumer hardware and smart software have made AR and VR technologies popular, and these technologies can help in developing sustainable navigation devices. This paper introduces a robust, memory-efficient, augmented-reality-based navigation system for outdoor environments that uses crowdsourced spatial data, a device camera, and mapping algorithms. The proposed system unifies basic map information, points of interest, and individual GPS trajectories of moving entities to generate and render the mapping information, and it can perform map localisation, pathfinding, and visualisation on a low-power mobile device. A case study was undertaken to evaluate the proposed system: it showed a 29 percent decrease in CPU load and a 35 percent drop in memory requirements. Because spatial information was stored as comma-separated values, it required almost negligible storage space compared to traditional spatial databases. The proposed navigation system attained a maximum accuracy of 99 percent with a root-mean-square error of 0.113 and a minimum accuracy of 96 percent with a corresponding root-mean-square error of 0.17.
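The abstract reports that spatial information is stored as comma-separated values rather than in a spatial database. The snippet below is only a hedged illustration of that storage choice, not the paper's implementation: it assumes a hypothetical POI file with name/lat/lon columns and does a brute-force nearest-point lookup with the haversine distance.

```python
# Illustrative sketch only: spatial data kept as a plain CSV file instead of a
# spatial database. Column names (name, lat, lon) are assumptions.
import csv
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_poi(csv_path, lat, lon):
    """Scan a CSV of points of interest and return (name, distance_m) of the closest."""
    best = None
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            d = haversine_m(lat, lon, float(row["lat"]), float(row["lon"]))
            if best is None or d < best[1]:
                best = (row["name"], d)
    return best
```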

    CAR-Net: Clairvoyant Attentive Recurrent Network

    Full text link
    We present an interpretable framework for path prediction that leverages dependencies between agents' behaviors and their spatial navigation environment. We exploit two sources of information: the past motion trajectory of the agent of interest and a wide top-view image of the navigation scene. We propose a Clairvoyant Attentive Recurrent Network (CAR-Net) that learns where to look in a large image of the scene when solving the path prediction task. Our method can attend to any area, or combination of areas, within the raw image (e.g., road intersections) when predicting the trajectory of the agent. This allows us to visualize fine-grained semantic elements of navigation scenes that influence the prediction of trajectories. To study the impact of space on agents' trajectories, we build a new dataset made of top-view images of hundreds of scenes (Formula One racing tracks) where agents' behaviors are heavily influenced by known areas in the images (e.g., upcoming turns). CAR-Net successfully attends to these salient regions. Additionally, CAR-Net reaches state-of-the-art accuracy on the standard trajectory forecasting benchmark, Stanford Drone Dataset (SDD). Finally, we show CAR-Net's ability to generalize to unseen scenes. Comment: The 2nd and 3rd authors contributed equally
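The CAR-Net architecture is not specified in this abstract beyond combining visual attention over a top-view scene image with recurrent trajectory prediction. The sketch below is a generic soft-attention decoder in that spirit (PyTorch); the layer sizes, attention form, and interface are assumptions and do not reproduce CAR-Net.

```python
# Generic soft-attention + recurrent trajectory decoder (a sketch, not CAR-Net).
import torch
import torch.nn as nn

class AttentivePredictor(nn.Module):
    def __init__(self, feat_dim=256, hidden=128, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.attn = nn.Linear(feat_dim + hidden, 1)    # scores one spatial cell
        self.cell = nn.LSTMCell(feat_dim + 2, hidden)  # context + last (x, y)
        self.out = nn.Linear(hidden, 2)                # next (x, y) displacement

    def forward(self, feats, last_xy):
        # feats: (B, N, feat_dim) flattened CNN feature map of the top-view image
        # last_xy: (B, 2) most recent observed position
        B, N, _ = feats.shape
        h = feats.new_zeros(B, self.cell.hidden_size)
        c = torch.zeros_like(h)
        preds = []
        for _ in range(self.horizon):
            scores = self.attn(torch.cat([feats, h.unsqueeze(1).expand(B, N, -1)], dim=-1))
            alpha = torch.softmax(scores, dim=1)        # (B, N, 1) attention weights
            context = (alpha * feats).sum(dim=1)        # attended scene feature
            h, c = self.cell(torch.cat([context, last_xy], dim=-1), (h, c))
            last_xy = last_xy + self.out(h)             # integrate predicted offset
            preds.append(last_xy)
        return torch.stack(preds, dim=1)                # (B, horizon, 2) trajectory
```

The attention weights `alpha` are what such a model would visualize to show which scene regions (e.g., upcoming turns) influence each predicted step.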

    ์ค€์ •ํ˜•ํ™”๋œ ํ™˜๊ฒฝ์—์„œ Look-ahead Point๋ฅผ ์ด์šฉํ•œ ๋ชจ๋ฐฉํ•™์Šต ๊ธฐ๋ฐ˜ ์ž์œจ ๋‚ด๋น„๊ฒŒ์ด์…˜ ๋ฐฉ๋ฒ•

    Get PDF
    ํ•™์œ„๋…ผ๋ฌธ(๋ฐ•์‚ฌ) -- ์„œ์šธ๋Œ€ํ•™๊ต๋Œ€ํ•™์› : ์œตํ•ฉ๊ณผํ•™๊ธฐ์ˆ ๋Œ€ํ•™์› ์œตํ•ฉ๊ณผํ•™๋ถ€(์ง€๋Šฅํ˜•์œตํ•ฉ์‹œ์Šคํ…œ์ „๊ณต), 2023. 2. ๋ฐ•์žฌํฅ.๋ณธ ํ•™์œ„๋…ผ๋ฌธ์€ ์ž์œจ์ฃผํ–‰ ์ฐจ๋Ÿ‰์ด ์ฃผ์ฐจ์žฅ์—์„œ ์œ„์ƒ์ง€๋„์™€ ๋น„์ „ ์„ผ์„œ๋กœ ๋‚ด๋น„๊ฒŒ์ด์…˜์„ ์ˆ˜ํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•๋“ค์„ ์ œ์•ˆํ•ฉ๋‹ˆ๋‹ค. ์ด ํ™˜๊ฒฝ์—์„œ์˜ ์ž์œจ์ฃผํ–‰ ๊ธฐ์ˆ ์€ ์™„์ „ ์ž์œจ์ฃผํ–‰์„ ์™„์„ฑํ•˜๋Š” ๋ฐ ํ•„์š”ํ•˜๋ฉฐ, ํŽธ๋ฆฌํ•˜๊ฒŒ ์ด์šฉ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด ๊ธฐ์ˆ ์„ ๊ตฌํ˜„ํ•˜๊ธฐ ์œ„ํ•ด, ๊ฒฝ๋กœ๋ฅผ ์ƒ์„ฑํ•˜๊ณ  ์ด๋ฅผ ํ˜„์ง€ํ™” ๋ฐ์ดํ„ฐ๋กœ ์ถ”์ข…ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์ผ๋ฐ˜์ ์œผ๋กœ ์—ฐ๊ตฌ๋˜๊ณ  ์žˆ์Šต๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ์ฃผ์ฐจ์žฅ์—์„œ๋Š” ๋„๋กœ ๊ฐ„ ๊ฐ„๊ฒฉ์ด ์ข๊ณ  ์žฅ์• ๋ฌผ์ด ๋ณต์žกํ•˜๊ฒŒ ๋ถ„ํฌ๋˜์–ด ์žˆ์–ด ํ˜„์ง€ํ™” ๋ฐ์ดํ„ฐ๋ฅผ ์ •ํ™•ํ•˜๊ฒŒ ์–ป๊ธฐ ํž˜๋“ญ๋‹ˆ๋‹ค. ์ด๋Š” ์‹ค์ œ ๊ฒฝ๋กœ์™€ ์ถ”์ข…ํ•˜๋Š” ๊ฒฝ๋กœ ์‚ฌ์ด์— ํ‹€์–ด์ง์„ ๋ฐœ์ƒ์‹œ์ผœ, ์ฐจ๋Ÿ‰๊ณผ ์žฅ์• ๋ฌผ ๊ฐ„ ์ถฉ๋Œ ๊ฐ€๋Šฅ์„ฑ์„ ๋†’์ž…๋‹ˆ๋‹ค. ๋”ฐ๋ผ์„œ ํ˜„์ง€ํ™” ๋ฐ์ดํ„ฐ๋กœ ๊ฒฝ๋กœ๋ฅผ ์ถ”์ข…ํ•˜๋Š” ๋Œ€์‹ , ๋‚ฎ์€ ๋น„์šฉ์„ ๊ฐ€์ง€๋Š” ๋น„์ „ ์„ผ์„œ๋กœ ์ฐจ๋Ÿ‰์ด ์ฃผํ–‰ ๊ฐ€๋Šฅ ์˜์—ญ์„ ํ–ฅํ•ด ์ฃผํ–‰ํ•˜๋Š” ๋ฐฉ๋ฒ•์ด ์ œ์•ˆ๋ฉ๋‹ˆ๋‹ค. ์ฃผ์ฐจ์žฅ์—๋Š” ์ฐจ์„ ์ด ์—†๊ณ  ๋‹ค์–‘ํ•œ ์ •์ /๋™์  ์žฅ์• ๋ฌผ์ด ๋ณต์žกํ•˜๊ฒŒ ์žˆ์–ด, ์ฃผํ–‰ ๊ฐ€๋Šฅ/๋ถˆ๊ฐ€๋Šฅํ•œ ์˜์—ญ์„ ๊ตฌ๋ถ„ํ•˜์—ฌ ์ ์œ  ๊ฒฉ์ž ์ง€๋„๋ฅผ ์–ป๋Š” ๊ฒƒ์ด ํ•„์š”ํ•ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ๊ต์ฐจ๋กœ๋ฅผ ๋‚ด๋น„๊ฒŒ์ด์…˜ํ•˜๊ธฐ ์œ„ํ•ด, ์ „์—ญ ๊ณ„ํš์— ๋”ฐ๋ฅธ ํ•˜๋‚˜์˜ ๊ฐˆ๋ž˜ ๋„๋กœ๋งŒ์ด ์ฃผํ–‰๊ฐ€๋Šฅ ์˜์—ญ์œผ๋กœ ๊ตฌ๋ถ„๋ฉ๋‹ˆ๋‹ค. ๊ฐˆ๋ž˜ ๋„๋กœ๋Š” ํšŒ์ „๋œ ๋ฐ”์šด๋”ฉ ๋ฐ•์Šค ํ˜•ํƒœ๋กœ ์ธ์‹๋˜๋ฉฐ ์ฃผํ–‰๊ฐ€๋Šฅ ์˜์—ญ ์ธ์‹๊ณผ ํ•จ๊ป˜ multi-task ๋„คํŠธ์›Œํฌ๋ฅผ ํ†ตํ•ด ์–ป์–ด์ง‘๋‹ˆ๋‹ค. ์ฃผํ–‰์„ ์œ„ํ•ด ๋ชจ๋ฐฉํ•™์Šต์ด ์‚ฌ์šฉ๋˜๋ฉฐ, ์ด๋Š” ๋ชจ๋ธ-๊ธฐ๋ฐ˜ ๋ชจ์…˜ํ”Œ๋ž˜๋‹ ๋ฐฉ๋ฒ•๋ณด๋‹ค ํŒŒ๋ผ๋ฏธํ„ฐ ํŠœ๋‹ ์—†์ด๋„ ๋‹ค์–‘ํ•˜๊ณ  ๋ณต์žกํ•œ ํ™˜๊ฒฝ์„ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๊ณ  ๋ถ€์ •ํ™•ํ•œ ์ธ์‹ ๊ฒฐ๊ณผ์—๋„ ๊ฐ•์ธํ•ฉ๋‹ˆ๋‹ค. ์•„์šธ๋Ÿฌ, ์ด๋ฏธ์ง€์—์„œ ์ œ์–ด ๋ช…๋ น์„ ๊ตฌํ•˜๋Š” ๊ธฐ์กด ๋ชจ๋ฐฉํ•™์Šต ๋ฐฉ๋ฒ•๊ณผ ๋‹ฌ๋ฆฌ, ์ ์œ  ๊ฒฉ์ž ์ง€๋„์—์„œ ์ฐจ๋Ÿ‰์ด ๋„๋‹ฌํ•  look-ahead point๋ฅผ ํ•™์Šตํ•˜๋Š” ์ƒˆ๋กœ์šด ๋ชจ๋ฐฉํ•™์Šต ๋ฐฉ๋ฒ•์ด ์ œ์•ˆ๋ฉ๋‹ˆ๋‹ค. ์ด point๋ฅผ ์‚ฌ์šฉํ•จ์œผ๋กœ์จ, ๋ชจ๋ฐฉ ํ•™์Šต์˜ ์„ฑ๋Šฅ์„ ํ–ฅ์ƒ์‹œํ‚ค๋Š” data aggregation (DAgger) ์•Œ๊ณ ๋ฆฌ์ฆ˜์„ ๋ณ„๋„์˜ ์กฐ์ด์Šคํ‹ฑ ์—†์ด ์ž์œจ์ฃผํ–‰์— ์ ์šฉํ•  ์ˆ˜ ์žˆ์œผ๋ฉฐ, ์ „๋ฌธ๊ฐ€๋Š” human-in-loop DAgger ํ›ˆ๋ จ ๊ณผ์ •์—์„œ๋„ ์ตœ์ ์˜ ํ–‰๋™์„ ์ž˜ ์„ ํƒํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. ์ถ”๊ฐ€๋กœ, DAgger ๋ณ€ํ˜• ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค์€ ์•ˆ์ „ํ•˜์ง€ ์•Š๊ฑฐ๋‚˜ ์ถฉ๋Œ์— ๊ฐ€๊นŒ์šด ์ƒํ™ฉ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ๋ฅผ ์ƒ˜ํ”Œ๋งํ•˜์—ฌ DAgger ์„ฑ๋Šฅ์ด ํ–ฅ์ƒ๋ฉ๋‹ˆ๋‹ค. ๊ทธ๋Ÿฌ๋‚˜, ์ „์ฒด ํ›ˆ๋ จ ๋ฐ์ดํ„ฐ์…‹์—์„œ ์ด ์ƒํ™ฉ์— ๋Œ€ํ•œ ๋ฐ์ดํ„ฐ ๋น„์œจ์ด ์ ์œผ๋ฉด, ์ถ”๊ฐ€์ ์ธ DAgger ์ˆ˜ํ–‰ ๋ฐ ์‚ฌ๋žŒ์˜ ๋…ธ๋ ฅ์ด ์š”๊ตฌ๋ฉ๋‹ˆ๋‹ค. ์ด ๋ฌธ์ œ๋ฅผ ๋‹ค๋ฃจ๊ธฐ ์œ„ํ•ด, ๊ฐ€์ค‘ ์†์‹ค ํ•จ์ˆ˜๋ฅผ ์‚ฌ์šฉํ•˜๋Š” ์ƒˆ๋กœ์šด DAgger ํ›ˆ๋ จ ๋ฐฉ๋ฒ•์ธ WeightDAgger ์•Œ๊ณ ๋ฆฌ์ฆ˜์ด ์ œ์•ˆ๋˜๋ฉฐ, ๋” ์ ์€ DAgger ๋ฐ˜๋ณต์œผ๋กœ ์•ž์„œ ์–ธ๊ธ‰ ๊ฒƒ๊ณผ ์œ ์‚ฌํ•œ ์ƒํ™ฉ์—์„œ ์ „๋ฌธ๊ฐ€์˜ ํ–‰๋™์„ ๋” ์ •ํ™•ํ•˜๊ฒŒ ๋ชจ๋ฐฉํ•  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. DAgger๋ฅผ ๋™์  ์ƒํ™ฉ๊นŒ์ง€ ํ™•์žฅํ•˜๊ธฐ ์œ„ํ•ด, ์—์ด์ „ํŠธ์™€ ๊ฒฝ์Ÿํ•˜๋Š” ์ ๋Œ€์  ์ •์ฑ…์ด ์ œ์•ˆ๋˜๊ณ , ์ด ์ •์ฑ…์„ DAgger ์•Œ๊ณ ๋ฆฌ์ฆ˜์— ์ ์šฉํ•˜๊ธฐ ์œ„ํ•œ ํ›ˆ๋ จ ํ”„๋ ˆ์ž„์›Œํฌ๊ฐ€ ์ œ์•ˆ๋ฉ๋‹ˆ๋‹ค. ์—์ด์ „ํŠธ๋Š” ์ด์ „ DAgger ํ›ˆ๋ จ ๋‹จ๊ณ„์—์„œ ํ›ˆ๋ จ๋˜์ง€ ์•Š์€ ๋‹ค์–‘ํ•œ ์ƒํ™ฉ์— ๋Œ€ํ•ด ํ›ˆ๋ จ๋  ์ˆ˜ ์žˆ์„ ๋ฟ๋งŒ ์•„๋‹ˆ๋ผ ์‰ฌ์šด ์ƒํ™ฉ์—์„œ ์–ด๋ ค์šด ์ƒํ™ฉ๊นŒ์ง€ ์ ์ง„์ ์œผ๋กœ ํ›ˆ๋ จ๋  ์ˆ˜ ์žˆ์Šต๋‹ˆ๋‹ค. 
์‹ค๋‚ด์™ธ ์ฃผ์ฐจ์žฅ์—์„œ์˜ ์ฐจ๋Ÿ‰ ๋‚ด๋น„๊ฒŒ์ด์…˜ ์‹คํ—˜์„ ํ†ตํ•ด, ๋ชจ๋ธ-๊ธฐ๋ฐ˜ ๋ชจ์…˜ ํ”Œ๋ž˜๋‹ ์•Œ๊ณ ๋ฆฌ์ฆ˜์˜ ํ•œ๊ณ„ ๋ฐ ์ด๋ฅผ ๋‹ค๋ฃฐ ์ˆ˜ ์žˆ๋Š” ์ œ์•ˆํ•˜๋Š” ๋ชจ๋ฐฉํ•™์Šต ๋ฐฉ๋ฒ•์˜ ํšจ์šฉ์„ฑ์ด ๋ถ„์„๋ฉ๋‹ˆ๋‹ค. ๋˜ํ•œ, ์‹œ๋ฎฌ๋ ˆ์ด์…˜ ์‹คํ—˜์„ ํ†ตํ•ด, ์ œ์•ˆ๋œ WeightDAgger๊ฐ€ ๊ธฐ์กด DAgger ์•Œ๊ณ ๋ฆฌ์ฆ˜๋“ค ๋ณด๋‹ค ๋” ์ ์€ DAgger ์ˆ˜ํ–‰ ๋ฐ ์‚ฌ๋žŒ์˜ ๋…ธ๋ ฅ์ด ํ•„์š”ํ•จ์„ ๋ณด์ด๋ฉฐ, ์ ๋Œ€์  ์ •์ฑ…์„ ์ด์šฉํ•œ DAgger ํ›ˆ๋ จ ๋ฐฉ๋ฒ•์œผ๋กœ ๋™์  ์žฅ์• ๋ฌผ์„ ์•ˆ์ „ํ•˜๊ฒŒ ํšŒํ”ผํ•  ์ˆ˜ ์žˆ์Œ์„ ๋ณด์ž…๋‹ˆ๋‹ค. ์ถ”๊ฐ€์ ์œผ๋กœ, ๋ถ€๋ก์—์„œ๋Š” ๋น„์ „ ๊ธฐ๋ฐ˜ ์ž์œจ ์ฃผ์ฐจ ์‹œ์Šคํ…œ ๋ฐ ์ฃผ์ฐจ ๊ฒฝ๋กœ๋ฅผ ๋น ๋ฅด๊ฒŒ ์ƒ์„ฑํ•  ์ˆ˜ ์žˆ๋Š” ๋ฐฉ๋ฒ•์ด ์†Œ๊ฐœ๋˜์–ด, ๋น„์ „๊ธฐ๋ฐ˜ ์ฃผํ–‰ ๋ฐ ์ฃผ์ฐจ๋ฅผ ์ˆ˜ํ–‰ํ•˜๋Š” ์ž์œจ ๋ฐœ๋ › ํŒŒํ‚น ์‹œ์Šคํ…œ์ด ์™„์„ฑ๋ฉ๋‹ˆ๋‹ค.This thesis proposes methods for performing autonomous navigation with a topological map and a vision sensor in a parking lot. These methods are necessary to complete fully autonomous driving and can be conveniently used by humans. To implement them, a method of generating a path and tracking it with localization data is commonly studied. However, in such environments, the localization data is inaccurate because the distance between roads is narrow, and obstacles are distributed complexly, which increases the possibility of collisions between the vehicle and obstacles. Therefore, instead of tracking the path with the localization data, a method is proposed in which the vehicle drives toward a drivable area obtained by vision having a low-cost. In the parking lot, there are complicated various static/dynamic obstacles and no lanes, so it is necessary to obtain an occupancy grid map by segmenting the drivable/non-drivable areas. To navigating intersections, one branch road according to a global plan is configured as the drivable area. The branch road is detected in a shape of a rotated bounding box and is obtained through a multi-task network that simultaneously recognizes the drivable area. For driving, imitation learning is used, which can handle various and complex environments without parameter tuning and is more robust to handling an inaccurate perception result than model-based motion-planning algorithms. In addition, unlike existing imitation learning methods that obtain control commands from an image, a new imitation learning method is proposed that learns a look-ahead point that a vehicle will reach on an occupancy grid map. By using this point, the data aggregation (DAgger) algorithm that improves the performance of imitation learning can be applied to autonomous navigating without a separate joystick, and the expert can select the optimal action well even in the human-in-loop DAgger training process. Additionally, DAgger variant algorithms improve DAgger's performance by sampling data for unsafe or near-collision situations. However, if the data ratio for these situations in the entire training dataset is small, additional DAgger iteration and human effort are required. To deal with this problem, a new DAgger training method using a weighted loss function (WeightDAgger) is proposed, which can more accurately imitate the expert action in the aforementioned situations with fewer DAgger iterations. To extend DAgger to dynamic situations, an adversarial agent policy competing with the agent is proposed, and a training framework to apply this policy to DAgger is suggested. 
The agent can be trained on a variety of situations not covered in previous DAgger training steps, and can be trained progressively from easy to difficult situations. Through vehicle navigation experiments in real indoor and outdoor parking lots, the limitations of model-based motion-planning algorithms and the effectiveness of the proposed imitation-learning method in dealing with them are analyzed. It is also shown that the proposed WeightDAgger requires fewer DAgger iterations and less human effort than existing DAgger algorithms, and that the vehicle can safely avoid dynamic obstacles with the DAgger training framework that uses the adversarial agent policy (an illustrative sketch of the weighted-loss idea appears after the table of contents below). Additionally, the appendix introduces a vision-based autonomous parking system and a method to quickly generate the parking path, completing a vision-based autonomous valet parking system that performs both driving and parking.
Table of contents:
1 INTRODUCTION
1.1 Autonomous Driving System and Environments
1.2 Motivation
1.3 Contributions of Thesis
1.4 Overview of Thesis
2 MULTI-TASK PERCEPTION NETWORK FOR VISION-BASED NAVIGATION
2.1 Introduction
2.1.1 Related Works
2.2 Proposed Method
2.2.1 Bird's-Eye-View Image Transform
2.2.2 Multi-Task Perception Network
2.2.2.1 Drivable Area Segmentation (Occupancy Grid Map (OGM))
2.2.2.2 Rotated Road Bounding Box Detection
2.2.3 Intersection Decision
2.2.3.1 Road Occupancy Grid Map (OGMroad)
2.2.4 Merged Occupancy Grid Map (OGMmer)
2.3 Experiment
2.3.1 Experimental Setup
2.3.1.1 Autonomous Vehicle
2.3.1.2 Multi-task Network Setup
2.3.1.3 Model-based Branch Road Detection Method
2.3.2 Experimental Results
2.3.2.1 Quantitative Analysis of Multi-Task Network
2.3.2.2 Comparison of Branch Road Detection Method
2.4 Conclusion
3 DATA AGGREGATION (DAGGER) ALGORITHM WITH LOOK-AHEAD POINT FOR AUTONOMOUS DRIVING IN SEMI-STRUCTURED ENVIRONMENT
3.1 Introduction
3.2 Related Works & Background
3.2.1 DAgger Algorithms for Autonomous Driving
3.2.2 Behavior Cloning
3.2.3 DAgger Algorithm
3.3 Proposed Method
3.3.1 DAgger with Look-ahead Point Composition (State & Action)
3.3.2 Loss Function
3.3.3 Data-sampling Function in DAgger
3.3.4 Reasons to Use Look-ahead Point As Action
3.4 Experimental Setup
3.4.1 Driving Policy Network Training
3.4.2 Model-based Motion-Planning Algorithms
3.5 Experimental Result
3.5.1 Quantitative Analysis of Driving Policy
3.5.1.1 Collision Rate
3.5.1.2 Safe Distance Range Ratio
3.5.2 Qualitative Analysis of Driving Policy
3.5.2.1 Limitations of Tentacle Algorithm
3.5.2.2 Limitations of VVF Algorithm
3.5.2.3 Limitations of Both Tentacle and VVF
3.5.2.4 Driving Results on Noisy Occupancy Grid Map
3.5.2.5 Intersection Navigation
3.6 Conclusion
4 WEIGHT DAGGER ALGORITHM FOR REDUCING IMITATION LEARNING ITERATIONS
4.1 Introduction
4.2 Related Works & Background
4.3 Proposed Method
4.3.1 Weighted Loss Function in WeightDAgger
4.3.2 Weight Update Process in Entire Training Dataset
4.4 Experiments
4.4.1 Experimental Setup
4.4.2 Experimental Results
4.4.2.1 Ablation Study According to τ
4.4.2.2 Ablation Study According to ε
4.4.2.3 Ablation Study According to α
4.4.2.4 Driving Test Results
4.4.3 Walking Robot Experiments
4.5 Conclusion
5 DAGGER USING ADVERSARIAL AGENT POLICY FOR DYNAMIC SITUATIONS
5.1 Introduction
5.2 Related Works & Background
5.2.1 Motion-planning Algorithms for Dynamic Situations
5.2.2 DAgger Algorithm for Dynamic Situation
5.3 Proposed Method
5.3.1 DAgger Training Framework Using Adversarial Agent Policy
5.3.2 Applying to Oncoming Dynamic Obstacle Avoidance Task
5.3.2.1 Ego Agent Policy
5.3.2.2 Adversarial Agent Policy
5.4 Experiments
5.4.1 Experimental Setup
5.4.1.1 Ego Agent Policy Training
5.4.1.2 Adversarial Agent Policy Training
5.4.2 Experimental Result
5.4.2.1 Performance of Adversarial Agent Policy
5.4.2.2 Ego Agent Policy Performance Comparisons Trained with / without Adversarial Agent Policy
5.5 Conclusion
6 CONCLUSIONS
Appendix A
A.1 Vision-based Re-plannable Autonomous Parking System
A.1.1 Parking Spot Detection
A.1.2 Re-planning Method
A.2 Biased Target-tree* with RRT* Algorithm for Fast Parking Path Planning
A.2.1 Introduction
A.2.2 Proposed Method
A.2.3 Experiments
Abstract (In Korean)
Acknowledgement
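The thesis's exact WeightDAgger formulation is not given in this listing; the minimal, runnable sketch below only illustrates the general idea described in the abstract above, namely a per-sample weighted imitation loss on a predicted look-ahead point, with weights boosted for unsafe or still poorly imitated samples. The function names, weight-update rule, and constants are assumptions for illustration only.

```python
# Sketch of the weighted-loss idea: samples from unsafe/near-collision
# situations (or with large residual error) get larger weights so the policy
# imitates the expert's look-ahead point more closely with fewer iterations.
# The update rule (scale by alpha, clip at w_max) is an assumed stand-in,
# not the thesis's exact formula.
import numpy as np

def weighted_lookahead_loss(pred, expert, w):
    """Weighted squared error between predicted and expert look-ahead
    points (both arrays of shape (N, 2)); w has shape (N,)."""
    return float(np.mean(w * np.sum((pred - expert) ** 2, axis=1)))

def update_weights(w, error, unsafe, alpha=2.0, eps=0.05, w_max=10.0):
    """Boost the weight of samples that are unsafe or still poorly imitated
    (error above eps); leave well-imitated safe samples unchanged."""
    boost = unsafe | (error > eps)
    return np.clip(np.where(boost, alpha * w, w), 1.0, w_max)

# Toy usage on synthetic data
rng = np.random.default_rng(0)
expert_pts = rng.uniform(0, 5, size=(8, 2))            # expert look-ahead points
pred_pts = expert_pts + rng.normal(0, 0.3, size=(8, 2))  # current policy outputs
weights = np.ones(8)
unsafe = np.array([False, True, False, False, True, False, False, False])

err = np.sum((pred_pts - expert_pts) ** 2, axis=1)
weights = update_weights(weights, err, unsafe)
print(weighted_lookahead_loss(pred_pts, expert_pts, weights))
```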
    • โ€ฆ