Surrounding Vehicle Trajectory Prediction and Motion Planning Algorithm for Autonomous Driving at Urban Intersections
Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical and Aerospace Engineering, February 2020. Advisor: Kyongsu Yi.
The focus of automotive research has been expanding from passive safety systems to active safety systems with advances in sensing and processing technologies. Recently, the majority of automotive makers have commercialized active safety systems such as adaptive cruise control (ACC), lane keeping assistance (LKA), and autonomous emergency braking (AEB). These advances have extended the research field beyond active safety systems to automated driving systems in pursuit of zero fatalities. In particular, automated driving on urban roads has become a key issue because urban roads possess numerous risk factors for traffic accidents, such as sidewalks, blind spots, on-street parking, motorcycles, and pedestrians, which cause higher accident and fatality rates than motorways. Several projects have been conducted, and many others are still underway, to evaluate the effects of automated driving in environmental, demographic, social, and economic terms. For example, the European project AdaptIVe develops various automated driving functions and defines specific evaluation methodologies. In addition, CityMobil2 successfully integrates driverless intelligent vehicles in nine different environments throughout Europe. In Japan, the Automated Driving System Research Project began in May 2014 and focuses on the development and verification of automated driving systems and next-generation urban transportation.
A careful review of the literature shows that automated driving systems increase the safety of traffic users, reduce traffic congestion, and improve driver convenience. Various methodologies have been employed to develop the core technologies of automated vehicles on urban roads, such as perception, motion planning, and control. However, current state-of-the-art automated driving research develops each technology separately; consequently, the design of automated driving systems from an integrated perspective has not yet been sufficiently considered.
Therefore, this dissertation focuses on developing a fully autonomous driving algorithm for complex urban scenarios using LiDAR, vision, GPS, and a simple path map. The proposed algorithm covers urban road scenarios with uncontrolled intersections, based on vehicle motion prediction and a model predictive control approach. Four research issues are considered in particular: dynamic and static environment representation, and longitudinal and lateral motion planning.
The remainder of this thesis provides an overview of the proposed motion planning algorithm for urban autonomous driving and presents experimental results in real traffic, which show the effectiveness and human-like behavior of the proposed algorithm. The algorithm has been tested and evaluated in both simulation and vehicle tests, and the results show robust performance in urban scenarios, including uncontrolled intersections.

Chapter 1 Introduction
1.1. Background and Motivation
1.2. Previous Researches
1.3. Thesis Objectives
1.4. Thesis Outline
Chapter 2 Overview of Motion Planning for Automated Driving System
Chapter 3 Dynamic Environment Representation with Motion Prediction
3.1. Moving Object Classification
3.2. Vehicle State based Direct Motion Prediction
3.2.1. Data Collection Vehicle
3.2.2. Target Roads
3.2.3. Dataset Selection
3.2.4. Network Architecture
3.2.5. Input and Output Features
3.2.6. Encoder and Decoder
3.2.7. Sequence Length
3.3. Road Structure based Interactive Motion Prediction
3.3.1. Maneuver Definition
3.3.2. Network Architecture
3.3.3. Path Following Model based State Predictor
3.3.4. Estimation of Predictor Uncertainty
3.3.5. Motion Parameter Estimation
3.3.6. Interactive Maneuver Prediction
3.4. Intersection Approaching Vehicle Motion Prediction
3.4.1. Driver Behavior Model at Intersections
3.4.2. Intention Inference based State Prediction
Chapter 4 Static Environment Representation
4.1. Static Obstacle Map Construction
4.2. Free Space Boundary Decision
4.3. Drivable Corridor Decision
Chapter 5 Longitudinal Motion Planning
5.1. In-Lane Target Following
5.2. Proactive Motion Planning for Narrow Road Driving
5.2.1. Motivation for Collision Preventive Velocity Planning
5.2.2. Desired Acceleration Decision
5.3. Uncontrolled Intersection
5.3.1. Driving Phase and Mode Definition
5.3.2. State Machine for Driving Mode Decision
5.3.3. Motion Planner for Approach Mode
5.3.4. Motion Planner for Risk Management Phase
Chapter 6 Lateral Motion Planning
6.1. Vehicle Model
6.2. Cost Function and Constraints
Chapter 7 Performance Evaluation
7.1. Motion Prediction
7.1.1. Prediction Accuracy Analysis of Vehicle State based Direct Motion Predictor
7.1.2. Prediction Accuracy and Effect Analysis of Road Structure based Interactive Motion Predictor
7.2. Prediction based Distance Control at Urban Roads
7.2.1. Driving Data Analysis of Direct Motion Predictor Application at Urban Roads
7.2.2. Case Study of Vehicle Test at Urban Roads
7.2.3. Analysis of Vehicle Test Results on Urban Roads
7.3. Complex Urban Roads
7.3.1. Case Study of Vehicle Test at Complex Urban Roads
7.3.2. Closed-loop Simulation based Safety Analysis
7.4. Uncontrolled Intersections
7.4.1. Simulation based Algorithm Comparison of Motion Planner
7.4.2. Monte-Carlo Simulation based Safety Analysis
7.4.3. Vehicle Test Results in Real Traffic Conditions
7.4.4. Similarity Analysis between Human and Automated Vehicle
7.5. Multi-Lane Turn Intersections
7.5.1. Case Study of a Multi-Lane Left Turn Scenario
7.5.2. Analysis of Motion Planning Application Results
Chapter 8 Conclusion & Future Works
8.1. Conclusion
8.2. Future Works
Bibliography
Abstract in Korean
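The prediction-based longitudinal planning idea summarized in the abstract can be sketched in miniature. This is an illustrative stand-in, not the dissertation's actual MPC formulation: a constant candidate acceleration is chosen over a short horizon to track a speed-dependent gap to a predicted lead-vehicle trajectory, with a hard penalty near collision. The function name, weights, and candidate set are all invented for this sketch.

```python
# Hypothetical sketch of prediction-based longitudinal planning: pick the
# constant acceleration whose rollout best tracks a speed-dependent gap to
# the PREDICTED lead-vehicle positions, penalizing control effort.
def plan_acceleration(ego_pos, ego_vel, lead_positions, dt=0.1,
                      desired_gap=10.0, time_gap=1.5,
                      candidates=(-3.0, -1.5, 0.0, 1.0, 2.0)):
    best_a, best_cost = None, float("inf")
    for a in candidates:
        pos, vel, cost = ego_pos, ego_vel, 0.0
        for lead in lead_positions:          # predicted lead trajectory
            vel = max(0.0, vel + a * dt)     # no reversing
            pos += vel * dt
            gap = lead - pos
            target = desired_gap + time_gap * vel  # speed-dependent gap
            cost += (gap - target) ** 2 + 5.0 * a ** 2
            if gap < 2.0:                    # hard near-collision penalty
                cost += 1e6
        if cost < best_cost:
            best_a, best_cost = a, cost
    return best_a
```

With a receding lead vehicle far ahead the planner accelerates; with a stopped vehicle a few meters ahead it selects maximum braking. A full MPC, as used in the dissertation, would optimize a time-varying input sequence under explicit constraints rather than enumerating constant accelerations.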
Performance Evaluation of Vision-Based Algorithms for MAVs
An important focus of current research in the field of Micro Aerial Vehicles
(MAVs) is to increase the safety of their operation in general unstructured
environments. Especially indoors, where GPS cannot be used for localization,
reliable algorithms for localization and mapping of the environment are
necessary in order to keep an MAV airborne safely. In this paper, we compare
vision-based real-time capable methods for localization and mapping and point
out their strengths and weaknesses. Additionally, we describe algorithms for
state estimation, control and navigation, which use the localization and
mapping results of our vision-based algorithms as input.
Comment: Presented at OAGM Workshop, 2015 (arXiv:1505.01065)
Perception-aware Path Planning
In this paper, we give a double twist to the problem of planning under
uncertainty. State-of-the-art planners seek to minimize the localization
uncertainty by only considering the geometric structure of the scene. In this
paper, we argue that motion planning for vision-controlled robots should be
perception aware in that the robot should also favor texture-rich areas to
minimize the localization uncertainty during a goal-reaching task. Thus, we
describe how to optimally incorporate the photometric information (i.e.,
texture) of the scene, in addition to the geometric one, to compute the
uncertainty of vision-based localization during path planning. To avoid the
caveats of feature-based localization systems (i.e., dependence on feature type
and user-defined thresholds), we use dense, direct methods. This allows us to
compute the localization uncertainty directly from the intensity values of
every pixel in the image. We also describe how to compute trajectories online,
considering also scenarios with no prior knowledge about the map. The proposed
framework is general and can easily be adapted to different robotic platforms
and scenarios. The effectiveness of our approach is demonstrated with extensive
experiments in both simulated and real-world environments using a
vision-controlled micro aerial vehicle.
Comment: 16 pages, 20 figures, revised version. Conditionally accepted for
IEEE Transactions on Robotics.
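The core idea above, favoring texture-rich areas during planning, can be illustrated with a toy grid. This is only a schematic sketch: the paper derives localization uncertainty from dense photometric alignment, whereas here a per-cell texture score and the `path_cost` weighting are invented for illustration.

```python
# Toy perception-aware path cost: geometric length plus a localization-
# uncertainty term that grows where the per-cell texture score is low.
import math

def path_cost(path, texture, weight=1.0, eps=1e-3):
    length = sum(math.dist(a, b) for a, b in zip(path, path[1:]))
    # low texture -> high expected localization uncertainty
    uncertainty = sum(1.0 / (texture[r][c] + eps) for r, c in path)
    return length + weight * uncertainty

texture = [[0.9, 0.1],
           [0.9, 0.1]]  # left column texture-rich, right column texture-poor
left  = [(0, 0), (1, 0)]   # hugs the texture-rich cells
right = [(0, 1), (1, 1)]   # same geometric length, but texture-poor
```

Both candidate paths have identical geometric length, so a purely geometric planner would be indifferent; the photometric term breaks the tie toward the texture-rich corridor.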
From Monocular SLAM to Autonomous Drone Exploration
Micro aerial vehicles (MAVs) are strongly limited in their payload and power
capacity. In order to implement autonomous navigation, algorithms are therefore
desirable that use sensory equipment that is as small, low-weight, and
low-power consuming as possible. In this paper, we propose a method for
autonomous MAV navigation and exploration using a low-cost consumer-grade
quadrocopter equipped with a monocular camera. Our vision-based navigation
system builds on LSD-SLAM which estimates the MAV trajectory and a semi-dense
reconstruction of the environment in real-time. Since LSD-SLAM only determines
depth at high gradient pixels, texture-less areas are not directly observed so
that previous exploration methods that assume dense map information cannot
directly be applied. We propose an obstacle mapping and exploration approach
that takes the properties of our semi-dense monocular SLAM system into account.
In experiments, we demonstrate our vision-based autonomous navigation and
exploration system with a Parrot Bebop MAV.
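The key point above, that a semi-dense SLAM system observes depth only at high-gradient pixels, changes how an obstacle map must be built. The following is a hedged sketch of that idea (data layout and names are assumptions, not LSD-SLAM's API): cells are marked occupied only where a depth estimate exists, and left unknown elsewhere instead of being assumed free as a dense mapper would.

```python
# Semi-dense obstacle mapping sketch: depth_estimates holds depths only at
# high-gradient pixels, so unobserved cells stay "unknown" rather than "free".
def build_occupancy(depth_estimates, grid_size, max_range=5.0):
    # depth_estimates: {(row, col): depth_in_meters} for high-gradient cells
    grid = [["unknown"] * grid_size for _ in range(grid_size)]
    for (r, c), d in depth_estimates.items():
        if d <= max_range and 0 <= r < grid_size and 0 <= c < grid_size:
            grid[r][c] = "occupied"
    return grid
```

An exploration planner consuming this map must then treat "unknown" conservatively, which is exactly why dense-map exploration methods cannot be applied directly.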
Performance evaluation of a distributed integrative architecture for robotics
The field of robotics employs a vast number of coupled sub-systems. These need
to interact cooperatively and concurrently in order to yield the desired
results. Some hybrid algorithms also require intensive cooperative interactions
internally. The proposed architecture lends itself to problem domains that
require rigorous calculations, which are usually impeded by the capacity of a
single machine and by incompatibility between software computing elements.
Implementations are abstracted away from the physical hardware for ease of
development and for competition in simulation leagues. Monolithic developments
are complex, so decoupled architectures are desirable. Decoupling also lowers
the threshold for using distributed and parallel resources. The ability to
re-use and re-combine components on demand is therefore essential, while
maintaining the necessary degree of interaction. For this reason we propose to
build software components on top of a Service-Oriented Architecture (SOA)
using Web Services. An additional benefit is platform independence regarding
both the operating system and the implementation language. The robot soccer
platform and the associated simulation leagues are the target domain for the
development. Furthermore, machine vision and remote process control portions
of the architecture are currently in development and testing for industrial
environments. We provide numerical data based on the Python frameworks ZSI and
SOAPpy underlining the suitability of this approach for the field of robotics.
Response times of significantly less than 50 ms, even for fully interpreted,
dynamic languages, provide hard evidence of the feasibility of Web-Services-
based SOAs even in time-critical robotic applications.
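The ZSI and SOAPpy stacks evaluated above are long unmaintained, but the experiment is easy to reproduce in spirit with the standard library's XML-RPC support, which is likewise a fully interpreted, dynamic Web-Services stack. This sketch exposes one hypothetical robot-soccer-style service method over loopback and times a round-trip call; the `kick` method and its payload are invented for illustration.

```python
# Minimal service-oriented component sketch: one XML-RPC service method,
# served on an ephemeral loopback port, with a timed round-trip call.
import threading
import time
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def kick(power):  # hypothetical robot-soccer service method
    return "kick at %.1f" % power

server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(kick)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

proxy = ServerProxy("http://127.0.0.1:%d" % port)
start = time.perf_counter()
result = proxy.kick(0.8)  # full marshal -> HTTP -> dispatch -> unmarshal cycle
elapsed_ms = (time.perf_counter() - start) * 1000.0
server.shutdown()
```

On a local machine such a loopback round trip typically completes well under the 50 ms bound the abstract reports, though absolute numbers depend on the host and transport.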
Fast, Autonomous Flight in GPS-Denied and Cluttered Environments
One of the most challenging tasks for a flying robot is to autonomously
navigate between target locations quickly and reliably while avoiding obstacles
in its path, and with little to no a-priori knowledge of the operating
environment. This challenge is addressed in the present paper. We describe the
system design and software architecture of our proposed solution, and showcase
how all the distinct components can be integrated to enable smooth robot
operation. We provide critical insight on hardware and software component
selection and development, and present results from extensive experimental
testing in real-world warehouse environments. Experimental testing reveals that
our proposed solution can deliver fast and robust aerial robot autonomous
navigation in cluttered, GPS-denied environments.
Comment: Pre-peer reviewed version of the article accepted in Journal of Field
Robotics.
Autonomy Infused Teleoperation with Application to BCI Manipulation
Robot teleoperation systems face a common set of challenges including
latency, low-dimensional user commands, and asymmetric control inputs. User
control with Brain-Computer Interfaces (BCIs) exacerbates these problems
through especially noisy and erratic low-dimensional motion commands due to the
difficulty in decoding neural activity. We introduce a general framework to
address these challenges through a combination of computer vision, user intent
inference, and arbitration between the human input and autonomous control
schemes. Adjustable levels of assistance allow the system to balance the
operator's capabilities and feelings of comfort and control while compensating
for a task's difficulty. We present experimental results demonstrating
significant performance improvement using the shared-control assistance
framework on adapted rehabilitation benchmarks with two subjects implanted with
intracortical brain-computer interfaces controlling a seven degree-of-freedom
robotic manipulator as a prosthetic. Our results further indicate that shared
assistance mitigates perceived user difficulty and even enables successful
performance on previously infeasible tasks. We showcase the extensibility of
our architecture with applications to quality-of-life tasks such as opening a
door, pouring liquids from containers, and manipulation with novel objects in
densely cluttered environments.
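The arbitration step described above, blending noisy low-dimensional user input with an autonomous policy, can be sketched as a simple convex combination. The names and the confidence-to-assistance mapping are illustrative assumptions, not the paper's actual formulation:

```python
# Shared-control arbitration sketch: blend the user's command with the
# autonomous command, raising assistance with intent-inference confidence
# but capping it so the operator always retains some authority.
def arbitrate(user_cmd, auto_cmd, confidence, max_assist=0.8):
    alpha = min(max_assist, confidence)  # assistance level in [0, max_assist]
    return [alpha * a + (1.0 - alpha) * u for u, a in zip(user_cmd, auto_cmd)]
```

At low confidence the output tracks the (noisy) user command; at high confidence the autonomous policy dominates, which is how such systems compensate for erratic BCI decoding while preserving the operator's sense of control.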