30 research outputs found

    Advances in vision-based lane detection: algorithms, integration, assessment, and perspectives on ACP-based parallel vision

    Lane detection is a fundamental component of most current advanced driver assistance systems (ADASs). A large number of existing studies focus on vision-based lane detection methods because of the extensive knowledge base and the low cost of camera devices. In this paper, previous vision-based lane detection studies are reviewed in terms of three aspects: lane detection algorithms, integration, and evaluation methods. Considering the inherent limitations of camera-based lane detection systems, system integration methodologies for constructing more robust detection systems are reviewed and analyzed. The integration methods are divided into three levels: algorithm, system, and sensor. The algorithm level combines different lane detection algorithms, the system level integrates other object detection systems to comprehensively detect lane positions, and the sensor level uses multi-modal sensors to build a robust lane recognition system. In view of the complexity of evaluating a detection system and the lack of a common evaluation procedure and uniform metrics in past studies, the existing evaluation methods and metrics are analyzed and classified to propose a better evaluation of lane detection systems. A comparison of representative studies is then performed. Finally, the limitations of current lane detection systems are discussed, and a future development trend toward an ACP-based (artificial societies, computational experiments, and parallel execution) parallel lane detection framework is proposed.
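
    To make the algorithm-level integration concrete, the following is a minimal sketch (not from the paper) of confidence-weighted fusion of lane-boundary estimates produced by two hypothetical detectors, each reporting quadratic lane-model coefficients; the function name `fuse_lane_estimates`, the coefficient convention, and the confidence values are illustrative assumptions.

```python
import numpy as np

def fuse_lane_estimates(estimates, confidences):
    """Confidence-weighted fusion of lane-boundary polynomial coefficients.

    estimates   : list of arrays, each [a, b, c] for x(y) = a*y^2 + b*y + c
                  (one estimate per lane detection algorithm)
    confidences : list of scalars in [0, 1] reported by each algorithm
    """
    est = np.asarray(estimates, dtype=float)
    w = np.asarray(confidences, dtype=float)
    if w.sum() <= 0.0:
        raise ValueError("at least one detector must report non-zero confidence")
    w = w / w.sum()                          # normalise weights
    return (w[:, None] * est).sum(axis=0)    # weighted average of coefficients

# Example: a hypothetical edge/Hough-based detector and a learning-based detector
hough_fit = [1.2e-4, -0.031, 1.85]           # placeholder quadratic lane model
dnn_fit   = [1.0e-4, -0.028, 1.80]
fused = fuse_lane_estimates([hough_fit, dnn_fit], confidences=[0.4, 0.9])
print(fused)
```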

    ์ž์œจ ์ฃผํ–‰ ์‹œ์Šคํ…œ์˜ ์ฐจ๋Ÿ‰ ์•ˆ์ „์„ ์œ„ํ•œ ์ ์‘ํ˜• ๊ด€์‹ฌ ์˜์—ญ ๊ธฐ๋ฐ˜ ํšจ์œจ์  ํ™˜๊ฒฝ ์ธ์ง€

    Doctoral dissertation (Ph.D.), Seoul National University, Department of Mechanical and Aerospace Engineering, February 2020; advisor: ์ด๊ฒฝ์ˆ˜. Since 1.2 million people die in car crashes worldwide every year, fundamental preventive measures against traffic accidents are being discussed. According to statistical surveys, 94 percent of all traffic accidents are caused by human error. From the perspective of securing road safety, automated driving technology has attracted attention as a way to solve this serious problem, and it is being commercialized step by step through research and development. Major carmakers have already developed and commercialized advanced driver assistance systems (ADAS) such as the lane keeping assistance system (LKAS), adaptive cruise control (ACC), parking assistance system (PAS), and automated emergency braking (AEB). Furthermore, partially automated driving systems are being installed in vehicles and released by carmakers.
Audi AI Traffic Jam Pilot (Audi), Autopilot (Tesla), Distronic Plus (Mercedes-Benz), Highway Driving Assist (Hyundai Motor Company), and Driving Assistant Plus (BMW) are typical released examples of partially automated driving systems. Although these systems still require driver attention, they have proven effective in significantly improving safety, so demand for them continues to grow. In recent years several automated driving accidents have occurred, and their rapidly increasing frequency has attracted social attention. Since vehicle accidents are directly related to human casualties, accidents involving automated vehicles lower confidence in the reliability of automated driving technology and cause social insecurity. Due to recent automated-driving-related accidents, the safety of automated vehicles has been emphasized even more. Therefore, this study proposes an approach that secures vehicle safety from the perspective of the entire system while considering the behavior control of the automated vehicle. In addition, automated driving is not merely a technology that replaces the driver; as an integration of advanced technologies, it is expected to have a very large industrial impact. Automated driving systems have extended beyond the classical framework of the existing automotive industry and are being developed from the perspectives of various fields. Because automated driving is a complex combination of diverse technologies, development is currently underway under widely differing conditions and has not yet been standardized. Most development efforts pursue local performance improvements at the level of individual modules, and system-level approaches that consider the relationships between the component modules remain insufficient. Local research and development at the submodule level can fail to achieve adequate system-level performance after integration because of the effects of module interactions. A one-way approach that considers only the performance of each module has clear limitations, so the characteristics of the related modules must be taken into account. This dissertation therefore focuses on developing an efficient environment perception algorithm that considers the interactions between the constituent modules, from the viewpoint of overall system operation, in order to secure stable and high performance of the automated driving system. To perform effective information processing and secure vehicle safety from a practical perspective, an adaptive region-of-interest (ROI) based computational load management strategy is proposed. The motion characteristics of the subject vehicle, road design standards, and the driving tasks of surrounding vehicles, such as overtaking and lane changes, are reflected in the design of the adaptive ROI, and the region is expanded according to the driving task. Additionally, motion planning results for automated driving are considered in the ROI design in order to guarantee the practical safety of the automated vehicle. To secure environment information over a wider surrounding area, the lidar data are classified by the designed ROIs, and processing is separated according to the importance of each area. Based on driving data, the computation time of each module constituting the target system is statistically analyzed.
Considering the system performance constraints determined from the driver's reaction time, industry standards, the target hardware specification, and sensor performance, an appropriate sampling time for the automated driving system is defined to enhance safety. Data-based multiple linear regression is applied to predict the computation time of each function constituting the perception module, and the computational load is reduced by selecting, based on the adaptive ROI, only the data essential for automated driving safety, which secures stable real-time execution performance of the system. The computational load assessment evaluates whether the computational load of the environment perception module and of the entire system is appropriate for the target environment, and restricts the vehicle behavior when there is a problem in computational load management, ensuring vehicle safety by maintaining system stability. The performance of the proposed strategy and algorithms is evaluated through driving-data-based simulation and actual vehicle tests. The results show that the proposed environment perception algorithm, which considers the interactions between the modules that make up the automated driving system, guarantees the safety of the automated vehicle and reliable system performance in urban driving scenarios. Contents: Chapter 1 Introduction; Chapter 2 Overall Architecture; Chapter 3 Design of Adaptive ROI and Processing; Chapter 4 Environment Perception Algorithm for Automated Driving; Chapter 5 Computational Load Management; Chapter 6 Vehicle Tests based Performance Evaluation; Chapter 7 Conclusions and Future Works.
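
As an illustration of the adaptive-ROI idea described above, the sketch below categorizes lidar points into speed-dependent regions of interest so that downstream processing can be prioritized by area importance. The ROI names, the look-ahead formula, and all constants are hypothetical placeholders, not the thesis design.

```python
import numpy as np

def build_adaptive_rois(ego_speed_mps, lane_change_intended=False):
    """Hypothetical ROI table: name -> (x_min, x_max, y_min, y_max) in the
    vehicle frame [m]; the longitudinal extent grows with ego speed (the
    thesis derives the extent from vehicle dynamics, road design standards,
    and the motion plan; the constants here are placeholders)."""
    headway = 2.0 * ego_speed_mps + 10.0              # assumed look-ahead distance
    rois = {
        "critical": (-10.0, headway,       -2.0, 2.0),  # ego-lane corridor
        "extended": (-20.0, 1.5 * headway, -6.0, 6.0),  # adjacent lanes
    }
    if lane_change_intended:                          # widen toward the target lane
        x0, x1, y0, y1 = rois["critical"]
        rois["critical"] = (x0, x1, y0 - 3.5, y1 + 3.5)
    return rois

def categorize_points(points_xyz, rois):
    """Split a lidar cloud (N x 3) into per-ROI subsets; points outside every
    ROI are dropped to cut the downstream clustering/tracking load."""
    out = {}
    remaining = points_xyz
    for name, (x0, x1, y0, y1) in rois.items():
        m = ((remaining[:, 0] >= x0) & (remaining[:, 0] <= x1) &
             (remaining[:, 1] >= y0) & (remaining[:, 1] <= y1))
        out[name] = remaining[m]
        remaining = remaining[~m]                     # each point assigned once
    return out

# Usage with random stand-in data
cloud = np.random.uniform(-60, 60, size=(100_000, 3))
subsets = categorize_points(cloud, build_adaptive_rois(ego_speed_mps=15.0))
print({k: len(v) for k, v in subsets.items()})
```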

    A Review of Sensor Technologies for Perception in Automated Driving

    After more than 20 years of research, ADAS are common in modern vehicles available on the market. Automated driving systems, still in the research phase and limited in their capabilities, are starting early commercial tests on public roads. These systems rely on the information provided by on-board sensors, which describe the state of the vehicle, its environment, and other actors. The selection and arrangement of sensors is a key factor in the design of the system. This survey reviews existing, novel, and upcoming sensor technologies applied to common perception tasks for ADAS and automated driving. They are put in context by a historical review of the most relevant demonstrations of automated driving, focused on their sensing setups. Finally, the article presents a snapshot of the future challenges for sensing technologies and perception, finishing with an overview of the commercial initiatives and manufacturer alliances that indicate future market trends in sensor technologies for automated vehicles. This work has been partly supported by ECSEL Project ENABLE-S3 (grant agreement number 692455-2) and by the Spanish Government through CICYT projects TRA2015-63708-R and TRA2016-78886-C3-1-R.

    Simultaneous Localization and Mapping (SLAM) for Autonomous Driving: Concept and Analysis

    The Simultaneous Localization and Mapping (SLAM) technique has achieved astonishing progress over the last few decades and has generated considerable interest in the autonomous driving community. With its conceptual roots in navigation and mapping, SLAM outperforms some traditional positioning and localization techniques since it can support more reliable and robust localization, planning, and control, meeting key requirements of autonomous driving. In this study the authors first give an overview of the different SLAM implementation approaches and then discuss the applications of SLAM to autonomous driving with respect to different driving scenarios, vehicle system components, and the characteristics of the SLAM approaches. The authors then discuss some challenging issues and current solutions when applying SLAM to autonomous driving. Quantitative quality analysis means to evaluate the characteristics and performance of SLAM systems and to monitor the risk in SLAM estimation are reviewed. In addition, this study describes a real-world road test to demonstrate a multi-sensor-based, modernized SLAM procedure for autonomous driving. The numerical results show that a high-precision 3D point cloud map can be generated by the SLAM procedure with the integration of lidar and GNSS/INS, and that an online localization solution with four to five centimetre accuracy can be achieved based on this pre-generated map by online lidar scan matching tightly fused with an inertial system.
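
    The lidar scan matching step mentioned above can be illustrated with a minimal point-to-point ICP sketch that aligns an online scan to a pre-generated map. This is a generic Kabsch/SVD-based implementation, not the authors' procedure; a real pipeline would seed it with a GNSS/INS prediction and fuse the result in a tightly coupled filter.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_point_to_point(scan, prior_map, iters=30, tol=1e-6):
    """Minimal point-to-point ICP: align `scan` (N x 3) to `prior_map` (M x 3).
    Returns a 3x3 rotation R and translation t such that R @ scan.T + t ~ map."""
    tree = cKDTree(prior_map)
    R, t = np.eye(3), np.zeros(3)
    src = scan.copy()
    prev_err = np.inf
    for _ in range(iters):
        dists, idx = tree.query(src)              # nearest map point per scan point
        tgt = prior_map[idx]
        # Kabsch: best-fit rigid transform between the matched point sets
        mu_s, mu_t = src.mean(axis=0), tgt.mean(axis=0)
        H = (src - mu_s).T @ (tgt - mu_t)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R_step = Vt.T @ D @ U.T
        t_step = mu_t - R_step @ mu_s
        src = (R_step @ src.T).T + t_step         # apply incremental update
        R, t = R_step @ R, R_step @ t + t_step    # accumulate total transform
        err = dists.mean()
        if abs(prev_err - err) < tol:             # stop once the fit stabilises
            break
        prev_err = err
    return R, t
```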

    A dynamic two-dimensional (D2D) weight-based map-matching algorithm

    Existing map-matching (MM) algorithms primarily localize positioning fixes along the centerline of a road and have largely ignored road width as an input. Consequently, vehicle lane-level localization, which is essential for stringent Intelligent Transport System (ITS) applications, is difficult to accomplish, especially with positioning data from low-cost GPS sensors. This paper addresses this limitation by developing a new dynamic two-dimensional (D2D) weight-based MM algorithm incorporating dynamic weight coefficients and road width. To enable vehicle lane-level localization, a road segment is virtually expressed as a matrix of homogeneous grids with reference to the road centerline. These grids are then used to map-match positioning fixes, as opposed to matching onto a road centerline as in traditional MM algorithms. In the developed algorithm, identification of the vehicle location on a road segment is based on a total weight score, which is a function of four different weights: (i) proximity, (ii) kinematic, (iii) turn-intent prediction, and (iv) connectivity. Different parameters representing network complexity and positioning quality are used to assign the relative importance of the different weight scores by employing an adaptive regression method. To demonstrate the transferability of the developed algorithm, it was tested using 5,830 GPS positioning points collected in Nottingham, UK, and 7,414 GPS positioning points collected in Mumbai and Pune, India. Using stand-alone GPS position fixes, the developed algorithm identifies the correct links 96.1% of the time for the Nottingham data and 98.4% for the Mumbai-Pune data. In terms of correct lane identification, the algorithm provides accurate matching for 84% (Nottingham) and 79% (Mumbai-Pune) of the stand-alone GPS fixes. Using the same methodology, the accuracy of lane identification could be further enhanced if localization data from additional sensors (e.g., a gyroscope) were utilized. The ITS industry and vehicle manufacturers can implement this D2D map-matching algorithm for liability-critical and in-vehicle information systems and services such as advanced driver assistance systems (ADAS).
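
    A hedged sketch of the total-weight-score idea follows: each candidate grid is scored as a weighted sum of proximity, kinematic, turn-intent, and connectivity terms, and the best-scoring grid is selected. The coefficient values and the individual weight formulas here are illustrative assumptions; the paper derives the coefficients adaptively by regression.

```python
import math

# Illustrative weight coefficients (the paper tunes these adaptively from
# network complexity and positioning quality; fixed values are placeholders).
COEF = {"proximity": 0.4, "kinematic": 0.25, "turn_intent": 0.2, "connectivity": 0.15}

def proximity_weight(fix_xy, grid_center_xy, sigma=5.0):
    # Closer grid centre -> higher score (assumed Gaussian falloff)
    d = math.dist(fix_xy, grid_center_xy)
    return math.exp(-(d ** 2) / (2 * sigma ** 2))

def kinematic_weight(fix_heading_deg, link_bearing_deg):
    # Agreement between GPS heading and link bearing
    diff = abs((fix_heading_deg - link_bearing_deg + 180) % 360 - 180)
    return math.cos(math.radians(diff)) if diff < 90 else 0.0

def total_weight(grid, fix):
    """Total weight score of one candidate grid for one GPS fix."""
    return (COEF["proximity"]    * proximity_weight(fix["xy"], grid["center"])
          + COEF["kinematic"]    * kinematic_weight(fix["heading"], grid["bearing"])
          + COEF["turn_intent"]  * grid["turn_intent_score"]    # assumed in [0, 1]
          + COEF["connectivity"] * grid["connectivity_score"])  # assumed in [0, 1]

def match(candidate_grids, fix):
    # The matched grid (hence link and lane) is the highest-scoring candidate
    return max(candidate_grids, key=lambda g: total_weight(g, fix))
```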

    Simulations for Training Machine Learning Models for Autonomous Vehicles

    Training machine learning models for autonomous vehicles requires a lot of data, which is time-consuming and tedious to label manually. Simulated virtual environments help to automate this process; in this work these virtual environments are called simulations. The goal of this thesis is to survey the simulations most suitable for off-road vehicles (while not discarding the urban option). Only simulations that provide labeled output data are included. The 12 simulations chosen through an online search are surveyed based on the information found online, analyzed against a set of predefined features, and categorized according to their suitability for training machine learning models for off-road vehicles. The results are shown in a table for comparison. The main purpose of this work is to map the seemingly large landscape of simulations and give a compact picture of the situation.

    Driver lane change intention inference using machine learning methods.

    The lane changing manoeuvre on the highway is a highly interactive task for human drivers. Intelligent vehicles and advanced driver assistance systems (ADAS) need proper awareness of the traffic context as well as of the driver. The ADAS also needs to infer the driver's potential intent correctly, since it shares control authority with the human driver. This study investigates driver intention inference, with a particular focus on the lane change manoeuvre on highways. The thesis is organised on a paper basis, where each chapter corresponds to a publication that has been submitted or is to be submitted. Part I introduces the motivation and the general methodological framework of the thesis. Part II presents the literature survey and the state of the art of driver intention inference. Part III covers the techniques for traffic context perception, focusing on lane detection: a literature review of lane detection techniques and their integration with the parallel driving framework is given, and a novel integrated lane detection system is designed. Part IV consists of two parts that provide the driver behaviour monitoring system for normal driving and for secondary task detection; the first is based on conventional feature selection methods, while the second introduces an end-to-end deep learning framework. The design and analysis of the driver lane change intention inference system is presented in Part V. Finally, discussion and conclusions are given in Part VI. A major contribution of this project is to propose novel algorithms that accurately model the driver intention inference process. Lane change intention is recognised with machine learning (ML) methods due to their good reasoning and generalisation characteristics. Sensors in the vehicle capture traffic context information, vehicle dynamics, and driver behaviour information, and machine learning and image processing techniques are used to recognise human driver behaviour. PhD in Transport.
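
    As a toy illustration of the ML-based intention inference pipeline, the sketch below trains a random-forest classifier on synthetic features standing in for traffic context, vehicle dynamics, and driver behaviour signals; the feature set, labels, and model choice are assumptions for demonstration only and do not reproduce the thesis models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical feature vector per time window: [lateral offset, lateral speed,
# steering angle, turn-signal flag, gaze-off-road ratio, gap to lead vehicle].
# Random data here only demonstrates the pipeline shape, not real driving data.
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * X[:, 3]
     + rng.normal(scale=0.5, size=2000) > 0.8).astype(int)   # 1 = lane-change intent

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)                       # train intent vs. lane-keeping classifier
print(classification_report(y_te, clf.predict(X_te)))
```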

    Evaluation and testing system for automotive LiDAR sensors

    The world is facing a great technological transformation towards fully autonomous vehicles, where optimists predict that by 2030 autonomous vehicles will be sufficiently reliable, affordable, and common to displace most human driving. To cope with these trends, reliable perception systems must enable vehicles to hear and see all their surroundings, with light detection and ranging (LiDAR) sensors being a key instrument for recreating a 3D visualization of the world in real time. However, perception systems must rely on accurate measurements of the environment. Thus, these intelligent sensors must be calibrated and benchmarked before being placed on the market or assembled in a car. This article presents an evaluation and testing platform for automotive LiDAR sensors, with the main goal of testing both commercially available sensors and new sensor prototypes currently under development at Bosch Car Multimedia Portugal. The testing system can benchmark any LiDAR sensor under different conditions, recreating the expected driving environment in which such devices normally operate. To characterize and validate the sensor under test, the platform evaluates several parameters, such as the field of view (FoV), angular resolution, and sensor range, based only on the point cloud output. This project is the result of a partnership between the University of Minho and Bosch Car Multimedia Portugal. This work was supported by the European Structural and Investment Funds in the FEDER component through the Operational Competitiveness and Internationalization Programme (COMPETE 2020), Project no. 037902, Funding Reference POCI-01-0247-FEDER-037902.
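
    As an example of deriving benchmark metrics from the point cloud output alone, the sketch below estimates horizontal and vertical FoV, angular resolution, and maximum range from a single cloud; the ring-grouping heuristic and the function name are assumptions, not the platform's actual procedure.

```python
import numpy as np

def fov_and_resolution(points_xyz):
    """Estimate horizontal/vertical FoV and angular resolution (degrees) of a
    LiDAR from one point cloud (N x 3, sensor frame)."""
    x, y, z = points_xyz.T
    r = np.linalg.norm(points_xyz, axis=1)
    az = np.degrees(np.arctan2(y, x))                                   # azimuth
    el = np.degrees(np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))) # elevation
    # FoV: angular span actually covered by returns
    h_fov, v_fov = az.max() - az.min(), el.max() - el.min()
    # Resolution: median spacing between consecutive distinct firing angles
    # (rounding groups returns into approximate beams/rings; heuristic only)
    h_res = np.median(np.diff(np.unique(np.round(az, 3))))
    v_res = np.median(np.diff(np.unique(np.round(el, 2))))
    return {"h_fov_deg": h_fov, "v_fov_deg": v_fov,
            "h_res_deg": h_res, "v_res_deg": v_res,
            "max_range_m": r.max()}

# Usage with random stand-in data
cloud = np.random.normal(size=(50_000, 3)) * [30.0, 30.0, 3.0]
print(fov_and_resolution(cloud))
```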

    Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

    Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, it explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, persistent limitations remain, including a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.