709 research outputs found

    Human Factors Research on the Design of Automotive Head-Up Displays

    Get PDF
    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: Department of Industrial Engineering, College of Engineering, August 2020. Advisor: Woojin Park.
    Head-up display (HUD) systems were introduced into the automobile industry as a means of improving driving safety. They superimpose safety-critical information on the driver's forward field of view and thereby help drivers keep their eyes forward while driving. Since their first introduction about three decades ago, automotive HUDs have become available in various commercial vehicles. Despite this long history and the potential benefits of automotive HUDs, however, designing useful automotive HUDs remains a challenging problem. To contribute to the design of useful automotive HUDs, this doctoral dissertation research conducted four studies. In Study 1, the functional requirements of automotive HUDs were investigated by reviewing the major automakers' automotive HUD products, academic studies that proposed various automotive HUD functions, and previous studies that surveyed drivers' HUD information needs. The review results indicated that: 1) the existing commercial HUDs perform largely the same functions as conventional in-vehicle displays; 2) past research studies proposed various HUD functions for improving driver situation awareness and driving safety; 3) autonomous driving and other new technologies are giving rise to new HUD information; and 4) little research is currently available on HUD users' perceived information needs. Based on these results, the study provides insights into the functional requirements of automotive HUDs and suggests future research directions for automotive HUD design. In Study 2, the interface design of automotive HUDs for communicating safety-related information was examined by reviewing the existing commercial HUDs and the display concepts proposed in academic research. Each display was analyzed in terms of its functions, behaviors, and structure. Related human factors display design principles and empirical findings on the effects of interface design decisions were also reviewed where information was available. The results indicated that: 1) the information characteristics suitable for the contact-analog and unregistered display formats, respectively, are still largely unknown; 2) new types of displays could be developed by combining or mixing existing displays or display elements at both the information and interface-element levels; and 3) the human factors display principles need to be applied appropriately to the situation and only to the extent that the resulting display respects the limitations of human information processing, with balance among the principles being important to an effective design. On the basis of these results, the review suggests design possibilities and future research directions for the interface design of safety-related automotive HUD systems. In Study 3, automotive HUD-based take-over request (TOR) displays were developed and evaluated in terms of drivers' take-over performance and visual scanning behavior in a highly automated driving situation. Four types of TOR displays were comparatively evaluated in a driving simulator study: Baseline (an auditory beeping alert), Mini-map, Arrow, and Mini-map-and-Arrow. Baseline simply alerts the driver to an imminent take-over and was always included when the other three displays were provided. Mini-map provides situational information. Arrow presents the action-direction information for the take-over.
Mini-map-and-Arrow provides the action direction together with the relevant situational information. The study also investigated the relationship between drivers' initial trust in the TOR displays and their take-over and visual scanning behavior. The results indicated that providing the machine-made decision combined with the relevant situational information, as in Mini-map-and-Arrow, yielded the best overall results in the take-over scenario. Drivers' initial trust in the TOR displays was also found to have significant associations with their take-over and visual behavior: the higher-trust group relied primarily on the proposed TOR displays, while the lower-trust group tended to check the situational information more through traditional displays, such as the side-view and rear-view mirrors. In Study 4, the effects of interactive HUD imagery location on driving and secondary-task performance, driver distraction, preference, and workload associated with using a scrolling list while driving were investigated. A total of nine HUD imagery locations across the full windshield were examined in a driving simulator study. The results indicated that HUD imagery location affected all the dependent measures, that is, driving and task performance, drivers' visual distraction, preference, and workload. Considering both the objective and subjective evaluations, interactive HUDs should be placed near the driver's line of sight, especially near the bottom-left of the windshield.
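
    To make the trust analysis in Study 3 concrete, the sketch below shows one plausible way to split drivers into higher- and lower-trust groups by a median split on their initial trust ratings and then compare take-over times between the groups. The data, scale, and test choice here are hypothetical illustrations, not the dissertation's actual procedure.

```python
# Hypothetical sketch: median-split trust-group comparison for TOR displays.
# The ratings, sample size, and use of Welch's t-test are assumptions for
# illustration; the dissertation's actual analysis may differ.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
trust = rng.uniform(1, 7, size=24)           # initial trust ratings (1-7 scale)
takeover_time = rng.normal(2.5, 0.6, 24)     # seconds to resume manual control

median = np.median(trust)
high = takeover_time[trust >= median]        # higher-trust group
low = takeover_time[trust < median]          # lower-trust group

# Welch's t-test (unequal variances) comparing the two groups' means.
t, p = stats.ttest_ind(high, low, equal_var=False)
print(f"high-trust mean={high.mean():.2f}s, low-trust mean={low.mean():.2f}s, "
      f"t={t:.2f}, p={p:.3f}")
```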

    A perspective on emerging automotive safety applications, derived from lessons learned through participation in the DARPA Grand Challenges

    Full text link
    This paper reports on various aspects of the Intelligent Vehicle Systems (IVS) team's involvement in the recent 2007 DARPA Urban Challenge, wherein our platform, the autonomous "XAV-250," competed as one of the 11 finalists qualifying for the event. We provide a candid discussion of the hardware and software design process that led to our team's entry, along with lessons learned at this event and derived from participation in the two previous Grand Challenges. In addition, we give an overview of our vision-, radar-, and LIDAR-based perceptual sensing suite, its fusion with a military-grade inertial navigation package, and the map-based control and planning architectures used leading up to and during the event. The underlying theme of this article is to elucidate how the development of future automotive safety systems can potentially be accelerated by tackling the technological challenges of autonomous ground vehicle robotics. Of interest, we discuss how a production manufacturing mindset imposes a unique set of constraints upon approaching the problem and how this worked for and against us, given the very compressed timeline of the contests. © 2008 Wiley Periodicals, Inc.
    Peer reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/61244/1/20264_ftp.pd
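
    The abstract mentions fusing the perceptual sensing suite with an inertial navigation package. As a hedged illustration of that general idea (not the XAV-250's actual fusion architecture), here is a minimal linear Kalman filter that blends a constant-velocity motion prediction with lower-rate LIDAR position fixes; the matrices and noise values are invented for the sketch.

```python
# Minimal sketch of INS/LIDAR position fusion with a linear Kalman filter.
# Models and noise terms are illustrative assumptions only.
import numpy as np

# State: [x, y, vx, vy]; constant-velocity motion model at dt seconds.
dt = 0.05
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)    # LIDAR fix measures position only
Q = 0.01 * np.eye(4)                         # process noise (prediction drift)
R = 0.25 * np.eye(2)                         # LIDAR measurement noise

x = np.zeros(4)                              # state estimate
P = np.eye(4)                                # estimate covariance

def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q

def update(z):
    global x, P
    y = z - H @ x                            # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P

predict()
update(np.array([1.0, 0.5]))                 # fuse one LIDAR position fix
print(x)
```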

    Lane detection in autonomous vehicles: A systematic review

    Get PDF
    One of the essential systems in autonomous vehicles for ensuring a safe environment for drivers and passengers is the Advanced Driver Assistance System (ADAS). Adaptive Cruise Control, Automatic Braking/Steer Away, Lane-Keeping System, Blind Spot Assist, Lane Departure Warning System, and Lane Detection are examples of ADAS. Lane detection supplies information on the geometrical features of lane line structures to the vehicle's intelligent system to show the position of lane markings. This article reviews the methods employed for lane detection in autonomous vehicles. A systematic literature review (SLR) was carried out to analyze the most suitable approaches to detecting the road lane for the benefit of the automation industry. One hundred and two publications from well-known databases were chosen for this review. Trends were identified by thoroughly examining the selected articles on lane detection methods published from 2018 until 2021. The selected literature used various methods, with the input dataset being one of two types: self-collected or acquired from an online public dataset. The methodologies span geometric modeling and other traditional methods as well as AI-based methods, namely machine learning and deep learning. The use of deep learning has been increasingly researched throughout the last four years. Some studies used stand-alone deep learning implementations for lane detection problems, while other research focused on merging deep learning with other machine learning techniques and classical methodologies. Recent advancements imply that the attention mechanism has become a popular strategy to combine with deep learning methods, and the use of deep algorithms in conjunction with other techniques has shown promising outcomes. This research aims to provide a complete overview of the literature on lane detection methods, highlighting which approaches are currently being researched and the performance of existing state-of-the-art techniques. The paper also covers the equipment used to collect the datasets for the training process and the datasets used for network training, validation, and testing. This review yields a valuable foundation on lane detection techniques, challenges, and opportunities, and supports new research work in this automation field. For further study, it is suggested to put more effort into accuracy improvement, increased speed performance, and more challenging work on various extreme conditions in detecting the road lane.
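
    To make concrete what the review's "traditional methods" category typically involves, below is a minimal classical lane-marking detector using Canny edges and a probabilistic Hough transform in OpenCV. It is an illustrative baseline only; the reviewed papers use far more sophisticated geometric and deep learning pipelines, and "road.jpg" is a placeholder input path.

```python
# Illustrative classical lane detection: Canny edges + probabilistic Hough.
# A baseline sketch only; "road.jpg" is a placeholder image path.
import cv2
import numpy as np

img = cv2.imread("road.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
blur = cv2.GaussianBlur(gray, (5, 5), 0)
edges = cv2.Canny(blur, 50, 150)

# Keep only a trapezoidal region of interest ahead of the vehicle.
h, w = edges.shape
roi = np.zeros_like(edges)
poly = np.array([[(int(0.1 * w), h), (int(0.45 * w), int(0.6 * h)),
                  (int(0.55 * w), int(0.6 * h)), (int(0.9 * w), h)]],
                dtype=np.int32)
cv2.fillPoly(roi, poly, 255)
edges = cv2.bitwise_and(edges, roi)

# Probabilistic Hough transform returns candidate lane-line segments.
lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=40,
                        minLineLength=40, maxLineGap=100)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(img, (x1, y1), (x2, y2), (0, 0, 255), 2)
cv2.imwrite("lanes.jpg", img)
```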

    Combined Learned and Classical Methods for Real-Time Visual Perception in Autonomous Driving

    Full text link
    Autonomy, robotics, and Artificial Intelligence (AI) are among the main defining themes of next-generation societies. Among the most important applications of these technologies is driving automation, which spans from various Advanced Driver Assistance Systems (ADAS) to fully self-driving vehicles. Driving automation promises to reduce accidents, increase safety, and increase access to mobility for more people, such as the elderly and the handicapped. However, one of the main challenges facing autonomous vehicles is robust perception, which can enable safe interaction and decision making. Of the many sensors used to perceive the environment, each with its own capabilities and limitations, vision is by far one of the main sensing modalities: cameras are cheap and can provide rich information about the observed scene. Therefore, this dissertation develops a set of visual perception algorithms with a focus on autonomous driving as the target application area. The dissertation starts by addressing the problem of real-time motion estimation of an agent using only the visual input from a camera attached to it, a problem known as visual odometry. The visual odometry algorithm can achieve low drift rates over long traveled distances, made possible through the innovative local mapping approach used. This visual odometry algorithm was then combined with my multi-object detection and tracking system. The tracking system operates in a tracking-by-detection paradigm, using an object detector based on convolutional neural networks (CNNs). The combined system can therefore detect and track other traffic participants both in the image domain and in the 3D world frame while simultaneously estimating vehicle motion, a necessary requirement for obstacle avoidance and safe navigation. Finally, the operational range of traditional monocular cameras was expanded with the capability to infer depth and thus replace stereo and RGB-D cameras. This is accomplished through a single-stream convolutional neural network which can output both depth prediction and semantic segmentation. Semantic segmentation is the process of classifying each pixel in an image and is an important step toward scene understanding. A literature survey, algorithm descriptions, and comprehensive evaluations on real-world datasets are presented.
    Ph.D. dissertation, College of Engineering & Computer Science, University of Michigan. https://deepblue.lib.umich.edu/bitstream/2027.42/153989/1/Mohamed Aladem Final Dissertation.pdf
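
    As a rough sketch of the visual odometry idea described above (feature-based motion estimation between consecutive frames), the snippet below matches ORB features and recovers relative camera motion from the essential matrix with OpenCV. This is a textbook two-frame skeleton, not the dissertation's local-mapping algorithm, and the camera intrinsics K are placeholders.

```python
# Textbook two-frame visual odometry skeleton (not the dissertation's
# local-mapping method). K is a placeholder camera intrinsic matrix.
import cv2
import numpy as np

K = np.array([[700.0,   0.0, 320.0],
              [  0.0, 700.0, 240.0],
              [  0.0,   0.0,   1.0]])

orb = cv2.ORB_create(2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def relative_pose(frame1, frame2):
    """Estimate rotation R and (unit-scale) translation t between frames."""
    kp1, des1 = orb.detectAndCompute(frame1, None)
    kp2, des2 = orb.detectAndCompute(frame2, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    # RANSAC-robust essential matrix, then cheirality check to get R, t.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
    return R, t   # monocular VO recovers translation only up to scale
```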

    Synthetic Worlds for Improving Driver Assistance Systems

    Get PDF
    The automotive industry is evolving at a rapid pace; new technologies and techniques are being introduced to make the driving experience more pleasant and safer than it was a few decades ago. But as with any new technology and methodology, there will always be new challenges to overcome. Advanced Driver Assistance Systems have attracted a considerable amount of interest in the research community over the past few decades. This research explores in depth how synthetic world simulations can be used to train the next generation of Advanced Driver Assistance Systems to detect and alert the driver of possible risks and dangers during autonomous driving sessions. As autonomous driving is still rolling out, we are far from the point where cars can truly be autonomous in any given environment and scenario, and there are still quite a fair number of challenges to overcome. Semi-autonomous cars, including those from Tesla, BMW, and Mercedes, have already been on the road for years, but even recently some of these cars have been involved in accidents that could have been avoided had a driver been in control of the vehicle instead of the autonomous systems. This raises the question of why these cars of the future are so prone to accidents and what the best way to overcome this problem is. The answer lies in the use of synthetic worlds for designing more efficient ADAS in the least amount of time for the automobile of the future. This thesis explores a number of research areas, starting with the development of an open-source driving simulator that, compared to the state of the art, is cheaper and more efficient to deploy at almost any location. A typical driving simulator can cost between £10,000 and as much as £500,000; our approach has brought this cost down to less than £2,000 while providing the same visual fidelity and accuracy as the more expensive simulators on the market. On the hardware side, our simulator consists of only four main components: a CPU case, monitors, steering wheel/pedals, and webcams. This allows the simulator to be shipped to any location without any complicated setup. Compared to other state-of-the-art simulators such as CARLA, the setup and programming time is quite low: if a perception-reaction-time (PRT) based setup requires 10 days on state-of-the-art simulators, the same aspect can be programmed on our simulator in as little as 15 minutes, as the simulator is designed from the ground up to record accurate PRT. The simulator was then successfully used to record accurate perception reaction times among 40 subjects under different driving conditions. The results highlight the fact that not all secondary tasks result in higher reaction times. Moreover, the overall reaction times were recorded at 3.51 seconds for the hands and 2.47 seconds for the feet. The study highlights the importance of mental workload during autonomous driving, a vastly important aspect of designing ADAS. The novelty of this study resulted in the generation of a new dataset, comprising 1.44 million images targeted at driver-vehicle interactions, that can be used by researchers and engineers to develop advanced driver assistance systems. The simulator was then further modified to generate high-fidelity weather simulations which, compared to simulators like CARLA, provide more control over cloud formation, giving researchers more variables to test during simulations and image generation.
The resulting synthetic weather dataset, called the Weather Drive Dataset, is unique and novel in nature as it is the largest synthetic weather dataset currently available to researchers, comprising 108,333 images with varying weather conditions. Most state-of-the-art datasets contain only non-automotive images or are not synthetic at all. The proposed dataset was evaluated against the Berkeley DeepDrive dataset, resulting in 74% accuracy. This demonstrated that synthetic datasets are valid for training the next generation of vision-based weather classifiers for autonomous driving. The studies performed will prove vital in progressing Advanced Driver Assistance Systems research in a number of different ways. The experiments take into account the necessary state-of-the-art methods to compare and differentiate between the proposed methodologies. The most efficient approaches and best practices are also explained in detail, which can provide the necessary support for other researchers to set up similar systems and to design synthetic simulations for other research areas.
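
    As an illustration of how a simulator can log perception reaction times (PRT) like those reported above, the sketch below times the interval between a stimulus onset and the first meaningful control input in a polling loop. The functions read_steering_input() and read_pedal_input() are hypothetical stand-ins for whatever input API the simulator exposes, and the thresholds are arbitrary illustrative values.

```python
# Hedged sketch of perception-reaction-time (PRT) logging in a simulator
# loop. The input readers are hypothetical stubs, not a real simulator API.
import time

def read_steering_input():
    return 0.0   # hypothetical stub: steering deflection in degrees

def read_pedal_input():
    return 0.0   # hypothetical stub: brake pedal travel in [0, 1]

def measure_prt(threshold_deg=2.0, threshold_pedal=0.05, timeout_s=10.0):
    """Return seconds from stimulus onset to the first meaningful input."""
    onset = time.perf_counter()              # stimulus presented here
    while time.perf_counter() - onset < timeout_s:
        if abs(read_steering_input()) > threshold_deg:
            return time.perf_counter() - onset   # hand reaction
        if read_pedal_input() > threshold_pedal:
            return time.perf_counter() - onset   # foot reaction
        time.sleep(0.001)                    # ~1 kHz polling
    return None                              # no reaction within timeout
```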

    Visual Analysis in Traffic & Re-identification

    Get PDF

    Fusion of Data from Heterogeneous Sensors with Distributed Fields of View and Situation Evaluation for Advanced Driver Assistance Systems

    Get PDF
    In order to develop a driver assistance system for pedestrian protection, pedestrians in the environment of a truck are detected by radars and a camera and are tracked across distributed fields of view using a Joint Integrated Probabilistic Data Association (JIPDA) filter. A robust approach for predicting the system vehicle's trajectory is presented. It serves the computation of a probabilistic collision risk based on reachable sets, where different sources of uncertainty are taken into account.
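
    The probabilistic collision risk idea in this abstract can be illustrated with a simple Monte Carlo sketch: sample pedestrian positions from the tracker's predicted distribution and count the fraction falling inside the vehicle's predicted occupancy region. This is a didactic simplification under assumed Gaussian uncertainty and an axis-aligned vehicle footprint, not the paper's reachable-set computation.

```python
# Didactic Monte Carlo collision-risk sketch (not the paper's reachable-set
# method): sample the pedestrian's predicted position distribution and count
# the fraction inside the vehicle's predicted footprint.
import numpy as np

def collision_probability(ped_mean, ped_cov, veh_box, n=100_000, seed=0):
    """veh_box = (xmin, xmax, ymin, ymax) of the vehicle's predicted footprint."""
    rng = np.random.default_rng(seed)
    samples = rng.multivariate_normal(ped_mean, ped_cov, size=n)
    xmin, xmax, ymin, ymax = veh_box
    inside = ((samples[:, 0] >= xmin) & (samples[:, 0] <= xmax) &
              (samples[:, 1] >= ymin) & (samples[:, 1] <= ymax))
    return inside.mean()

# Pedestrian predicted 1 s ahead: assumed mean position and covariance.
p = collision_probability(ped_mean=[12.0, 1.5],
                          ped_cov=[[0.8, 0.1], [0.1, 0.4]],
                          veh_box=(10.0, 15.0, -1.0, 1.0))
print(f"estimated collision probability: {p:.3f}")
```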

    Automatic segmentation and reconstruction of traffic accident scenarios from mobile laser scanning data

    Get PDF
    Virtual reconstruction of historic sites, planning of restorations and attachments of new building parts, and forest inventory are a few examples of fields that benefit from the application of 3D surveying data. Compared with the original 2D photo-based documentation and manual distance measurements, the 3D information obtained from multi-camera and laser scanning systems brings a noticeable improvement in surveying times and in the amount of generated 3D information. The 3D data allows detailed post-processing and better visualization of all relevant spatial information. Yet, extracting the required information from the raw scan data and generating usable visual output still requires time-consuming, complex, user-driven data processing with the commercially available 3D software tools. In this context, automatic object recognition from 3D point cloud and depth data has been discussed in many different works. The developed tools and methods, however, usually focus only on a certain kind of object or on the detection of learned invariant surface shapes. Although the resulting methods are applicable for certain data segmentation practices, they are not necessarily suitable for arbitrary tasks due to the varying requirements of the different fields of research. This thesis presents a more broadly applicable solution for automatic scene reconstruction from 3D point clouds, targeting street scenarios, specifically the task of traffic accident scene analysis and documentation. The data, obtained by sampling the scene with a mobile scanning system, is evaluated, segmented, and finally used to generate detailed 3D information of the scanned environment. To realize this aim, this work adapts and validates various existing approaches to laser scan segmentation for application to accident-relevant scene information, including road surfaces and markings, vehicles, walls, trees, and other salient objects. The approaches are evaluated regarding their suitability and limitations for the given tasks, as well as possibilities for combined application together with other procedures. The obtained knowledge is used to develop new algorithms and procedures that allow a satisfying segmentation and reconstruction of the scene, corresponding to the available sampling densities and precisions. Besides the segmentation of the point cloud data, this thesis presents different visualization and reconstruction methods to achieve a wider range of possible applications of the developed system for data export and utilization in different third-party software tools.
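
    As a small, hedged example of the kind of segmentation step discussed here, the snippet below uses Open3D to split a mobile laser scan into a dominant ground plane (RANSAC plane fit) and off-ground object clusters (DBSCAN). "scene.pcd" is a placeholder file, and a real accident-scene pipeline adds many domain-specific stages on top of this.

```python
# Illustrative point cloud segmentation with Open3D: RANSAC ground-plane
# fit, then DBSCAN clustering of the remaining points. "scene.pcd" is a
# placeholder path; real accident-scene pipelines are far more elaborate.
import numpy as np
import open3d as o3d

pcd = o3d.io.read_point_cloud("scene.pcd")

# Fit the dominant plane (typically the road surface) with RANSAC.
plane_model, inliers = pcd.segment_plane(distance_threshold=0.05,
                                         ransac_n=3,
                                         num_iterations=1000)
ground = pcd.select_by_index(inliers)
objects = pcd.select_by_index(inliers, invert=True)

# Cluster the off-ground points into candidate objects (vehicles, trees, ...).
labels = np.array(objects.cluster_dbscan(eps=0.5, min_points=20))
print(f"ground points: {len(ground.points)}, "
      f"clusters found: {labels.max() + 1}")
```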