
    Context-Dependent Information Elements in the Car: Explorative Analysis of Static and Dynamic Head-Up-Displays

    Head-up displays (HUDs) present a fixed, static set of information elements in the driver's primary field of view. Since the display can obscure the real scene, a dynamic HUD instead presents context-dependent information elements. To determine the user-optimal number of information elements and which elements are essential, we conducted a user study with n = 183 participants, focusing the context on an urban, a rural, and a highway trip. A subsequent within-subject experiment using a high-fidelity driving simulator (n = 27) revealed the following: dynamic HUDs significantly lower average overspeeding by 3.45 km/h compared to static HUDs; this speed above the limit equals 15.33% of the average speed in urban areas. Steering angle and speed can capture the driving context. Practitioners can use these findings to reduce the number of information elements in HUDs, thereby possibly increasing traffic safety.
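The claim that steering angle and speed suffice to capture the driving context suggests a simple rule-based classifier over the two signals. The sketch below is a minimal illustration of that idea only; the function name and all threshold values are hypothetical assumptions, not parameters reported by the study.

```python
from statistics import mean

def classify_context(speeds_kmh, steering_angles_deg,
                     urban_max_speed=60.0, highway_min_speed=90.0,
                     winding_threshold_deg=15.0):
    """Classify a driving-context window as urban, rural, or highway.

    All thresholds are illustrative placeholders (assumptions), not
    values taken from the study described above.
    """
    avg_speed = mean(speeds_kmh)
    # Magnitude of steering activity, ignoring turn direction.
    avg_steer = mean(abs(a) for a in steering_angles_deg)
    if avg_speed >= highway_min_speed:
        return "highway"          # fast, typically straight driving
    if avg_speed <= urban_max_speed and avg_steer < winding_threshold_deg:
        return "urban"            # slow with gentle steering
    return "rural"                # mid speeds or winding roads
```

A dynamic HUD could then select which information elements to render based on the returned label, e.g. showing speed-limit information more prominently in the urban case.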

    A Human Factors Study on Automotive Head-Up Display Design

    Doctoral dissertation, Seoul National University Graduate School, Department of Industrial Engineering, College of Engineering, August 2020 (advisor: Woojin Park). Head-up display (HUD) systems were introduced into the automobile industry as a means of improving driving safety. They superimpose safety-critical information on the driver's forward field of view and thereby help drivers keep their eyes forward while driving. Since their first introduction about three decades ago, automotive HUDs have become available in various commercial vehicles. Despite this long history and their potential benefits, however, designing useful automotive HUDs remains a challenging problem. To contribute to the design of useful automotive HUDs, this doctoral dissertation conducted four studies. In Study 1, the functional requirements of automotive HUDs were investigated by reviewing the major automakers' HUD products, academic studies that proposed various automotive HUD functions, and previous studies that surveyed drivers' HUD information needs. The review indicated that: 1) existing commercial HUDs perform largely the same functions as conventional in-vehicle displays, 2) past research proposed various HUD functions for improving driver situation awareness and driving safety, 3) autonomous driving and other new technologies are giving rise to new HUD information, and 4) little research is currently available on HUD users' perceived information needs. Based on these results, the study provides insights into the functional requirements of automotive HUDs and suggests future research directions for automotive HUD design. In Study 2, the interface design of automotive HUDs for communicating safety-related information was examined by reviewing existing commercial HUDs and display concepts proposed in academic research. Each display was analyzed in terms of its functions, behaviors, and structure.
Related human factors display design principles and empirical findings on the effects of interface design decisions were also reviewed where information was available. The results indicated that: 1) the information characteristics suitable for the contact-analog and unregistered display formats, respectively, are still largely unknown, 2) new types of displays could be developed by combining or mixing existing displays or display elements at both the information and interface-element levels, and 3) human factors display principles must be applied appropriately to the situation and only to the extent that the resulting display respects the limits of human information processing; achieving balance among the principles is important to an effective design. On the basis of these results, the review suggests design possibilities and future research directions for the interface design of safety-related automotive HUD systems. In Study 3, automotive HUD-based take-over request (TOR) displays were developed and evaluated in terms of drivers' take-over performance and visual scanning behavior in a highly automated driving situation. Four types of TOR displays were comparatively evaluated in a driving simulator study: Baseline (an auditory beeping alert), Mini-map, Arrow, and Mini-map-and-Arrow. Baseline simply alerts the driver to an imminent take-over and was always included when the other three displays were provided. Mini-map provides situational information. Arrow presents the action-direction information for the take-over. Mini-map-and-Arrow provides the action direction together with the relevant situational information. This study also investigated the relationship between drivers' initial trust in the TOR displays and their take-over and visual scanning behavior.
The results indicated that providing a machine-made decision combined with the relevant situational information, as in Mini-map-and-Arrow, yielded the best overall results in the take-over scenario. Drivers' initial trust in the TOR displays was also found to have significant associations with their take-over and visual behavior: the higher-trust group relied primarily on the proposed TOR displays, while the lower-trust group tended to check the situational information more through traditional displays such as the side-view or rear-view mirrors. In Study 4, the effects of interactive HUD imagery location on driving and secondary-task performance, driver distraction, preference, and workload associated with using a scrolling list while driving were investigated. A total of nine full-windshield HUD imagery locations were examined in a driving simulator study. The results indicated that HUD imagery location affected all dependent measures, that is, driving and task performance, visual distraction, preference, and workload. Considering both objective and subjective evaluations, interactive HUDs should be placed near the driver's line of sight, especially near the bottom-left of the windshield.

    LiDAR-derived digital holograms for automotive head-up displays.

    A holographic automotive head-up display was developed to project 2D and 3D ultra-high-definition (UHD) images derived from LiDAR data into the driver's field of view. The LiDAR data were collected with a 3D terrestrial laser scanner and converted to computer-generated holograms (CGHs). Reconstructions were obtained with a HeNe laser and a UHD spatial light modulator with a panel resolution of 3840×2160 px for replay-field projections. By decreasing the focal distance of the CGHs, the zero-order spot was diffused into the holographic replay-field image. 3D holograms were observed floating as a ghost image at a variable focal distance by incorporating a digital Fresnel lens into the CGH together with a concave lens. This project was funded by the EPSRC Centre for Doctoral Training in Connected Electronic and Photonic Systems (CEPS) (EP/S022139/1), Project Reference: 2249444.
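The conversion from a LiDAR point cloud to a CGH is commonly done by the point-source method: each scanned point contributes a spherical wavefront at the modulator plane, the contributions are summed, and only the phase is kept for a phase-only SLM. The sketch below illustrates that general method under stated assumptions; the pixel pitch, the HeNe wavelength (633 nm), and the function name are assumptions, and a production pipeline would add sampling, aliasing, and quantization steps that the abstract does not detail.

```python
import numpy as np

def point_cloud_cgh(points_m, slm_shape=(2160, 3840), pitch_m=3.74e-6,
                    wavelength_m=633e-9):
    """Phase-only CGH from 3D points via point-source summation.

    points_m: iterable of (x, y, z) coordinates in metres, z > 0
    being the distance in front of the SLM plane. Pitch and
    wavelength defaults are illustrative assumptions.
    """
    rows, cols = slm_shape
    # Physical pixel coordinates on the SLM plane, centred at the origin.
    y = (np.arange(rows) - rows / 2) * pitch_m
    x = (np.arange(cols) - cols / 2) * pitch_m
    X, Y = np.meshgrid(x, y)
    k = 2 * np.pi / wavelength_m            # wavenumber
    field = np.zeros(slm_shape, dtype=complex)
    for px, py, pz in points_m:
        # Distance from each SLM pixel to the point source.
        r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
        field += np.exp(1j * k * r) / r     # spherical-wave contribution
    return np.angle(field)                  # phase pattern for the SLM
```

A digital Fresnel lens, as mentioned above, can be folded into the same pattern by multiplying the summed field by a quadratic phase term exp(-1j * k * (X**2 + Y**2) / (2 * f)) for a chosen focal length f before taking the angle.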

    Human factors in instructional augmented reality for intravehicular spaceflight activities, and how gravity influences the setup of interfaces operated by direct object selection

    In human spaceflight, advanced user interfaces are becoming an interesting means of facilitating human-machine interaction, enhancing and safeguarding the sequences of intravehicular space operations. Efforts to ease such operations have shown strong interest in novel forms of human-computer interaction such as Augmented Reality (AR). The work presented in this thesis is directed towards a user-driven design for AR-assisted space operations, iteratively solving issues arising from the problem space, which also includes consideration of the effect of altered gravity on handling such interfaces.

    Augmented Reality for Railroad Operations Using Head-up Displays

    Contract 693JJ6-18-C-000010. A team from MIT's Human Systems Laboratory designed the locomotive HUD as a wide-field-of-view augmented reality head-up display (AR-HUD). The technical feasibility of an AR-HUD was assessed through a literature review and hardware tests. To study human factors issues, an AR-HUD prototype was designed, reviewed by experienced engineers, and then implemented in the FRA Cab Technology Integration Laboratory simulator. The engineers' behavior was not significantly altered, and using the AR-HUD reduced the time spent looking away from the forward view. Subjective feedback from the engineers confirmed the acceptability and potential benefit of using HUDs.

    Optimizing The Design Of Multimodal User Interfaces

    Due to a current lack of principle-driven multimodal user interface design guidelines, designers may encounter difficulties when choosing the most appropriate display modality for given users or specific tasks (e.g., verbal versus spatial tasks). The development of multimodal display guidelines from both a user and a task-domain perspective is thus critical to achieving successful human-system interaction. Specifically, there is a need to determine how to design task information presentation (e.g., via which modalities) to capitalize on an individual operator's information processing capabilities and the inherent efficiencies of redundant sensory information, thereby alleviating information overload. The present effort addresses this issue by proposing a theoretical framework (Architecture for Multi-Modal Optimization, AMMO) from which multimodal display design guidelines and adaptive automation strategies may be derived. The foundation of the proposed framework extends, at a functional working memory (WM) level, existing information processing theories and models with the latest findings in cognitive psychology, neuroscience, and other allied sciences. The utility of AMMO lies in its ability to provide designers with strategies for directing system design, as well as dynamic adaptation strategies (i.e., multimodal mitigation strategies) that support real-time operations. To validate specific components of AMMO, a subset of AMMO-derived multimodal design guidelines was evaluated in a simulated weapons-control multitasking environment. The results demonstrated significant improvements in user response time and accuracy when multimodal display cues were used (i.e., auditory and tactile, individually and in combination) to augment the visual display of information, thereby distributing human information processing across multiple sensory and WM resources.
These results provide initial empirical support for the overall AMMO model and a subset of the principle-driven multimodal design guidelines derived from it. The empirically validated guidelines may be applicable to a wide range of information-intensive, computer-based multitasking environments.

    On Inter-referential Awareness in Collaborative Augmented Reality

    For successful collaboration to occur, a workspace must support inter-referential awareness - the ability for one participant to refer to a set of artifacts in the environment, and for that reference to be correctly interpreted by others. While referring to objects in our everyday environment is a straightforward task, the non-tangible nature of digital artifacts presents new interaction challenges. Augmented reality (AR) is inextricably linked to the physical world, and it is natural to believe that re-integrating physical artifacts into the workspace makes referencing tasks easier; however, we find that these environments combine the referencing challenges of several computing disciplines, which compound across scenarios. This dissertation presents our studies of this form of awareness in collaborative AR environments. It stems from our research in developing mixed reality environments for molecular modeling, where we explored spatial and multimodal referencing techniques. To encapsulate the myriad factors found in collaborative AR, we present a generic theoretical framework and apply it to analyze this domain. Because referencing is a very human-centric activity, we present the results of an exploratory study that examines how participants behave and generate references to physical and virtual content in co-located and remote scenarios; we found that participants refer to content using both physical and virtual techniques, and that shared video is highly effective in disambiguating references in remote environments. By implementing user feedback from this study, a follow-up study explores how the environment can passively support referencing, where we discovered the role that virtual referencing plays during collaboration.
A third study was conducted to better understand the effectiveness of giving and interpreting references using a virtual pointer; the results suggest that participants need to be parallel with the arrow vector (strengthening the argument for shared viewpoints) and highlight the importance of shadows in non-stereoscopic environments. Our contributions include a framework for analyzing the domain of inter-referential awareness, the development of novel referencing techniques, the presentation and analysis of our findings from multiple user studies, and a set of guidelines to help designers support this form of awareness.