
    Novel Deep Learning Models for Medical Imaging Analysis

    Deep learning is a sub-field of machine learning in which models are developed to imitate how the human brain processes data and creates patterns for decision making. This dissertation focuses on developing deep learning models for medical imaging analysis across modalities and tasks, including detection, segmentation, and classification. The imaging modalities studied include digital mammography (DM), magnetic resonance imaging (MRI), positron emission tomography (PET), and computed tomography (CT), for various medical applications. The first phase of the research develops a novel shallow-deep convolutional neural network (SD-CNN) model for improved breast cancer diagnosis. The model takes one type of medical image as input and synthesizes a different modality as an additional feature source; both the original and the synthetic image are used for feature generation. The proposed architecture is validated in breast cancer diagnosis and shown to outperform competing models. Motivated by the success of the first phase, the second phase focuses on improving medical image synthesis with a more advanced deep learning architecture. A new architecture named the deep residual inception encoder-decoder network (RIED-Net) is proposed. RIED-Net preserves pixel-level information and supports cross-modality feature transfer. Its applicability is validated in breast cancer diagnosis and Alzheimer's disease (AD) staging. Recognizing that medical imaging research often involves multiple inter-related tasks, namely detection, segmentation, and classification, the third phase develops a multi-task deep learning model. Specifically, a feature-transfer-enabled multi-task deep learning model (FT-MTL-Net) is proposed to transfer high-resolution features from the segmentation task to the low-resolution-feature-based classification task. FT-MTL-Net is applied to breast cancer detection, segmentation, and classification using DM images. As a continuing effort to explore transfer learning in deep models for medical applications, the last phase develops a deep learning model that transfers both features and knowledge from a pre-training age-prediction task to the new domain of predicting conversion from mild cognitive impairment (MCI) to AD. It is validated in predicting MCI patients' conversion to AD using 3D MRI images.
    Dissertation/Thesis. Doctoral Dissertation, Industrial Engineering, 201
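To make the feature-transfer idea concrete, below is a minimal PyTorch sketch of a multi-task network in the spirit of FT-MTL-Net, in which high-resolution features from the segmentation decoder are pooled and fused into the classification branch. The layer sizes, fusion scheme, and class name are illustrative assumptions, not the dissertation's actual architecture.

```python
# Minimal sketch (hypothetical sizes) of feature-transfer multi-task learning:
# a shared encoder feeds a segmentation decoder, and the decoder's
# high-resolution features are pooled into the classification branch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureTransferMTL(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(              # shared feature extractor
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.seg_decoder = nn.Sequential(          # restores spatial resolution
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(32, 1, 1)        # pixel-wise mask logits
        self.cls_fc = nn.Linear(64 + 32, num_classes)

    def forward(self, x):
        z = self.encoder(x)                        # low-resolution features
        hi = self.seg_decoder(z)                   # high-resolution features
        mask_logits = self.seg_head(hi)            # segmentation task output
        # Feature transfer: pooled high-res features join the classifier input.
        fused = torch.cat([F.adaptive_avg_pool2d(z, 1).flatten(1),
                           F.adaptive_avg_pool2d(hi, 1).flatten(1)], dim=1)
        return mask_logits, self.cls_fc(fused)

# Example: mask, logits = FeatureTransferMTL()(torch.randn(4, 1, 128, 128))
# Joint training would combine the two losses with a weighting factor, e.g.
# loss = seg_loss + lambda_cls * cls_loss, so the classifier benefits from the
# spatial detail the segmentation branch is forced to retain.
```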

    Detail synthesis for image-based texturing


    An Ergonomic Study on the Design of Automotive Head-Up Displays

    Doctoral dissertation (Ph.D.) -- Graduate School of Seoul National University: College of Engineering, Department of Industrial Engineering, August 2020. Advisor: Woojin Park.

Head-up display (HUD) systems were introduced into the automobile industry as a means of improving driving safety. They superimpose safety-critical information on the driver's forward field of view and thereby help drivers keep their eyes forward while driving. Since their first introduction about three decades ago, automotive HUDs have become available in various commercial vehicles. Despite this long history and their potential benefits, however, the design of useful automotive HUDs remains a challenging problem. To contribute to the design of useful automotive HUDs, this doctoral dissertation research conducted four studies.

In Study 1, the functional requirements of automotive HUDs were investigated by reviewing the major automakers' HUD products, academic studies that proposed various automotive HUD functions, and previous studies that surveyed drivers' HUD information needs. The review indicated that: 1) existing commercial HUDs perform largely the same functions as conventional in-vehicle displays; 2) past research studies proposed various HUD functions for improving driver situation awareness and driving safety; 3) autonomous driving and other new technologies are giving rise to new HUD information; and 4) little research is currently available on HUD users' perceived information needs. Based on these results, the study provides insights into the functional requirements of automotive HUDs and suggests future research directions for automotive HUD design.

In Study 2, the interface design of automotive HUDs for communicating safety-related information was examined by reviewing existing commercial HUDs and display concepts proposed in academic research. Each display was analyzed in terms of its functions, behaviors, and structure. Related human-factors display design principles and empirical findings on the effects of interface design decisions were also reviewed where information was available. The results indicated that: 1) the information characteristics suitable for the contact-analog and unregistered display formats, respectively, are still largely unknown; 2) new types of displays could be developed by combining or mixing existing displays or display elements at both the information and interface-element levels; and 3) human-factors display principles must be applied appropriately to the situation and only to the extent that the resulting display respects the limitations of human information processing, with balance among the principles being important to an effective design. On the basis of these results, the review suggests design possibilities and future research directions for the interface design of safety-related automotive HUD systems.

In Study 3, automotive HUD-based take-over request (TOR) displays were developed and evaluated in terms of drivers' take-over performance and visual scanning behavior in a highly automated driving situation. Four types of TOR displays were comparatively evaluated in a driving simulator study: Baseline (an auditory beeping alert), Mini-map, Arrow, and Mini-map-and-Arrow. Baseline simply alerts the driver to an imminent take-over and was always included when the other three displays were provided. Mini-map provides situational information; Arrow presents the action-direction information for the take-over; Mini-map-and-Arrow provides the action direction together with the relevant situational information. The study also investigated the relationship between drivers' initial trust in the TOR displays and their take-over and visual scanning behavior. The results indicated that providing a machine-made decision combined with situational information, as in Mini-map-and-Arrow, yielded the best overall results in the take-over scenario. Drivers' initial trust in the TOR displays was also found to have significant associations with their take-over and visual behavior: the higher-trust group relied primarily on the proposed TOR displays, while the lower-trust group tended to check the situational information more through traditional displays such as the side-view and rear-view mirrors.

In Study 4, the effects of interactive HUD imagery location on driving and secondary-task performance, driver distraction, preference, and workload associated with using a scrolling list while driving were investigated. A total of nine HUD imagery locations across the full windshield were examined in a driving simulator study. The results indicated that HUD imagery location affected all dependent measures: driving and task performance, drivers' visual distraction, preference, and workload. Considering both the objective and subjective evaluations, interactive HUDs should be placed near the driver's line of sight, especially near the bottom-left of the windshield.

An automotive head-up display is an in-vehicle display that presents needed information in front of the driver, helping the driver keep their eyes forward while driving; this can reduce driver distraction and improve safety. Head-up display systems were first introduced into the automotive industry about three decades ago as a means of improving driver safety and have since been used in a variety of production vehicles. For reasons of both safety and convenience, their use is expected to keep growing. Despite these potential benefits and growth prospects, however, designing a useful automotive head-up display remains a difficult problem. To address this problem and ultimately contribute to the design of useful automotive head-up displays, this research carried out four studies. The first study addressed the functional requirements of automotive head-up displays, asking what information a HUD system should provide. Major automakers' HUD products, academic studies proposing various HUD functions, and drivers' information needs were comprehensively surveyed using a systematic literature review methodology. The study delivers integrated knowledge on HUD functional requirements covering the developer, researcher, and user perspectives, and from this suggests future research directions. The second study addressed the interface design of automotive HUDs that provide safety-related information, asking how such information should be presented. The display concepts used in production HUD systems and those proposed in the academic literature were reviewed through a systematic literature review. The findings were organized by each display's function, structure, and behavior and examined together with the relevant human-factors display design principles and empirical findings; on this basis, future research directions for the interface design of safety-related automotive HUDs were proposed. The third study concerned the design and evaluation of automotive HUD-based take-over interfaces. A take-over is the transition from autonomous driving to manual driving by the driver; when a sudden take-over request occurs, the driver must quickly assess the situation and make decisions to respond safely, so interfaces that effectively support this need to be studied. This study proposed four HUD-based take-over displays (Baseline, Mini-map, Arrow, and Mini-map-and-Arrow) and evaluated them in a driving simulator experiment in terms of take-over performance, eye-movement patterns, and drivers' subjective ratings. Drivers' initial trust in each proposed display was also measured, and the analysis examined how take-over performance, eye-movement patterns, and subjective ratings differed with the average trust score for each display. The experiment showed that, in the take-over situation, the display presenting the automated system's suggested action together with the relevant surrounding-situation information gave the best results. Drivers' initial trust scores were also found to be closely related to how the displays were actually used: drivers were divided into higher- and lower-trust groups, and while the higher-trust group tended to trust and follow the information shown by the proposed displays, the lower-trust group tended to verify the surrounding situation more through the rear-view and side-view mirrors. The fourth study determined the optimal location of an interactive head-up display on the full windshield. A driving simulator experiment evaluated drivers' driving performance, interactive-display task performance, visual distraction, preference, and workload as a function of display location. Nine locations, spaced at regular intervals across the windshield, were considered. The interactive display used in the study was a single scrolling-list display for music selection, operated via buttons mounted on the steering wheel. The results showed that the location of the interactive head-up display affected all evaluation measures, that is, driving performance, display task performance, visual distraction, preference, and workload. Considering all measures, the optimal location was within the driver's direct forward field of view, near the bottom-left of the windshield.

Table of Contents
  Abstract; Contents; List of Tables; List of Figures
  Chapter 1 Introduction
    1.1 Research Background
    1.2 Research Objectives and Questions
    1.3 Structure of the Thesis
  Chapter 2 Functional Requirements of Automotive Head-Up Displays: A Systematic Review of Literature from 1994 to Present
    2.1 Introduction
    2.2 Method
    2.3 Results
      2.3.1 Information Types Displayed by Existing Commercial Automotive HUD Systems
      2.3.2 Information Types Previously Suggested for Automotive HUDs by Research Studies
      2.3.3 Information Types Required by Drivers (Users) for Automotive HUDs and Their Relative Importance
    2.4 Discussion
      2.4.1 Information Types Displayed by Existing Commercial Automotive HUD Systems
      2.4.2 Information Types Previously Suggested for Automotive HUDs by Research Studies
      2.4.3 Information Types Required by Drivers (Users) for Automotive HUDs and Their Relative Importance
  Chapter 3 A Literature Review on Interface Design of Automotive Head-Up Displays for Communicating Safety-Related Information
    3.1 Introduction
    3.2 Method
    3.3 Results
      3.3.1 Commercial Automotive HUDs Presenting Safety-Related Information
      3.3.2 Safety-Related HUDs Proposed by Academic Research
    3.4 Discussion
  Chapter 4 Development and Evaluation of Automotive Head-Up Displays for Take-Over Requests (TORs) in Highly Automated Vehicles
    4.1 Introduction
    4.2 Method
      4.2.1 Participants
      4.2.2 Apparatus
      4.2.3 Automotive HUD-based TOR Displays
      4.2.4 Driving Scenario
      4.2.5 Experimental Design and Procedure
      4.2.6 Experiment Variables
      4.2.7 Statistical Analyses
    4.3 Results
      4.3.1 Comparison of the Proposed TOR Displays
      4.3.2 Characteristics of Drivers' Initial Trust in the Four TOR Displays
      4.3.3 Relationship between Drivers' Initial Trust and Take-over and Visual Behavior
    4.4 Discussion
      4.4.1 Comparison of the Proposed TOR Displays
      4.4.2 Characteristics of Drivers' Initial Trust in the Four TOR Displays
      4.4.3 Relationship between Drivers' Initial Trust and Take-over and Visual Behavior
    4.5 Conclusion
  Chapter 5 Human Factors Evaluation of Display Locations of an Interactive Scrolling List in a Full-windshield Automotive Head-Up Display System
    5.1 Introduction
    5.2 Method
      5.2.1 Participants
      5.2.2 Apparatus
      5.2.3 Experimental Tasks and Driving Scenario
      5.2.4 Experiment Variables
      5.2.5 Experimental Design and Procedure
      5.2.6 Statistical Analyses
    5.3 Results
    5.4 Discussion
    5.5 Conclusion
  Chapter 6 Conclusion
    6.1 Summary and Implications
    6.2 Future Research Directions
  Bibliography
  Appendix A. Display Layouts of Some Commercial HUD Systems
  Appendix B. Safety-related Displays Provided by the Existing Commercial HUD Systems
  Appendix C. Safety-related HUD Displays Proposed by Academic Research
  Korean Abstract

    Dream360: Diverse and Immersive Outdoor Virtual Scene Creation via Transformer-Based 360 Image Outpainting

    360 images, with a field-of-view (FoV) of 180x360, provide immersive and realistic environments for emerging virtual reality (VR) applications, such as virtual tourism, where users want to create diverse panoramic scenes from a narrow-FoV photo they take from a viewpoint via portable devices. This raises a technical challenge: how to allow users to freely create diverse and immersive virtual scenes from a narrow-FoV image with a specified viewport? To this end, we propose a transformer-based 360 image outpainting framework called Dream360, which can generate diverse, high-fidelity, and high-resolution panoramas from user-selected viewports while accounting for the spherical properties of 360 images. Compared with existing methods, e.g., [3], which primarily focus on inputs with rectangular masks at central locations and overlook the spherical property of 360 images, Dream360 offers higher outpainting flexibility and fidelity based on a spherical representation. Dream360 comprises two key learning stages: (I) codebook-based panorama outpainting via a Spherical-VQGAN (S-VQGAN), and (II) frequency-aware refinement with a novel frequency-aware consistency loss. Specifically, S-VQGAN learns a sphere-specific codebook from spherical harmonic (SH) values, providing a better representation of the spherical data distribution for scene modeling. The frequency-aware refinement matches the resolution and further improves the semantic consistency and visual fidelity of the generated results. Dream360 achieves significantly lower Fréchet Inception Distance (FID) scores and better visual fidelity than existing methods. We also conducted a user study with 15 participants to interactively evaluate the quality of the generated results in VR, demonstrating the flexibility and superiority of the Dream360 framework.
    Comment: 11 pages, accepted to IEEE VR 202
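As an illustration of the refinement stage's objective, here is a minimal PyTorch sketch of what a frequency-aware consistency loss could look like: generated and reference panoramas are compared in the 2-D Fourier domain, with higher spatial frequencies up-weighted. The radial weighting scheme and the `high_freq_weight` parameter are assumptions for illustration, not the paper's exact loss.

```python
# Minimal sketch of a frequency-aware consistency loss (illustrative weighting,
# not the Dream360 paper's exact formulation).
import torch

def frequency_consistency_loss(pred: torch.Tensor,
                               target: torch.Tensor,
                               high_freq_weight: float = 2.0) -> torch.Tensor:
    """pred, target: (B, C, H, W) image tensors."""
    pred_f = torch.fft.fft2(pred)       # 2-D FFT over the last two dims
    target_f = torch.fft.fft2(target)
    _, _, h, w = pred.shape
    # Radial frequency grid: 0 at the DC term, growing toward high frequencies.
    fy = torch.fft.fftfreq(h, device=pred.device).view(h, 1)
    fx = torch.fft.fftfreq(w, device=pred.device).view(1, w)
    radius = torch.sqrt(fx ** 2 + fy ** 2)
    weight = 1.0 + high_freq_weight * (radius / radius.max())
    # Weighted L1 distance between complex spectra (magnitude of difference).
    return (weight * (pred_f - target_f).abs()).mean()

# Example: loss = frequency_consistency_loss(torch.rand(2, 3, 64, 64),
#                                            torch.rand(2, 3, 64, 64))
```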

    View generation for three-dimensional scenes from video sequences
