1,116 research outputs found

    μ‚¬μš©μž 생성 μ½˜ν…μΈ  μ›Ή μ‚¬μ΄νŠΈμ˜ λ™μ˜μƒ ν™•μ‚°-Bilibili기반으둜 싀증 뢄석

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Interdisciplinary Program in Technology Management, Economics and Policy, August 2019. Advisor: Junseok Hwang.
    User-generated content (UGC) emphasizes user value and, with the development of Web 2.0, plays an important role in both information diffusion and marketing. It is therefore necessary to identify which factors influence information diffusion on UGC websites. Based on a video UGC website, Bilibili, this thesis seeks the influential factors in the video diffusion process. Unlike previous research, which mainly focuses on the social network, it mainly uses video characteristic data to explore the factors that drive video diffusion. First, plotting the growth of view counts within one month shows that video diffusion on Bilibili follows a distinctive curve with three stages. Then, through careful analysis of each diffusion period, the thesis finds that the influential factors differ across periods. The results show that greater interaction between users and content attracts more viewers and improves the diffusion rate, and they also show the impact of comments below the video, sharing activity, and video quality on diffusion. The thesis closes with practical implications for web designers, for people who use UGC as a marketing tool, and for users who want to become UGC content producers.
    Table of contents: 1. Introduction; 2. Literature review (2.1 UGC and Bilibili; 2.2 Video diffusion in UGC; 2.3 Hypotheses); 3. Data and methodology (3.1 Data; 3.2 Methodology); 4. Analysis results and conclusion (4.1 Analysis result; 4.2 Conclusion: 4.2.1 Daily views, 4.2.2 Video attributes); 5. Implications and limitations (5.1 Implications; 5.2 Limitations); References; Appendix; Korean abstract.
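    The thesis does not reproduce its estimation code, so the following is only a minimal sketch, on hypothetical synthetic data, of one way to characterize a three-stage S-shaped diffusion curve: fit a logistic curve to cumulative daily views and split it at the fast-growth interval. All names, parameters, and numbers below are illustrative, not the author's.

        # A minimal sketch (hypothetical, not the thesis's code): fit a logistic
        # diffusion curve to 30 days of cumulative views and split it into three
        # stages (slow start / rapid growth / saturation).
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, K, r, t0):
            # K = saturation level, r = growth rate, t0 = inflection day
            return K / (1.0 + np.exp(-r * (t - t0)))

        days = np.arange(30.0)
        rng = np.random.default_rng(0)
        views = logistic(days, 10_000, 0.5, 8) + rng.normal(0, 150, days.size)

        (K, r, t0), _ = curve_fit(logistic, days, views, p0=[views.max(), 0.3, 10])
        daily = np.diff(logistic(days, K, r, t0))       # fitted daily increments
        rapid = np.where(daily > 0.5 * daily.max())[0]  # the fast-growth stage
        print(f"stage boundaries: day {rapid[0]} and day {rapid[-1] + 1}")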

    μ•ˆμ •μ μΈ, λΉ„λ””μ˜€ 기반의 곡정 이상 탐지

    Thesis (Master's) -- Seoul National University Graduate School: College of Engineering, Department of Mechanical Engineering, August 2021. Advisor: Jongwoo Park.
    Industrial video anomaly detection is an important problem in industrial inspection, with features distinct from video anomaly detection in other application domains such as surveillance. No public datasets pertinent to the problem have been developed, and accordingly, robust models suited for industrial video anomaly detection have yet to be developed. This thesis examines the key differences that distinguish industrial video anomaly detection from its generic counterparts: the relatively small amount of video data available, the lack of diversity among frames within the video clips, and the absence of labels that indicate anomalies. Because of these differences, existing models applied to the industrial setting raise false alarms too often when elements irrelevant to the inspected motion appear or move. We then propose a robust framework for industrial video inspection that addresses these specific challenges. One novel aspect of the framework is a model that masks regions in frames that are irrelevant to the inspection task. We show that the framework outperforms existing methods when validated on a novel database that replicates video clips of real-world automated tasks; by releasing the framework and the database, we hope to promote further research in this field.
    Table of contents: 1 Introduction (1.1 Related Works; 1.2 Contributions of Our Work; 1.3 Organization); 2 Preliminaries (2.1 Weakly Supervised Learning: Supervised and Unsupervised Learning, Demystification of Weakly Supervised Learning; 2.2 Class Activation Maps: Overview on Visualizing Activations, CAM, Grad-CAM, Eigen-CAM; 2.3 Dynamic Time Warping; 2.4 Label Smoothing: Review on the Cross Entropy Function, Summary on Label Smoothing); 3 Robust Framework for Industrial Video Anomaly Detection (3.1 Components of the Framework: Anomaly Detection Model, Background Masking Model, Fusing Results from the Components; 3.2 Details of the Weakly Supervised Learning Method: Partition Order Prediction Task, Partition Order Labels, Conditioning the Labels); 4 Experiments (4.1 Database for Industrial Video Anomaly Detection; 4.2 Ideal Background Mask; 4.3 Masking Using the Proposed Method vs Using an Ideal Mask; 4.4 Performance Enhancement Using the Proposed Method; 4.5 Ablation Study: Number of Layers for Eigen-CAM, Threshold on Attention Maps, Temporal Smoothing Window Size); 5 Conclusion; Appendix (Experimental Results for All Tasks in the Database); Bibliography; Korean abstract.

    μ΄μ•ΌκΈ°ν˜• μ„€λͺ…문을 ν™œμš©ν•œ λŒ€κ·œλͺ¨ λΉ„λ””μ˜€ ν•™μŠ΅ 연ꡬ

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Computer Science and Engineering, February 2021. Advisor: Gunhee Kim.
    Extensive contributions are being made to develop intelligent agents that can recognize and communicate with the world. In this vein, various video-language tasks have drawn strong interest in computer vision research, including image/video captioning, video retrieval, and video question answering. These tasks can support high-level computer vision applications and various future industries such as search engines, social marketing, automated driving, and robotics through QA and dialogue generation about the surrounding environment. Despite these developments, however, video-language learning suffers from a higher degree of complexity. This dissertation investigates methodologies for learning the relationship between videos and free-form language, including explanations, conversations, and question-and-answers, so that the machine can easily adapt to target downstream tasks. First, we introduce several methods to learn the relationship between long sentences and videos efficiently. We present approaches for supervising human attention transfer in video attention models, showing that the video attention mechanism can benefit from explicit human gaze labels. Next, we introduce an end-to-end semantic attention method, which further reduces the complexity of the visual attention algorithm by using representative visual concept words detected by an attention-based detector. As a follow-up, we introduce JSFusion (Joint Sequence Fusion), which enables efficient video search and QA through many-to-many matching in the attention model. We then introduce CiSIN (Character in Story Identification Network), which uses attention to raise the performance of character grounding and character re-identification in movies. Finally, we introduce Transitional Adaptation, which helps caption generation models produce coherent narratives for long videos. In summary, this dissertation presents novel approaches to automatic video description generation and retrieval, and shows the benefits of extracting linguistic knowledge about objects and motion in video as well as the advantage of multimodal audio-visual learning for understanding videos. The attention modules learned through video captioning transfer to retrieval, question answering, and person search networks, achieving state-of-the-art performance on several problems simultaneously; since the proposed methods adapt easily to any video-language task, they are expected to bring additional performance improvements to the latest models. Moving forward, we plan to design an unsupervised video learning framework that can solve many challenges in industry by integrating an unlimited amount of video, audio, and free-form language data from the web.
    Table of contents: Chapter 1 Introduction (1.1 Contributions; 1.2 Outline of the thesis); Chapter 2 Related Work (2.1 Video Captioning; 2.2 Video Retrieval with Natural Language; 2.3 Video Question and Answering; 2.4 Cross-modal Representation Learning for Vision and Language Tasks); Chapter 3 Human Attention Transfer for Video Captioning (3.1 Introduction; 3.2 Video Datasets for Caption and Gaze; 3.3 Approach: Video Pre-processing and Description, the Recurrent Gaze Prediction (RGP) Model, Construction of Visual Feature Pools, the Decoder for Caption Generation, Training; 3.4 Experiments: Evaluation of Gaze Prediction, Evaluation of Video Captioning, Human Evaluation via AMT; 3.5 Conclusion); Chapter 4 Semantic Word Attention for Video QA and Video Captioning (4.1 Introduction, Related Work, Contributions; 4.2 Approach: Preprocessing, an Attention Model for Concept Detection, Video-to-Language Models, Models for Description, Fill-in-the-Blank, Multiple-Choice Test, and Retrieval; 4.3 Experiments: the LSMDC Dataset and Tasks, Quantitative Results, Qualitative Results; 4.4 Conclusion); Chapter 5 Joint Sequence Fusion Attention for Multimodal Sequence Data (5.1 Introduction; 5.2 Related Work; 5.3 Approach: Preprocessing, the Joint Semantic Tensor, the Convolutional Hierarchical Decoder, an Illustrative Example of How the JSFusion Model Works, Training, Implementation of Video-Language Models; 5.4 Experiments: LSMDC Dataset and Tasks, MSR-VTT-(RET/MC) Dataset and Tasks, Quantitative Results, Qualitative Results; 5.5 Conclusion); Chapter 6 Character Re-Identification and Character Grounding for Movie Understanding (6.1 Introduction; 6.2 Related Work; 6.3 Approach: Video Preprocessing, Visual Track Embedding, Textual Character Embedding, Character Grounding, Re-Identification, Joint Training; 6.4 Experiments: Experimental Setup, Quantitative Results, Qualitative Results; 6.5 Conclusion); Chapter 7 Transitional Adaptation of Pretrained Models for Visual Storytelling (7.1 Introduction; 7.2 Related Work; 7.3 Approach: the Visual Encoder, the Language Generator, Adaptation Training, the Sequential Coherence Loss, Training with the Adaptation Loss, Fine-tuning and Inference; 7.4 Experiments: Experimental Setup, Quantitative Results, Further Analyses, Human Evaluation Results, Qualitative Results; 7.5 Conclusion); Chapter 8 Conclusion (8.1 Summary; 8.2 Future Works); Bibliography; Korean abstract; Acknowledgements.
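    The abstract describes JSFusion's many-to-many matching between video and language sequences only at a high level. The following is a minimal sketch, under assumed dimensions and names, of the general pattern: score every frame-word pair in a joint embedding space, then softly pool the pairwise similarities into one video-sentence matching score. It illustrates the idea, not the dissertation's model.

        # A minimal sketch (assumptions, not the dissertation's JSFusion code)
        # of many-to-many video-text matching via a pairwise similarity matrix.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class PairwiseMatcher(nn.Module):
            def __init__(self, video_dim=2048, text_dim=300, joint_dim=512):
                super().__init__()
                self.v_proj = nn.Linear(video_dim, joint_dim)
                self.t_proj = nn.Linear(text_dim, joint_dim)

            def forward(self, frames, words):
                # frames: (T, video_dim), words: (L, text_dim) -> scalar score
                v = F.normalize(self.v_proj(frames), dim=-1)
                t = F.normalize(self.t_proj(words), dim=-1)
                sim = v @ t.T                      # (T, L) frame-word scores
                attn = sim.softmax(dim=0)          # each word attends to frames
                return (attn * sim).sum(dim=0).mean()

        matcher = PairwiseMatcher()
        score = matcher(torch.randn(16, 2048), torch.randn(7, 300))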

    Dysphagia in Parkinson's Disease

    Dysphagia is a frequent symptom in Parkinson's disease (PD) and the main cause of aspiration pneumonia and death in patients with PD. It is also associated with nutritional problems, pulmonary complications, and the quality of life of PD patients. Its prevalence in PD patients is very high, varying from 77% to 95%, but the exact pathophysiology and mechanism remain obscure. Dysphagia associated with PD has been reported to affect all stages of swallowing, including the oral, pharyngeal, and esophageal phases, but the oral and pharyngeal phases are more often abnormal than the esophageal phase. Several treatment strategies exist for dysphagia in PD patients, such as rehabilitative treatment including speech therapy and respiratory muscle strengthening, pharmacologic treatment, deep brain stimulation, and surgical treatment. However, the effects of these treatments are still limited, so an individualized and interdisciplinary approach is recommended.

    A Study on Test-Time Adaptive Methodologies for Video Frame Interpolation

    Thesis (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, February 2021. Advisor: Kyoung Mu Lee.
    Computationally handling videos has been one of the foremost goals in computer vision. In particular, analyzing the complex dynamics, including motion and occlusion, between two frames is of fundamental importance in understanding the visual contents of a video. Research on video frame interpolation, where the goal is to synthesize high-quality intermediate frames between two input frames, specifically investigates the low-level characteristics within the consecutive frames of a video. The topic has recently been gaining popularity and can be applied to various real-world applications such as generating slow-motion effects, novel view synthesis, and video stabilization. Existing methods for video frame interpolation design complex new architectures to effectively estimate and compensate for the motion between two input frames. However, natural videos contain a wide variety of scenarios, including differing foreground/background appearance and motion, frame rate, and occlusion. Therefore, even with a huge amount of training data, it is difficult for a single model to generalize well to all possible situations. This dissertation introduces novel methodologies for test-time adaptation in video frame interpolation. In particular, I propose to make three different aspects of the deep-learning-based framework adaptive: (1) feature activations, (2) network weights, and (3) architectural structures. Specifically, I first present how adaptively scaling the feature activations of a deep neural network with respect to each input frame using attention models allows for accurate interpolation. Unlike previous approaches that depend heavily on optical flow estimation models, the proposed channel-attention-based model can achieve high-quality frame synthesis without explicit motion estimation. Then, meta-learning is employed for fast adaptation of the parameter values of frame interpolation models. By learning to adapt to each input video clip, the proposed framework can consistently improve the performance of many existing models with just a single gradient update to their parameters. Lastly, I introduce an input-adaptive dynamic architecture that can assign different inference paths to each local region of the input frames. By deciding the scaling factors of the inputs and the early-exit depth of the interpolation network, the dynamic framework can greatly improve computational efficiency while maintaining, and sometimes even surpassing, the performance of the baseline interpolation method. The effectiveness of the proposed test-time adaptation methodologies is extensively evaluated on multiple benchmark datasets for video frame interpolation. Thorough ablation studies with various hyperparameter settings and baseline networks also demonstrate the superiority of adapting to the test-time inputs, a new research direction orthogonal to the other state-of-the-art frame interpolation approaches.
    Table of contents: 1 Introduction (1.1 Motivations; 1.2 Proposed method; 1.3 Contributions; 1.4 Organization of dissertation); 2 Feature Adaptation based Approach (2.1 Introduction; 2.2 Related works: Video frame interpolation, Attention mechanism; 2.3 Proposed Method: Overview of network architecture, Main components, Loss; 2.4 Understanding our model: Internal feature visualization, Intermediate image reconstruction; 2.5 Experiments: Datasets, Implementation details, Comparison to the state of the art, Ablation study; 2.6 Summary); 3 Meta-Learning based Approach (3.1 Introduction; 3.2 Related works; 3.3 Proposed method: Video frame interpolation problem set-up, Exploiting extra information at test time, Background on MAML, MetaVFI: meta-learning for frame interpolation; 3.4 Experiments: Settings, Meta-learning algorithm selection, Video frame interpolation results, Ablation studies; 3.5 Summary); 4 Dynamic Architecture based Approach (4.1 Introduction; 4.2 Related works: Video frame interpolation, Adaptive inference; 4.3 Proposed Method: Dynamic framework overview, Scale and depth finder (SD-finder), Dynamic interpolation model, Training; 4.4 Experiments: Datasets, Implementation details, Quantitative comparison, Visual comparison, Ablation study; 4.5 Summary); 5 Conclusion (5.1 Summary of dissertation; 5.2 Future works); Bibliography; Korean abstract.

    Comparison of the Macintosh Laryngoscope and the GlideScope Video Laryngoscope in a Cadaver Model of Foreign Body Airway Obstruction

    Purpose: The GlideScope video laryngoscope (GL) has been known to help inexperienced health care providers manage even difficult airways. The purpose of this study was to compare the foreign body removal efficacy of the Macintosh laryngoscope (ML) and the GL in a setting of airway obstruction. Methods: Participants were asked to remove a simulated foreign body (a 2Γ—2 cm rice cake) from the supraglottic area of a freshly embalmed cadaver. This simulated a normal airway and a difficult airway with cervical spine immobilization. Participants performed the removal maneuver 4 times in random order using a Magill forceps with both the ML and the GL. We measured the time to removal (sec) and the preference of the participant (5-point scale) and compared results according to the type of laryngoscope. Successful removal was defined as a removal time of less than 120 sec. Results: Forty participants were enrolled in this simulation experiment. The success rate, time to removal, and provider preference were not significantly different between the two types of laryngoscope. In subgroup analysis of experienced providers, the time to removal was significantly shorter in the ML group than in the GL group (14 vs 20 sec, p<0.05). The preference of experienced providers was also significantly higher for the ML than the GL. Conclusion: This study suggests that the ML has foreign body removal efficacy comparable to the GL and is acceptable to experienced providers.

    Cross-Layer Adaptive Streaming over HTTP

    Thesis (Master's) -- Seoul National University Graduate School: Department of Computer Science and Engineering, February 2016. Advisor: Taekyoung Kwon.
    With the recent proliferation of mobile devices, the expansion of Internet network infrastructure, and the emergence of global video streaming services, demand for video content has grown explosively. As a result, streaming now accounts for more than 70% of total Internet traffic, and this trend keeps strengthening. Against this backdrop, HTTP-based adaptive streaming, which predicts the user's network conditions and serves the best quality accordingly, has been actively studied for several years, and MPEG (Moving Picture Experts Group), the working group for video and audio standardization, standardized DASH (Dynamic Adaptive Streaming over HTTP). Various studies have tried to predict network conditions more accurately with client-side algorithms, but because DASH fundamentally operates over TCP-based HTTP, its performance is strongly bound to TCP's characteristics. TCP's primary goal, however, is reliable delivery, which at times conflicts with DASH's ultimate goal of user QoE (Quality of Experience). This thesis therefore proposes CLASH, a cross-layer adaptive streaming framework spanning the client's TCP layer and the application layer where the DASH player runs. We implemented a system that minimizes the impact on streaming quality even when events that degrade TCP, such as packet loss, occur, and validated it through experiments.
    Table of contents: Chapter 1 Introduction (1.1 Background; 1.2 Organization); Chapter 2 Limitations of prior work (2.1 The transport protocol (TCP); 2.2 The HTTP adaptive streaming protocol (DASH)); Chapter 3 The CLASH framework (3.1 Cross-layer streaming that considers user context; 3.2 Components and operation of CLASH; 3.3 System implementation: packet monitoring system, client DASH algorithm, buffer monitoring technique, loss hiding using raw sockets); Chapter 4 System analysis and evaluation (4.1 Experimental environment; 4.2 Performance evaluation; 4.3 Discussion of results); Chapter 5 Conclusion; References; Abstract.
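    The abstract describes client-side DASH adaptation only in general terms. For orientation, here is a minimal sketch, with a hypothetical bitrate ladder and smoothing constant, of the throughput-based rate selection such a client performs; CLASH's actual algorithm additionally uses TCP-layer signals, which are not reproduced here.

        # A minimal sketch (illustrative, not the CLASH implementation) of
        # client-side DASH rate adaptation: pick the highest representation
        # whose bitrate fits a smoothed throughput estimate.
        BITRATES_KBPS = [400, 800, 1500, 3000, 6000]   # hypothetical ladder

        def smooth(samples, alpha=0.3):
            # Exponentially weighted moving average of per-segment throughput.
            est = samples[0]
            for s in samples[1:]:
                est = alpha * s + (1 - alpha) * est
            return est

        def choose_bitrate(throughput_kbps, safety=0.8):
            est = smooth(throughput_kbps) * safety     # headroom for TCP jitter
            eligible = [b for b in BITRATES_KBPS if b <= est]
            return eligible[-1] if eligible else BITRATES_KBPS[0]

        print(choose_bitrate([2500, 3100, 2800, 1900]))  # -> 1500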

    λ¬΄μ„ ν™˜κ²½μ—μ„œμ˜ RTPλ₯Ό μ΄μš©ν•œ μ‹€μ‹œκ°„ 원격 μ§„λ£Œ μ‹œμŠ€ν…œμ˜ 섀계 및 평가

    생체곡학 ν˜‘λ™κ³Όμ •/석사[ν•œκΈ€]ν™˜κ²½μ˜ μ œμ•½μ΄ μ—†λŠ” 인터넷 기반의 μ‹€μ‹œκ°„ λ©€ν‹°λ―Έλ””μ–΄ μ„œλΉ„μŠ€λ₯Ό μ΄μš©ν•œ 원격 μ˜λ£ŒλŠ” μ‹œ, κ³΅κ°„μ˜ μ œμ•½ 없이 ν™˜μžλ₯Ό μ§„λ‹¨ν•˜κ³  μ μ ˆν•œ 쑰치λ₯Ό μ·¨ν•  수 있게 ν•˜μ—¬ ν™˜μžμ˜ μƒμ‘΄μœ¨, 회볡λ₯ μ— 긍정적 효과λ₯Ό λΌμΉœλ‹€. 이λ₯Ό μœ„ν•΄ 응급 의료 μ„œλΉ„μŠ€λŠ” λ‹€μ–‘ν•œ μ „λ¬Έμ˜μ˜ 쒅합적이고 λ™μ‹œμ μΈ μ„œλΉ„μŠ€κ°€ μ œκ³΅λ˜μ–΄μ•Ό ν•œλ‹€. 인터넷을 ν†΅ν•˜μ—¬ μ‹€μ‹œκ°„ λΉ„λ””μ˜€λ₯Ό μ „μ†‘ν•˜κΈ° μœ„ν•΄μ„œλŠ” μ±„λ„μ˜ μΆ©λΆ„ν•œ λŒ€μ—­ν­, 적은 지연과 적은 νŒ¨ν‚· 손싀 λ“±μ˜ 쑰건이 보μž₯λ˜μ–΄μ•Ό ν•œλ‹€. κ·ΈλŸ¬λ‚˜ ν˜„μž¬μ˜ 인터넷은 λΉ„λ””μ˜€ 전솑에 ν•„μš”ν•œ QoS (Quality of Service)λ₯Ό λ§Œμ‘±μ‹œν‚€κΈ° μœ„ν•΄ λ„€νŠΈμ›Œν¬ κ³„μΈ΅μ—μ„œ μ–΄λ– ν•œ κΈ°λŠ₯도 μ œκ³΅ν•˜μ§€ λͺ»ν•œλ‹€. λ”°λΌμ„œ μ‚¬μš©μžκ°€ μ›ν•˜λŠ” μ„œλΉ„μŠ€μ˜ ν’ˆμ§ˆ 보μž₯은 λ„€νŠΈμ›Œν¬ 계측 μƒμœ„μ—μ„œ μˆ˜ν–‰λ˜μ–΄μ•Όλ§Œ ν•œλ‹€. 이λ₯Ό μœ„ν•΄ μ œμ•ˆλœ 것이 μˆ˜μ†‘κ³„μΈ΅(transport layer) μœ„μ—μ„œ λ™μž‘ν•˜λŠ” μ‹€μ‹œκ°„ μˆ˜μ†‘ ν”„λ‘œν† μ½œ(RTP : Real-time Transport Protocol)κ³Ό μ‹€μ‹œκ°„ μˆ˜μ†‘μ œμ–΄ ν”„λ‘œν† μ½œ(RTCP : Real-time Transport Control Protocol)이닀. 이듀은 μ‘μš© 계측과 μˆ˜μ†‘κ³„μΈ΅ μ‚¬μ΄μ—μ„œ νŒ¨ν‚· 생성 μ‹œ μ‚½μž…λ˜λŠ” 뢀뢄이라고 μ΄ν•΄ν•˜λŠ” 것이 타당할 것이닀. μ‹€μ‹œκ°„ μˆ˜μ†‘ ν”„λ‘œν† μ½œ(RTP)κ³Ό μ‹€μ‹œκ°„ μˆ˜μ†‘μ œμ–΄ ν”„λ‘œν† μ½œ(RTCP)을 μ‚¬μš©ν•˜λ©΄, λΉ„λ””μ˜€ μ „μ†‘μ—μ„œ μ‹œκ°„μ œμ•½μ— λ”°λ₯Έ νŠΉμ„±μ„ κ³ λ €ν•΄ 쀄 수 있고, λ„€νŠΈμ›Œν¬ λ‚΄μ—μ„œ λ°œμƒν•˜λŠ” 손싀에 λŒ€μ²˜ν•  수 μžˆλ‹€.λ³Έ λ…Όλ¬Έμ—μ„œλŠ” RTP ν”„λ‘œν† μ½œμ„ μ‚¬μš©ν•œ μ‹€μ‹œκ°„ 원격 의료 μ§„λ£Œμ‹œμŠ€ν…œμ„ 섀계, κ΅¬ν˜„ν•˜μ˜€λ‹€. μš°μ„  의료 정보 데이터에 μ‹€μ‹œκ°„μ„±μ„ λΆ€μ—¬ν•˜κΈ° μœ„ν•˜μ—¬ UDP κΈ°λ°˜μ— RTPλ₯Ό μ μš©ν•˜μ˜€λ‹€. λ¬΄μ„ λ§μ—μ„œ TCPλ₯Ό μ‚¬μš©ν•˜μ˜€μ„ 경우 μ‹€μ‹œκ°„μ„±μ΄ μ™„λ²½νžˆ 보μž₯λ˜μ§€ μ•Šκ³  λ¬΄μ„ λ§μ—μ„œμ˜ 링크 μ—λŸ¬λ₯Ό 병λͺ©ν˜„상 λ“±μ˜ λ‹€λ₯Έ μ—λŸ¬λ‘œ μΈμ‹ν•˜κΈ° λ•Œλ¬Έμ— λΆˆν•„μš”ν•œ μœˆλ„μš°μ‚¬μ΄μ¦ˆ 쑰절둜 μΈν•œ λŒ€μ—­ν­ λ‚­λΉ„ν˜„μƒμ΄ μžˆμ—ˆλ‹€. 반면 UDPλŠ” 무쑰건적인 μ „μ†‘μœΌλ‘œ μΈν•œ λΉ„ 신뒰적인 전솑과 QoSκ°€ 보μž₯ λ˜μ§€ μ•ŠλŠ” 단점이 μžˆλ‹€. λ•Œλ¬Έμ— λ³Έ μ‹œμŠ€ν…œμ—μ„œλŠ” λŒ€μ—­ν­μ„ μΆ©λΆ„νžˆ ν™œμš©ν•  수 μžˆλŠ” UDP κΈ°λ°˜μ—μ„œ RTP ν”„λ‘œν† μ½œμ„ μ μš©ν•˜μ—¬ μ‹€μ‹œκ°„μ„±μ„ λΆ€μ—¬ν•˜κ³  μ€‘μš”ν•œ ν™˜μž λ°μ΄ν„°λŠ” μš°μ„ κΆŒμ„ λΆ€μ—¬ν•˜μ—¬ μ „μ†‘ν•˜μ˜€λ‹€. 이와 같은 μ•Œκ³ λ¦¬μ¦˜μ„ μ‚¬μš©ν•¨μœΌλ‘œμ¨ μ‘μš©κ³„μΈ΅ μ•„λž˜ λ‹¨μ—μ„œμ˜ μ‹€μ‹œκ°„μ„± 보μž₯ 효과λ₯Ό 확인할 수 μžˆμ—ˆλ‹€.두 λ²ˆμ§Έλ‘œλŠ” λ„€νŠΈμ›Œν¬μ˜ μƒνƒœμ— μœ λ™μ μœΌλ‘œ λŒ€μ‘ν•˜κΈ° μœ„ν•œ 전솑λ₯  μ œμ–΄ μ•Œκ³ λ¦¬μ¦˜μ„ μ œμ•ˆν•˜μ˜€λ‹€. μ •ν™•ν•˜κ³  주기적인 λ„€νŠΈμ›Œν¬ μ„±λŠ₯ μˆ˜μ§‘μ„ μœ„ν•˜μ—¬ RTCPλ₯Ό μ‚¬μš©ν•˜μ˜€λ‹€. RTCPλŠ” ν˜„μž¬ λ„€νŠΈμ›Œν¬μ˜ λŒ€μ—­ν­, 지연, PER λ“±μ˜ 정보λ₯Ό μˆ˜μ§‘ν•˜μ—¬ μ΅œλŒ€ 전솑 보μž₯ κ°€λŠ₯ν•œ 데이터 μ‚¬μ΄μ¦ˆλ₯Ό μ†‘μ‹ λ‹¨μ—κ²Œ μ•Œλ €μ€€λ‹€. 주어진 정보λ₯Ό λΆ„μ„ν•˜μ—¬ μ ν•©ν•œ 전솑λ₯ μ„ κ²°μ •, μ „μ†‘ν•¨μœΌλ‘œμ¨ μ•ˆμ •μ μΈ λŒ€μ—­ν­μ„ μ‚¬μš©ν•  수 있고, 무쑰건적인 μ „μ†‘μœΌλ‘œ μΈν•œ 병λͺ©ν˜„상을 λ°©μ§€ν•¨μœΌλ‘œμ¨ ν™˜μž μ˜μƒ ν’ˆμ§ˆμ˜ 보μž₯에 μš°μ›”ν•œ μ„±λŠ₯을 보일 수 μžˆμŒμ„ ν™•μΈν•˜μ˜€λ‹€. [영문]For quick and correct treatment of disease, real-time telemedicine system that enables medical examination and treatments made by several specialists simultaneously was designed. The system must transmit good quality data without concerning network that has different bandwidth by real time. The important issue of real-time telemedicine system is end-to-end delay constraint. Therefore, UDP(User Datagram Protocol) is more suitable for transporting multimedia data than TCP(Transmission Control Protocol). But UDP does not control network congestion and guarantee QoS(Quality of Service). There are many research results to complement UDP. 
One of these results is RTP(Real-Time Transport Protocol) and RTCP(Real-Time Transport Protocol). RTP and RTCP is mainly designed for use with UDP(User Datagram Protocol) for multimedia transport over the Internet. It has the capability of media-synchronization and network''s QoS feedback to compensate for the weakness of UDP. In this thesis, the RTP/RTCP UDP protocol is implemented over a windows pc system, and integrate with a MPEG-4 video codec system. The performance of channel rate control algorithms for VBR(Variable Bit Rate) video transmission is compared by the implemented test system. They are a TCP-Friendly rate control algorithm and a modified algorithm with a hounded minimum and maximum bitrate. It is shown that the modified algorithm provide wider range of controlled bit rates depending on the packet loss ratio and the throughput than the TCP or UDP algorithm.In this paper, we designed and evaluated the real-time multimedia telemedicine system using RTP over CDMA 1X-EVDO. To evaluate this system, we designed the telemedicine system which is based on the RTP. RTP can guarantee realtime transmission at the transport layer, that can''t guarantee at the network layer. and then we designed the RTCP protocol. The RTCP Packets can analyze network traffic and packet loss rate. Using RTCP report, we can control the QoS(Quality of Service) such as transmission rate control or priority control. The performances of the proposed algorithm and system in terms of throughput variation, RTT(Round Trip Time), jitter and PSNR(Peak Signal to Noise Ratio) were shown better performance more than UDP or native RTP.ope
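    The abstract states that the sender adjusts its rate from RTCP reports within a bounded minimum and maximum bitrate, but gives no formula. The sketch below is one plausible shape for such a controller, with made-up thresholds and step sizes; it is illustrative, not the thesis's algorithm.

        # A minimal sketch (hypothetical constants, not the thesis's system) of
        # RTCP-driven sender rate control: back off on reported loss, probe
        # upward on a clean path, and stay within fixed bitrate bounds.
        MIN_KBPS, MAX_KBPS = 128, 2048

        def update_rate(current_kbps, loss_fraction, rtt_ms):
            # loss_fraction and rtt_ms come from an RTCP receiver report.
            if loss_fraction > 0.05:      # heavy loss: multiplicative back-off
                rate = current_kbps * 0.75
            elif loss_fraction > 0.01:    # light loss: hold the current rate
                rate = current_kbps
            else:                         # clean path: additive probe upward
                rate = current_kbps + 64
            if rtt_ms > 300:              # long delay also signals congestion
                rate = min(rate, current_kbps)
            return max(MIN_KBPS, min(MAX_KBPS, rate))

        rate = 1024
        for loss, rtt in [(0.0, 80), (0.02, 120), (0.09, 350)]:
            rate = update_rate(rate, loss, rtt)   # 1088, then 1088, then 816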

    λ°•ν˜„κΈ°μ˜ 초기 μž‘ν’ˆμ— λ‚˜νƒ€λ‚œ νƒˆν‰λ©΄μ  κ²½ν–₯에 λŒ€ν•œ 연ꡬ

    Thesis (Master's) -- Seoul National University Graduate School: College of Fine Arts, Interdisciplinary Program in Art Management, August 2018. Advisor: Chung Young-mok.
    This thesis analyzes the early works of Park Hyun-ki (1942-2000) produced between 1974 and 1985. Although Park Hyun-ki is called a pioneer of Korean video art and holds an important place in art history, comparatively little research has been done on him, and its scope has been concentrated on his video work. This study therefore set out to broaden the interpretation of his oeuvre, focusing on the early experimental period from 1974 to 1985, when he attempted not only video but also various tendencies beyond the flat plane, such as objet, environment, and performance. It first surveys the situation of Korean art from the 1970s to the 1980s, the background against which Park's work took shape. After Art Informel, a movement arose in Korean art circles in the late 1960s to leave the flat plane of painting and embrace new forms of art. From the 1970s onward, several experimental small groups newly attempted post-planar tendencies such as objet, environment, and performance, and those tendencies continued into the 1980s. From the early-to-mid 1970s the Daegu art scene, centered on the Daegu Contemporary Art Festival, also began to explore post-planar art; this continued into the 1980s and influenced Park Hyun-ki, who was based in Daegu during his early career. Against this background, the thesis divides Park's early works from 1974 to 1985 into three broad tendencies. By closely examining the works corresponding to objet, environment, and performance in the order in which they were produced, it ultimately considers the problem of incomplete perception and recognition that Park sought to address. The significance of this study lies in examining Park Hyun-ki's oeuvre from a more varied perspective. Keywords: Park Hyun-ki, post-planar tendencies, objet, environment, performance, Daegu Contemporary Art Festival.
    Table of contents: Chapter 1 Introduction (1.1 Background and purpose; 1.2 Contents; 1.3 Prior research); Chapter 2 Background of Park Hyun-ki's work (2.1 Post-planar tendencies in the 1970s-80s: objet, environment, performance; 2.2 Daegu art and Park Hyun-ki: post-planar tendencies in the 1970s-80s, Park Hyun-ki's activities); Chapter 3 Analysis of Park Hyun-ki's early works (3.1 Objet; 3.2 Environment; 3.3 Performance); Chapter 4 Conclusion; References; List of plates; Plates; Abstract.

    Study on Cold Air Flow Characteristics in a Domestic Refrigerator Using an Image Intensifier CCD Camera

    Environmental concerns and energy efficiency regulations require continuous improvements in refrigerator energy efficiency; hence it is very important to manufacture household refrigerators with a shroud geometry and cold air flow that are economically acceptable. This study examines cold air flow characteristics in a domestic refrigerator whose freezer measures 400 mm Γ— 410 mm Γ— 670 mm and whose cold storage room measures 510 mm Γ— 710 mm Γ— 670 mm, with a fan. The refrigerator is completely divided into two parts, one cooling the freezer and the other cooling the food room, with the evaporator located in the freezer compartment. The experiments were conducted at three internal temperatures in the freezer (5℃, 15℃, 30℃) and two in the cold storage room (10℃, 30℃). The ratio of cold air distributed to the freezing room and the cold storage room was almost 7:3, and the cold air flow was faster in the 40 mm, 80 mm, 370 mm, and 670 mm sections than in the other sections. The first objective of this work was to analyze the results to gain insight into the effects of the variables and to assess the cold air flow characteristics at the tested freezer and cold storage room temperatures. The second was to reduce the refrigerator's energy consumption; in other words, this paper aims to improve the energy efficiency of the refrigerator.
    Table of contents: Abstract; Nomenclature; Chapter 1 Introduction; Chapter 2 Experiments and image processing (2.1 Experimental apparatus; 2.2 Overview of PIV; 2.3 Configuration of the PIV system: illumination and tracer particles, image acquisition and storage devices, image board; 2.4 Image processing: preprocessing, identical-particle tracking, postprocessing); Chapter 3 Results and discussion (3.1 Instantaneous and time-averaged velocity fields in the freezer; 3.2 Instantaneous and time-averaged velocity fields in the cold storage room; 3.3 Mean kinetic energy distribution; 3.4 Comparison of freezer and cold storage room velocity fields); Chapter 4 Conclusion; References; Acknowledgements.
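    The image-processing chain in the table of contents (preprocessing, identical-particle tracking, postprocessing) is a standard PIV pipeline. As a rough illustration of its core step, and not the study's software, the sketch below estimates the displacement of an interrogation window between two frames from the peak of an FFT-based cross-correlation; the window size and test shift are arbitrary.

        # A minimal sketch (illustrative, not the study's PIV software) of the
        # core PIV step: estimate particle displacement between two frames via
        # the peak of the cross-correlation of an interrogation window.
        import numpy as np

        def piv_displacement(win_a: np.ndarray, win_b: np.ndarray):
            # win_a, win_b: the same interrogation window from two consecutive
            # images. Returns (dy, dx) in pixels via FFT cross-correlation.
            a = win_a - win_a.mean()
            b = win_b - win_b.mean()
            corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
            peak = np.unravel_index(np.argmax(corr), corr.shape)
            # Wrap indices above the midpoint to negative displacements.
            dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
            dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
            return dy, dx

        # Usage: shift a random particle image by (3, -2) and recover the shift.
        rng = np.random.default_rng(1)
        img = rng.random((32, 32))
        shifted = np.roll(img, (3, -2), axis=(0, 1))
        print(piv_displacement(img, shifted))   # -> (3, -2)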