3,554 research outputs found

    Contributions to the functional morphology of fishes

    Zoologica Africana 2 (1): 31-4

    A new record of a moerisiid hydroid from South Africa

    Zoologica Africana 5(2): 275-27

    Ultrafast nonlinear plasmonics

    Metal nanostructures can enhance optical signals by orders of magnitude due to surface plasmon resonance. This field enhancement of plasmonic nanostructures has enabled optical detection and light manipulation beyond the free-space diffraction limit. However, the significant enhancement of optical signals in these nanostructures is not fully understood. To examine field-enhanced phenomena, this dissertation studies a variety of plasmonic nanostructures using two nonlinear optical processes: multiphoton-absorption-induced luminescence (MAIL) and metal-enhanced multiphoton absorption polymerization (MEMAP). Nonlinear absorption of near-infrared (NIR) light can lead to luminescence of metal nanostructures. This luminescence is observed at localized areas of the nanostructures because of localized surface plasmon resonance and the “lightning rod” nanoantenna effect. In the presence of a prepolymer resin, the luminescence generated from the nanostructures can induce polymerization by exciting a photoinitiator. The strong correlation between MAIL and MEMAP is demonstrated using different excitation wavelengths and different types of prepolymer resins. While localized surface plasmon resonance plays a pivotal role in the field-enhanced optical phenomena observed at local areas of gold nanoparticles, nanowires, and nanoplates, surface plasmon propagation is essential to understanding the nonlinear optical properties of silver nanowires. Because silver nanowires can support surface plasmon propagation over many microns, excitation with NIR light at one end of a nanowire can induce luminescence at the other end. This broadband luminescence can excite a photoinitiator, inducing polymerization. Luminescence-induced polymerization at remote positions can be used to assemble nanostructures. Nonlinear luminescence and its correlation to polymerization are also studied using carbon nanostructures. While metal nanostructures exhibit plasmonic field enhancement, carbon nanotubes have strong Coulomb interactions between excited electrons and holes, which result in luminescent emission. Additionally, the high density of electronic states in carbon nanotubes increases the probability of recombination of the excited electron and hole, which in turn induces luminescence. The luminescence emission and photopolymerization are studied using different kinds of carbon nanostructures.
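    A standard way to characterize a multiphoton process such as MAIL is to measure luminescence versus excitation power and fit the slope in log-log space, since an n-photon process scales roughly as the n-th power of the intensity. The sketch below is illustrative only (it is not from the dissertation, and the data are synthetic):

```python
import numpy as np

def multiphoton_order(power, luminescence):
    """Estimate the nonlinearity order n in I_lum ∝ P^n by a
    degree-1 least-squares fit in log-log space; returns the slope."""
    slope, _intercept = np.polyfit(np.log(power), np.log(luminescence), 1)
    return slope

# Synthetic two-photon-like data: I_lum = 0.5 * P^2
P = np.linspace(1.0, 10.0, 20)
I_lum = 0.5 * P**2
print(round(multiphoton_order(P, I_lum), 2))  # → 2.0, consistent with a two-photon process
```

    A slope near 2 would indicate two-photon absorption; real measurements also require checking that the detector and excitation remain in their linear ranges.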

    New Datasets, Models, and Optimization

    Doctoral dissertation (Ph.D.) -- Seoul National University Graduate School: College of Engineering, Department of Electrical and Computer Engineering, 2021.8. μ†ν˜„νƒœ. Obtaining a high-quality clean image is the ultimate goal of photography. In practice, daily photography is often taken in dynamic environments with moving objects as well as shaken cameras. The relative motion between the camera and the objects during the exposure causes motion blur in images and videos, degrading the visual quality.
    The blur strength and the shape of the motion trajectory vary from image to image and from pixel to pixel in dynamic environments. This locally-varying property makes the removal of motion blur in images and videos severely ill-posed. Rather than designing analytic solutions with physical modeling, machine learning-based approaches can serve as a practical solution for such a highly ill-posed problem. In particular, deep learning has become the standard in the recent computer vision literature. This dissertation introduces deep learning-based solutions for image and video deblurring, tackling practical issues in various aspects. First, a new way of constructing datasets for the dynamic scene deblurring task is proposed. It is nontrivial to simultaneously obtain a pair of blurry and sharp images that are temporally aligned. The lack of data prevents both the development of supervised learning techniques and the evaluation of deblurring algorithms. By mimicking the camera image pipeline with high-speed videos, however, realistic blurry images can be synthesized. In contrast to previous blur synthesis methods, the proposed approach can reflect the natural, complex local blur caused by multiple moving objects, varying depth, and occlusion at motion boundaries. Second, based on the proposed datasets, a novel neural network architecture for the single-image deblurring task is presented. Adopting the coarse-to-fine approach widely used in energy optimization-based image deblurring methods, a multi-scale neural network architecture is derived. Compared with a single-scale model of similar complexity, the multi-scale model exhibits higher accuracy and faster speed. Third, a lightweight recurrent neural network architecture for video deblurring is proposed. To obtain a high-quality video from deblurring, it is important to exploit the intrinsic information in the target frame as well as the temporal relation between neighboring frames.
    Benefiting from both, the proposed intra-frame iterative scheme applied to RNNs achieves accuracy improvements without increasing the number of model parameters. Lastly, a novel loss function is proposed to better optimize the deblurring models. Estimating dynamic blur for a clean, sharp image without given motion information is itself an ill-posed problem. While the goal of deblurring is to completely remove motion blur, conventional loss functions fail to train neural networks to fulfill this goal, leaving traces of blur in the deblurred images. The proposed reblurring loss functions are designed to better eliminate motion blur and to produce sharper images. Furthermore, the self-supervised learning process facilitates adaptation of the deblurring model at test time. With the proposed datasets, model architectures, and loss functions, deep learning-based single-image and video deblurring methods are presented. Extensive experimental results demonstrate state-of-the-art performance both quantitatively and qualitatively.
    Table of contents:
    1 Introduction
    2 Generating Datasets for Dynamic Scene Deblurring
        2.1 Introduction
        2.2 GOPRO Dataset
        2.3 REDS Dataset
        2.4 Conclusion
    3 Deep Multi-Scale Convolutional Neural Networks for Single Image Deblurring
        3.1 Introduction
            3.1.1 Related Works
            3.1.2 Kernel-Free Learning for Dynamic Scene Deblurring
        3.2 Proposed Method
            3.2.1 Model Architecture
            3.2.2 Training
        3.3 Experiments
            3.3.1 Comparison on GOPRO Dataset
            3.3.2 Comparison on Kohler Dataset
            3.3.3 Comparison on Lai et al. [54] Dataset
            3.3.4 Comparison on Real Dynamic Scenes
            3.3.5 Effect of Adversarial Loss
        3.4 Conclusion
    4 Intra-Frame Iterative RNNs for Video Deblurring
        4.1 Introduction
        4.2 Related Works
        4.3 Proposed Method
            4.3.1 Recurrent Video Deblurring Networks
            4.3.2 Intra-Frame Iteration Model
            4.3.3 Regularization by Stochastic Training
        4.4 Experiments
            4.4.1 Datasets
            4.4.2 Implementation Details
            4.4.3 Comparisons on GOPRO [72] Dataset
            4.4.4 Comparisons on [97] Dataset and Real Videos
        4.5 Conclusion
    5 Learning Loss Functions for Image Deblurring
        5.1 Introduction
        5.2 Related Works
        5.3 Proposed Method
            5.3.1 Clean Images are Hard to Reblur
            5.3.2 Supervision from Reblurring Loss
            5.3.3 Test-time Adaptation by Self-Supervision
        5.4 Experiments
            5.4.1 Effect of Reblurring Loss
            5.4.2 Effect of Sharpness Preservation Loss
            5.4.3 Comparison with Other Perceptual Losses
            5.4.4 Effect of Test-time Adaptation
            5.4.5 Comparison with State-of-The-Art Methods
            5.4.6 Real World Image Deblurring
            5.4.7 Combining Reblurring Loss with Other Perceptual Losses
            5.4.8 Perception vs. Distortion Trade-Off
            5.4.9 Visual Comparison of Loss Function
            5.4.10 Implementation Details
            5.4.11 Determining Reblurring Module Size
        5.5 Conclusion
    6 Conclusion
    Abstract in Korean
    Acknowledgements
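    The dataset-generation idea described in the abstract — mimicking a long exposure by averaging temporally adjacent sharp high-speed frames in linear intensity space — can be sketched as below. This is an illustrative reconstruction under an assumed gamma-2.2 camera response, not the exact GOPRO/REDS pipeline (which calibrates the real camera response function):

```python
import numpy as np

GAMMA = 2.2  # assumed display gamma; the actual pipeline estimates the camera response

def synthesize_blur(sharp_frames, gamma=GAMMA):
    """Approximate a motion-blurred frame by averaging sharp frames
    in linear intensity, mimicking light accumulation during exposure.
    sharp_frames: sequence of float images with values in [0, 1]."""
    linear = np.stack(sharp_frames) ** gamma  # undo gamma (inverse-CRF approximation)
    blurry_linear = linear.mean(axis=0)       # temporal average = simulated exposure
    return blurry_linear ** (1.0 / gamma)     # reapply gamma for display space

# Toy example: a bright square moving one pixel per high-speed frame
frames = []
for t in range(8):
    f = np.zeros((16, 16))
    f[4:8, t:t + 4] = 1.0
    frames.append(f)
blurry = synthesize_blur(frames)
print(blurry.shape)  # → (16, 16); the square is smeared along its motion path
```

    Averaging in linear space rather than directly on gamma-encoded pixels matters: it reproduces the way saturated highlights spread in real blurry photographs.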

    A Case Study on Plagiarism in Film Music: A Film Soundtrack Analysis on the movie β€œ300”

    The objective of this paper is two-fold. First, I will analyze three music cues in the “300” soundtrack and study their use in the imagery of the film. Second, I will cross-reference these cues with the purportedly original sources from which they were allegedly plagiarized. From this thesis, I hope to gather findings, conclusions, and lessons that can help film composers avoid plagiarism in the future.
    https://remix.berklee.edu/graduate-studies-scoring/1017/thumbnail.jp