2,028 research outputs found

    Virtual Frame Technique: Ultrafast Imaging with Any Camera

    Full text link
    Many phenomena of interest in nature and industry occur rapidly and are difficult and cost-prohibitive to visualize properly without specialized cameras. Here we describe in detail the Virtual Frame Technique (VFT), a simple, useful, and accessible form of compressed sensing that increases the frame acquisition rate of any camera by several orders of magnitude by leveraging its dynamic range. VFT is a powerful tool for capturing rapid phenomena whose dynamics involve a transition between two states and are thus binary. The advantages of VFT are demonstrated by examining such dynamics in five physical processes at unprecedented rates and spatial resolution: fracture of an elastic solid, wetting of a solid surface, rapid fingerprint reading, peeling of adhesive tape, and impact of an elastic hemisphere on a hard surface. We show that the performance of the VFT exceeds that of any commercial high-speed camera not only in rate of imaging but also in field of view, achieving a 65 MHz frame rate at 4 MPx resolution. Finally, we discuss the performance of the VFT with several commercially available conventional and high-speed cameras. In principle, modern cell phones can achieve imaging rates of over a million frames per second using the VFT. (Comment: 7 pages, 4 figures, 1 supplementary video)
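
    The core trick is that a single long exposure of a binary (two-state) process encodes time in intensity: a pixel that switches state early accumulates a different amount of light than one that switches late, so thresholding one captured frame at successive gray levels recovers the state of the scene at successive instants. Below is a minimal sketch of that decoding step, assuming a monotonic dark-to-bright transition and a linear sensor; the function name and parameters are illustrative, not the authors' released code.

```python
import numpy as np

def virtual_frames(exposure_img, n_frames):
    """Decode one long exposure of a binary process into virtual frames.

    Assumes pixels switch dark -> bright once and stay bright, so a
    pixel that switched at time s accumulates intensity ~ (T - s):
    the bright region at an early instant corresponds to the highest
    gray levels. Thresholds are swept high -> low for time order.
    """
    img = exposure_img.astype(np.float64)
    lo, hi = img.min(), img.max()
    levels = np.linspace(hi, lo, n_frames + 2)[1:-1]  # skip the extremes
    return [(img >= t).astype(np.uint8) for t in levels]

# e.g. one 12-bit still -> 64 binary frames of the propagating front
# frames = virtual_frames(raw_frame, 64)
```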

    Evaluation of the effectiveness of HDR tone-mapping operators for photogrammetric applications

    Get PDF
    The ability of High Dynamic Range (HDR) imaging to capture the full range of lighting in a scene has meant that it is being increasingly used for Cultural Heritage (CH) applications. Photogrammetric techniques allow the semi-automatic production of 3D models from a sequence of images. Current photogrammetric methods are not always effective in reconstructing images under harsh lighting conditions, as significant geometric details may not have been captured accurately within under- and over-exposed regions of the image. HDR imaging offers the possibility to overcome this limitation; however, the HDR images need to be tone mapped before they can be used within existing photogrammetric algorithms. In this paper we evaluate four different HDR tone-mapping operators (TMOs) that have been used to convert raw HDR images into a format suitable for state-of-the-art algorithms, and in particular keypoint detection techniques. The evaluation criteria used are the number of keypoints, the number of valid matches achieved, and the repeatability rate. The comparison considers two local and two global TMOs. HDR data from four CH sites were used: Kaisariani Monastery (Greece), Asinou Church (Cyprus), Château des Baux (France) and Buonconsiglio Castle (Italy).
    We would like to thank Kurt Debattista, Timothy Bradley, Ratnajit Mukherjee, Diego Bellido Castañeda and Tom Bashford Rogers for their suggestions, help and encouragement. We would like to thank the hosting institutions: 3D Optical Metrology Group, FBK (Trento, Italy) and UMR 3495 MAP CNRS/MCC (Marseille, France), for their support during the data acquisition campaigns. This project has received funding from the European Union's 7th Framework Programme for research, technological development and demonstration under grant agreement No. 608013, titled "ITN-DCH: Initial Training Network for Digital Cultural Heritage: Projecting our Past to the Future".
    Suma, R.; Stavropoulou, G.; Stathopoulou, E. K.; Van Gool, L.; Georgopoulos, A.; Chalmers, A. (2016). Evaluation of the effectiveness of HDR tone-mapping operators for photogrammetric applications. Virtual Archaeology Review, 7(15), 54-66. https://doi.org/10.4995/var.2016.6319
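
    The three criteria can be scored with any off-the-shelf detector; the sketch below uses OpenCV SIFT as an assumption (the abstract does not name the detector), and its repeatability, matches over detections, is a simplification of the standard overlap-based definition.

```python
import cv2

def tmo_keypoint_stats(img_a, img_b, ratio=0.75):
    """Score a pair of tone-mapped images by keypoint count,
    valid matches (Lowe ratio test), and a simple repeatability."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher().knnMatch(des_a, des_b, k=2)
    good = [p[0] for p in pairs
            if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    repeatability = len(good) / max(1, min(len(kp_a), len(kp_b)))
    return len(kp_a), len(kp_b), len(good), repeatability

# imgs = [cv2.imread(f, cv2.IMREAD_GRAYSCALE) for f in ("a.png", "b.png")]
# n_a, n_b, n_valid, rep = tmo_keypoint_stats(*imgs)
```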

    Computational Video Enhancement

    Get PDF
    During a video, each scene element is often imaged many times by the sensor. I propose that by combining information from each captured frame throughout the video it is possible to enhance the entire video. This concept is the basis of computational video enhancement. In this dissertation, I explore the viability of computational video processing and present applications where it can be leveraged. Spatio-temporal volumes are employed as a framework for efficient computational video processing, and I extend them by introducing sheared volumes. Shearing provides spatial frame warping for alignment between frames, allowing temporally-adjacent samples to be processed using traditional editing and filtering approaches. An efficient filter-graph framework is presented to support this processing, along with a prototype video editing and manipulation tool built on it. To demonstrate the integration of samples from multiple frames, I introduce methods for improving poorly exposed low-light videos. This integration is guided by a tone-mapping process that determines spatially-varying optimal exposures and an adaptive spatio-temporal filter that integrates the samples. Low-light video enhancement is also addressed in the multispectral domain by combining visible and infrared samples, facilitated by a novel multispectral edge-preserving filter that enhances only the visible-spectrum video. Finally, the temporal characteristics of videos are altered by a computational video resampling process. By resampling the video-rate footage, novel time-lapse sequences are found that optimize for user-specified characteristics. Each resulting shorter video is a more faithful summary of the original source than a traditional time-lapse video. Simultaneously, new synthetic exposures are generated to alter the output video's aliasing characteristics.
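
    A sheared volume can be pictured as stacking frames into a (time, y, x) array and shifting each slice so that a moving feature becomes a straight temporal column. The sketch below assumes a constant horizontal velocity, whereas the dissertation's shears are general per-frame warps; the function names are illustrative.

```python
import numpy as np

def shear_volume(frames, vx):
    """Stack frames into a spatio-temporal volume (T, H, W) and shear
    it so content moving at vx pixels/frame lines up along time."""
    vol = np.stack(frames, axis=0).astype(np.float64)
    out = np.empty_like(vol)
    for t in range(vol.shape[0]):
        out[t] = np.roll(vol[t], -int(round(vx * t)), axis=1)
    return out

# Once sheared, temporally-adjacent samples can be filtered directly,
# e.g. a per-pixel temporal median for denoising:
# clean = np.median(shear_volume(frames, vx=2.5), axis=0)
```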

    Parallel Implementation of a Real-Time High Dynamic Range Video System

    Full text link
    Abstract. This article describes the use of the parallel processing capabilities of a graphics chip to increase the processing speed of a high dynamic range (HDR) video system. The basis is an existing HDR video system that produces each frame from a sequence of regular images taken in quick succession under varying exposure settings. The image sequence is processed in a pipeline consisting of: shutter speed selection, capturing, color space conversion, image registration, HDR stitching, and tone mapping. This article identifies bottlenecks in the pipeline and describes modifications to the algorithms that are necessary to enable parallel processing. Time-critical steps are processed on a graphics processing unit (GPU). The resulting processing time is evaluated and compared to the original sequential code. The creation of an HDR video frame is sped up by a factor of 15 on average.
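
    The HDR-stitching stage illustrates why the pipeline parallelizes well: each pixel is merged independently of its neighbors. Below is a minimal sketch of a weighted radiance merge, assuming registered frames and a linear sensor response; it is a generic Debevec-style merge for illustration, not the authors' GPU code, and the real system would also recover the camera response curve.

```python
import numpy as np

def hdr_stitch(images, exposure_times):
    """Merge registered LDR frames (floats in [0, 1]) shot at the given
    shutter speeds into one radiance map. Hat weights down-weight
    under/over-exposed pixels; every operation is per-pixel, so the
    whole loop maps directly onto a GPU kernel."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # peaks at mid-gray
        num += w * (img / t)                # each sample votes pixel/t
        den += w
    return num / np.maximum(den, 1e-8)

# radiance = hdr_stitch([f / 255.0 for f in bracket], [1/500, 1/125, 1/30])
```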

    Image Registration Using a Feature-Blending Network and Its Applications to High Dynamic Range Imaging and Video Super-Resolution

    Get PDF
    Doctoral dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2020. Advisor: Nam Ik Cho.
    This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is a challenging process when the images have misalignments from being taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference by using patch matching, optical flow, homography transformation, or an attention module before merging. Because explicitly aligning photos with different exposures is inherently a difficult problem, this dissertation instead proposes a deep network that synthesizes the aligned images by blending the information from the multi-exposure images. Specifically, the proposed network generates under/over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure images. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, yielding better-aligned images than pixel-domain blending or geometric transformation methods. The proposed alignment network consists of a two-way encoder that extracts features from two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The network is shown to generate well-aligned images across a wide range of exposure differences and thus can be effectively used for the HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained.
    This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motions. Reconstructing a high-resolution frame from a sequence of adjacent frames is a challenging process when the frames are misaligned. Hence, recent methods first align the adjacent frames to the reference by using optical flow or by adding a spatial transformer network (STN). Because explicitly aligning frames is likewise difficult, the same idea is applied: a deep network synthesizes aligned frames by blending the information from the neighboring frames in the deep-feature domain, using the same two-way encoder, feature-blending convolution layers, and decoder. The network is shown to align adjacent frames very well and thus can be effectively used for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained.
    In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively, but both can be enhanced simultaneously by a single network. A network with the same structure as the proposed VSR network is shown to reconstruct final results with both higher dynamic range and higher resolution. Compared with several methods built from existing HDR imaging and VSR networks, it gives better results both qualitatively and quantitatively.
    Table of contents:
    1 Introduction
    2 Related Work
      2.1 High Dynamic Range Imaging
        2.1.1 Rejecting Regions with Motions
        2.1.2 Alignment Before Merging
        2.1.3 Patch-based Reconstruction
        2.1.4 Deep-learning-based Methods
        2.1.5 Single-Image HDRI
      2.2 Video Super-resolution
        2.2.1 Deep Single Image Super-resolution
        2.2.2 Deep Video Super-resolution
    3 High Dynamic Range Imaging
      3.1 Motivation
      3.2 Proposed Method
        3.2.1 Overall Pipeline
        3.2.2 Alignment Network
        3.2.3 Merging Network
        3.2.4 Integrated HDR imaging network
      3.3 Datasets
        3.3.1 Kalantari Dataset and Ground Truth Aligned Images
        3.3.2 Preprocessing
        3.3.3 Patch Generation
      3.4 Experimental Results
        3.4.1 Evaluation Metrics
        3.4.2 Ablation Studies
        3.4.3 Comparisons with State-of-the-Art Methods
        3.4.4 Application to the Case of More Numbers of Exposures
        3.4.5 Pre-processing for other HDR imaging methods
    4 Video Super-resolution
      4.1 Motivation
      4.2 Proposed Method
        4.2.1 Overall Pipeline
        4.2.2 Alignment Network
        4.2.3 Reconstruction Network
        4.2.4 Integrated VSR network
      4.3 Experimental Results
        4.3.1 Dataset
        4.3.2 Ablation Study
        4.3.3 Capability of DSBN for alignment
        4.3.4 Comparisons with State-of-the-Art Methods
    5 Joint HDR and SR
      5.1 Proposed Method
        5.1.1 Feature Blending Network
        5.1.2 Joint HDR-SR Network
        5.1.3 Existing VSR Network
        5.1.4 Existing HDR Network
      5.2 Experimental Results
    6 Conclusion
    Abstract (In Korean)
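
    Both alignment networks above share the same two-way-encoder design: extract features from the reference and the non-reference image separately, blend them with convolution layers, and decode an aligned image. Below is a minimal PyTorch sketch of that structure, with layer widths and depths that are illustrative rather than the reported architecture.

```python
import torch
import torch.nn as nn

class AlignmentNet(nn.Module):
    """Sketch of the dissertation's alignment idea: a two-way encoder,
    convolution layers that blend deep features, and a decoder that
    synthesizes the non-reference image structurally aligned to the
    reference. Widths/depths here are assumptions for illustration."""
    def __init__(self, ch=64):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc_ref = encoder()      # one branch per input image
        self.enc_src = encoder()
        self.blend = nn.Sequential(   # mix features in the deep domain
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.decoder = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, ref, src):
        f = torch.cat([self.enc_ref(ref), self.enc_src(src)], dim=1)
        return self.decoder(self.blend(f))

# e.g. align an under-exposed shot (or a neighboring video frame)
# to the reference before merging/reconstruction:
# aligned_src = AlignmentNet()(reference, under_exposed)
```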