
    Towards Interpretable Video Super-Resolution via Alternating Optimization

    In this paper, we study a practical space-time video super-resolution (STVSR) problem, which aims at generating a high-framerate, high-resolution sharp video from a low-framerate, low-resolution blurry video. Such a problem often arises when recording a fast dynamic event with a low-framerate, low-resolution camera, and the captured video suffers from three typical issues: i) motion blur occurs due to object/camera motion during the exposure time; ii) motion aliasing is unavoidable when the event's temporal frequency exceeds the Nyquist limit of temporal sampling; iii) high-frequency details are lost because of the low spatial sampling rate. These issues can be alleviated by a cascade of three separate sub-tasks, namely video deblurring, frame interpolation, and super-resolution, which, however, fails to capture the spatial and temporal correlations among video sequences. To address this, we propose an interpretable STVSR framework by leveraging both model-based and learning-based methods. Specifically, we formulate STVSR as a joint video deblurring, frame interpolation, and super-resolution problem, and solve it as two sub-problems in an alternating way. For the first sub-problem, we derive an interpretable analytical solution and use it as a Fourier data transform layer. Then, we propose a recurrent video enhancement layer for the second sub-problem to further recover high-frequency details. Extensive experiments demonstrate the superiority of our method in terms of quantitative metrics and visual quality. Comment: ECCV 202
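    The alternating structure described in this abstract can be illustrated with a toy one-dimensional analogue: a closed-form Fourier-domain data step alternated with a stand-in enhancement step. This is a hedged sketch only; the function names, the simple smoother, and the 1-D setting are illustrative assumptions, not the paper's actual layers, which jointly handle deblurring, interpolation, and super-resolution.

```python
import numpy as np

def fourier_data_step(y, k_fft, z, rho):
    """Closed-form minimizer of ||k*x - y||^2 + rho*||x - z||^2 for circular
    convolution, computed in the Fourier domain (the 'analytical sub-problem')."""
    y_fft = np.fft.fft(y)
    z_fft = np.fft.fft(z)
    x_fft = (np.conj(k_fft) * y_fft + rho * z_fft) / (np.abs(k_fft) ** 2 + rho)
    return np.real(np.fft.ifft(x_fft))

def enhancement_step(x):
    """Stand-in for a learned enhancement layer: here just a mild smoother."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

def alternating_restore(y, kernel, iters=10, rho=0.1):
    """Alternate between the analytical data step and the enhancement step."""
    k = np.zeros_like(y)
    k[:len(kernel)] = kernel
    k_fft = np.fft.fft(k)
    x = y.copy()
    for _ in range(iters):
        x = fourier_data_step(y, k_fft, x, rho)  # data-fidelity sub-problem
        x = enhancement_step(x)                  # refinement sub-problem
    return x
```

    With a small `rho` and a noise-free blurry input, a single data step already recovers most of the signal; the alternation matters when each sub-problem alone is insufficient.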

    Application of artificial intelligence techniques to the detection of depressive-like behavior in mice through the tail suspension test

    Undergraduate final project (TCC) - Universidade Federal de Santa Catarina, Campus Araranguá, Computer Engineering. According to the World Health Organization, there are at least 300 million people with depression in the world; estimates show an increase of 15% from 2005 to 2015. Beyond psychological therapies, patients under drug treatment show clinical improvement. These drugs are evaluated by a battery of preclinical tests before being placed on the market. Among the tests used for the evaluation of candidate drugs for the control of depression, the tail suspension test, the object of study of this work, stands out. These tests are performed with mice and rely on human perception for the elaboration of the quantitative results. However, the human factor is an aggravating one, introducing variations between analyses by the same rater and between different raters. In this work, a set of artificial intelligence techniques applied to images of the tail suspension test is proposed. These techniques classify the movements of the mice and generate results in the formats used in this kind of research. For the case studied, the results reached values greater than 90% accuracy relative to the researcher used as reference. Regarding paw detection, 87.7% of the possible paws were found within the test set.
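    The quantity that human raters estimate in the tail suspension test is immobility time. As a hedged sketch of how motion could be quantified automatically from video frames, the snippet below uses simple frame differencing; the function names, thresholds, and this particular method are illustrative assumptions, not the classifier described in the work.

```python
import numpy as np

def immobility_trace(frames, threshold=0.01):
    """Fraction of changed pixels between consecutive frames.

    frames: array of shape (T, H, W) with values in [0, 1]. A frame pair with
    few changed pixels suggests the animal is immobile.
    """
    diffs = np.abs(np.diff(frames.astype(float), axis=0))
    return (diffs > threshold).mean(axis=(1, 2))

def immobile_seconds(frames, fps, motion_cutoff=0.02):
    """Total immobile time: count frame pairs whose changed-pixel fraction
    falls below the cutoff, and convert the count to seconds."""
    trace = immobility_trace(frames)
    return float((trace < motion_cutoff).sum()) / fps
```

    A learned classifier, as used in the thesis, would replace the fixed thresholds with decisions trained against the reference researcher's annotations.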

    Dense Correspondence Estimation for Image Interpolation

    We evaluate the current state of the art in dense correspondence estimation for use in multi-image interpolation algorithms. The evaluation is carried out on three real-world scenes and one synthetic scene, each featuring different challenges for dense correspondence estimation. The primary focus of our study is the perceptual quality of the interpolation sequences created from the estimated flow fields. Perceptual plausibility is assessed by means of a psychophysical user study. Our results show that the current state of the art in dense correspondence estimation does not produce visually plausible interpolations.
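    The interpolation sequences evaluated here are synthesized from dense flow fields. A minimal, hedged sketch of the underlying idea follows: warp both input frames toward an intermediate time along the flow and blend them. The nearest-neighbor sampling and the function name are simplifying assumptions; real interpolators use sub-pixel sampling and occlusion reasoning.

```python
import numpy as np

def interpolate_frame(frame0, frame1, flow, t=0.5):
    """Backward-warp both frames toward time t along a dense flow field.

    flow[y, x] holds the (dy, dx) displacement from frame0 to frame1, so a
    pixel at p in frame0 appears near p + t*flow at the intermediate time.
    """
    h, w = frame0.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # sample frame0 at positions displaced backward by t * flow
    y0 = np.clip(np.round(ys - t * flow[..., 0]).astype(int), 0, h - 1)
    x0 = np.clip(np.round(xs - t * flow[..., 1]).astype(int), 0, w - 1)
    # sample frame1 at positions displaced forward by (1 - t) * flow
    y1 = np.clip(np.round(ys + (1 - t) * flow[..., 0]).astype(int), 0, h - 1)
    x1 = np.clip(np.round(xs + (1 - t) * flow[..., 1]).astype(int), 0, w - 1)
    return (1 - t) * frame0[y0, x0] + t * frame1[y1, x1]
```

    Errors in the estimated flow field translate directly into ghosting or tearing in the blended frame, which is why flow quality dominates the perceptual plausibility measured in the study.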

    New Datasets, Models, and Optimization

    Doctoral dissertation (Ph.D.) - Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2021. 손현태. Obtaining a high-quality clean image is the ultimate goal of photography. In practice, daily photographs are often taken in dynamic environments with moving objects as well as shaken cameras. The relative motion between the camera and the objects during the exposure causes motion blur in images and videos, degrading the visual quality. The degree of blur and the shape of the motion trajectory vary by every image and every pixel in dynamic environments. This locally-varying property makes the removal of motion blur in images and videos severely ill-posed. Rather than designing analytic solutions with physical modeling, machine learning-based approaches can serve as a practical solution for such a highly ill-posed problem. In particular, deep learning has become the recent standard in the computer vision literature. This dissertation introduces deep learning-based solutions for image and video deblurring, tackling practical issues in various aspects. First, a new way of constructing datasets for the dynamic scene deblurring task is proposed. It is nontrivial to simultaneously obtain a pair of blurry and sharp images that are temporally aligned. The lack of data prevents supervised learning techniques from being developed and deblurring algorithms from being evaluated. By mimicking the camera imaging pipeline with high-speed videos, realistic blurry images can be synthesized. In contrast to previous blur synthesis methods, the proposed approach can reflect the natural complex local blur caused by multiple moving objects, varying depth, and occlusion at motion boundaries. Second, based on the proposed datasets, a novel neural network architecture for the single-image deblurring task is presented. Adopting the coarse-to-fine approach widely used in energy optimization-based methods for image deblurring, a multi-scale neural network architecture is derived. Compared with a single-scale model of similar complexity, the multi-scale model exhibits higher accuracy and faster speed. Third, a lightweight recurrent neural network architecture for video deblurring is proposed. To obtain a high-quality video from deblurring, it is important to exploit the intrinsic information in the target frame as well as the temporal relation between neighboring frames. Taking benefits from both sides, the proposed intra-frame iterative scheme applied to RNNs achieves accuracy improvements without increasing the number of model parameters. Lastly, a novel loss function is proposed to better optimize the deblurring models. Estimating dynamic blur for a clean, sharp image without given motion information is another ill-posed problem. While the goal of deblurring is to completely remove motion blur, conventional loss functions fail to train neural networks to fulfill this goal, leaving traces of blur in the deblurred images; the original blur can even be reconstructed from the residual blur in the deblurred images. The proposed reblurring loss functions are designed to better eliminate motion blur and produce sharper images. Furthermore, the self-supervised learning process facilitates adaptation of the deblurring model at test time. With the proposed datasets, model architectures, and loss functions, deep learning-based single-image and video deblurring methods are presented. Extensive experimental results demonstrate state-of-the-art performance both quantitatively and qualitatively.
    Contents: 1 Introduction. 2 Generating Datasets for Dynamic Scene Deblurring (2.1 Introduction; 2.2 GOPRO dataset; 2.3 REDS dataset; 2.4 Conclusion). 3 Deep Multi-Scale Convolutional Neural Networks for Single Image Deblurring (3.1 Introduction; 3.1.1 Related Works; 3.1.2 Kernel-Free Learning for Dynamic Scene Deblurring; 3.2 Proposed Method; 3.2.1 Model Architecture; 3.2.2 Training; 3.3 Experiments; 3.3.1 Comparison on GOPRO Dataset; 3.3.2 Comparison on Kohler Dataset; 3.3.3 Comparison on Lai et al. [54] dataset; 3.3.4 Comparison on Real Dynamic Scenes; 3.3.5 Effect of Adversarial Loss; 3.4 Conclusion). 4 Intra-Frame Iterative RNNs for Video Deblurring (4.1 Introduction; 4.2 Related Works; 4.3 Proposed Method; 4.3.1 Recurrent Video Deblurring Networks; 4.3.2 Intra-Frame Iteration Model; 4.3.3 Regularization by Stochastic Training; 4.4 Experiments; 4.4.1 Datasets; 4.4.2 Implementation details; 4.4.3 Comparisons on GOPRO [72] dataset; 4.4.4 Comparisons on [97] Dataset and Real Videos; 4.5 Conclusion). 5 Learning Loss Functions for Image Deblurring (5.1 Introduction; 5.2 Related Works; 5.3 Proposed Method; 5.3.1 Clean Images are Hard to Reblur; 5.3.2 Supervision from Reblurring Loss; 5.3.3 Test-time Adaptation by Self-Supervision; 5.4 Experiments; 5.4.1 Effect of Reblurring Loss; 5.4.2 Effect of Sharpness Preservation Loss; 5.4.3 Comparison with Other Perceptual Losses; 5.4.4 Effect of Test-time Adaptation; 5.4.5 Comparison with State-of-The-Art Methods; 5.4.6 Real World Image Deblurring; 5.4.7 Combining Reblurring Loss with Other Perceptual Losses; 5.4.8 Perception vs. Distortion Trade-Off; 5.4.9 Visual Comparison of Loss Function; 5.4.10 Implementation Details; 5.4.11 Determining Reblurring Module Size; 5.5 Conclusion). 6 Conclusion. Korean Abstract. Acknowledgements.
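    The coarse-to-fine strategy behind the multi-scale deblurring network can be sketched as a pyramid pipeline: deblur at the coarsest scale first, then refine each finer scale using the upsampled coarser estimate as guidance. Everything below is an illustrative assumption; the `deblur_stage` stand-in (a fixed sharpening filter blended with the coarse prior) only mimics the role of the learned sub-networks in the dissertation.

```python
import numpy as np

def downsample(img):
    """2x average pooling."""
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img):
    """Nearest-neighbor 2x upsampling."""
    return img.repeat(2, axis=0).repeat(2, axis=1)

def deblur_stage(blurry, prior):
    """Stand-in for one learned sub-network: a Laplacian sharpening filter
    blended with the upsampled coarser estimate."""
    lap = np.zeros_like(blurry)
    lap[1:-1, 1:-1] = (4 * blurry[1:-1, 1:-1] - blurry[:-2, 1:-1]
                       - blurry[2:, 1:-1] - blurry[1:-1, :-2] - blurry[1:-1, 2:])
    return 0.5 * (blurry + 0.5 * lap) + 0.5 * prior

def multiscale_deblur(blurry, levels=3):
    """Coarse-to-fine: build a pyramid, start at the coarsest scale, and
    propagate each estimate up as guidance for the next finer scale."""
    pyramid = [blurry]
    for _ in range(levels - 1):
        pyramid.append(downsample(pyramid[-1]))
    estimate = pyramid[-1]
    for lvl in reversed(range(levels - 1)):
        estimate = deblur_stage(pyramid[lvl], upsample(estimate))
    return estimate
```

    The benefit claimed for this structure is that large blur is easier to remove at coarse resolution, leaving only small residual blur for the finer stages.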

    Modeling the Performance of Image Restoration From Motion Blur


    Optimization techniques for computationally expensive rendering algorithms

    Realistic rendering in computer graphics simulates the interactions of light and surfaces. While many accurate models for surface reflection and lighting, including solid surfaces and participating media, have been described, most of them rely on intensive computation. Common practices such as adding constraints and assumptions can increase performance; however, they may compromise the quality of the resulting images or the variety of phenomena that can be accurately represented. In this thesis, we will focus on rendering methods that require large amounts of computational resources. Our intention is to consider several conceptually different approaches capable of reducing these requirements with only limited implications for the quality of the results. The first part of this work will study rendering of time-varying participating media. Examples of this type of matter are smoke, optically thick gases, and any material that, unlike a vacuum, scatters and absorbs the light that travels through it. We will focus on a subset of algorithms that approximate realistic illumination using images of real-world scenes. Starting from the traditional ray marching algorithm, we will suggest and implement different optimizations that will allow performing the computation at interactive frame rates. This thesis will also analyze two different aspects of the generation of anti-aliased images: one targeted at the rendering of screen-space anti-aliased images and the reduction of the artifacts generated in rasterized lines and edges. We expect to describe an implementation that, working as a post-process, is efficient enough to be added to existing rendering pipelines with reduced performance impact. A third method will take advantage of the limitations of the human visual system (HVS) to reduce the resources required to render temporally anti-aliased images. While film and digital cameras naturally produce motion blur, rendering pipelines need to simulate it explicitly. This process is known to be one of the most important burdens for every rendering pipeline. Motivated by this, we plan to run a series of psychophysical experiments targeted at identifying groups of motion-blurred images that are perceptually equivalent. A possible outcome is the proposal of criteria that may lead to reductions of the rendering budget.
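    The traditional ray marching algorithm mentioned above can be sketched in a few lines for a purely absorbing and emitting medium: step along the ray, attenuate by Beer-Lambert transmittance, and accumulate the emitted contribution of each segment. This is a minimal sketch under stated assumptions (no scattering, constant emission `light`), not the thesis's optimized implementation.

```python
import numpy as np

def ray_march(density, sigma_t, step, light=1.0):
    """March along a ray through an absorbing/emitting medium.

    density: medium samples along the ray; transmittance follows the
    Beer-Lambert law T = exp(-sigma_t * integral of density). Each segment
    emits light attenuated by the transmittance accumulated in front of it.
    """
    radiance, transmittance = 0.0, 1.0
    for d in density:
        absorption = np.exp(-sigma_t * d * step)
        # emitted contribution of this segment, seen through the medium so far
        radiance += transmittance * (1.0 - absorption) * light
        transmittance *= absorption
    return radiance, transmittance
```

    Because the per-segment terms telescope, the accumulated radiance for a uniform emitter equals `1 - transmittance`; the optimizations studied in the thesis target the cost of evaluating many such samples per pixel at interactive rates.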

    Flexible algae photo-bioreactors in water waves.

    Master of Science in Civil Engineering. University of KwaZulu-Natal, Durban, 2016. Abstract available in PDF file.