
    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing topic in applied optics, image sensing and image processing that aims to go beyond the limitations of traditional photography. Its innovations allow the photographer not merely to capture an image, but also to perform computations on the captured image data. Good examples include high dynamic range (HDR) imaging, focus stacking, super-resolution and motion deblurring. Although extensive work has been done on image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and HDR imaging to produce an image with greater depth of field (DOF) and dynamic range than any of the input images. I also investigate super-resolution image restoration from multiple images that are possibly degraded by large motion blur. The proposed algorithm combines super-resolution and blind image deblurring in a unified framework. The blur kernel for each input image is estimated separately, and I place no restrictions on the motion fields among images; that is, I estimate dense motion fields without simplifications such as parametric motion. While the proposed method enhances spatial resolution from multiple regular images, single-image super-resolution is related to denoising or deblurring a single captured image. My dissertation therefore also investigates space-varying point spread function (PSF) estimation and image deblurring for a single image.
    Regarding PSF estimation, I place no restrictions on the type of blur or on how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even in regions where the correct PSF is initially unclear. I also port these image enhancement applications to both the personal computer (PC) and Android platforms as computational photography applications.
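The focus-stacking idea behind the dissertation's extended-DOF work can be illustrated with a minimal, hypothetical sketch (not the author's actual algorithm): measure per-pixel sharpness in each pre-aligned input and, at every pixel, keep the value from the sharpest input. Here sharpness is approximated by a 3x3 Laplacian response on grayscale images.

```python
import numpy as np

def laplacian_sharpness(img):
    """Per-pixel sharpness: absolute response of a 3x3 Laplacian
    (computed with wrap-around shifts for simplicity)."""
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return np.abs(lap)

def focus_stack(images):
    """Fuse pre-aligned grayscale images: at each pixel, take the value
    from the input image with the highest local sharpness."""
    stack = np.stack(images)                                  # (N, H, W)
    sharp = np.stack([laplacian_sharpness(i) for i in images])
    best = np.argmax(sharp, axis=0)                           # (H, W) index map
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A real implementation would smooth the sharpness maps and blend across depth boundaries rather than taking a hard per-pixel maximum, which otherwise produces seams where the in-focus region changes.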

    Extending depth of field and dynamic range from differently focused and exposed images

    WOS: 000371808500011
    Focus stacking and high dynamic range (HDR) imaging are two paradigms of computational photography. Focus stacking aims to produce an image with greater depth of field (DOF) from a set of images taken with different focus distances; HDR imaging aims to produce an image with higher dynamic range from a set of images taken with different exposure values. In this paper, we present an algorithm which combines focus stacking and HDR imaging in order to produce an image with both extended DOF and dynamic range from a set of differently focused and exposed images. The key step in our algorithm is focus stacking regardless of the differences in exposure values of input images. This step includes photometric and spatial registration of images, and image fusion to produce all-in-focus images. This is followed by HDR radiance estimation and tonemapping. We provide experimental results with real data to illustrate the algorithm. This work is supported in part by Texas Instruments.
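The HDR radiance estimation and tonemapping steps mentioned above can be sketched under simplifying assumptions (aligned inputs, linear camera response, known exposure times; the paper's actual pipeline is more involved). Each exposure is divided by its exposure time and the results are averaged with a hat weight that trusts mid-tone pixels most; a global Reinhard operator then compresses the radiance map back to display range.

```python
import numpy as np

def hdr_radiance(images, times):
    """Merge aligned exposures (values in [0, 1], linear response assumed)
    into a radiance map via hat-weighted averaging over exposure time."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # 1 at mid-tones, 0 at clipped ends
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)     # avoid division by zero

def reinhard_tonemap(radiance):
    """Simple global Reinhard operator mapping radiance to [0, 1)."""
    return radiance / (1.0 + radiance)
```

Both exposures of a static scene should map to the same radiance estimate; saturated pixels get near-zero weight and are effectively filled in from better-exposed frames.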

    Focus stacking in UAV-based inspection

    In UAV-based inspection, the most common problems are motion blur and focusing issues. These problems are often due to low-light environments, which can be compensated for to some extent with shorter exposure times by using larger apertures and more luminous lenses. Large apertures, however, lead to a limited depth of field, and a technique called focus stacking can be used to extend it. The main goal of this thesis was to determine the feasibility of focus stacking in UAV inspection, and a prototype system was designed and implemented. The acquisition system was built with an industrial camera and an electrical liquid polymer lens. The post-processing software was implemented with the OpenCV computer vision library, because a library offers the best access to low-level functionality. Three algorithms were chosen for image registration and three for image fusion. In addition, improvements to the speed and accuracy of the registration were examined. The implemented system was compared to equivalent open-source applications in each phase, and it outperformed those applications in overall performance. The most important goal was achieved: the system managed to improve the image data. A sequential acquisition system is not the best option on a moving platform, because perspective changes cause artifacts in image fusion. Also, the optical resolution of the liquid lens was not sufficient for high-resolution inspection imaging. However, the idea of focus stacking works, and the best solution for a mobile platform would be a multi-sensor system capturing the images simultaneously.
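The image registration step that the thesis examines can be illustrated with a minimal phase-correlation sketch (one classical registration method, not necessarily one of the three the thesis chose): the normalized cross-power spectrum of two images peaks at their relative translation, which suits small camera drift between frames in a focus stack.

```python
import numpy as np

def phase_correlate(ref, moved):
    """Estimate the integer (dy, dx) translation between two grayscale
    images from the peak of the normalized cross-power spectrum.
    Returns the shift to apply with np.roll to align `moved` with `ref`."""
    F1 = np.fft.fft2(ref)
    F2 = np.fft.fft2(moved)
    cross = F1 * np.conj(F2)
    cross /= np.maximum(np.abs(cross), 1e-12)   # keep phase, drop magnitude
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets past the midpoint wrap around to negative shifts (circular FFT).
    h, w = ref.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

Phase correlation only recovers translation; handling the perspective changes the thesis identifies on a moving platform would require a richer motion model (e.g. homography estimation from matched features).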