
    Image enhancement methods and applications in computational photography

    Computational photography is a rapidly developing, cutting-edge topic in applied optics, image sensors, and image processing that aims to go beyond the limitations of traditional photography. Its innovations allow the photographer not merely to take an image but, more importantly, to perform computations on the captured image data. Good examples include high dynamic range imaging, focus stacking, super-resolution, and motion deblurring. Although extensive work has explored image enhancement techniques in each subfield of computational photography, little attention has been given to simultaneously extending the depth of field and the dynamic range of a scene. In my dissertation, I present an algorithm that combines focus stacking and high dynamic range (HDR) imaging to produce an image with both greater depth of field (DOF) and greater dynamic range than any of the input images. I also investigate super-resolution image restoration from multiple images that are possibly degraded by large motion blur. The proposed algorithm combines the super-resolution problem and the blind image deblurring problem in a unified framework. The blur kernel for each input image is estimated separately, and I place no restrictions on the motion fields among images; that is, I estimate a dense motion field without simplifications such as parametric motion. While the proposed super-resolution method uses multiple regular images to enhance spatial resolution, single-image super-resolution is related to denoising or removing blur from a single captured image. My dissertation therefore also investigates space-varying point spread function (PSF) estimation and deblurring for a single image.
Regarding the PSF estimation, I place no restrictions on the type of blur or on how the blur varies spatially. Once the space-varying PSF is estimated, space-varying image deblurring is performed, which produces good results even in regions where the correct PSF is initially unclear. I also port these image enhancement methods to both the personal computer (PC) and Android platforms as computational photography applications.
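The focus-stacking half of such a pipeline can be illustrated with a minimal sketch (this is a generic baseline, not the dissertation's algorithm): pick, per pixel, the input frame with the strongest local Laplacian response, a standard sharpness measure.

```python
import numpy as np

def focus_stack(images):
    """Merge a focal stack by picking, per pixel, the frame whose local
    Laplacian magnitude (a simple sharpness measure) is largest."""
    stack = np.stack([np.asarray(im, float) for im in images])  # (n, h, w)
    lap = np.abs(
        np.roll(stack, 1, 1) + np.roll(stack, -1, 1) +
        np.roll(stack, 1, 2) + np.roll(stack, -1, 2) - 4 * stack
    )
    best = np.argmax(lap, axis=0)          # index of sharpest frame per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

A practical implementation would smooth the sharpness map before the arg-max to avoid speckled selection, but the per-pixel selection above captures the idea.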

    Multidimensional image enhancement from a set of unregistered differently exposed images

    If multiple images of a scene are available instead of a single image, we can use the additional information conveyed by the set of images to generate a higher-quality image. This can be done along multiple dimensions. Super-resolution algorithms use a set of shifted and rotated low-resolution images to create a high-resolution image. High dynamic range imaging techniques combine images with different exposure times to generate an image with a higher dynamic range. In this paper, we present a novel method to combine both techniques and construct a high-resolution, high dynamic range image from a set of shifted images with varying exposure times. We first estimate the camera response function and convert each of the input images to an exposure-invariant space. Next, we estimate the motion between the input images. Finally, we reconstruct a high-resolution, high dynamic range image using interpolation from the non-uniformly sampled pixels. Applications of such an approach can be found in various domains, such as surveillance cameras and consumer digital cameras.
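The conversion to an exposure-invariant space can be sketched as follows, assuming the camera response has already been estimated (a linear response is used as a stand-in here, a simplifying assumption):

```python
import numpy as np

def to_radiance(images, exposure_times, inv_response=None):
    """Map differently exposed images into an exposure-invariant space.
    A pixel value Z relates to scene radiance E via Z = f(E * t); with the
    inverse response f^-1 known, E = f^-1(Z) / t is comparable across
    frames regardless of exposure time t."""
    if inv_response is None:
        inv_response = lambda z: z      # linear-sensor assumption
    return [inv_response(np.asarray(im, float)) / t
            for im, t in zip(images, exposure_times)]
```

Once all frames live in this space, motion estimation and non-uniform interpolation can operate on directly comparable values.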

    Deep Radio Interferometric Imaging with POLISH: DSA-2000 and weak lensing

    Radio interferometry allows astronomers to probe small spatial scales that are often inaccessible with single-dish instruments. However, recovering the radio sky from an interferometer is an ill-posed deconvolution problem that astronomers have worked on for half a century. More challenging still is achieving resolution below the array's diffraction limit, known as super-resolution imaging. To this end, we have developed a new learning-based approach for radio interferometric imaging, leveraging recent advances in the classical computer vision problems of single-image super-resolution (SISR) and deconvolution. We have developed and trained a high dynamic range residual neural network to learn the mapping between the dirty image and the true radio sky. We call this procedure POLISH, in contrast to the traditional CLEAN algorithm. The feed-forward nature of learning-based approaches like POLISH is critical for analyzing data from the upcoming Deep Synoptic Array (DSA-2000). We show that POLISH achieves super-resolution, and we demonstrate its ability to deconvolve real observations from the Very Large Array (VLA). Super-resolution on DSA-2000 will allow us to measure the shapes and orientations of several hundred million star-forming radio galaxies (SFGs), making it a powerful cosmological weak lensing survey and probe of dark energy. We forecast its ability to constrain the lensing power spectrum, finding that it will be complementary to next-generation optical surveys such as Euclid.
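As a point of contrast with the learned mapping, the classical linear approach to this ill-posed deconvolution can be sketched with a Wiener filter (a baseline, not the POLISH network):

```python
import numpy as np

def wiener_deconvolve(dirty, psf, snr=100.0):
    """Classical Wiener deconvolution of a dirty image by the array PSF.
    Regularization (1/snr) keeps the division stable where the PSF's
    transfer function is small -- the root of the ill-posedness."""
    H = np.fft.fft2(np.fft.ifftshift(psf), dirty.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # Wiener filter
    return np.real(np.fft.ifft2(np.fft.fft2(dirty) * W))
```

Spatial frequencies the PSF suppresses cannot be recovered by a linear filter, which is exactly the gap that learned priors such as POLISH aim to fill.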

    Adaptive foveated single-pixel imaging with dynamic super-sampling

    As an alternative to conventional multi-pixel cameras, single-pixel cameras enable images to be recorded using a single detector that measures the correlations between the scene and a set of patterns. However, to fully sample a scene in this way requires at least as many correlation measurements as there are pixels in the reconstructed image. Therefore, single-pixel imaging systems typically exhibit low frame rates. To mitigate this, a range of compressive sensing techniques have been developed which rely on a priori knowledge of the scene to reconstruct images from an under-sampled set of measurements. In this work we take a different approach and adopt a strategy inspired by the foveated vision systems found in the animal kingdom: a framework that exploits the spatio-temporal redundancy present in many dynamic scenes. In our single-pixel imaging system a high-resolution foveal region follows motion within the scene, but unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. Using this approach we demonstrate a four-fold reduction in the time taken to record the detail of rapidly evolving features, whilst simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This tiered super-sampling technique enables the reconstruction of video streams in which both the resolution and the effective exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The methods described here can complement existing compressive sensing approaches and may be applied to enhance a variety of computational imagers that rely on sequential correlation measurements.
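The core measurement model is a simple inner product between the scene and each pattern; a fully sampled reconstruction with orthogonal (Hadamard-type, ±1) patterns can be sketched as:

```python
import numpy as np

def single_pixel_reconstruct(scene, patterns):
    """Simulate single-pixel imaging: each measurement is the correlation
    (inner product) of the scene with one +/-1 pattern; with a complete
    orthogonal pattern set, the scene is recovered as the measurement-
    weighted sum of the patterns."""
    flat = scene.ravel()
    meas = np.array([p.ravel() @ flat for p in patterns])  # one scalar each
    rec = sum(m * p for m, p in zip(meas, patterns))
    return rec / len(patterns)      # normalization for an N-pattern Hadamard set
```

Foveated sampling, as in the paper, would instead spend most measurements on patterns concentrated in the region of interest rather than on a complete set.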

    Image alignment using a feature blending network, with applications to high dynamic range imaging and video super-resolution

    Doctoral dissertation -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020. Advisor: Nam Ik Cho. This dissertation presents a deep end-to-end network for high dynamic range (HDR) imaging of dynamic scenes with background and foreground motions. Generating an HDR image from a sequence of multi-exposure images is challenging when the images are misaligned because they were taken in a dynamic situation. Hence, recent methods first align the multi-exposure images to the reference using patch matching, optical flow, homography transformation, or an attention module before merging. In this dissertation, a deep network that synthesizes the aligned images by blending the information from the multi-exposure images is proposed, because explicitly aligning photos with different exposures is inherently difficult. Specifically, the proposed network generates under/over-exposed images that are structurally aligned to the reference by blending all the information from the dynamic multi-exposure images. The primary idea is that blending two images in the deep-feature domain is effective for synthesizing multi-exposure images that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The proposed alignment network consists of a two-way encoder that extracts features from the two images separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned images. The network is shown to generate well-aligned images over a wide range of exposure differences and thus can be used effectively for HDR imaging of dynamic scenes. Moreover, by adding a simple merging network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. 
This dissertation also presents a deep end-to-end network for video super-resolution (VSR) of frames with motion. Reconstructing a high-resolution frame from a sequence of adjacent frames is challenging when the frames are misaligned. Hence, recent methods first align the adjacent frames to the reference using optical flow or a spatial transformer network (STN). In this dissertation, a deep network that synthesizes the aligned frames by blending the information from adjacent frames is proposed, because explicitly aligning frames is inherently difficult. Specifically, the proposed network generates adjacent frames that are structurally aligned to the reference by blending all the information from the neighboring frames. As before, the primary idea is that blending two images in the deep-feature domain is effective for synthesizing frames that are structurally aligned to the reference, resulting in better-aligned images than pixel-domain blending or geometric transformation methods. The alignment network again consists of a two-way encoder that extracts features from the two frames separately, several convolution layers that blend the deep features, and a decoder that constructs the aligned frames. The network is shown to align adjacent frames very well and thus can be used effectively for VSR. Moreover, by adding a simple reconstruction network after the alignment network and training the overall system end-to-end, a performance gain over recent state-of-the-art methods is obtained. In addition to the individual HDR imaging and VSR networks, this dissertation presents a deep end-to-end network for joint HDR-SR of dynamic scenes with background and foreground motions. The proposed HDR imaging and VSR networks enhance the dynamic range and the resolution of images, respectively; however, both can be enhanced simultaneously by a single network. 
In this dissertation, a network with the same structure as the proposed VSR network is used for this joint task. It is shown to reconstruct final results with both higher dynamic range and higher resolution, and it yields qualitatively and quantitatively better results than several methods assembled from existing HDR imaging and VSR networks.
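The merging step that follows alignment can be illustrated with a classical weighted-average baseline (the dissertation replaces this hand-crafted merge with a learned merging network); pixel values are assumed normalized to [0, 1] and the sensor linear:

```python
import numpy as np

def merge_hdr(aligned, exposure_times):
    """Classical weighted HDR merge of aligned LDR frames.
    A triangle weight favors mid-tone (well-exposed) pixels; each frame is
    mapped to radiance by dividing by its exposure time before averaging."""
    acc = np.zeros_like(np.asarray(aligned[0], float))
    wsum = np.zeros_like(acc)
    for im, t in zip(aligned, exposure_times):
        im = np.asarray(im, float)
        w = 1.0 - np.abs(2.0 * im - 1.0)   # triangle weight, peak at 0.5
        acc += w * im / t
        wsum += w
    return acc / np.maximum(wsum, 1e-8)
```

In the learned version, the network decides how to weight and combine the aligned exposures instead of using this fixed heuristic.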

    Acoustical structured illumination for super-resolution ultrasound imaging.

    Structured illumination microscopy is an optical method to increase the spatial resolution of wide-field fluorescence imaging beyond the diffraction limit by applying a spatially structured illumination light. Here, we extend this concept to facilitate super-resolution ultrasound imaging by manipulating the transmitted sound field to encode the high spatial frequencies into the observed image through aliasing. Post-processing is applied to precisely shift the spectral components to their proper positions in k-space and effectively double the spatial resolution of the reconstructed image compared to one-way focusing. The method has broad application, including the detection of small lesions for early cancer diagnosis, improving the detection of the borders of organs and tumors, and enhancing visualization of vascular features. The method can be implemented with conventional ultrasound systems, without the need for additional components. The resulting image enhancement is demonstrated with both test objects and ex vivo rat metacarpals and phalanges.
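The frequency-mixing principle behind structured illumination can be shown in one dimension: multiplying the object by a cosine illumination of spatial frequency k0 shifts spectral components by ±k0, folding otherwise-inaccessible high frequencies into the detection passband.

```python
import numpy as np

def modulated_spectrum(signal, k0):
    """Multiply a 1-D object by a cosine illumination of spatial frequency
    k0 (in cycles per record length) and return the resulting spectrum.
    The object's spectral peaks appear shifted by +/-k0."""
    n = len(signal)
    x = np.arange(n)
    illum = np.cos(2 * np.pi * k0 * x / n)
    return np.fft.fft(signal * illum)
```

In the ultrasound method, post-processing then relocates these shifted components back to their true positions in k-space, widening the effective passband.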

    Computational localization microscopy with extended axial range

    A new single-aperture 3D particle-localization and tracking technique is presented that increases the depth range by more than an order of magnitude without compromising optical resolution or throughput. We exploit the extended depth range and depth-dependent translation of an Airy-beam PSF for 3D localization over an extended volume in a single snapshot. The technique is applicable to all bright-field and fluorescence modalities for particle localization and tracking, from super-resolution microscopy through to the tracking of fluorescent beads and endogenous particles within cells. We demonstrate and validate its application to real-time 3D velocity imaging of fluid flow in capillaries using fluorescent tracer beads. An axial localization precision of 50 nm was obtained over a depth range of 120 μm using a 0.4 NA, 20× microscope objective. We believe this to be the highest ratio of axial range to precision reported to date.
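The localization principle can be sketched as inverting a monotonic calibration curve that maps axial position to the measured lateral translation of the Airy-beam PSF (the parabolic calibration constants below are hypothetical, for illustration only):

```python
import numpy as np

def depth_from_shift(shift, calib_z, calib_shift):
    """Estimate axial position from the measured lateral translation of an
    Airy-beam PSF by inverting a monotonic calibration curve (here via
    linear interpolation between calibration points)."""
    return np.interp(shift, calib_shift, calib_z)
```

The precision of the recovered depth then depends on how accurately the lateral shift can be measured relative to the slope of the calibration curve.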

    Complementarity of PALM and SOFI for super-resolution live cell imaging of focal adhesions

    Live cell imaging of focal adhesions requires a sufficiently high temporal resolution, which remains a challenging task for super-resolution microscopy. We have addressed this important issue by combining photo-activated localization microscopy (PALM) with super-resolution optical fluctuation imaging (SOFI). Using simulations and fixed cell focal adhesion images, we investigated the complementarity between PALM and SOFI in terms of spatial and temporal resolution. This PALM-SOFI framework was used to image focal adhesions in living cells, while obtaining a temporal resolution below 10 s. We visualized the dynamics of focal adhesions and revealed local mean velocities around 190 nm per minute. The complementarity of PALM and SOFI was assessed in detail with a methodology that integrates a quantitative resolution and signal-to-noise metric. This PALM and SOFI concept provides an enlarged quantitative imaging framework, allowing unprecedented functional exploration of focal adhesions through the estimation of molecular parameters such as the fluorophore density and the photo-activation and photo-switching rates.
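The SOFI half of the framework can be sketched at second order: the temporal variance (second cumulant) of each pixel's intensity trace over a stack of frames with independently blinking emitters yields an image in which static background cancels and the effective PSF is narrowed by roughly √2.

```python
import numpy as np

def sofi2(stack):
    """Second-order SOFI: the pixel-wise temporal variance of a frame
    stack. Uncorrelated static background contributes no variance, while
    blinking emitters appear with the PSF entering squared."""
    return np.var(np.asarray(stack, float), axis=0)
```

Higher-order SOFI uses higher cumulants for further sharpening, at the cost of noise amplification; PALM supplies the complementary single-molecule localizations.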