Depth and Deblurring from a Spectrally-varying Depth-of-Field
We propose modifying the aperture of a conventional color camera so that the effective aperture size for one color channel is smaller than that for the other two. This produces an image where different color channels have different depths of field, and from this we can computationally recover scene depth, reconstruct an all-focus image, and achieve synthetic refocusing, all from a single shot. These capabilities are enabled by a spatio-spectral image model that encodes the statistical relationship between gradient profiles across color channels. This approach substantially improves depth accuracy over alternative single-shot coded-aperture designs, and since it avoids introducing additional spatial distortions and is light efficient, it allows high-quality deblurring and lower exposure times. We demonstrate these benefits with comparisons on synthetic data, as well as results on images captured with a prototype lens.
Engineering and Applied Science
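The depth cue here is that, after the aperture modification, one color channel stays sharper than the other two, and the local sharpness ratio between channels varies with defocus. The sketch below is a minimal illustration of that idea only, not the authors' spatio-spectral gradient model: the box-blur stand-in for defocus, the window size, and the choice of green as the small-aperture (sharper) channel are all assumptions.

```python
import numpy as np

def box_blur(img, k):
    """Separable box blur of width k -- a crude stand-in for defocus blur."""
    if k <= 1:
        return img.copy()
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

def grad_energy(img, win=7):
    """Locally averaged sum of squared finite-difference gradients."""
    gy, gx = np.gradient(img)
    return box_blur(gx**2 + gy**2, win) + 1e-12  # avoid divide-by-zero

def relative_blur_map(rgb, sharp_ch=1):
    """Ratio of gradient energy in the small-aperture (sharper) channel to the
    mean energy of the other two channels. A ratio near 1 means all channels
    are equally sharp (near the focal plane); a larger ratio means the
    wide-aperture channels are more defocused, i.e. more depth."""
    energies = [grad_energy(rgb[..., c]) for c in range(3)]
    others = [e for c, e in enumerate(energies) if c != sharp_ch]
    return energies[sharp_ch] / (0.5 * (others[0] + others[1]))
```

On a synthetic image whose red and blue channels are blurred copies of a sharp green channel, the map averages well above 1; when all three channels are identical it is exactly 1 everywhere.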
On the appearance of translucent edges
Edges in images of translucent objects are very different from edges in images of opaque objects. The physical causes for these differences are hard to characterize analytically and are not well understood. This paper considers one class of translucency edges - those caused by a discontinuity in surface orientation - and describes the physical causes of their appearance. We simulate thousands of translucency edge profiles using many different scattering material parameters, and we explain the resulting variety of edge patterns by qualitatively analyzing light transport. We also discuss the existence of shape and material metamers, or combinations of distinct shape or material parameters that generate the same edge profile. This knowledge is relevant to visual inference tasks that involve translucent objects, such as shape or material estimation.
Funding: National Science Foundation (U.S.) (IIS 1161564, IIS 1012454, IIS 1212928, IIS 1011919); National Institutes of Health (U.S.) (R01-EY019262-02, R21-EY019741-02).
DFD-based distance measurement sensor system with a flat tunable aperture using liquid crystal
Thesis (Ph.D.) -- Seoul National University Graduate School, Dept. of Electrical and Computer Engineering, College of Engineering, August 2020. Advisor: 전국진.
Distance-sensing technology is developing rapidly along with autonomous driving. LiDAR, radar, stereo vision, and algorithm-based monocular cameras are in use today, but these sensors are bulky or expensive, so they have not yet been widely adopted in vehicles. To overcome these problems, this study developed a distance sensor that provides image and distance information at the same time by simply inserting a tunable aperture in front of a camera the same size as a dashboard camera, greatly reducing volume and cost.
The distance sensor of this study consists of a tunable aperture switching between f/1.8 and f/4.0 and a camera module with a focal length of 8 mm, a 45° field of view, and FHD resolution. When a driving voltage is applied, the size of the tunable aperture changes with the voltage, so the camera module can capture two images of the same scene with two different depths of field. The depth-of-field difference between the two images increases linearly with distance, which was confirmed through simulation and experiment, and the distance information is extracted from this difference. Deep-learning algorithms, a detector-based and a depth-map-based algorithm in this study, further improve the distance accuracy. With the detector-based algorithm, the average error was 0.826 m over a 50 m range with the vehicle stopped during the day. With the depth-map-based algorithm, the error over object regions was 0.619 m (stationary) and 1.000 m (driving) over a 70 m range during the day, and 5.470 m over a 40 m range in images taken at night. Because the tunable aperture is based on an LCD mechanism with a low operating voltage of 2.64 V and a fast response time of 10.59 ms, the distance sensor system measures distance in real time at 30 fps with low power consumption.
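In the standard thin-lens model, the geometric blur-circle diameter on the sensor grows with the defocus between the object and the focal plane, so capturing the same scene at f/1.8 and f/4.0 yields two blur diameters whose difference encodes distance. The sketch below uses the stated 8 mm focal length; the 2 m focus distance is an assumption, and this is the textbook geometric model, not the thesis's calibrated distance mapping.

```python
def blur_circle_mm(d_mm, f_mm=8.0, N=1.8, focus_mm=2000.0):
    """Geometric blur-circle diameter (mm) on the sensor for an object at
    distance d_mm, thin-lens model. focus_mm (2 m) is an assumed focus distance."""
    A = f_mm / N  # aperture (entrance-pupil) diameter
    return abs(A * f_mm * (d_mm - focus_mm) / (d_mm * (focus_mm - f_mm)))

def dfd_signal(d_mm):
    """Blur difference between the f/1.8 and f/4.0 captures of the same scene;
    this difference is the depth-from-defocus cue the sensor measures."""
    return blur_circle_mm(d_mm, N=1.8) - blur_circle_mm(d_mm, N=4.0)
```

The signal vanishes at the focal plane and grows monotonically with defocus, which is why two aperture sizes in one camera suffice to rank and then calibrate distances.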
The distance sensor improves distance accuracy by using two aperture sizes, rather than one, in a single camera. With a 1/2.7-inch image sensor and a flat tunable aperture of only 10 × 10 × 1.8 mm³, fabricated with semiconductor and display processes, the sensor is the size of a dashboard camera, and these processes reduce the overall sensor size and improve fabrication accuracy. The sensor is also much cheaper than existing distance sensors, and the FHD camera improves image quality. Because the tunable aperture operates in a single layer, it reduces the optical aberrations that result from misalignment, and with no moving mechanical parts it is highly reliable. Unlike distance sensors based on other apertures, such as coded apertures, color-filter apertures, and dual apertures with visible and infrared filters, it yields a clear image directly, without a recovery process, and its measurable distance range is larger, making it suitable for automotive use.
The distance sensor can be applied in autonomous vehicles for collision-avoidance warning, blind-spot detection, pedestrian detection and distance estimation, and parking assistance. It is also suitable for other applications that embed a compact camera and require distance measurement, such as robots, drones, mobile cameras, the gaming industry, and the Internet of Things.
Chapter 1. Introduction
1.1 Background of the research
1.2 Related work
1.2.1 Types of distance measurement methods
1.2.2 Flat tunable apertures of various types
1.3 Objectives of the research
Chapter 2. Tunable aperture based on the liquid-crystal display method
2.1 Operating principle of the tunable aperture
2.2 Design of the tunable aperture
2.3 Fabrication of the tunable aperture
2.4 Evaluation of the tunable aperture
Chapter 3. Distance measurement system
3.1 Principle of distance measurement
3.2 Selection of optical system parameters
3.3 Blur prediction through simulation
3.4 Development of the optical system
3.5 Single-substrate integration of the tunable aperture and lens
3.5.1 Research trends in wafer-level lenses
3.5.2 Design and experiments on a wafer-level concave lens
3.5.3 Integration process and results
3.6 Assembly
Chapter 4. Distance measurement experiments
4.1 Construction of the experimental environment
4.2 Image acquisition
4.3 Results and analysis
4.3.1 Analysis of results using the DFD algorithm
4.3.2 Analysis of results using deep learning
Chapter 5. Conclusion
References
Abstract (in English)
Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes
Digital imaging and image processing technologies have revolutionized the way in which
we capture, store, receive, view, utilize, and share images. In image-based applications,
through different processing stages (e.g., acquisition, compression, and transmission), images
are subjected to different types of distortions which degrade their visual quality. Image
Quality Assessment (IQA) attempts to use computational models to automatically evaluate
and estimate the image quality in accordance with subjective evaluations. Moreover, with
the fast development of computer vision techniques, it is important in practice to extract
and understand the information contained in blurred images or regions.
The work in this dissertation focuses on reduced-reference visual quality assessment of
images and textures, as well as perceptual-based spatially-varying blur detection.
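The dissertation's title points to features derived from a transform of gradient magnitudes. A generic sketch of such a reduced-reference pipeline, a gradient-magnitude map followed by block-DCT statistics as a compact feature vector, is given below; the block size, the choice of per-frequency standard deviation as the statistic, and the DCT itself are illustrative assumptions, not the dissertation's actual design.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix of size n x n."""
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0, :] = 1.0 / np.sqrt(n)
    return M

def gradient_magnitude(img):
    """Pointwise magnitude of finite-difference image gradients."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def rr_features(img, block=8):
    """Compact RR feature vector: per-frequency standard deviation of
    block-DCT coefficients of the gradient-magnitude map. Only this short
    vector (block*block numbers) would need to travel with the image."""
    gm = gradient_magnitude(img)
    h, w = gm.shape
    D = dct_matrix(block)
    coeffs = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coeffs.append(D @ gm[y:y + block, x:x + block] @ D.T)
    C = np.stack(coeffs)           # (n_blocks, block, block)
    return C.std(axis=0).ravel()   # one statistic per DCT frequency
```

A quality score would then compare the reference's feature vector against the one recomputed from the distorted image, e.g. by a vector distance; distortions such as blur shrink the gradient-magnitude statistics and move the features.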
A training-free low-cost Reduced-Reference IQA (RRIQA) method is proposed. The
proposed method requires a very small number of reduced-reference (RR) features. Extensive
experiments performed on different benchmark databases demonstrate that the proposed
RRIQA method delivers highly competitive performance compared with the
state-of-the-art RRIQA models for both natural and texture images.
In the context of texture, the effect of texture granularity on the quality of synthesized
textures is studied. Moreover, two RR objective visual quality assessment methods that
quantify the perceived quality of synthesized textures are proposed. Performance evaluations
on two synthesized texture databases demonstrate that the proposed RR metrics outperform
full-reference (FR), no-reference (NR), and RR state-of-the-art quality metrics in
predicting the perceived visual quality of the synthesized textures.
Last but not least, an effective approach to address the spatially-varying blur detection
problem from a single image without requiring any knowledge about the blur type, level,
or camera settings is proposed. Evaluations of the proposed approach on diverse
sets of blurry images with different blur types, levels, and content demonstrate that the
proposed algorithm performs favorably against the state-of-the-art methods qualitatively
and quantitatively.
Dissertation/Thesis. Doctoral Dissertation, Electrical Engineering, 201