11 research outputs found

    On the appearance of translucent edges

    Edges in images of translucent objects are very different from edges in images of opaque objects. The physical causes for these differences are hard to characterize analytically and are not well understood. This paper considers one class of translucency edges, those caused by a discontinuity in surface orientation, and describes the physical causes of their appearance. We simulate thousands of translucency edge profiles using many different scattering material parameters, and we explain the resulting variety of edge patterns by qualitatively analyzing light transport. We also discuss the existence of shape and material metamers, or combinations of distinct shape or material parameters that generate the same edge profile. This knowledge is relevant to visual inference tasks that involve translucent objects, such as shape or material estimation.
    Funding: National Science Foundation (U.S.) (IIS 1161564, IIS 1012454, IIS 1212928, IIS 1011919); National Institutes of Health (U.S.) (R01-EY019262-02, R21-EY019741-02).
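    The shape and material metamers discussed above can be made concrete with a small parameter-grid search. The Python sketch below is a toy, not the paper's physically based light-transport simulation: the one-dimensional profile model and its parameters (sigma_t, albedo) are hypothetical stand-ins, used only to show what it means for distinct parameter combinations to generate nearly the same edge profile.

```python
# Toy metamer search (NOT the paper's simulation): find distinct parameter
# settings whose synthetic 1-D edge profiles nearly coincide.
import itertools
import numpy as np

x = np.linspace(0.0, 1.0, 64)  # positions across the edge

def edge_profile(sigma_t: float, albedo: float) -> np.ndarray:
    # Hypothetical stand-in profile: exponential falloff scaled by albedo.
    return albedo * np.exp(-sigma_t * x)

params = [(s, a)
          for s in np.linspace(1.0, 8.0, 15)    # made-up extinction values
          for a in np.linspace(0.3, 1.0, 15)]   # made-up albedo values
profiles = {p: edge_profile(*p) for p in params}

tol = 1e-2  # tolerance under which two profiles count as indistinguishable
metamers = [(p, q) for p, q in itertools.combinations(params, 2)
            if np.mean(np.abs(profiles[p] - profiles[q])) < tol]
print(f"{len(metamers)} metameric pairs among {len(params)} settings")
```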

    ์•ก์ •์„ ์ด์šฉํ•œ ์†Œํ˜• ๊ฐ€๋ณ€์กฐ๋ฆฌ๊ฐœ๊ฐ€ ํƒ‘์žฌ๋œ DFD ๊ธฐ๋ฐ˜์˜ ๊ฑฐ๋ฆฌ ์ธก์ • ์„ผ์„œ ์‹œ์Šคํ…œ

    PhD dissertation, Department of Electrical and Computer Engineering, College of Engineering, Seoul National University, August 2020. Advisor: ์ „๊ตญ์ง„.
    Autonomous-driving technology is developing rapidly, and distance sensing is one of its foundational technologies. LiDAR, radar, stereo vision, and algorithm-based mono-vision cameras are in use today, but these sensors are bulky or expensive, so they are not yet widely deployed across ordinary vehicles. To address this, this study develops a distance sensor that provides image and distance information simultaneously, and that sharply reduces volume and cost, by simply inserting a tunable aperture in front of a compact camera the size of a dash cam. The sensor consists of a tunable aperture that switches between f/1.8 and f/4.0 and a camera module with a focal length of 8 mm, a field of view of 45°, and FHD resolution. When a driving voltage is applied, the size of the aperture changes with the voltage, so the camera module captures two images of the same scene with different depths of field. The depth-of-field difference between the two images increases linearly with distance, which was confirmed through simulation and measurement, and distance information can be extracted from this difference (depth from defocus, DFD). Applying deep-learning algorithms reduces the error further; this study used a detector-based and a depth-map-based algorithm. With the detector-based algorithm, the average error was 0.826 m over a 50 m range with the vehicle stationary in daytime. With the depth-map-based algorithm, the error in the object region over a 70 m daytime range was 0.619 m when stationary and 1.000 m while driving; video taken while driving at night showed an error of 5.470 m for the object region over a 40 m range. Because the tunable aperture uses a liquid-crystal display (LCD) mechanism, it achieves a low operating voltage of 2.64 V and a fast response time of 10.59 ms, and the complete sensor system measures distance in real time at 30 fps with low power. By using the tunable aperture, the sensor improves distance accuracy with a single camera, exploiting two images with different depths of field instead of one. Inserting the compact 10 × 10 × 1.8 mm³ aperture, fabricated with semiconductor and display process technologies, in front of a camera with a 1/2.7-inch image sensor miniaturizes the whole sensor and improves fabrication accuracy; the cost is far lower than that of existing distance sensors, and the FHD camera improves image quality. Because the aperture operates in a single layer, optical aberrations caused by misalignment are reduced, and the absence of mechanically moving parts makes the sensor reliable. Compared with other aperture-based DFD approaches, such as coded apertures, color-filter apertures, and dual apertures with visible and infrared filters, it offers a larger measurable range, making it suitable for automotive use, and it yields sharp images directly, without a post-processing recovery step. Applied to autonomous vehicles, the sensor can support collision-avoidance warning, blind-spot detection, pedestrian detection and ranging, and parking assistance; it is also suitable for other applications that embed a small camera and require distance measurement, such as robots, drones, mobile cameras, the gaming industry, and the Internet of Things.
    Table of contents: Chapter 1, Introduction: research background; prior work (types of distance measurement methods; compact tunable apertures of various types); research objectives. Chapter 2, Liquid-crystal-display tunable aperture: operating principle; design; fabrication. Chapter 3, Distance measurement system: principle of distance measurement; selection of optical-system parameters; blur prediction through simulation; development of the optics; single-substrate integration of the tunable aperture and lens (wafer-level lens research trends; design and experiments of a wafer-level concave lens; integration process and results); assembly. Chapter 4, Distance measurement experiments: experimental setup; image acquisition; results and analysis (with the DFD algorithm; with deep learning). Chapter 5, Conclusion. References. Abstract.
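    As background for the measurement principle above, the way two f-numbers encode distance can be sketched with the textbook thin-lens blur model. This is a generic relation, not the thesis's calibrated formula: the focus distance d_0 below is an assumed parameter, and the thesis's own mapping from depth-of-field difference to distance is established empirically.

```latex
% Generic thin-lens defocus model (an assumption, not the thesis's formula).
% A point at distance d, imaged by a lens of focal length f focused at d_0,
% with f-number N (aperture diameter A = f/N), has blur-circle diameter c:
\[
  c(d;N) = \frac{f^{2}}{N\,(d_{0}-f)}\,\frac{\lvert d-d_{0}\rvert}{d},
  \qquad
  \Delta c(d) = c(d;1.8) - c(d;4.0)
              = \frac{f^{2}}{d_{0}-f}\left(\frac{1}{1.8}-\frac{1}{4.0}\right)
                \frac{\lvert d-d_{0}\rvert}{d}.
\]
```

    For distances beyond the focus distance, Δc(d) grows monotonically with d, so a calibrated inverse lookup from the measured blur difference to distance is possible; this is the kind of relation the sensor exploits over its 40-70 m measurement ranges.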

    Visual Quality Assessment and Blur Detection Based on the Transform of Gradient Magnitudes

    Digital imaging and image processing technologies have revolutionized the way in which we capture, store, receive, view, utilize, and share images. In image-based applications, through different processing stages (e.g., acquisition, compression, and transmission), images are subjected to different types of distortions which degrade their visual quality. Image Quality Assessment (IQA) attempts to use computational models to automatically evaluate and estimate image quality in accordance with subjective evaluations. Moreover, with the fast development of computer vision techniques, it is important in practice to extract and understand the information contained in blurred images or regions. The work in this dissertation focuses on reduced-reference visual quality assessment of images and textures, as well as perceptual-based spatially-varying blur detection. A training-free, low-cost Reduced-Reference IQA (RRIQA) method is proposed that requires a very small number of reduced-reference (RR) features. Extensive experiments on different benchmark databases demonstrate that the proposed RRIQA method delivers highly competitive performance compared with state-of-the-art RRIQA models for both natural and texture images. In the context of texture, the effect of texture granularity on the quality of synthesized textures is studied, and two RR objective visual quality assessment methods that quantify the perceived quality of synthesized textures are proposed. Performance evaluations on two synthesized-texture databases demonstrate that the proposed RR metrics outperform state-of-the-art full-reference (FR), no-reference (NR), and RR quality metrics in predicting the perceived visual quality of the synthesized textures. Last but not least, an effective approach is proposed to address the spatially-varying blur detection problem from a single image without requiring any knowledge about the blur type, level, or camera settings. Evaluations of the proposed approach on diverse sets of blurry images with different blur types, levels, and content demonstrate that the proposed algorithm performs favorably against state-of-the-art methods, both qualitatively and quantitatively.
    Doctoral Dissertation, Electrical Engineering, 201
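    As a concrete illustration of why gradient magnitudes are informative for spatially-varying blur detection, the following minimal Python sketch computes a per-pixel sharpness map from the local energy of the gradient-magnitude field. This is a hypothetical toy, not the dissertation's algorithm (which operates on a transform of gradient magnitudes); the Sobel operator and the window size `win` are arbitrary choices.

```python
# Toy spatially-varying sharpness map (NOT the dissertation's method):
# blur suppresses strong, rapidly varying gradients, so the local
# variance of the gradient-magnitude field drops in blurred regions.
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def sharpness_map(gray: np.ndarray, win: int = 15) -> np.ndarray:
    """Return a map normalized to [0, 1]; low values suggest blur."""
    g = gray.astype(np.float64)
    gm = np.hypot(sobel(g, axis=1), sobel(g, axis=0))  # gradient magnitude
    mean = uniform_filter(gm, size=win)                # local mean
    var = uniform_filter(gm * gm, size=win) - mean**2  # local variance
    sharp = np.sqrt(np.clip(var, 0.0, None))           # local std. dev.
    return sharp / (sharp.max() + 1e-12)
```

    Thresholding such a map gives a crude blurred/sharp segmentation; the dissertation's contribution is a perceptually motivated measure that works across blur types and levels without knowledge of camera settings.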