12 research outputs found
Recommended from our members
Computational Cameras: Approaches, Benefits and Limits
A computational camera uses a combination of optics and software to produce images that cannot be taken with traditional cameras. In the last decade, computational imaging has emerged as a vibrant field of research. A wide variety of computational cameras have been demonstrated - some designed to achieve new imaging functionalities and others to reduce the complexity of traditional imaging. In this article, we describe how computational cameras have evolved and present a taxonomy for the technical approaches they use. We explore the benefits and limits of computational imaging, and describe how it is related to the adjacent and overlapping fields of digital imaging, computational photography, and computational image sensors.
Multi-Beam Scan Analysis with a Clinical LINAC for High Resolution Cherenkov-Excited Molecular Luminescence Imaging in Tissue.
Cherenkov-excited luminescence scanned imaging (CELSI) is achieved with external beam radiotherapy to map out molecular luminescence intensity or lifetime in tissue. Just as in fluorescence microscopy, the choice of excitation geometry can affect the imaging time, spatial resolution, and contrast recovered. In this study, the use of spatially patterned illumination was systematically studied by comparing scan shapes, starting with line-scan and block patterns and increasing from single beams to multiple parallel beams and then to clinically used treatment plans for radiation therapy. The image recovery was improved by a spatial-temporal modulation-demodulation method, which used the ability to capture simultaneous images of the excitation Cherenkov beam shape to deconvolve the CELSI images. Experimental studies used the multi-leaf collimator on a clinical linear accelerator (LINAC) to create the scanning patterns, and image resolution and contrast recovery were tested at different depths of tissue phantom material. As hypothesized, the smallest illumination squares achieved optimal resolution, but at the cost of lower signal and slower imaging time. Larger excitation blocks provided superior signal, but at the cost of increased radiation dose and lower resolution. Increasing the scan beams to multiple block patterns improved the performance in terms of image fidelity, lower radiation dose, and faster acquisition. The spatial resolution was mostly dependent on pixel area, with an optimized side length near 38 mm and a beam scan pitch of P = 0.33, and the achievable imaging depth was increased from 14 mm to 18 mm with sufficient resolving power for 1 mm test objects. As a proof of concept, in vivo mouse tumor imaging was performed to show 3D rendering and quantification of tissue pO2, with values of 5.6 mmHg in a tumor and 77 mmHg in normal tissue.
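The modulation-demodulation step above hinges on deconvolving the luminescence image with the simultaneously captured excitation-beam shape. As an illustrative sketch only (not the authors' implementation), a frequency-domain Wiener filter that uses the measured beam shape as the effective PSF could look like the following; the grid size, the 5x5 "beam block", and the noise-to-signal ratio are assumed values:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, nsr=1e-2):
    """Frequency-domain Wiener deconvolution.

    blurred : observed luminescence image (2-D array)
    psf     : measured excitation beam shape on the same grid
    nsr     : assumed noise-to-signal power ratio (regularisation)
    """
    H = np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))  # beam shape as OTF
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Synthetic check: blur a point-like target with a square "beam block",
# then restore it using the known beam shape.
img = np.zeros((64, 64)); img[32, 32] = 1.0
psf = np.zeros((64, 64)); psf[30:35, 30:35] = 1.0   # assumed 5x5 beam
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf / psf.sum()))))
restored = wiener_deconvolve(blurred, psf, nsr=1e-4)
```

The restored image concentrates the smeared energy back into a peak at the original target location.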
High-Speed Probe Card Analysis Using Real-time Machine Vision and Image Restoration Technique
Demand has been increasing for wafer-level test techniques that evaluate the functionality and performance of wafer chips before packaging, as integrated circuits become more sophisticated and smaller in size. Through wafer-level testing, semiconductor manufacturers can avoid unnecessary packaging costs and obtain early feedback on the overall status of the chip fabrication process. A probe card is a module of the wafer-level tester that can detect chip defects by evaluating the electrical characteristics of the integrated circuits (ICs). A probe card analyzer is widely used to detect potential probe card failures, which would otherwise add unnecessary manufacturing expense in the packaging process.
In this paper, a new probe card analysis strategy is proposed. The main idea is to perform the vision-based inspection on-the-fly while the camera is continuously moving. To do so, the position measurement from the encoder is first synchronized with the image data, which is captured by a controlled trigger signal under a real-time setting. Because capturing images from a moving camera blurs them, a simple deblurring technique is employed to restore the original still images from the blurred ones. The main ideas are demonstrated using an experimental test bed and a commercial probe card. The test bed comprises a micro machine-vision system and a real-time controller, and a low-cost configuration for it is proposed. Compared to the existing stop-and-go approach, the proposed technique can substantially enhance inspection speed without additional cost for major hardware changes.
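Encoder-synchronized triggering makes the motion blur predictable: stage speed times exposure time gives the smear length in pixels, so a linear motion PSF can be constructed and inverted. A minimal sketch with a Wiener inverse filter follows; the scan speed, exposure time, and pixel size are assumed numbers, not values from the paper:

```python
import numpy as np

# Assumed setup parameters (illustrative, not from the paper).
SPEED_UM_PER_S = 20_000.0   # 20 mm/s continuous scan speed
EXPOSURE_S = 0.001          # 1 ms encoder-triggered exposure
PIXEL_UM = 5.0              # object-space pixel size

def motion_psf(length_px, size):
    """1-D horizontal motion-blur kernel embedded in a size x size image."""
    psf = np.zeros((size, size))
    n = max(int(round(length_px)), 1)
    psf[size // 2, size // 2 - n // 2 : size // 2 - n // 2 + n] = 1.0 / n
    return psf

def wiener(blurred, psf, nsr=1e-3):
    """Wiener inverse filter with a known motion PSF."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    return np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(blurred) /
                                (np.abs(H) ** 2 + nsr)))

blur_px = SPEED_UM_PER_S * EXPOSURE_S / PIXEL_UM    # 4 px of smear
img = np.zeros((64, 64)); img[32, 28:36] = 1.0      # a probe-tip-like bar
psf = motion_psf(blur_px, 64)
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener(blurred, psf)
```

Because the blur length comes directly from the synchronized encoder data, the same PSF used to model the smear can be used to remove it.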
A DFD-Based Distance Sensor System with a Compact Liquid-Crystal Tunable Aperture
Thesis (Ph.D.) -- Seoul National University Graduate School, College of Engineering, Department of Electrical and Computer Engineering, August 2020.
Autonomous-driving technologies are developing rapidly, and distance sensing is one of their foundations. LiDAR, radar, stereo vision, and algorithm-based monocular cameras are currently used for distance sensing, but these sensors are bulky or expensive, so they are not yet widely deployed in production vehicles. To overcome these problems, this study develops a distance sensor that provides image and distance information at the same time, realized by simply inserting a tunable aperture in front of a camera the same size as a dashboard camera.
The distance sensor consists of a tunable aperture with f-numbers of f/1.8 and f/4.0 and a camera module with a focal length of 8 mm, a field of view of 45°, and FHD resolution. When a driving voltage is applied, the size of the tunable aperture changes with the voltage. The camera module assembled with the tunable aperture can therefore obtain two images of the same scene with two different depths of field, and the distance information can be extracted from the difference in defocus between them. This depth-of-field difference between the two images increases linearly with distance, which was confirmed through simulation and experiment. Deep-learning algorithms, such as a detector algorithm and a depth-map algorithm, can further increase the distance accuracy. When the detector algorithm was applied, the average error was 0.826 m over a 50 m range with the vehicle stopped during the day. With the depth-map algorithm, the error for the object region over a 70 m range during the day was 0.619 m in the stationary case and 1.000 m while driving; an image taken at night had an error of 5.470 m for the object region over a 40 m range. Because the LCD-based tunable aperture has a low operating voltage of 2.64 V and a fast response time of 10.59 ms, the distance sensor system can measure distances in real time at 30 fps with low power.
The distance sensor improves distance accuracy by using two aperture states instead of just one in a single camera. The sensor is the same size as a dashboard camera with a 1/2.7-inch image sensor, thanks to a small tunable aperture of 10 × 10 × 1.8 mm³ realized with semiconductor and display fabrication processes, which reduces the overall sensor size and improves fabrication accuracy. The price is much lower than that of existing distance sensors, and an FHD camera is used to improve image quality. Since the tunable aperture operates in a single layer, it reduces the optical aberration that results from misalignment, and with no moving mechanical parts the sensor is highly reliable. Unlike distance sensors based on other apertures, such as coded apertures, color-filter apertures, and dual apertures using visible and infrared filters, a clear image is obtained without any recovery process.
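The measurement principle above, a defocus difference between the f/1.8 and f/4.0 images that grows linearly with distance, can be sketched as a focus measure plus a linear calibration. The Laplacian focus measure and the synthetic calibration numbers below are illustrative assumptions, not the thesis' actual algorithm:

```python
import numpy as np

def sharpness(patch):
    """Mean squared Laplacian response: a simple local defocus measure."""
    lap = (-4 * patch[1:-1, 1:-1] + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(np.mean(lap ** 2))

# Synthetic calibration: the defocus-difference measure between the two
# aperture images is assumed to grow linearly with distance, matching the
# linearity observed in the thesis. The slope/offset values are made up.
known_d = np.array([10.0, 20.0, 30.0, 40.0, 50.0])   # calibration distances (m)
deltas = 0.02 * known_d + 0.1                        # assumed linear response
a, b = np.polyfit(deltas, known_d, 1)                # fit d = a*delta + b

# Predict the distance of an unseen target at 25 m from its measured delta.
est = a * (0.02 * 25.0 + 0.1) + b
```

In practice the per-patch `sharpness` difference between the two aperture images would supply `delta`, and the linear fit converts it into metres.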
The distance sensor can be applied to autonomous vehicles for collision-avoidance warning, blind-spot detection, pedestrian detection, and parking assistance. It is also suitable for other applications that require distance measurement, such as robots, drones, mobile cameras, the gaming industry, and the Internet of Things.
Filtering of image sequences: on line edge detection and motion reconstruction
The thesis concerns the processing of image sequences of a scene in which one or more (possibly deformable) objects move, acquired by a suitable measurement instrument. Because of the measurement process, the images are corrupted by a level of degradation. A mathematical formalization is given of the set of images considered, of the set of admissible motions, and of the degradation introduced by the instrument. Each image of the acquired sequence is related to all the others through the law of motion of the scene. The idea proposed in this thesis is to exploit this relation among the different images of the sequence to reconstruct quantities of interest that characterize the scene.
When the motion is known, the goal is to reconstruct the edges of the initial image (which can then be propagated through the same law of motion, so as to reconstruct the edges of any image of the sequence), estimating the amplitude of the gray-level jump and its location.
In the dual case, the layout of the edges in the initial image is instead assumed known, together with a stochastic model describing the motion; the objective is then to estimate the parameters that characterize this model.
Finally, the results of applying the two methodologies above to real data, obtained in the biomedical field from an instrument called a pupillometer, are presented. These results are of considerable interest with a view to using this instrument for diagnostic purposes.
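In its simplest 1-D form, the known-motion case reduces to estimating the location and amplitude of a gray-level jump from a noisy profile. The sketch below is a naive stand-in for the thesis' filtering approach, with made-up signal levels and noise:

```python
import numpy as np

def detect_step(profile):
    """Locate a single gray-level step in a 1-D profile and estimate its
    jump amplitude. Illustrative only: the thesis uses a filtering
    framework, not this simple derivative heuristic."""
    d = np.diff(profile)
    k = int(np.argmax(np.abs(d)))                 # edge lies between k and k+1
    amplitude = profile[k + 1:].mean() - profile[:k + 1].mean()
    return k, amplitude

# Noisy synthetic profile: gray level 50 up to index 59, then 120 (jump of 70).
rng = np.random.default_rng(0)
x = np.where(np.arange(100) < 60, 50.0, 120.0) + rng.normal(0, 2.0, 100)
k, amp = detect_step(x)
```

Averaging the samples on each side of the detected edge suppresses the per-pixel noise when estimating the jump amplitude.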
Post-production of holoscopic 3D image
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It is a promising technique for creating a full-colour spatial image that exists in space. It uses a single lens aperture to record spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, which is the fundamental feature behind digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, a holoscopic 3D imaging system mimics the fly's-eye technique, in which the viewpoints are orthographic projections. This system enables a true 3D representation of a real scene in space, and thus offers richer spatial cues than stereoscopic 3D and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to get the right focus as camera resolution becomes ever higher. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing method, namely shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artifacts and unsatisfactory resolution in the final resulting image; for instance, the artifacts take the form of blocky and blurry pictures due to unmatched boundaries. An upsampling method is proposed that improves the resolution of the image resulting from the shift-and-integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full-resolution-with-blind method and to improve both the image quality and the resolution of the final rendered image. A novel 3D object-extraction method is proposed that takes advantage of disparity, and it is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map from the disparity information, and the desired object is then extracted. In addition, a 3D image-conversion algorithm is proposed for the generation of stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
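Of the refocusing methods named above, shift and integration is the simplest: each elemental (viewpoint) image is shifted in proportion to its position in the lens array and the stack is averaged, so objects at the matching disparity align and sharpen while everything else blurs. A minimal sketch on a synthetic point target follows; the array geometry and disparity are assumed values:

```python
import numpy as np

def shift_integrate(elemental, shift_px):
    """Shift-and-integration refocusing.

    elemental : (gy, gx, h, w) stack of viewpoint images
    shift_px  : per-index shift selecting the refocused depth plane
    """
    gy, gx, h, w = elemental.shape
    out = np.zeros((h, w))
    for iy in range(gy):
        for ix in range(gx):
            dy = int(round((iy - gy // 2) * shift_px))
            dx = int(round((ix - gx // 2) * shift_px))
            out += np.roll(elemental[iy, ix], (dy, dx), axis=(0, 1))
    return out / (gy * gx)

# Synthetic 3x3 viewpoint stack: a point target whose position moves by
# 2 px per view index (its disparity at some depth).
stack = np.zeros((3, 3, 16, 16))
for iy in range(3):
    for ix in range(3):
        stack[iy, ix, 8 + 2 * (iy - 1), 8 + 2 * (ix - 1)] = 1.0

refocused = shift_integrate(stack, -2)   # shift matched to the disparity
defocused = shift_integrate(stack, 0)    # mismatched plane: point spreads
```

With the matching shift, all nine copies of the point land on the same pixel and reinforce; with a mismatched shift they spread out, which is exactly the depth-selective focusing effect used in post-production.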