9 research outputs found
Holoscopic 3D imaging and display technology: Camera/processing/display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It has become an attractive technique for creating a full-colour 3D scene that exists in space. It uses a single camera aperture to record the spatial information of a real scene, together with a regularly spaced microlens array that follows the principle of the fly's-eye technique, creating a physical duplicate of the light field: a true 3D imaging technique.
While stereoscopic and multiview 3D imaging systems, which simulate the human-eye technique, are widely available in the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal and vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer-graphics rendering techniques are proposed that simplify the rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM raises the horizontal density from 44 3D pixels per inch (3D-PPI) to 176 3D-PPI and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
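The figures above amount to trading vertical 3D resolution for horizontal. A rough sketch of that arithmetic (the function name, the factor of 4 and the integer rounding are illustrative assumptions, not the thesis's actual mapping):

```python
def trade_resolution(h_3d: int, v_3d: int, factor: int) -> tuple:
    """Trade vertical 3D resolution for horizontal resolution.

    Illustrative only: regroup pixels under the lens array so each
    3D line carries `factor` times more horizontal 3D pixels at the
    cost of `factor` times fewer vertical ones.
    """
    return h_3d * factor, v_3d // factor

# Trading by a factor of 4 turns the traditional 341 x 1536 grid into
# approximately the 1365 x 384 layout reported for 4D-DSPM.
print(trade_resolution(341, 1536, 4))
```

The total 3D pixel count is (nearly) preserved; only the aspect of the grid changes, which is why the 44 to 176 3D-PPI gain appears in the horizontal direction alone.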
Adopting multiview pixel mapping for enhancing quality of holoscopic 3D scene in parallax barriers based holoscopic 3D displays
Autostereoscopic multiview 3D displays are robustly developed and widely available in commercial markets. Pixel mapping techniques have yielded excellent improvements, achieving acceptable 3D resolution with a balanced pixel aspect ratio in lens-array technology. This paper proposes adopting multiview pixel mapping to enhance the quality of the constructed holoscopic 3D scene in parallax-barrier-based holoscopic 3D displays, with strong results. Holoscopic imaging technology mimics the imaging system of insects, such as the fly, utilising a single camera equipped with a large number of microlenses to capture a scene, offering rich parallax information and an enhanced 3D feeling without the need to wear specific eyewear. In addition, pixel mapping and holoscopic 3D rendering tools are developed, including a custom-built holoscopic 3D display, to test the proposed method and carry out a like-for-like comparison. This work has been supported by the European Commission under Grant FP7-ICT-2009-4 (3DVIVANT). The authors wish to express their gratitude and thanks for the support given throughout the project.
Moiré-Free Full Parallax Holoscopic 3D Display based on Cross-Lenticular
Holoscopic imaging, also known as integral imaging, is a promising 3D solution that mimics the imaging system of insects, such as the fly, utilising a single camera equipped with a microlens array to capture a scene, offering rich parallax information and an enhanced 3D feeling without the need to wear specific eyewear. Recently, initial developments were made in designing a full-parallax holoscopic 3D display using parallax barriers, which suffers from low light throughput, so the constructed 3D scene is rather dim. A first attempt was also made at designing an omnidirectional holoscopic 3D display using cross-lenticulars, which introduces a moiré effect. This paper proposes and presents a moiré-free full-parallax holoscopic 3D display that offers omnidirectional motion parallax and complete 3D depth.
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space on a different plane (convergence). Holoscopy is a 3D technology that targets overcoming these limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, as no constraints are placed on the setting, and algorithms that are invariant to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting). Moreover, the techniques should be able to estimate depth information from both types of holoscopic 3D image, i.e. unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on the automation of thresholding techniques and cue identification for the development of robust algorithms. A depth-through-disparity feature analysis method has been built: the existing correlation between pixels at one microlens pitch has been exploited to extract the viewpoint images (VPIs), and the corresponding displacement among the VPIs has been exploited to estimate the depth map by setting and extracting reliable sets of local features. Feature-based point and feature-based edge detection are two novel automatic thresholding techniques for detecting and extracting features used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the generalisation, speed and quality of the depth estimation. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration is introduced as a novel interpolation technique to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), meaning that the holoscopic 3D image can be converted into a multiview 3D image pixel format. Both depth accuracy and fast execution times have been achieved, improving the 3D depth map. For a 3D object to be recognised, the related foreground regions and depth map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
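The depth-through-disparity step relies on regrouping pixels that sit at the same offset under every microlens into viewpoint images. A minimal sketch of that extraction for a unidirectional (1D lens) image, assuming an integer lens pitch in pixels (function name and data layout are hypothetical):

```python
def extract_vpi(image, pitch, k):
    """Extract viewpoint image k from a unidirectional holoscopic image.

    Each microlens covers `pitch` adjacent columns; taking the k-th
    column under every lens gathers all rays sharing one direction,
    i.e. one orthographic viewpoint image.
    """
    if not 0 <= k < pitch:
        raise ValueError("viewpoint index must lie within the lens pitch")
    return [row[k::pitch] for row in image]

# A 1 x 6 image with a 3-pixel pitch holds two microlens images;
# viewpoint 1 collects the middle pixel under each lens.
print(extract_vpi([[10, 11, 12, 20, 21, 22]], 3, 1))  # [[11, 21]]
```

The displacement of the same scene point between two such VPIs is the disparity that the depth estimation then converts into depth.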
Both techniques offer improvements over existing methods due to their simplicity and full automation, producing the interactive 3D depth map without human interaction. The final contribution is a performance evaluation that provides an equitable measure of the success of the proposed techniques for foreground object segmentation, interactive 3D depth map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics and their correlation with human perception of quality are used, with the help of human participants, in a subjective manner.
Post-production of holoscopic 3D image
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It is a promising technique for creating a full-colour spatial image that exists in space. It uses a single lens aperture to record spatial images of a real scene, and thus offers omnidirectional motion parallax and true 3D depth, which is the fundamental feature for digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, the holoscopic 3D imaging system mimics the fly's-eye technique, in which viewpoints are orthographic projections. This enables a true 3D representation of a real scene in space, and thus offers richer spatial cues than stereoscopic and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to get the right focus as camera resolutions become ever higher. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing method, namely shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artifacts and unsatisfactory resolution in the final image; for instance, the artifacts take the form of blocky and blurry pictures due to unmatched boundaries. An upsampling method is proposed that improves the resolution of the resulting image of the shift-and-integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artifacts introduced by the full-resolution-with-blind method and to improve both the image quality and the resolution of the final rendered image. A novel 3D object extraction method is proposed that takes advantage of disparity, and is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map from the disparity information, and the desired object is then extracted. In addition, a 3D image conversion algorithm is proposed for the generation of stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
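Shift-and-integration refocusing can be sketched in one dimension: each viewpoint is shifted in proportion to its index and the results are averaged, so scene points whose disparity matches the chosen shift align and come into focus while others blur. The wrap-around border handling and all names below are simplifying assumptions:

```python
def shift_and_integrate(viewpoints, shift_per_view):
    """Refocus by shifting each viewpoint image and averaging.

    viewpoints: list of equal-length 1-D pixel rows (one per viewpoint).
    shift_per_view: integer pixel shift between adjacent viewpoints;
    choosing it selects which depth plane is brought into focus.
    """
    n = len(viewpoints)
    width = len(viewpoints[0])
    out = [0.0] * width
    for i, vp in enumerate(viewpoints):
        s = i * shift_per_view
        for x in range(width):
            out[x] += vp[(x + s) % width]  # wrap at borders for simplicity
    return [v / n for v in out]

# A point seen at column 0 in view 0 and column 1 in view 1 (disparity 1)
# is re-aligned by shift_per_view=1 and stays sharp after averaging.
print(shift_and_integrate([[1, 0, 0], [0, 1, 0]], 1))  # [1.0, 0.0, 0.0]
```

The blocky artifacts the abstract mentions arise when such integer shifts do not match the true (sub-pixel) disparity, which is what the proposed upsampling and sub-pixel adjustment address.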
3D Depth Measurement for Holoscopic 3D Imaging System
Holoscopic 3D imaging is a true 3D imaging system that mimics the fly's-eye technique to acquire a true 3D optical model of a real scene. To reconstruct the 3D image computationally, an efficient implementation of an Auto-Feature-Edge (AFE) descriptor algorithm is required that provides an individual feature detector for the integration of 3D information to locate objects in the scene. The AFE descriptor plays a key role in simplifying the detection of both edge-based and region-based objects. The detector is based on a Multi-Quantize Adaptive Local Histogram Analysis (MQALHA) algorithm. This is distinctive for each Feature-Edge (FE) block, i.e. the large contrast changes (gradients) in an FE are easier to localise. The novelty of this work lies in generating a noise-free 3D map (3DM) according to a correlation analysis of region contours. This automatically combines the available depth estimation technique with an edge-based feature shape recognition technique. The application area consists of two varied domains, which prove the efficiency and robustness of the approach: a) extracting a set of feature edges for both the tracking and mapping processes of 3D depth-map estimation, and b) separation and recognition of in-focus objects in the scene. Experimental results show that the proposed 3DM technique performs efficiently compared to state-of-the-art algorithms.
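The MQALHA algorithm is not specified here in enough detail to reproduce; the following is only a hypothetical sketch of the underlying idea of a locally adaptive gradient threshold, where an edge is kept when its gradient exceeds the statistics of its own neighbourhood rather than a single global cut-off:

```python
def local_adaptive_edges(row, window):
    """Mark feature edges where |gradient| exceeds the local mean gradient.

    Hypothetical stand-in for a locally adaptive detector: the threshold
    is derived per neighbourhood (here, the mean gradient in a window),
    so strong local contrast changes are localised even when the global
    image statistics would hide them.
    """
    grad = [abs(row[i + 1] - row[i]) for i in range(len(row) - 1)]
    edges = []
    for i, g in enumerate(grad):
        lo = max(0, i - window)
        hi = min(len(grad), i + window + 1)
        local_mean = sum(grad[lo:hi]) / (hi - lo)
        edges.append(g > local_mean)
    return edges

# A single step edge in a flat row is the only gradient above its
# neighbourhood mean.
print(local_adaptive_edges([0, 0, 0, 10, 10, 10], 2))
```

In the paper's terms, such per-block thresholds are what make each FE block "distinctive" and its gradients easier to localise.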
3D Pixel Mapping for LED Holoscopic 3D Wall Display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. In recent years, 3D displays have been recognised as the ultimate dream of immersive display technology, and there have been great developments in immersive 3D technology, including AR/VR and auto-stereoscopic 3D displays. Holoscopic 3D (H3D) is one of the autostereoscopic 3D systems; it is a true 3D imaging principle that mimics the fly's-eye technique to capture and replay a scene using a microlens array, i.e. an array of perspective lenses of the same specification. LED wall displays have shown fast growth, and LED digital displays are widely used both indoors and outdoors for advertising and entertainment. An ultra-large LED display is an ideal hardware device to provide a remarkable 3D viewing experience and allows a number of viewers to perceive 3D effects at the same time. However, compared with existing 3D technologies successfully applied to LCD monitors, LED displays still suffer from low resolution when a pixel mapping method is applied, since a number of 2D pixels are used to construct each 3D pixel. In this PhD research, an innovative 3D pixel mapping was explored and designed to enhance the 3D viewing experience in the horizontal direction of a wall-size LED 3D display. In particular, an innovative holoscopic 3D imaging principle is used to design and prototype an LED 3D wall display with enhanced resolution. Compared with the classic 3D display method, the enhanced method doubles the horizontal resolution of the LED display without losing any viewpoints. The research outcome is promising, as good depth and motion parallax are achieved for medium-to-long viewing distances.
In addition, to improve the quality of rendered 3D images on an LED display in all directions, a distributed pixel mapping algorithm was designed that reduces the lens pitch threefold to gain smoother motion parallax of the rendered 3D images, compared with the traditional omnidirectional pixel mapping method. Unfortunately, due to the lack of a high-resolution LED display, this distributed pixel mapping method was eventually tested and evaluated on an LCD display with 4K resolution.
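A toy sketch of the resolution argument (the names and the two-row interleaving are assumptions, not the thesis's actual mapping): in classic lenticular pixel mapping each 3D pixel consumes one panel column per viewpoint, while spreading a 3D pixel's viewpoints over two panel rows halves the columns it occupies and doubles horizontal 3D resolution for the same viewpoint count:

```python
def classic_map(panel_w, views):
    """Classic mapping: each 3D pixel consumes `views` adjacent panel
    columns (one per viewpoint), so horizontal 3D resolution is
    panel_w // views."""
    return panel_w // views

def row_interleaved_map(panel_w, views):
    """Hypothetical sketch of the doubling idea: viewpoints are split
    across two panel rows, so each 3D pixel occupies half as many
    columns and horizontal 3D resolution doubles, with no viewpoints
    lost (vertical resolution pays the cost instead)."""
    return (panel_w * 2) // views

# A 1920-column panel with 8 viewpoints: 240 vs 480 3D pixels per row.
print(classic_map(1920, 8), row_interleaved_map(1920, 8))
```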
Camera positioning for 3D panoramic image rendering
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Virtual camera realisation and the proposition of a trapezoidal camera architecture are the two broad contributions of this thesis. Firstly, multiple cameras and their arrangement constitute a critical component that affects the integrity of visual content acquisition for multiview video. Currently, linear, convergent and divergent arrays are the prominent camera topologies adopted. However, the large number of cameras required and their synchronisation are two of the prominent challenges usually encountered. The use of virtual cameras can significantly reduce the number of physical cameras used with respect to any of the known camera structures, hence reducing some of the other implementation issues. This thesis explores the use of image-based rendering, with and without geometry, in implementations leading to the realisation of virtual cameras. The virtual camera implementation was carried out from the perspective of a depth map (geometry) and the use of multiple image samples (no geometry). Prior to the virtual camera realisation, the generation of the depth map was investigated using region-match measures widely known for solving the image point correspondence problem. The constructed depth maps have been compared with ones generated using the dynamic programming approach. In both the geometry and no-geometry approaches, the virtual cameras lead to the rendering of views from a textured depth map, the construction of a 3D panoramic image of a scene by stitching multiple image samples and performing superposition on them, and the computation of a virtual scene from a stereo pair of panoramic images. The quality of these rendered images was assessed through either objective or subjective analysis in the Imatest software. Furthermore, metric reconstruction of a scene was performed by re-projection of the pixel points from multiple image samples with a single centre of projection, using a sparse bundle adjustment algorithm. The statistical summary obtained after the application of this algorithm provides a gauge of the efficiency of the optimisation step. The optimised data was then visualised in the Meshlab software environment, providing the reconstructed scene. Secondly, with any of the well-established camera arrangements, all cameras are usually constrained to the same horizontal plane. Therefore, occlusion becomes an extremely challenging problem, and a robust camera set-up is required to resolve the hidden parts of scene objects. To adequately meet the visibility condition for scene objects, given that occlusion of the same scene objects can occur, a multi-plane camera structure is highly desirable. Therefore, this thesis also explores a trapezoidal camera structure for image acquisition. The approach is to assess the feasibility and potential of several physical cameras of the same model being sparsely arranged on the edges of an efficient trapezoid graph. This is implemented in both Matlab and Maya. The depth maps rendered in Matlab are better in quality.
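The region-match measures used for the point-correspondence step can be illustrated with the simplest one, a sum-of-absolute-differences (SAD) block match along a scanline. This is a generic sketch, not the thesis's exact measure; all names are illustrative:

```python
def sad(a, b):
    """Sum of absolute differences between two equal-length pixel blocks."""
    return sum(abs(x - y) for x, y in zip(a, b))

def best_disparity(left, right, x, block, max_d):
    """Find the disparity minimising SAD for the block starting at x in
    the left scanline against shifted candidates in the right scanline.
    Each pixel's winning shift becomes one entry of the depth map."""
    ref = left[x:x + block]
    best, best_cost = 0, float("inf")
    for d in range(min(max_d, x) + 1):
        cost = sad(ref, right[x - d:x - d + block])
        if cost < best_cost:
            best, best_cost = d, cost
    return best

# The bright 5-9-5 feature sits two pixels further left in the right
# scanline, so the matcher recovers a disparity of 2.
left = [0, 0, 0, 5, 9, 5, 0, 0]
right = [0, 5, 9, 5, 0, 0, 0, 0]
print(best_disparity(left, right, 3, 3, 4))  # 2
```

The dynamic programming approach mentioned above replaces this independent per-block minimisation with a cost optimised jointly along the whole scanline.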
Omnidirectional Holoscopic 3D content generation using dual orthographic projection
In recent years there has been a considerable amount of development work in the area of three-dimensional (3D) imaging systems and displays. Such systems have attracted attention and have been widely consumed by both home and professional users in sectors such as entertainment and medicine. However, computer-generated 3D content remains a challenge, as the 3D scene construction requires contributions from thousands of micro images, also known as elemental images. Rendering microlens images is very time-consuming because each microlens image is rendered by a perspective or orthographic pinhole camera in a computer-generated environment. In this paper we propose and present the development of a new method to simplify and speed up the rendering process in computer graphics. We also describe omnidirectional 3D image recording using a two-layer orthographic camera. Results show that its rendering performance makes it an ideal candidate for real-time/interactive 3D content visualisation applications.
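One reason orthographic rendering can replace per-lens rendering is that, under orthographic projection, directional (viewpoint) images and elemental (per-lens) images are related by a simple index transpose, so a few full-frame renders can be remapped into thousands of elemental images. A 1D sketch of that remapping (names are illustrative):

```python
def viewpoints_to_elemental(viewpoint_imgs):
    """Remap orthographic viewpoint images into elemental (micro) images.

    Pixel l of viewpoint v becomes pixel v of elemental image l: a pure
    index transpose, requiring no extra rendering passes per microlens.
    """
    n_views = len(viewpoint_imgs)
    n_lenses = len(viewpoint_imgs[0])
    return [[viewpoint_imgs[v][l] for v in range(n_views)]
            for l in range(n_lenses)]

# Two 2-pixel viewpoint images become two 2-pixel elemental images.
print(viewpoints_to_elemental([[1, 2], [3, 4]]))  # [[1, 3], [2, 4]]
```

The two-layer orthographic camera described in the paper targets the omnidirectional case, where the same idea applies over a 2D grid of lenses and viewpoints.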