Innovative 3D Depth Map Generation From A Holoscopic 3D Image Based on Graph Cut Technique
Holoscopic 3D imaging is a promising technique for capturing full-colour spatial 3D images using a single-aperture holoscopic 3D camera. It mimics the fly's-eye technique with a microlens array, in which each lens views the scene at a slightly different angle from its adjacent lens, recording three-dimensional information onto a two-dimensional surface. This paper proposes a method of depth map generation from a holoscopic 3D image based on the graph cut technique. The principal objective of this study is to estimate, with high precision, the depth information present in a holoscopic 3D image. As such, the depth map is extracted from a single still holoscopic 3D image, which consists of multiple viewpoint images. The viewpoints are extracted and used for disparity calculation via the disparity space image technique, and pixel displacement is measured with sub-pixel accuracy to overcome the narrow baseline between the viewpoint images for stereo matching. In addition, cost aggregation is used to correlate the matching costs within a particular neighbouring region, using the sum of absolute differences (SAD) combined with a gradient-based metric, and a "winner takes all" algorithm is employed to select the minimum element in the cost array as the optimal disparity value. Finally, the optimal depth map is obtained using the graph cut technique. The proposed method extends the utilisation of the holoscopic 3D imaging system and enables the expansion of the technology to various applications in autonomous robotics, medicine, inspection, AR/VR, security and entertainment where 3D depth sensing and measurement are a concern.
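As a rough illustration of the matching pipeline described above (SAD plus a gradient-based metric, window-based cost aggregation, and winner-takes-all selection), the following NumPy sketch recovers integer disparities between two viewpoint images. The graph-cut refinement and sub-pixel measurement stages are omitted, and the function names and the `alpha` blend weight are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def box_filter(img, win):
    """Cost aggregation: mean over a (2*win+1)^2 neighbourhood."""
    pad = np.pad(img, win, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    size = 2 * win + 1
    for dy in range(size):
        for dx in range(size):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

def wta_disparity(left, right, max_disp=4, win=1, alpha=0.5):
    """SAD + gradient matching cost, aggregated, then winner-takes-all."""
    left, right = left.astype(float), right.astype(float)
    h, w = left.shape
    # Horizontal gradients for the gradient-based matching term.
    gl, gr = np.gradient(left, axis=1), np.gradient(right, axis=1)
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        sad = np.abs(left[:, d:] - right[:, :w - d])
        grad = np.abs(gl[:, d:] - gr[:, :w - d])
        cost[d, :, d:] = box_filter((1 - alpha) * sad + alpha * grad, win)
    # Winner takes all: pick the disparity of minimum aggregated cost.
    return np.argmin(cost, axis=0)
```

In a real holoscopic pipeline the disparities between adjacent viewpoints are fractions of a pixel, which is why the paper measures displacement with sub-pixel accuracy rather than the integer search shown here.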
Holoscopic 3D image depth estimation and segmentation techniques
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. Today's 3D imaging techniques offer significant benefits over conventional 2D imaging techniques. The presence of natural depth information in the scene affords the observer an overall improved sense of reality and naturalness. A variety of systems attempting to reach this goal have been designed by many independent research groups, such as stereoscopic and auto-stereoscopic systems. However, the images displayed by such systems tend to cause eye strain, fatigue and headaches after prolonged viewing, as users are required to focus on the screen plane (accommodation) while converging their eyes to a point in space in a different plane (convergence). Holoscopy is a 3D technology that aims to overcome these limitations of current 3D technology and was recently developed at Brunel University. This work is part W4.1 of the 3D VIVANT project, which is funded by the EU under the ICT programme and coordinated by Dr. Aman Aggoun at Brunel University, West London, UK. The objective of the work described in this thesis is to develop estimation and segmentation techniques that are capable of estimating precise 3D depth and are applicable to the holoscopic 3D imaging system. Particular emphasis is given to automatic techniques, i.e. the work favours algorithms with broad generalisation abilities, as no constraints are placed on the setting, and algorithms that provide invariance to most appearance-based variations of objects in the scene (e.g. viewpoint changes, deformable objects, presence of noise and changes in lighting). Moreover, the techniques should be able to estimate depth information from both types of holoscopic 3D image, i.e. unidirectional and omnidirectional, which give horizontal parallax and full parallax (vertical and horizontal), respectively. The main aim of this research is to develop 3D depth estimation and 3D image segmentation techniques with great precision.
In particular, emphasis is placed on the automation of thresholding techniques and the identification of cues for the development of robust algorithms. A depth-through-disparity feature analysis method has been built on the existing correlation between pixels at one micro-lens pitch, which is exploited to extract the viewpoint images (VPIs). The corresponding displacement among the VPIs is then exploited to estimate the depth information map by setting and extracting reliable sets of local features. Feature-based-point and feature-based-edge are two novel automatic thresholding techniques for detecting and extracting features that have been used in this approach. These techniques offer a solution to the problem of setting and extracting reliable features automatically, improving the performance of depth estimation in terms of generalisation, speed and quality. Due to the resolution limitation of the extracted VPIs, obtaining an accurate 3D depth map is challenging. Therefore, sub-pixel shift and integration, a novel interpolation technique, has been used in this approach to generate super-resolution VPIs. By shifting and integrating a set of up-sampled low-resolution VPIs, the new information contained in each viewpoint is exploited to obtain a super-resolution VPI. This produces a high-resolution perspective VPI with a wide field of view (FOV), meaning that the holoscopic 3D image system can be converted into a multi-view 3D image pixel format. Both depth accuracy and a fast execution time have been achieved, improving the 3D depth map. For a 3D object to be recognised, the related foreground regions and depth information map need to be identified. Two novel unsupervised segmentation methods that generate interactive depth maps from single-viewpoint segmentation were developed.
Both techniques offer improvements over existing methods owing to their simplicity and full automation, producing the 3D depth interactive map without human interaction. The final contribution is a performance evaluation, providing an equitable measure of the success of the proposed techniques for foreground object segmentation, 3D depth interactive map creation and the generation of 2D super-resolution viewpoints. No-reference image quality assessment metrics and their correlation with human perception of quality are used, with the help of human participants, in a subjective manner.
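The shift-and-integration idea behind the super-resolution VPIs can be sketched as follows: each low-resolution viewpoint is up-sampled, shifted according to its displacement, and the results are averaged so that the distinct information in each view is fused. This is a minimal illustration assuming integer shifts on the high-resolution grid and nearest-neighbour up-sampling; the thesis's actual interpolation works at sub-pixel precision.

```python
import numpy as np

def shift_integrate(vpis, shifts, scale=2):
    """Super-resolve by up-sampling, shifting, and integrating viewpoints.
    `shifts` are per-view (dy, dx) offsets in high-resolution pixels — an
    integer stand-in for the sub-pixel displacements used in practice."""
    acc = None
    for img, (dy, dx) in zip(vpis, shifts):
        # Nearest-neighbour up-sampling by pixel replication.
        up = np.kron(img.astype(float), np.ones((scale, scale)))
        up = np.roll(np.roll(up, dy, axis=0), dx, axis=1)
        acc = up if acc is None else acc + up
    # Integration: averaging fuses the information contributed by each view.
    return acc / len(vpis)
```

With genuinely displaced views, the averaged high-resolution grid receives samples from different sub-pixel positions, which is what lifts the result above any single up-sampled viewpoint.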
Post-production of holoscopic 3D image
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London. Holoscopic 3D imaging, also known as integral imaging, was first proposed by Lippmann in 1908. It is a promising technique for creating a full-colour spatial image that exists in space. It uses a single lens aperture for recording spatial images of a real scene, thus offering omnidirectional motion parallax and true 3D depth, which is the fundamental feature for digital refocusing. While stereoscopic and multiview 3D imaging systems simulate the human-eye technique, the holoscopic 3D imaging system mimics the fly's-eye technique, in which viewpoints are orthographic projections. This system enables true 3D representation of a real scene in space, thus offering richer spatial cues than stereoscopic 3D and multiview 3D systems. Focus has been the greatest challenge since the beginning of photography, and it is becoming even more critical in film production, where focus pullers find it difficult to achieve the right focus as camera resolutions become increasingly high. Holoscopic 3D imaging enables the user to carry out refocusing in post-production. There have been three main types of digital refocusing methods, namely shift and integration, full resolution, and full resolution with blind. However, these methods suffer from artefacts and unsatisfactory resolution in the final resulting image; for instance, the artefacts take the form of blocky and blurry pictures due to unmatched boundaries. An upsampling method is proposed that improves the resolution of the image resulting from the shift-and-integration approach. Sub-pixel adjustment of elemental images, including an upsampling technique with smart filters, is proposed to reduce the artefacts introduced by the full-resolution-with-blind method and to improve both the image quality and the resolution of the final rendered image. A novel 3D object extraction method is proposed that takes advantage of disparity, which is also applied to generate stereoscopic 3D images from a holoscopic 3D image. A cross-correlation matching algorithm is used to obtain the disparity map, and the desired object is then extracted. In addition, a 3D image conversion algorithm is proposed for the generation of stereoscopic and multiview 3D images from both unidirectional and omnidirectional holoscopic 3D images, which facilitates 3D content reformation.
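The shift-and-integration refocusing mentioned above can be sketched in a few lines: every viewpoint image is translated in proportion to its position on the lens array, with a slope parameter selecting the depth plane brought into focus, and the shifted views are averaged. This is a minimal sketch using integer shifts and periodic borders (`np.roll`); the actual post-production methods operate on elemental images with sub-pixel precision.

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-integration refocusing: each viewpoint is shifted in
    proportion to its (py, px) position on the lens array; `slope` selects
    the depth plane brought into focus. Names are illustrative."""
    acc = np.zeros(views[0].shape, dtype=float)
    for v, (py, px) in zip(views, positions):
        acc += np.roll(np.roll(v.astype(float),
                               int(round(slope * py)), axis=0),
                       int(round(slope * px)), axis=1)
    # Scene points on the selected plane align and stay sharp; points off
    # the plane are averaged at mismatched positions and blur out.
    return acc / len(views)
```

Varying `slope` sweeps the focal plane through the scene, which is exactly what makes focus a post-production decision rather than a capture-time one.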
Depth Estimation from a Single Holoscopic 3D Image and Image Up-sampling with Deep-learning
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London. 3D depth information is widely utilised in industries such as security, autonomous vehicles, robotics, 3D printing, AR/VR entertainment, cinematography and medical science. However, state-of-the-art imaging and 3D depth-sensing technologies are rather complicated or expensive and still lack scalability and interoperability. The research entails the development of innovative techniques for reliable and efficient 3D depth estimation that deliver better accuracy. The proposed (1) multilayer Holoscopic 3D encoding technique reduces the computational cost of extracting viewpoint images from complex structured Holoscopic 3D data by 95% by using labelled multilayer elemental images. It also addresses the misplacement of elemental image pixels due to lens distortion error. The computing efficiency of multilayer Holoscopic 3D encoding enables the implementation of real-time 3D depth-dependent applications. Also, (2) an innovative deep-learning-based single-image super-resolution framework is developed and evaluated. It was identified that learning-based image up-sampling techniques can be used despite inadequate 3D training data, as 2D training data can yield the same results.
(3) The research is extended further by the implementation of an H3D disparity-based framework, in which a Holoscopic content adaptation technique for extracting semi-segmented stereo viewpoint images is introduced and the design of a smart 3D depth mapping technique is proposed. In particular, it provides reasonably accurate 3D depth estimation from H3D images in near real time. A Holoscopic 3D image contains thousands of perspective elemental images from omnidirectional viewpoint images, and (4) a novel 3D depth estimation technique is developed to estimate 3D depth information directly from a single Holoscopic 3D image without the loss of any angular information or the introduction of unwanted artefacts. The proposed 3D depth measurement techniques are computationally efficient and robust with high accuracy; they can be incorporated in real-time applications of autonomous vehicles, security and AR/VR for real-time interaction.
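For context, extracting viewpoint images from a unidirectional holoscopic image is conceptually a strided de-interleaving: viewpoint k gathers pixel k from under every micro-lens. The sketch below assumes an idealised, distortion-free image with a known integer lens pitch — precisely the assumptions that lens distortion and complex data structure violate, and which the multilayer encoding above is designed to handle efficiently.

```python
import numpy as np

def extract_viewpoints(h3d, lens=4):
    """Extract viewpoint images from a unidirectional holoscopic image.
    `lens` is the micro-lens pitch in pixels (an assumed value); viewpoint
    k collects pixel k from under every micro-lens."""
    h, w = h3d.shape
    assert w % lens == 0, "width must be a whole number of lens pitches"
    # Strided slicing de-interleaves the elemental images into viewpoints.
    return [h3d[:, k::lens] for k in range(lens)]
```

Each returned viewpoint is a coarse orthographic view of the scene; the full set carries the parallax from which disparity and depth are later computed.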
3D Depth Measurement for Holoscopic 3D Imaging System
Holoscopic 3D imaging is a true 3D imaging system that mimics the fly's-eye technique to acquire a true 3D optical model of a real scene. To reconstruct the 3D image computationally, an efficient implementation of an Auto-Feature-Edge (AFE) descriptor algorithm is required, providing an individual feature detector for the integration of 3D information to locate objects in the scene. The AFE descriptor plays a key role in simplifying the detection of both edge-based and region-based objects. The detector is based on a Multi-Quantize Adaptive Local Histogram Analysis (MQALHA) algorithm, which is distinctive for each Feature-Edge (FE) block, i.e. the large contrast changes (gradients) in an FE block are easier to localise. The novelty of this work lies in generating a noise-free 3D map (3DM) according to a correlation analysis of region contours. This automatically combines the available depth estimation technique with an edge-based feature shape recognition technique. The application area consists of two varied domains, which prove the efficiency and robustness of the approach: a) extracting a set of feature-edges for both the tracking and mapping processes of 3D depth-map estimation, and b) separation and recognition of in-focus objects in the scene. Experimental results show that the proposed 3DM technique performs efficiently compared with state-of-the-art algorithms.
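The local-histogram idea can be illustrated with a simple sketch: quantize each block with thresholds drawn from its own histogram, so that quantization adapts to local contrast and the large gradients inside a feature-edge block become easy to localise. This is a hypothetical reading of the MQALHA approach, not the authors' implementation; the block size and level count are assumed parameters.

```python
import numpy as np

def local_histogram_quantize(img, block=8, levels=4):
    """Quantize each block using thresholds from its own histogram
    quantiles, so quantization adapts to local contrast (a hypothetical
    MQALHA-style sketch; `block` and `levels` are assumed)."""
    out = np.zeros(img.shape, dtype=int)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            b = img[y:y + block, x:x + block].astype(float)
            # Thresholds at the block's inner quantiles; np.unique guards
            # flat blocks against non-increasing bin edges.
            qs = np.linspace(0.0, 1.0, levels + 1)[1:-1]
            edges = np.unique(np.quantile(b, qs))
            out[y:y + block, x:x + block] = np.digitize(b, edges)
    return out
```

Because the thresholds are recomputed per block, a faint edge in a low-contrast region is quantized just as crisply as a strong edge elsewhere, which is the property an edge-based feature detector needs.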
Dense light field coding: a survey
Light Field (LF) imaging is a promising solution for providing more immersive, closer-to-reality multimedia experiences to end-users, with unprecedented creative freedom and flexibility for applications in different areas, such as virtual and augmented reality. Due to recent technological advances in optics, sensor manufacturing and available transmission bandwidth, as well as the investment of many tech giants in this area, it is expected that many LF transmission systems will soon be available to both consumers and professionals. Recognizing this, novel standardization initiatives have recently emerged in both the Joint Photographic Experts Group (JPEG) and the Moving Picture Experts Group (MPEG), triggering discussion on the deployment of LF coding solutions to efficiently handle the massive amount of data involved in such systems.
Since then, the topic of LF content coding has become a booming research area, attracting the attention of many researchers worldwide. In this context, this paper provides a comprehensive survey of the most relevant LF coding solutions proposed in the literature, focusing on angularly dense LFs. Special attention is placed on a thorough description of the different LF coding methods and on the main concepts related to this relevant area. Moreover, comprehensive insights are presented into open research challenges and future research directions for LF coding.
The standard plenoptic camera: applications of a geometrical light field model
A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy. The plenoptic camera is an emerging technology in computer vision able to capture a light field image from a single exposure, which allows a computational change of the perspective view as well as of the optical focus, known as refocusing. Until now there has been no general method to pinpoint the object planes that have been brought to focus, or the stereo baselines of the perspective views, posed by a plenoptic camera.

Previous research has presented simplified ray models to prove the concept of refocusing and to enhance image and depth map quality, but lacked reliable distance estimates and an efficient refocusing hardware implementation. In this thesis, a pair of light rays is treated as a system of linear equations whose solution yields ray intersections indicating distances to refocused object planes, or the positions of the virtual cameras that project perspective views. A refocusing image synthesis is derived from the proposed ray model and further developed into an array of switch-controlled semi-systolic FIR convolution filters. Their real-time performance is verified through simulation and through implementation on an FPGA using VHDL programming.

A series of experiments is carried out with different lenses and focus settings, where prediction results are compared with those of a real ray simulation tool and with processed light field photographs for which a blur metric has been considered. Predictions accurately match measurements in the light field photographs and deviate by less than 0.35 % in the real ray simulation. A benchmark assessment of the proposed refocusing hardware implementation suggests a computation-time speed-up of 99.91 % in comparison with a state-of-the-art technique.

It is expected that this research supports the prototyping stage of plenoptic cameras and microscopes, as it helps specify depth sampling planes, thus localising objects, and provides a power-efficient refocusing hardware design for full-video applications such as broadcasting or the motion picture arts.
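The central idea of treating a pair of light rays as a system of linear equations can be sketched directly: writing each ray as x = p + t·d and solving for the intersection yields the point whose depth coordinate locates a refocused object plane or a virtual camera position. A minimal 2-D sketch; the variable names are illustrative and the thesis's full geometrical model is considerably richer.

```python
import numpy as np

def ray_intersection(p1, d1, p2, d2):
    """Intersect two 2-D light rays x = p + t*d by solving a linear
    system. The intersection's depth coordinate locates a refocused
    object plane or a virtual camera position."""
    p1, d1 = np.asarray(p1, float), np.asarray(d1, float)
    p2, d2 = np.asarray(p2, float), np.asarray(d2, float)
    # p1 + t1*d1 = p2 + t2*d2  ->  [d1 | -d2] @ [t1, t2] = p2 - p1
    t = np.linalg.solve(np.column_stack([d1, -d2]), p2 - p1)
    return p1 + t[0] * d1
```

For parallel rays the 2x2 system is singular and `np.linalg.solve` raises, which corresponds physically to an object plane at infinity.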
Methods for Light Field Display Profiling and Scalable Super-Multiview Video Coding
Light field 3D displays reproduce the light field of real or synthetic scenes, as observed by multiple viewers, without the necessity of wearing 3D glasses. Reproducing light fields is a technically challenging task in terms of optical setup, content creation and distributed rendering, among others; however, the impressive visual quality of hologram-like scenes, in full color, at real-time frame rates, and over a very wide field of view justifies the complexity involved. Seeing objects popping far out from the screen plane without glasses impresses even those viewers who have experienced other 3D displays before.

Content for these displays can either be synthetic or real. The creation of synthetic (rendered) content is relatively well understood and used in practice. Depending on the technique used, rendering has its own complexities, quite similar to the complexity of rendering techniques for 2D displays. While rendering can be used in many use-cases, the holy grail of all 3D display technologies is to become the future 3DTV, ending up in each living room and showing realistic 3D content without glasses. Capturing, transmitting, and rendering live scenes as light fields is extremely challenging, and it is necessary if we are to experience light field 3D television showing real people and natural scenes, or realistic 3D video conferencing with real eye-contact.

In order to provide the required realism, light field displays aim to provide a wide field of view (up to 180°), while reproducing up to ~80 MPixels nowadays. Building gigapixel light field displays is realistic in the next few years. Likewise, capturing live light fields involves using many synchronized cameras that cover the same wide field of view as the display and provide the same high pixel count. Therefore, light field capture and content creation have to be well optimized with respect to the targeted display technologies.
Two major challenges in this process are addressed in this dissertation. The first challenge is how to characterize the display in terms of its capability to create light fields, that is, how to profile the display in question. In clearer terms, this boils down to finding the equivalent spatial resolution, which is similar to the screen resolution of 2D displays, and the angular resolution, i.e. the smallest angle over which the display can control color individually. The light field is formalized as a 4D approximation of the plenoptic function in terms of geometrical optics, through spatially-localized and angularly-directed light rays in the so-called ray space. Plenoptic Sampling Theory provides the conditions required to sample and reconstruct light fields. Subsequently, light field displays can be characterized in the Fourier domain by the effective display bandwidth they support. In the thesis, a methodology for display-specific light field analysis is proposed. It regards the display as a signal processing channel and analyzes it as such in the spectral domain. As a result, one is able to derive the display throughput (i.e. the display bandwidth) and, subsequently, the optimal camera configuration to efficiently capture and filter light fields before displaying them.

While the geometrical topology of the optical light sources in projection-based light field displays can be used to theoretically derive the display bandwidth and its spatial and angular resolution, in many cases this topology is not available to the user. Furthermore, there are many implementation details which cause the display to deviate from its theoretical model. In such cases, profiling light field displays in terms of spatial and angular resolution has to be done by measurements. Measurement methods in which the display shows specific test patterns, which are then captured by a single static or moving camera, are proposed in the thesis.
Determining the effective spatial and angular resolution of a light field display is then based on an automated frequency-domain analysis of the captured images as they are reproduced by the display. The analysis reveals the empirical limits of the display in terms of pass-band in both the spatial and the angular dimension. Furthermore, the spatial resolution measurements are validated by subjective tests, confirming that the results are in line with the smallest features human observers can perceive on the same display. The resolution values obtained can be used to design the optimal capture setup for the display in question.

The second challenge is related to the massive number of captured views and pixels that have to be transmitted to the display. It clearly requires effective and efficient compression techniques to fit in the available bandwidth, as an uncompressed representation of such a super-multiview video could easily consume ~20 gigabits per second with today's displays. Due to the high number of light rays to be captured, transmitted and rendered, distributed systems are necessary for both capturing and rendering the light field. During the first attempts to implement real-time light field capturing, transmission and rendering using a brute-force approach, limitations became apparent. Still, due to the best possible image quality achievable with dense multi-camera light field capturing and light ray interpolation, this approach was chosen as the basis of further work, despite the massive amount of bandwidth needed. Decompression of all camera images in all rendering nodes, however, is prohibitively time-consuming and is not scalable. After analyzing the light field interpolation process and the data-access patterns typical of a distributed light field rendering system, an approach to reduce the amount of data required in the rendering nodes has been proposed.
This approach, on the other hand, requires rectangular parts (typically vertical bars in the case of a Horizontal Parallax Only light field display) of the captured images to be available in the rendering nodes, which can be exploited to reduce the time spent on decompressing video streams. However, partial decoding is not readily supported by common image and video codecs. In the thesis, approaches aimed at achieving partial decoding are proposed for H.264, HEVC, JPEG and JPEG2000, and the results are compared.

The results of the thesis on display profiling facilitate the design of optimal camera setups for capturing scenes to be reproduced on 3D light field displays. The developed super-multiview content encoding also facilitates light field rendering in real time. This makes live light field transmission and real-time teleconferencing possible in a scalable way, using any number of cameras, and at the spatial and angular resolution the display actually needs for achieving a compelling visual experience.
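The ~20 Gbit/s figure cited above can be reproduced with back-of-envelope arithmetic: roughly 80 MPixels at 24 bits per pixel, at an assumed 10 frames per second. The frame rate is an assumption chosen here to match the cited order of magnitude, not a value stated in the dissertation.

```python
def uncompressed_rate_gbps(mpixels=80, bits_per_pixel=24, fps=10):
    """Uncompressed light field bit-rate in Gbit/s. The 10 fps default
    is an assumption chosen to reproduce the ~20 Gbit/s figure above."""
    return mpixels * 1e6 * bits_per_pixel * fps / 1e9
```

With these defaults the function returns 19.2 Gbit/s, which makes the need for the partial-decoding and compression schemes developed in the thesis concrete.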