Holoscopic 3D imaging and display technology: camera/processing/display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
Holoscopic 3D imaging, also known as "integral imaging", was first proposed by Lippmann in 1908. It has become an attractive technique for creating a full-colour 3D scene that exists in space. It employs a single camera aperture to record the spatial information of a real scene, and it uses a regularly spaced microlens array to mimic the fly's-eye technique, creating physical duplicates of the light field (a "true" 3D imaging technique).
While stereoscopic and multiview 3D imaging systems, which mimic binocular human vision, are widely available on the commercial market, holoscopic 3D imaging technology is still in the research phase. The aim of this research is to investigate the spatial resolution of holoscopic 3D imaging and display technology, covering the holoscopic 3D camera, processing, and display.
A smart microlens array architecture is proposed that doubles the horizontal spatial resolution of the holoscopic 3D camera by trading horizontal resolution against vertical resolution. In particular, it overcomes the unbalanced pixel aspect ratio of unidirectional holoscopic 3D images. In addition, omnidirectional holoscopic 3D computer graphics rendering techniques are proposed that simplify rendering complexity and facilitate holoscopic 3D content generation.
A holoscopic 3D image stitching algorithm is proposed that widens the overall viewing angle of the holoscopic 3D camera aperture, and pre-processing filters for holoscopic 3D images are proposed for spatial data alignment and 3D image data processing. In addition, a dynamic hyperlinker tool is developed that makes interactive holoscopic 3D video content searchable and browsable.
Novel pixel mapping techniques are proposed that improve spatial resolution and visual definition in space. For instance, 4D-DSPM raises the 3D pixel density from 44 3D-PPI to 176 3D-PPI horizontally and achieves a spatial resolution of 1365 × 384 3D pixels, whereas the traditional spatial resolution is 341 × 1536 3D pixels. In addition, distributed pixel mapping is proposed that improves the quality of the holoscopic 3D scene in space by creating RGB colour-channel elemental images.
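The 4D-DSPM figures above amount to trading vertical 3D resolution for horizontal 3D resolution by a factor of four. A minimal sketch of that arithmetic (the function name and remapping factor are illustrative assumptions, not the thesis's notation):

```python
# Illustrative arithmetic only: 4D-DSPM trades vertical 3D resolution
# for horizontal 3D resolution by a factor of four.

def remap_resolution(h_3d, v_3d, factor=4):
    """Trade vertical 3D pixels for horizontal 3D pixels."""
    return h_3d * factor, v_3d // factor

# Traditional holoscopic 3D resolution quoted in the abstract.
new_h, new_v = remap_resolution(341, 1536)
print(new_h, new_v)  # 1364 384 (the abstract quotes 1365 x 384)

# The 3D pixel density scales the same way: 44 * 4 = 176 3D-PPI.
print(44 * 4)        # 176
```
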
3D Pixel Mapping for LED Holoscopic 3D Wall Display
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.
In recent years, 3D displays have been recognized as the ultimate dream of immersive display technology, and there has been great development in immersive 3D technologies, including AR/VR and autostereoscopic 3D displays. The holoscopic 3D (H3D) system is an autostereoscopic, true 3D imaging principle that mimics the fly's-eye technique to capture and replay a scene using a microlens array: an array of perspective lenses of identical specification. LED wall displays have grown rapidly, and LED digital displays are now widely used both indoors and outdoors for advertisement and entertainment. An ultra-large LED display is an ideal hardware device for providing a remarkable 3D viewing experience and for letting many viewers perceive 3D effects at the same time. However, compared with existing 3D technologies that have been successfully applied to LCD monitors, LED displays still suffer from limited resolution when a pixel mapping method is applied, because a number of 2D pixels are used to construct each 3D pixel. In this PhD research, an innovative 3D pixel mapping was explored and designed to enhance the 3D viewing experience in the horizontal direction of a wall-size LED 3D display. In particular, an innovative holoscopic 3D imaging principle is used to design and prototype a resolution-enhanced LED 3D wall display. Compared with the classic 3D display method, this enhanced method doubles the horizontal resolution of the LED display without losing any viewpoints. The research outcome is promising, as good depth and motion parallax for medium- to long-distance viewing are achieved.
In addition, to improve the quality of the rendered 3D images on the LED display in all directions, a distributed pixel mapping algorithm was designed that reduces the lens pitch by a factor of three, gaining smoother motion parallax in the rendered 3D images compared with the traditional omnidirectional pixel mapping method. Unfortunately, due to the lack of a high-resolution LED display monitor, this distributed pixel mapping method was eventually tested and evaluated on an LCD display with 4K resolution.
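As a rough illustration of the distributed pixel mapping idea (splitting the RGB colour channels of each elemental image across three adjacent lens positions, so colour samples appear at one third of the original lens pitch), here is a toy sketch; the function and data layout are assumptions, not the thesis's implementation:

```python
import numpy as np

def distribute_rgb(elemental):
    """Toy distributed pixel mapping: interleave the R, G and B channels
    of an elemental image at one third of the original horizontal pitch,
    so each colour channel sits under its own (hypothetical) lens."""
    h, w, c = elemental.shape
    assert c == 3
    out = np.zeros((h, w * 3), dtype=elemental.dtype)
    for ch in range(3):          # 0 = R, 1 = G, 2 = B
        out[:, ch::3] = elemental[:, :, ch]
    return out

# A 2 x 4 RGB elemental image becomes a 2 x 12 single-channel strip.
strip = distribute_rgb(np.arange(24).reshape(2, 4, 3))
print(strip.shape)  # (2, 12)
```
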
Rendering and display for multi-viewer tele-immersion
Video teleconferencing systems are widely deployed for business, education and personal use to enable face-to-face communication between people at distant sites. Unfortunately, the two-dimensional video of conventional systems does not correctly convey several important non-verbal communication cues such as eye contact and gaze awareness. Tele-immersion refers to technologies aimed at providing distant users with a more compelling sense of remote presence than conventional video teleconferencing. This dissertation is concerned with the particular challenges of interaction between groups of users at remote sites. The problems of video teleconferencing are exacerbated when groups of people communicate. Ideally, a group tele-immersion system would display views of the remote site at the right size and location, from the correct viewpoint for each local user. However, it is not practical to put a camera in every possible eye location, and it is not clear how to provide each viewer with correct and unique imagery. I introduce rendering techniques and multi-view display designs to support eye contact and gaze awareness between groups of viewers at two distant sites. With a shared 2D display, virtual camera views can improve local spatial cues while preserving scene continuity, by rendering the scene from novel viewpoints that may not correspond to a physical camera. I describe several techniques, including a compact light field, a plane sweeping algorithm, a depth dependent camera model, and video-quality proxies, suitable for producing useful views of a remote scene for a group of local viewers. The first novel display provides simultaneous, unique monoscopic views to several users, with fewer user position restrictions than existing autostereoscopic displays.
The second is a random hole barrier autostereoscopic display that eliminates the viewing zones and user position requirements of conventional autostereoscopic displays, and provides unique 3D views for multiple users in arbitrary locations.
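The plane sweeping algorithm mentioned above can be sketched for the simplest case, a rectified image pair, where each fronto-parallel depth plane corresponds to one candidate disparity. Everything here (names, cost function, winner-take-all selection) is a generic textbook sketch, not the dissertation's actual implementation:

```python
import numpy as np

def plane_sweep_disparity(ref, src, max_disp):
    """Minimal fronto-parallel plane sweep on a rectified pair: for each
    candidate plane (disparity d), warp the source image and score its
    photo-consistency against the reference; the best-scoring plane per
    pixel yields a disparity (inverse depth) estimate."""
    h, w = ref.shape
    cost = np.empty((max_disp + 1, h, w))
    for d in range(max_disp + 1):
        shifted = np.empty_like(src)
        shifted[:, d:] = src[:, :w - d]
        shifted[:, :d] = src[:, :1]        # crude border padding
        cost[d] = np.abs(ref - shifted)    # absolute-difference cost
    return cost.argmin(axis=0)             # winner-take-all plane
```

A real system would add a projective warp per plane, a robust aggregated cost, and smoothness constraints; the sketch only shows the sweep-and-score structure.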
Lens Array Based Techniques for 3D Scene Capture and Display
This thesis discusses the use of lens arrays for both capture and display of 3D visual scenes while utilizing the ray optics formalism for modeling the propagation of light. In 3D capture, the use of lens arrays brings the concepts of focused and defocused plenoptic cameras, and in 3D display, the same optical technology brings the integral imaging (InI) and super multiview (SMV) visualization techniques.
Plenoptic cameras combine a lens array with a single sensor in order to capture the light field (LF) emanated by a scene compactly and in a single shot. In the thesis, comparative analysis of focused and defocused plenoptic cameras is carried out in terms of LF sampling and spatio-angular resolution trade-offs. An algorithm for simulating ground-truth plenoptic image data for the case of defocused plenoptic camera is developed and implemented. It models the process of plenoptic capture and makes use of the notion of densely sampled light field (DSLF) for the sake of efficient and reliable data processing.
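The spatio-angular trade-off discussed above can be reduced to a 1D pixel-budget sketch: spatial and angular samples multiply to the sensor's pixel count, and the two camera designs split that budget differently. The numbers and function below are illustrative assumptions, not figures from the thesis:

```python
def resolution_split(sensor_px, spatial_px):
    """For a 1D sensor, choosing a spatial resolution leaves the rest
    of the pixel budget for angular (directional) samples."""
    assert sensor_px % spatial_px == 0
    return spatial_px, sensor_px // spatial_px

# Defocused (plenoptic 1.0): one spatial sample per microlens, so a
# 4000 px sensor behind 400 lenses gives 400 x 10 spatio-angular.
print(resolution_split(4000, 400))   # (400, 10)

# Focused (plenoptic 2.0): the microlenses relay a focused image, so
# spatial resolution can exceed the lens count at the cost of angular
# samples, e.g. 2000 spatial x 2 angular from the same sensor.
print(resolution_split(4000, 2000))  # (2000, 2)
```
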
3D displays aim at visualising 3D scenes as accurately as possible, thus providing a natural viewing experience. They are characterised and compared by their ability to correctly reproduce 3D visual cues such as vergence, binocular disparity, accommodation and motion parallax. Design-wise, lens array based 3D display techniques provide a simple yet effective way to deliver all these cues correctly, which makes them attractive in several 3D display applications. The thesis studies the SMV and InI techniques in terms of depth perception and resolution trade-offs. Based on the theoretical analysis, a prototype SMV head-up display (HUD) system is developed. It demonstrates a compact and affordable solution to the virtual image presentation problem in HUDs. The experiments and analyses carried out on the prototype verify the SMV display's capabilities for the targeted HUD application.
Electrically focus-tuneable ultrathin lens for high-resolution square subpixels.
Owing to the tremendous demand for high-resolution pixel-scale thin lenses in displays, we developed a graphene-based ultrathin square subpixel lens (USSL) capable of electrically tuneable focusing (ETF) with a performance competitive with that of a typical mechanical refractive lens. The fringe field due to a voltage bias in the graphene enables our ETF-USSL to focus light onto a single point regardless of the wavelength of the visible light. By controlling the carriers at the Dirac point using radially patterned graphene layers, the focal length of the planar structure can be adjusted without changing the curvature or position of the lens. A high focusing efficiency of over 60% at a visible wavelength of 405 nm was achieved with a lens thickness of <13 nm, and a 19.42% change in the focal length with a 9% increase in transmission was exhibited under a driving voltage. This is the first ETF-USSL design that can be controlled in pixel units of flat-panel displays for visible light. It can easily be applied as an add-on to high-resolution, slim displays and provides a new direction for the application of multifunctional autostereoscopic displays.
High Performance Three-Dimensional Display Based on Polymer-Stabilized Blue Phase Liquid Crystal
Autostereoscopic 2D/3D (two-dimension/three-dimension) switchable displays have been attracting great interest in research and practical applications for several years. Among the different autostereoscopic solutions, direction-multiplexed 3D displays based on microlens arrays or parallax barriers are viewed as the most promising candidates, due to their compatibility with conventional 2D display technologies. These 2D/3D switchable display system designs rely on fast switching display panels and photonic devices, including adaptive focus microlens arrays and switchable slit arrays. Polymer-stabilized blue phase liquid crystal (PS-BPLC) material provides a possible solution to meet the aforementioned fast response time requirement. However, present display and photonic devices based on blue phase liquid crystals suffer from several drawbacks, such as low contrast ratio, relatively large hysteresis and short lifetime. In this dissertation, we investigate the material properties of PS-BPLC so as to improve the performance of PS-BPLC devices. Then we propose several PS-BPLC devices for autostereoscopic 2D/3D switchable display system designs. In the first part, we evaluate the optical rotatory power (ORP) of blue phase liquid crystal, which is proven to be the primary cause of the low contrast ratio of PS-BPLC display systems. The material parameters affecting the ORP of PS-BPLC are investigated, and an empirical equation is proposed to calculate the polarization rotation angle in a PS-BPLC cell. Several optical compensation methods are then proposed to compensate for the impact of ORP and to improve the contrast ratio of a display system. The pros and cons of each solution are discussed accordingly. In the second part, we propose two adaptive focus microlens array structures and a high-efficiency switchable slit array based on PS-BPLC materials. By optimizing the design parameters, these devices can be applied to 2D/3D switchable display systems.
In the last section, we focus on another factor that affects the performance and lifetime of PS-BPLC devices and systems: the UV exposure condition. The impact of UV exposure wavelength, dosage, uniformity, and photo-initiator are investigated. We demonstrate that by optimizing the UV exposure condition, we can reduce the hysteresis of PS-BPLC and improve its long-term stability.
End-to-end 3D video communication over heterogeneous networks
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
Three-dimensional technology, more commonly referred to as 3D technology, has revolutionised many fields including entertainment, medicine, and communications, to name a few. In addition to 3D films, games, and sports channels, 3D perception has made tele-medicine a reality. Consumer electronics manufacturers predicted that, by 2015, 30% of all HD panels at home would be 3D enabled. Stereoscopic cameras, a comparatively mature technology compared to other 3D systems, are now being used by ordinary citizens to produce 3D content and share it at the click of a button, just as they do with its 2D counterpart via sites like YouTube. But technical challenges still exist, including with autostereoscopic multiview displays. Because of its increased amount of data, 3D content requires many complex considerations for transmission or storage, including how to represent it and which compression format is best. Any decision must be taken in the light of the available bandwidth or storage capacity, quality, and user expectations. Free viewpoint navigation also remains partly unsolved. The most pressing issue in the way of widespread uptake of consumer 3D systems is the ability to deliver 3D content to heterogeneous consumer displays over heterogeneous networks. Optimising 3D video communication solutions must consider the entire pipeline, from optimisation at the video source through transmission to the end display. Multi-view offers the most compelling solution for 3D video, with motion parallax and freedom from headgear for 3D perception. Optimising multi-view video for delivery and display could increase the demand for true 3D in the consumer market.
This thesis focuses on end-to-end quality optimisation in 3D video communication and transmission, offering solutions for optimisation at the compression, transmission, and decoder levels.
Funded by the Brunel University Isambard Research Scholarship.
Light field image processing: an overview
Light field imaging has emerged as a technology that allows us to capture richer visual information from our world. As opposed to traditional photography, which captures a 2D projection of the light in the scene by integrating over the angular domain, light fields collect radiance from rays in all directions, demultiplexing the angular information lost in conventional photography. On the one hand, this higher-dimensional representation of visual data offers powerful capabilities for scene understanding, and substantially improves the performance of traditional computer vision tasks such as depth sensing, post-capture refocusing, segmentation, video stabilization, and material classification. On the other hand, the high dimensionality of light fields also brings up new challenges in terms of data capture, data compression, content editing, and display. Taking these two elements together, research in light field image processing has become increasingly popular in the computer vision, computer graphics, and signal processing communities. In this paper, we present a comprehensive overview and discussion of research in this field over the past 20 years. We cover all aspects of light field image processing, including basic light field representation and theory, acquisition, super-resolution, depth estimation, compression, editing, processing algorithms for light field display, and computer vision applications of light field data.
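The post-capture refocusing mentioned above is classically done by shift-and-add: shift each sub-aperture image in proportion to its angular coordinate, then average. A minimal integer-shift sketch (the circular boundary handling and the alpha parameterisation are simplifying assumptions):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, s, t]:
    each sub-aperture image (fixed u, v) is shifted in proportion to
    its angular offset from the centre view, then all views are
    averaged. alpha selects the synthetic focal plane; alpha = 0
    keeps the original focus."""
    U, V, S, T = lf.shape
    acc = np.zeros((S, T))
    for u in range(U):
        for v in range(V):
            du = int(round(alpha * (u - U // 2)))
            dv = int(round(alpha * (v - V // 2)))
            # np.roll wraps at the borders; real pipelines crop or pad.
            acc += np.roll(lf[u, v], shift=(du, dv), axis=(0, 1))
    return acc / (U * V)
```

On a constant scene the result is unchanged for any alpha, since every shifted view is identical; on a real light field, scene points at the chosen focal plane align and sharpen while others blur.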