
    Head Tracked Multi User Autostereoscopic 3D Display Investigations

    The research covered in this thesis encompasses a consideration of 3D television requirements and a survey of stereoscopic and autostereoscopic methods. This confirms that although there is a lot of activity in this area, very little of this work could be considered suitable for television. The principle of operation, the design of the optical system components and the evaluation of two EU-funded glasses-free (autostereoscopic) displays, developed in the MUTED and HELIUM3D projects, are described. Four iterations of the display were built in MUTED, with the results of the first used in designing the second, third and fourth versions. The first three versions of the display use two 49-element arrays, one for the left eye and one for the right. A pattern of spots is projected onto the back of the arrays and these are converted into a series of collimated beams that form exit pupils after passing through the LCD. An exit pupil is a region in the viewing field where either a left or a right image is seen across the complete area of the screen; the positions of these are controlled by a multi-user head tracker. A laser projector was used in the first two versions and, although this projector operated on holographic principles in order to obtain the spot pattern required to produce the exit pupils, the images seen by the viewers are not produced holographically, so the overall display cannot be described as holographic. In the third version, the laser projector is replaced with a conventional LCOS projector to address the stability and brightness issues discovered in the second version. In 2009, true 120 Hz displays became available; this led to the development of a fourth version of the MUTED display that uses a 120 Hz projector and LCD to overcome the problems of projector instability, produce full-resolution images and simplify the display hardware. HELIUM3D, a multi-user autostereoscopic display based on laser scanning, is also described in this thesis. This display also operates by providing head-tracked exit pupils. It incorporates a red, green and blue (RGB) laser illumination source that illuminates a light engine. Light directions are controlled by a spatial light modulator and are directed to the users' eyes via a front screen assembly incorporating a novel Gabor superlens. The work described covered the development of demonstrators that showed the principle of temporal multiplexing, and a version of the final display with limited functionality; the reason for this was the late delivery of the components required for a display with full functionality.
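
    The exit-pupil steering principle above lends itself to a short worked illustration. The sketch below is not code from the thesis: it assumes a simplified thin-lens model in which a spot displaced laterally behind a collimating array element of focal length f tilts the emerging beam so that it crosses the tracked eye position at the nominal viewing distance. The function name and all parameter values are illustrative assumptions.

```python
# Illustrative sketch only (not code from the thesis): the geometric core of
# head-tracked exit-pupil steering.  A spot displaced laterally behind a
# collimating array element of focal length f emerges as a beam tilted by
# the corresponding angle, so choosing the displacement places the exit
# pupil at the tracked eye position.  All values below are assumed.
import math

def spot_offset_for_eye(eye_x_mm, element_x_mm, focal_mm, viewing_dist_mm):
    """Lateral spot offset (mm) behind one array element so that the
    collimated beam it produces passes through the tracked eye position."""
    # Angle from the array element to the eye, measured from the display normal.
    theta = math.atan2(eye_x_mm - element_x_mm, viewing_dist_mm)
    # In a thin-lens model, shifting the spot by f * tan(theta) tilts the
    # collimated output beam by theta.
    return focal_mm * math.tan(theta)

if __name__ == "__main__":
    focal_mm = 30.0           # assumed focal length of one array element
    viewing_dist_mm = 800.0   # assumed nominal viewing distance
    left_eye_x, right_eye_x = -32.0, 32.0   # tracked eye positions (mm)
    for x_elem in (-200.0, 0.0, 200.0):     # three positions across the array
        dl = spot_offset_for_eye(left_eye_x, x_elem, focal_mm, viewing_dist_mm)
        dr = spot_offset_for_eye(right_eye_x, x_elem, focal_mm, viewing_dist_mm)
        print(f"element at {x_elem:+7.1f} mm: left spot offset {dl:+6.2f} mm, "
              f"right spot offset {dr:+6.2f} mm")
```

    In the real display the mapping would also account for refraction in the arrays, the LCD and calibration data; the sketch shows only the geometric idea.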

    Head tracked retroreflecting 3D display

    In this paper, we describe a single-user glasses-free (autostereoscopic) 3D display in which images from a pair of picoprojectors are projected onto a retroreflecting screen. Real images of the projector lenses are formed at the viewer's eyes, producing exit pupils that follow the eye positions as the projectors move laterally under the control of a head tracker. This provides the viewer with a comfortable degree of head movement. The retroreflecting screen, display hardware, infrared head tracker, and means of stabilizing the image position on the screen are explained. The performance of the display in terms of crosstalk, resolution, image distortion, and other parameters is described. Finally, applications of this display type are suggested.
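
    As a rough illustration of the head-tracked projector motion described above (a sketch under assumed values, not the paper's implementation): with an ideal retroreflecting screen, the real image of each projector lens, and hence the exit pupil, follows the projector itself, so the control problem reduces to driving each projector's lateral stage towards the corresponding tracked eye position, with some smoothing of the tracker signal to suppress jitter.

```python
# Minimal, illustrative sketch (not code from the paper) of the lateral
# projector-positioning loop.  The smoothing constant, eye separation and
# tracker samples below are assumed values, not those of the actual display.

def smooth(previous_mm, measured_mm, alpha=0.3):
    """Exponential smoothing of the head-tracker reading to suppress jitter."""
    return (1.0 - alpha) * previous_mm + alpha * measured_mm

def projector_targets(eye_centre_mm, eye_separation_mm=64.0):
    """Target lateral positions (mm) for the left- and right-eye projectors."""
    half = eye_separation_mm / 2.0
    return eye_centre_mm - half, eye_centre_mm + half

if __name__ == "__main__":
    tracker_samples = [0.0, 5.0, 12.0, 20.0, 26.0, 30.0]  # head centre (mm)
    estimate = tracker_samples[0]
    for sample in tracker_samples:
        estimate = smooth(estimate, sample)
        left, right = projector_targets(estimate)
        print(f"head at {sample:+6.1f} mm -> left projector {left:+6.1f} mm, "
              f"right projector {right:+6.1f} mm")
```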

    Multimode optical waveguides and lightguides for backplane interconnection and laser illuminated display systems

    The aim of the research in this thesis was to design, model, analyse and experimentally test multimode optical waveguides and lightguides for manipulating infrared light for optical backplane interconnections and visible light for laser-illuminated display systems. Optical input/output coupling loss at the entry and exit of polymer waveguides depends on optical scattering due to end facet roughness. The input/output coupling loss was measured for different end facet roughness magnitudes, and the waveguide surface profiles produced by different cutting methods (a dicing saw and three milling routers) were compared. The effects of the number of cutting edges on the router and of the rotation rate and translation (cutting) speed of the milling routers on the waveguide end facet roughness were established. A further new method for reducing the end facet roughness, and so the coupling loss, by curing a layer of core material at the end of the waveguide to cover the roughness fluctuations, was proposed and successfully demonstrated, giving the best results reported to date: an improvement of 2.8 dB, better even than that obtained with index-matching fluid, which is impractical in commercial systems. The insertion loss due to waveguide crossings at various crossing angles was calculated using beam propagation and ray tracing simulations and compared to experimental measurements. Differences between the results were resolved, leading to the understanding that only low-order waveguide modes at no more than 6 degrees to the axis were propagating inside the waveguide. Several different optical designs of multimode waveguide for the light engine of a 3D autostereoscopic laser-illuminated display system were proposed. Each design performed the functions of laser beam combining, beam shaping and beam homogenising, and the best method was selected, designed, modelled, tested and implemented in the system. The waveguide material was inspected using spectroscopy to establish the effect of high optical power density on the material performance, showing an increased loss particularly at shorter wavelengths. The effect of waveguide dimensions on the speckle pattern was investigated experimentally and the speckle contrast was reduced to below the threshold of human perception. Speckle contrast was also recorded, for the first time, along the axis of the 3D display system and normal to it in the viewing area, and the speckle characteristics at each stage were investigated. New algorithms for analysing speckle were used, and the perceptual ability of human eyes to detect speckle size and contrast was taken into account to minimise perceived speckle patterns. The effect of the core diameter of optical fibres on the speckle pattern was investigated and it was shown that the speckle spot diameter is reduced by increasing the fibre core diameter. Based on this experiment, it was suggested that speckle reduction is more effective if the optical fibre used in the display system has a larger core diameter. Therefore, a slab waveguide of 1 mm thickness and 20 mm width was used for laser beam combining, homogenising and beam shaping, and a uniformity of 84% was achieved with just 75 mm length. The speckle was also completely removed at the output of the waveguide.
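
    A central quantity in the speckle work summarised above is the speckle contrast, the standard deviation of the recorded intensity divided by its mean. The sketch below is a minimal illustration of that measure, not code from the thesis; the perceptibility threshold quoted in the comment is a commonly cited ballpark figure and may differ from the value established in the thesis.

```python
# Minimal sketch (not from the thesis) of the standard speckle-contrast
# measure, C = sigma / mean, used when evaluating laser-illuminated displays.
# Fully developed speckle gives C close to 1; values below roughly 0.03-0.05
# are commonly quoted as imperceptible.
import numpy as np

def speckle_contrast(intensity):
    """Speckle contrast of a captured intensity image (2-D array)."""
    intensity = np.asarray(intensity, dtype=float)
    return intensity.std() / intensity.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fully developed speckle has exponentially distributed intensity: C ~ 1.
    raw = rng.exponential(scale=1.0, size=(256, 256))
    # Averaging N independent speckle patterns reduces C by about 1/sqrt(N).
    n_frames = 100
    acc = np.zeros((256, 256))
    for _ in range(n_frames):
        acc += rng.exponential(1.0, acc.shape)
    averaged = acc / n_frames
    print(f"raw speckle contrast:        {speckle_contrast(raw):.3f}")
    print(f"after averaging {n_frames} frames: {speckle_contrast(averaged):.3f}")
```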

    Perceived Depth Control in Stereoscopic Cinematography

    Despite the recent explosion of interest in stereoscopic 3D (S3D) technology, widespread adoption of the S3D medium is still significantly hindered by adverse effects associated with S3D viewing discomfort. This thesis attempts to improve the S3D viewing experience by investigating perceived depth control methods for stereoscopic cinematography on desktop 3D displays. The main contributions of this work are: (1) A new method was developed for carrying out human factors studies to identify the practical limits of the 3D Comfort Zone on a given 3D display. Our results suggest that it is necessary for cinematographers to identify the specific limits of the 3D Comfort Zone on the target 3D display, as different 3D systems have different Comfort Zone ranges. (2) A new dynamic depth mapping approach was proposed to improve depth perception in stereoscopic cinematography; the results of a human-based experiment confirmed its advantages over existing depth mapping methods in controlling the perceived depth when viewing 3D motion pictures. (3) The practicability of employing the Depth of Field (DoF) blur technique in S3D was also investigated. Our results indicate that applying DoF blur simulation to stereoscopic content may not improve the S3D viewing experience without real-time information about what the viewer is looking at. Finally, a basic guideline for stereoscopic cinematography was introduced to summarise the new findings of this thesis alongside several well-known key factors in 3D cinematography. We anticipate that this guideline will be of particular interest not only to 3D filmmaking but also to 3D gaming, sports broadcasting, and TV production.
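
    The depth mapping idea in contribution (2) can be illustrated with a small sketch. The code below is not the thesis' algorithm: it shows the standard geometric relation between on-screen disparity and perceived depth, and a simple linear remapping of a shot's disparity range into an assumed comfort zone; the viewing distance, eye separation and comfort limits are illustrative assumptions.

```python
# Illustrative sketch (not the thesis' method): perceived depth from screen
# disparity, and a linear remapping of a shot's disparity range into a target
# comfort zone.  All numeric values are assumed for illustration; actual
# comfort-zone limits must be measured for each display.

def perceived_depth_mm(disparity_mm, viewing_dist_mm=700.0, eye_sep_mm=65.0):
    """Perceived depth relative to the screen plane for a given screen
    disparity.  Positive (uncrossed) disparity appears behind the screen,
    negative (crossed) disparity in front of it."""
    return viewing_dist_mm * disparity_mm / (eye_sep_mm - disparity_mm)

def remap_disparity(d_mm, src_range, comfort_range):
    """Linearly map a disparity from the shot's range into the comfort zone."""
    d_min, d_max = src_range
    c_min, c_max = comfort_range
    t = (d_mm - d_min) / (d_max - d_min)
    return c_min + t * (c_max - c_min)

if __name__ == "__main__":
    shot_range = (-15.0, 25.0)   # measured disparity range of the shot (mm)
    comfort = (-8.0, 10.0)       # assumed comfort-zone limits for this display
    for d in (-15.0, 0.0, 25.0):
        d_new = remap_disparity(d, shot_range, comfort)
        print(f"disparity {d:+6.1f} mm -> {d_new:+6.1f} mm, "
              f"perceived depth {perceived_depth_mm(d_new):+7.1f} mm")
```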

    Perceptually Optimized Visualization on Autostereoscopic 3D Displays

    Displays that aim to visualize a 3D scene with realistic depth are known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by the human visual system as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representation by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays. The visual perception of depth is discussed, and the properties of a scene (depth cues) that the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system. The principles of operation of the most popular types of 3D display are explained. Based on these operating principles, 3D displays are modelled as a signal processing channel. The model is used to explain the process by which distortions are introduced, and it allows one to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or measured directly from a series of photographs. A comparative study presenting measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types is included. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays, and the shape and size of the passband allow the depth and textures of 3D content to be optimized for a given 3D display. Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are developed. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended to so-called passband optimization, which addresses the Moiré, fixed-pattern-noise and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework that allows the user to select a so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given for running the algorithm on a mobile device or on a desktop computer with a graphics accelerator.
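
    One of the display properties measured in the work above is crosstalk. As a minimal illustration (not the thesis' measurement procedure), the sketch below applies the conventional definition, leakage from the unintended view divided by the intended-view signal, both corrected for the display's black level; the luminance values are assumed examples.

```python
# Minimal sketch (not from the thesis) of a conventional crosstalk measure:
# (leakage - black) / (signal - black).  The readings below are assumed
# example luminances taken from photographs or a photometer.

def crosstalk(leak_luminance, signal_luminance, black_luminance):
    """Crosstalk ratio for one view of a dual-view or multiview display."""
    return (leak_luminance - black_luminance) / (signal_luminance - black_luminance)

if __name__ == "__main__":
    # Example: measuring the left view while only the right view shows white.
    black = 0.4     # cd/m^2, both views black (assumed)
    signal = 180.0  # cd/m^2, intended view white (assumed)
    leak = 7.5      # cd/m^2, unintended view white (assumed)
    print(f"crosstalk = {100.0 * crosstalk(leak, signal, black):.1f} %")
```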

    Quality of Experience in Immersive Video Technologies

    Over the last decades, several technological revolutions have impacted the television industry, such as the shifts from black & white to color and from standard to high definition. Nevertheless, considerable further improvements can still be achieved to provide a better multimedia experience, for example with ultra-high definition, high dynamic range & wide color gamut, or 3D. These so-called immersive technologies aim at providing better, more realistic, and emotionally stronger experiences. To measure quality of experience (QoE), subjective evaluation is the ultimate means, since it relies on a pool of human subjects. However, reliable and meaningful results can only be obtained if experiments are properly designed and conducted following a strict methodology. In this thesis, we build a rigorous framework for subjective evaluation of new types of image and video content. We propose different procedures and analysis tools for measuring QoE in immersive technologies. As immersive technologies capture more information than conventional technologies, they have the ability to provide more details, enhanced depth perception, as well as better color, contrast, and brightness. To measure the impact of immersive technologies on viewers' QoE, we apply the proposed framework to design experiments and analyze the collected subjects' ratings. We also analyze eye movements to study human visual attention during immersive content playback. Since immersive content carries more information than conventional content, efficient compression algorithms are needed for storage and transmission using existing infrastructures. To determine the bandwidth required for high-quality transmission of immersive content, we use the proposed framework to conduct meticulous evaluations of recent image and video codecs in the context of immersive technologies. Subjective evaluation is time consuming, expensive, and not always feasible. Consequently, researchers have developed objective metrics to automatically predict quality. To measure the performance of objective metrics in assessing immersive content quality, we perform several in-depth benchmarks of state-of-the-art and commonly used objective metrics. For this purpose, we use ground-truth quality scores collected under our subjective evaluation framework. To improve QoE, we propose different systems, in particular for stereoscopic and autostereoscopic 3D displays. The proposed systems can help reduce the artifacts generated at the visualization stage, which impact picture quality, depth quality, and visual comfort. To demonstrate the effectiveness of these systems, we use the proposed framework to measure viewers' preference between these systems and standard 2D & 3D modes. In summary, this thesis tackles the problems of measuring, predicting, and improving QoE in immersive technologies. To address these problems, we build a rigorous framework and apply it through several in-depth investigations. We place essential concepts of multimedia QoE under this framework. These concepts are not only of a fundamental nature, but have also shown their impact in very practical applications. In particular, the JPEG, MPEG, and VCEG standardization bodies have adopted these concepts to select technologies proposed for standardization and to validate the resulting standards in terms of compression efficiency.
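
    Two routine computations in a subjective evaluation framework of this kind are the mean opinion score (MOS) with its confidence interval, and the correlation of an objective metric with the MOS. The sketch below illustrates both on synthetic data; it is an assumed, generic implementation rather than the exact analysis pipeline of the thesis.

```python
# Illustrative sketch (not the thesis' pipeline): MOS with 95% confidence
# intervals from raters' scores, and Pearson/Spearman correlation of an
# objective metric against the MOS.  The score matrix is synthetic.
import numpy as np
from scipy import stats

def mos_with_ci(scores):
    """MOS and 95% confidence interval per stimulus (rows: stimuli, cols: subjects)."""
    scores = np.asarray(scores, dtype=float)
    mos = scores.mean(axis=1)
    n = scores.shape[1]
    ci = stats.t.ppf(0.975, n - 1) * scores.std(axis=1, ddof=1) / np.sqrt(n)
    return mos, ci

def benchmark_metric(metric_values, mos):
    """Pearson (linearity) and Spearman (monotonicity) correlation with MOS."""
    plcc, _ = stats.pearsonr(metric_values, mos)
    srocc, _ = stats.spearmanr(metric_values, mos)
    return plcc, srocc

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_quality = np.linspace(1.0, 5.0, 12)                   # 12 stimuli
    ratings = np.clip(true_quality[:, None] + rng.normal(0, 0.6, (12, 20)), 1, 5)
    mos, ci = mos_with_ci(ratings)
    metric = true_quality + rng.normal(0, 0.3, 12)              # mock objective metric
    plcc, srocc = benchmark_metric(metric, mos)
    print(f"first stimulus: MOS {mos[0]:.2f} +/- {ci[0]:.2f}")
    print(f"metric benchmark: PLCC {plcc:.3f}, SROCC {srocc:.3f}")
```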

    Muted and HELIUM3D autostereoscopic displays.

    This paper describes multi-user autostereoscopic displays developed within the European Union-funded MUTED and HELIUM3D projects. These utilize head tracking in order to provide images that are displayed in regions referred to as exit pupils, which follow the users' eye positions. In the MUTED displays, images are produced on a direct-view liquid crystal display (LCD) with novel optics, controlled by the head tracker, replacing the conventional backlight. This paper describes the design and construction of the displays along with evaluation results and future developments. The principle of operation, the current status and the multimodal potential of the HELIUM3D display are described.

    Laser-based head-tracked 3D display research.

    The construction and operation of two laser-based glasses-free 3D (autostereoscopic) displays developed within the European Union-funded projects MUTED and HELIUM3D are described in this paper. Both use a multi-user head tracker to direct viewing regions, referred to as exit pupils, to the viewers' eyes. MUTED employs a direct-view LCD whose backlight comprises novel steering optics, while in HELIUM3D image information is supplied by a horizontally-scanned fast light valve whose output is controlled by a spatial light modulator (SLM). The principle of operation, construction and results obtained are described.