17 research outputs found
Crosstalk measurement and mitigation for autostereoscopic displays
In this paper we address the problem of crosstalk reduction for autostereoscopic displays. Crosstalk refers to the perception of one or more unwanted views in addition to the desired one. The proposed approach consists of three stages: a crosstalk measurement stage, in which the crosstalk is modeled; a filter design stage, based on the measurement results, to mitigate the crosstalk effect; and a validation test carried out by means of subjective measurements performed in a controlled environment, as recommended in ITU-R BT.500-11. Our analysis, synthesis, and subjective experiments are performed on the Alioscopy® display, a lenticular multiview display.
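The measurement-then-mitigation idea can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual model: crosstalk is approximated as a linear mixing matrix in which each view leaks a fixed fraction of its intensity into its immediate neighbours, and mitigation pre-compensates the view intensities by solving the inverse problem.

```python
import numpy as np

def crosstalk_matrix(n_views, leak=0.05):
    """Assumed linear model: each view leaks a fraction `leak` of its
    intensity into each immediate neighbour view."""
    m = np.eye(n_views) * (1 - 2 * leak)
    for i in range(n_views - 1):
        m[i, i + 1] = leak
        m[i + 1, i] = leak
    # edge views only have one neighbour, so they lose less light
    m[0, 0] = 1 - leak
    m[-1, -1] = 1 - leak
    return m

def mitigate(views, leak=0.05):
    """Pre-compensate view intensities so that, after the display mixes
    them, the observer sees approximately the intended intensities."""
    m = crosstalk_matrix(len(views), leak)
    compensated = np.linalg.solve(m, np.asarray(views, dtype=float))
    # a display cannot emit negative light: clip to the valid range
    return np.clip(compensated, 0.0, 1.0)

# intended per-view intensities at one pixel of an 8-view display
intended = [0.2, 0.8, 0.5, 0.5, 0.5, 0.5, 0.5, 0.5]
out = mitigate(intended)
```

When the compensated values stay inside the displayable range, mixing them through the crosstalk matrix recovers the intended intensities; real displays need the clipping step, which is where perfect cancellation breaks down.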
Perceptually Optimized Visualization on Autostereoscopic 3D Displays
The family of displays that aim to visualize a 3D scene with realistic depth is known as "3D displays". Due to technical limitations and design decisions, such displays create visible distortions, which are interpreted by human vision as artefacts. In the absence of a visual reference (e.g. when the original scene is not available for comparison), one can improve the perceived quality of the representations by making the distortions less visible. This thesis proposes a number of signal processing techniques for decreasing the visibility of artefacts on 3D displays.
The visual perception of depth is discussed, and the properties (depth cues) of a scene which the brain uses for assessing an image in 3D are identified. Following the physiology of vision, a taxonomy of 3D artefacts is proposed. The taxonomy classifies the artefacts based on their origin and on the way they are interpreted by the human visual system.
The principles of operation of the most popular types of 3D displays are explained. Based on these principles, 3D displays are modelled as a signal processing channel. The model is used to explain how distortions are introduced, and it allows one to identify which optical properties of a display are most relevant to the creation of artefacts. A set of optical properties for dual-view and multiview 3D displays is identified, and a methodology for measuring them is introduced. The measurement methodology allows one to derive the angular visibility and crosstalk of each display element without the need for precision measurement equipment. Based on the measurements, a methodology for creating a quality profile of 3D displays is proposed. The quality profile can be either simulated using the angular brightness function or measured directly from a series of photographs. A comparative study presenting measurement results on the visual quality and sweet-spot positions of eleven 3D displays of different types is included. Knowing the sweet-spot position and the quality profile allows for easy comparison between 3D displays, and the shape and size of the passband allow the depth and textures of 3D content to be optimized for a given display.
Based on knowledge of 3D artefact visibility and an understanding of the distortions introduced by 3D displays, a number of signal processing techniques for artefact mitigation are created. A methodology for creating anti-aliasing filters for 3D displays is proposed. For multiview displays, the methodology is extended towards so-called passband optimization, which addresses the Moiré, fixed-pattern-noise, and ghosting artefacts characteristic of such displays. Additionally, the design of tuneable anti-aliasing filters is presented, along with a framework which allows the user to select a so-called 3D sharpness parameter according to his or her preferences. Finally, a set of real-time algorithms for viewpoint-based optimization is presented. These algorithms require active user tracking, which is implemented as a combination of face and eye tracking. Once the observer position is known, the image on a stereoscopic display is optimized for the derived observation angle and distance. For multiview displays, the combination of precise light redirection and less precise face tracking is used to extend the head parallax. For some user-tracking algorithms, implementation details are given regarding execution on a mobile device or on a desktop computer with a graphics accelerator.
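The anti-aliasing idea can be sketched as follows. This is not the thesis' actual filter design, only a generic low-pass stand-in: before a view is subsampled onto the interleaved subpixel grid of a multiview display, it is filtered so that frequencies above the display's passband do not alias into visible Moiré.

```python
import numpy as np

def antialias_before_interleave(view, sigma=1.0, radius=3):
    """Illustrative stand-in for a display anti-aliasing filter:
    separable Gaussian low-pass applied to a view before interleaving."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-0.5 * (x / sigma) ** 2)
    g /= g.sum()  # normalize so flat regions keep their brightness
    # separable filtering: rows first, then columns
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode="same"), 1, view)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode="same"), 0, rows)

# high-frequency test pattern: alternating bright and dark rows
view = np.zeros((8, 8))
view[::2] = 1.0
smoothed = antialias_before_interleave(view)
```

A real passband-optimized filter would be shaped by the measured angular visibility of the display rather than by a fixed Gaussian, but the structure (filter, then interleave) is the same.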
Roadmap on 3D integral imaging: Sensing, processing, and display
This Roadmap article on three-dimensional integral imaging provides an overview of research activities in the field. The article discusses various aspects of the field, including sensing of 3D scenes, processing of captured information, and 3D display and visualization of information. The paper consists of a series of 15 sections from experts covering sensing, processing, displays, augmented reality, microscopy, object recognition, and other applications. Each section presents its author's view of the progress, potential, and challenging issues in the field.
Stereoscopic 3D user interfaces: exploring the potentials and risks of 3D displays in cars
During recent years, rapid advancements in stereoscopic digital display technology have led to the acceptance of high-quality 3D in the entertainment sector and even created enthusiasm towards the technology. The advent of autostereoscopic displays (i.e., glasses-free 3D) allows 3D technology to be introduced into other application domains, including but not limited to mobile devices, public displays, and automotive user interfaces, the last of which is the focus of this work. Prior research demonstrates that 3D improves the visualization of complex structures and augments virtual environments. We envision its use to enhance the in-car user interface by structuring the presented information via depth. Thus, content that requires attention can be shown close to the user, and distances, for example to other traffic participants, gain a direct mapping in 3D space.
Fundamentals of phase-only liquid crystal on silicon (LCOS) devices
This paper describes the fundamentals of phase-only liquid crystal on silicon (LCOS) technology, which have not previously been discussed in detail. This technology is widely utilized in high-efficiency applications for real-time holography and diffractive optics. The paper begins with a brief introduction to the developmental trajectory of phase-only LCOS technology, followed by the correct selection of liquid crystal (LC) materials and the corresponding electro-optic effects in such devices. Attention is focused on the essential requirements of the physical aspects of the LC layer as well as the indispensable parameters for the response time of the device. Furthermore, the basic functionalities embedded in the complementary metal oxide semiconductor (CMOS) silicon backplane for phase-only LCOS devices are illustrated, including two typical addressing schemes. Finally, the application of phase-only LCOS devices in real-time holography is introduced in association with the use of cutting-edge computer-generated holograms. This is the final version. It has been published by NPG in Light: Science & Applications here: http://www.nature.com/lsa/journal/v3/n10/full/lsa201494a.html
Remote Visual Observation of Real Places Through Virtual Reality Headsets
Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since powerful high-resolution, wide field-of-view VR headsets have recently reached the market. While the great potential of such VR systems is widely accepted, open issues remain in how to design systems and setups capable of fully exploiting the latest hardware advances.
The aim of the proposed research is to study and understand how to increase the perceived level of realism and the sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems.
To achieve the proposed goal, this thesis presents a thorough investigation of the existing literature and previous research, carried out systematically to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions.
More specifically, the role of familiarity with the observed place, the role of the environment characteristics shown to the viewer, and the role of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
The main outcomes from the two studies demonstrate that test users can experience an enhanced realistic observation when natural features, higher resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues such as lights and shadows are combined with binocular depth cues.
Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes offers a significant improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
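The eye-adapted HDR idea can be sketched in a few lines. The function below is hypothetical and much simpler than the thesis' approach: the frame is exposed for the luminance in a window around the tracked gaze point, mimicking the eye's local adaptation, and then compressed with a simple Reinhard-style curve.

```python
import numpy as np

def eye_adapted_tonemap(hdr, gaze, window=8):
    """Hypothetical sketch: expose an HDR luminance frame for the
    region around the tracked gaze point (row, col)."""
    y, x = gaze
    patch = hdr[max(0, y - window):y + window, max(0, x - window):x + window]
    key = patch.mean() + 1e-6          # local adaptation luminance
    scaled = hdr / key                 # expose for the gaze region
    return scaled / (1.0 + scaled)     # simple Reinhard-style curve

# bright top half, dim bottom half
hdr = np.ones((16, 16))
hdr[:8] = 10.0
tm_bright = eye_adapted_tonemap(hdr, (4, 8), window=4)   # gazing at bright area
tm_dark = eye_adapted_tonemap(hdr, (12, 8), window=4)    # gazing at dark area
```

Gazing at the bright half compresses it towards mid-grey, while gazing at the dark half brightens the whole frame, which is the behaviour the static-HDR baseline lacks.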
Simultaneous 2D and 3D Video Rendering
The representation of stereoscopic video on a display is typically enabled by active-shutter or polarizing viewing glasses in the television sets and displays available to end users. In some usage situations, some viewers may not wear viewing glasses at all times; hence it would be desirable if the stereoscopic video content could be tuned in the rendering device so that it can be watched simultaneously with and without viewing glasses at acceptable quality. In this thesis, a novel video rendering technique is proposed and implemented in the post-processing stage that enables good-quality perception of the same content both stereoscopically and as traditional 2D video. This is accomplished by manipulating one view of the stereoscopic video to make it more similar to the other view, reducing the ghosting artifact perceived when the content is watched without viewing glasses while stereoscopic perception is maintained. The proposed technique comprises three steps: disparity selection, contrast adjustment, and low-pass filtering. Through an extensive series of subjective tests, the proposed approach has been shown to allow stereoscopic content to be viewed without glasses at acceptable quality. The proposed methods also result in a lower-bitrate stereoscopic video stream, requiring less bandwidth for broadcasting.
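The three steps can be sketched on a single scanline. All parameter values and the wrap-around shift are illustrative assumptions, not the thesis' actual implementation: the manipulated view is shifted by a selected disparity, its contrast is compressed towards the mean, and it is low-pass filtered with a box kernel.

```python
import numpy as np

def adjust_view(view, disparity=2, contrast=0.6, kernel=5):
    """Hypothetical sketch of the three steps applied to one view of a
    stereo pair (a 1-D scanline for brevity)."""
    # 1. disparity selection: shift the view by the chosen disparity
    #    (np.roll wraps around; a real implementation would inpaint edges)
    shifted = np.roll(view, disparity)
    # 2. contrast adjustment: compress deviations from the scanline mean
    mean = shifted.mean()
    adjusted = mean + contrast * (shifted - mean)
    # 3. low-pass filtering: box filter suppresses high frequencies
    box = np.ones(kernel) / kernel
    return np.convolve(adjusted, box, mode="same")

scanline = np.array([0., 0., 1., 1., 0., 0., 1., 1., 0., 0.])
processed = adjust_view(scanline)
```

The processed view is flatter and smoother than the original, so when both views are summed by unaided vision the ghost image it contributes is weaker, while the unmodified view preserves the detail needed for stereoscopic fusion.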
Open Profiling of Quality: A Mixed Methods Research Approach for Audiovisual Quality Evaluations
To meet the requirements of consumers and to provide them with a greater
quality of experience than existing systems do is a key issue for the
success of modern multimedia systems. However, the question about an
optimized quality of experience becomes more and more complex as
technological systems are evolving and several systems are merged into new
ones, e.g. systems for mobile 3D television and video. To be able to
optimize critical components of a system under development with as few perceptual errors as possible, user studies are conducted throughout the
whole process. A variety of research methods for different purposes have
been provided by standardization bodies since the 1970s. These methods
allow researchers to evaluate the hedonic excellence of a set of test
stimuli. However, a broader view of quality has recently been taken, evaluating quality beyond hedonic excellence to gain greater knowledge about perceived quality and the subjective quality factors that affect the user.
The goal of this thesis is twofold. The primary goal is
the development of a validated mixed methods research approach for
audiovisual quality evaluations. The method shall allow collecting
quantitative and descriptive data during the experiment to combine
evaluation of hedonic excellence and the elicitation of underlying
subjective quality factors. The second goal is the application of the
developed method within a series of studies in the domain of mobile 3D
video and television to show its applicability.
Open Profiling of Quality
(OPQ) is a mixed-methods research approach which combines a quantitative,
psychoperceptual evaluation of hedonic excellence and a descriptive sensory
analysis of underlying quality factors based on naive participants'
individual vocabulary. This combination allows defining the excellence of
overall quality, understanding the characteristics of quality perception,
and, eventually, constructing a link between preferences and quality
attributes. The method was developed under constructive research with
respect to validity and reliability of test results. A series of quality
evaluation studies with more than 300 test participants was conducted along
different critical components of a system for optimized mobile 3DTV content
delivery over DVB-H.
The results complemented each other, and, even more
importantly, quantitative quality preferences were explained by sensory
descriptions in all studies. Beyond the development of OPQ, the thesis
proposes further research approaches, e.g. conventional profiling, in which OPQ's individual vocabulary is substituted by a fixed set of Quality of Experience components, and Descriptive Sorted Napping, which combines a sorting task and a short post-task interview. All approaches are compared to Open Profiling of Quality at the end of the thesis. To be able to holistically contrast the strengths and weaknesses of each method, a comparison model for audiovisual evaluation methods was developed, and a first conceptual operationalization of the model was applied in the comparison.