
    An Operationally Based Vision Assessment Simulator for Domes

    The Operational Based Vision Assessment (OBVA) simulator was designed and built by NASA and the United States Air Force (USAF) to provide the Air Force School of Aerospace Medicine (USAFSAM) with a scientific testing laboratory to study human vision and testing standards in an operationally relevant environment. This paper describes the general design objectives and implementation characteristics of the simulator visual system created to meet these requirements. A key design objective for the OBVA research simulator is to develop a real-time computer image generator (IG) and display subsystem that can display and update at 120 frames per second (design target), or at a minimum 60 frames per second, with minimal transport delay, using commercial off-the-shelf (COTS) technology. Three key parts of the OBVA simulator are described in this paper: i) the real-time computer image generator, ii) the various COTS technology used to construct the simulator, and iii) the spherical dome display and real-time distortion correction subsystem. We describe the various issues, possible COTS solutions, and remaining problem areas identified by NASA and the USAF while designing and building the simulator for future vision research. We also describe the critically important relationship between the physical display components, including distortion correction for the dome, consistent with the objective of minimizing latency in the system. The performance of the automatic calibration system used in the dome is also described, and various recommendations for possible future implementations are discussed.
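
    For context on the display-rate targets above, the arithmetic is simple: at 120 Hz each frame must be generated, warped, and scanned out within 8.33 ms. Below is a minimal sketch of that budget, assuming a hypothetical three-stage pipeline (IG, distortion correction, scan-out); the pipeline depth is an illustrative assumption, not a figure from the paper.

    def frame_budget_ms(frame_rate_hz):
        """Time available to render and scan out one frame, in milliseconds."""
        return 1000.0 / frame_rate_hz

    def transport_delay_ms(frame_rate_hz, pipeline_frames):
        """Worst-case IG-to-display latency if each pipeline stage adds one frame."""
        return pipeline_frames * frame_budget_ms(frame_rate_hz)

    for rate in (60.0, 120.0):
        print(f"{rate:5.0f} Hz: {frame_budget_ms(rate):5.2f} ms/frame, "
              f"3-stage delay ~ {transport_delay_ms(rate, 3):5.2f} ms")
    # 60 Hz: 16.67 ms/frame, ~50 ms total; 120 Hz: 8.33 ms/frame, ~25 ms total.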

    Perception of Color Break-Up

    Background. A color-distorting artifact called Color Break-Up (CBU) has been investigated. Disturbing CBU effects occur when eye movements (e.g., pursuits or saccades) are performed during the presentation of content on Field-Sequential Color (FSC) display or projection systems, where the primary colors are displayed sequentially rather than simultaneously. Methods. A mixed design of empirical research and theoretical modeling was used to address the main research questions. The studies evaluated the impact of hardware-based, content-based, and viewer-based factors on the sample's CBU perception. In a first step, visual performance parameters (e.g., color vision), short-term state (e.g., attention level), and long-term personality traits (e.g., affinity to technology) of the sample were recorded. Participants were then asked to rate the perceived CBU intensity for different video sequences presented by an FSC-based projector. The applied setup allowed the size of the CBU-provoking content (1.0 to 6.0°), its luminance level (10.0 to 157.0 cd/m²), the participant's eye movement pattern (pursuit velocity: 18.0 to 54.0°/s; saccadic amplitude: 3.6 to 28.2°), the position of retinal stimulation (0.0 to 50.0°), and the projector's frame rate (30.0 to 420.0 Hz) to be varied. Correlations between the independent variables and subjective CBU perception were tested. Complementing the empirical studies, the developed model predicts a viewer's CBU perception on a theoretical basis. The model first reconstructs the intensity and color characteristics of CBU effects graphically. The visual CBU reconstruction is then compressed into representative model indices to quantify the modeled scenario with a manageable set of metrics. Finally, the model output was compared to the empirical data. Results. The high interindividual CBU variability within the sample cannot be explained by a participant's visual performance, short-term state, or long-term personality traits. Conditions that distinctly elevate CBU perception are (1) a foveal stimulus position on the retina, (2) a small stimulus size during saccades, (3) a high eye movement velocity, and (4) a low frame rate of the projector (correlation expressed by an exponential function, r² > .93). The stimulus luminance, however, only slightly affects CBU perception. In general, the model helps to understand the fundamental processes of CBU genesis, to investigate the impact of CBU determinants, and to establish a classification scheme for different CBU variants. The model adequately predicts the empirical data within the specified tolerance ranges. Conclusions. The study results allow the determination of frame rates and content characteristics (size and position) that avoid exceeding predefined annoyance thresholds for CBU perception. The derived hardware requirements and content recommendations enable practical, evidence-based CBU management. For CBU prediction, model accuracy can be further improved by considering features of human perception, e.g., eccentricity-dependent retinal sensitivity or changes in visual perception during different types of eye movements. Participant-based data from the empirical research can be used to model these features.
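
    The reported frame-rate relationship, CBU perception falling off exponentially with frame rate (r² > .93), can be illustrated with a curve fit. The sketch below assumes a hypothetical rating data set and the decay form a*exp(-b*f) + c; it is not the thesis's actual measurements or model.

    import numpy as np
    from scipy.optimize import curve_fit

    frame_rate_hz = np.array([30.0, 60.0, 120.0, 240.0, 420.0])
    cbu_rating = np.array([8.5, 6.2, 3.1, 1.2, 0.6])  # hypothetical ratings

    def exp_decay(f, a, b, c):
        return a * np.exp(-b * f) + c

    params, _ = curve_fit(exp_decay, frame_rate_hz, cbu_rating, p0=(10.0, 0.01, 0.5))
    pred = exp_decay(frame_rate_hz, *params)
    ss_res = np.sum((cbu_rating - pred) ** 2)
    ss_tot = np.sum((cbu_rating - cbu_rating.mean()) ** 2)
    print(f"a={params[0]:.2f}, b={params[1]:.4f}, c={params[2]:.2f}, "
          f"r^2={1 - ss_res / ss_tot:.3f}")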

    The impact of enhanced projector display on the responses of people to a violent scenario in immersive virtual reality

    This paper describes the impact of display resolution and luminance on the responses of participants in a behavioral study that used a projection-based Immersive Virtual Reality System. The scenario was a virtual bar where participants witnessed a violent attack by one person on another, arising from an argument about support for a soccer club. The major response variable was the number of interventions made by participants. The study was between-groups, with 10 participants in two groups, pre-upgrade and post-upgrade, both run in the same 4-screen Cave-like system. However, the post-upgrade group experienced the scenario with projectors that had a significantly higher resolution and luminance than those experienced by the pre-upgrade group. The results show that, other things being equal, the number of both verbal and physical interventions was greater amongst those in the post-upgrade group than in the pre-upgrade group.

    Stereoscopic high dynamic range imaging

    Two modern technologies show promise to dramatically increase immersion in virtual environments. Stereoscopic imaging captures two images representing the views of both eyes and allows for better depth perception. High dynamic range (HDR) imaging accurately represents real-world lighting, as opposed to traditional low dynamic range (LDR) imaging; HDR provides better contrast and more natural-looking scenes. The combination of the two technologies, gaining the advantages of both, has until now been mostly unexplored due to limitations in the current imaging pipeline. This thesis reviews both fields, proposes a stereoscopic high dynamic range (SHDR) imaging pipeline, outlines the challenges that need to be resolved to enable SHDR, and focuses on the capture and compression stages of that pipeline. The problems of capturing SHDR images, which would potentially require two HDR cameras and introduce ghosting, are mitigated by capturing an HDR-LDR pair and using it to generate SHDR images. A detailed user study compared four different methods of generating SHDR images. Results demonstrated that one of the methods may produce images perceptually indistinguishable from the ground truth. Insights obtained while developing the static image operators guided the design of SHDR video techniques. Three methods for generating SHDR video from an HDR-LDR video pair are proposed and compared to ground-truth SHDR videos. Results showed little overall error and identified the method with the least error. Once captured, SHDR content needs to be efficiently compressed. Five backward-compatible SHDR compression methods are presented. The proposed methods can encode SHDR content at a size little more than that of a traditional single LDR image (18% larger for one method), and the backward-compatibility property encourages early adoption of the format. The work presented in this thesis has introduced and advanced capture and compression methods for the adoption of SHDR imaging. In general, this research paves the way for the novel field of SHDR imaging, which should lead to improved and more realistic representation of captured scenes.
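
    One way to picture the HDR-LDR capture approach described above: linearize the LDR eye and rescale it against the captured HDR eye. This is an illustrative baseline sketch under those assumptions, not one of the four generation methods the thesis compares.

    import numpy as np

    def expand_ldr_to_hdr(ldr, hdr_ref, gamma=2.2):
        """ldr: uint8 image (H, W, 3); hdr_ref: linear float HDR image of the
        other eye. Returns a linear HDR estimate for the LDR eye."""
        linear = (ldr.astype(np.float32) / 255.0) ** gamma  # undo display gamma
        scale = hdr_ref.mean() / max(float(linear.mean()), 1e-6)  # match mean luminance
        return linear * scale

    # Usage with random stand-in images:
    ldr_right = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
    hdr_left = np.random.rand(480, 640, 3).astype(np.float32) * 100.0
    print(expand_ldr_to_hdr(ldr_right, hdr_left).max())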

    Synchronized Illumination Modulation for Digital Video Compositing

    The exchange of information is one of humanity's basic needs. While wall paintings, handwriting, the printing press, and painting once served this purpose, people later began to create image sequences that, as so-called flip books, conveyed the impression of animation. These were soon automated with rotating picture discs on which an animation became visible through slit apertures, mirrors, or optics: the so-called phenakistiscopes, zoetropes, and praxinoscopes. With the invention of photography in the second half of the 19th century, the first scientists, such as Eadweard Muybridge, Etienne-Jules Marey, and Ottomar Anschütz, began to record serial images and play them back in rapid succession as film. With the beginning of film production came the first attempts to use this new technique to generate special visual effects and thereby further increase the immersion of moving-image productions. While such effects remained quite limited throughout the analog phase of film production up to the 1980s and had to be produced laboriously with enormous manual effort, they gained ever more importance with the rapidly accelerating development of semiconductor technology and the resulting simplification of digital processing. The enormous possibilities opened up by lossless post-processing in combination with photorealistic three-dimensional renderings have led to nearly all films produced today containing a wide variety of digital video compositing effects. ... Besides home entertainment and business presentations, video projectors are powerful tools for modulating images spatially as well as temporally. The re-evolving need for stereoscopic displays increases the demand for low-latency projectors, and recent advances in LED technology also offer high modulation frequencies. Combining such high-frequency illumination modules with synchronized, fast cameras makes it possible to develop specialized high-speed illumination systems for visual effects production. In this thesis we present different systems for using spatially as well as temporally modulated illumination in combination with a synchronized camera to simplify the requirements of standard digital video compositing techniques for film and television productions and to offer new possibilities for visual effects generation. After an overview of the basic terminology and a summary of related methods, we discuss and give examples of how modulated light can be applied in a scene-recording context to enable a variety of effects which cannot be realized using standard methods such as virtual studio technology or chroma keying. We propose using high-frequency, synchronized illumination which, in addition to providing illumination, is modulated in terms of intensity and wavelength to encode technical information for visual effects generation. This is carried out in such a way that the technical components do not influence the final composite and are also not visible to observers on the film set. Using this approach we present a real-time flash keying system for the generation of perspectively correct augmented composites by projecting imperceptible markers for optical camera tracking. Furthermore, we present a system which enables the generation of various digital video compositing effects outside of completely controlled studio environments, such as virtual studios. A third, temporal keying system is presented that aims to overcome the constraints of traditional chroma keying in terms of color spill and color dependency. ...
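
    The temporal keying idea can be sketched compactly: with illumination synchronized to alternate on and off between consecutive camera frames, a per-pixel difference yields a matte that does not depend on background color, unlike chroma keying. The frame pairing and threshold below are illustrative assumptions, not the thesis's implementation.

    import numpy as np

    def temporal_matte(frame_lit, frame_unlit, threshold=0.08):
        """Both frames are linear float images in [0, 1], captured on
        consecutive, illumination-synchronized camera frames.
        Returns a binary foreground matte."""
        diff = np.abs(frame_lit.astype(np.float32) - frame_unlit.astype(np.float32))
        return (diff.mean(axis=-1) > threshold).astype(np.float32)

    lit = np.random.rand(480, 640, 3).astype(np.float32)
    unlit = (lit * 0.3).astype(np.float32)  # stand-in: keying light switched off
    print("foreground coverage:", temporal_matte(lit, unlit).mean())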

    Painting an apple with an apple: a tangible tabletop interface for painting with physical objects

    We introduce UnicrePaint, a digital painting system that allows the user to paint with physical objects by acquiring three parameters from the interacting object: its form, its color pattern, and its contact pressure. The design of the system is motivated by the hypothesis that integrating direct input from physical objects with digital painting offers unique creative experiences to the user. A major technical challenge in implementing UnicrePaint is resolving the conflict between input and output, i.e., capturing the form and color pattern of contacting objects with a camera while at the same time presenting the captured data using a projector. We present a solution to this problem. We implemented a prototype and carried out a user study with fifteen novice users. Additionally, five professional users with art-related backgrounds participated in a user study to obtain insights into how professionals might view our system. The results show that UnicrePaint offers unique, creative painting experiences. Its potential beyond mere artwork is also suggested.
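
    One plausible shape of the input/output resolution described above is time multiplexing: the projector alternates between a flat capture frame and the painted canvas, and the camera samples only under the capture frame. The sketch below is a hypothetical illustration; grab_frame and project are stand-in device callbacks, not UnicrePaint's actual API, and contact pressure is omitted.

    import numpy as np

    CAPTURE_FRAME = np.ones((768, 1024, 3), np.float32)  # uniform white capture frame

    def painting_loop(grab_frame, project, canvas, n_cycles=100):
        """grab_frame and project are hypothetical camera-read and
        projector-write callbacks for synchronized hardware."""
        for _ in range(n_cycles):
            project(CAPTURE_FRAME)             # illuminate the object, hide the canvas
            stamp = grab_frame()               # camera sees the object's form and color
            mask = stamp.mean(axis=-1) < 0.9   # contact region: darker than the white frame
            canvas[mask] = stamp[mask]         # paint using the object's own color pattern
            project(canvas)                    # show the updated painting
        return canvas

    # Stub devices so the sketch runs without hardware:
    canvas = np.zeros_like(CAPTURE_FRAME)
    grab = lambda: np.random.rand(768, 1024, 3).astype(np.float32)
    print(painting_loop(grab, lambda img: None, canvas, n_cycles=3).max())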

    Content creation for seamless augmented experiences with projection mapping

    This dissertation explores systems and methods for creating projection mapping content that seamlessly merges the virtual and the physical. Most virtual reality and augmented reality technologies rely on screens for display and interaction, where a mobile device or head-mounted display mediates the user's experience. In contrast, projection mapping uses off-the-shelf video projectors to augment the appearance of physical objects, with no screen to mediate the experience: the physical world simply becomes the display. Projection mapping can provide users with a seamless augmented experience, where virtual and physical become indistinguishable in an apparently unmediated way. Projection mapping is an old concept, dating back to Disney's 1969 Haunted Mansion. The core technical foundations were laid in 1999 with UNC's Office of the Future and Shader Lamps projects. Since then, projectors have become brighter and higher resolution, and have drastically decreased in price. Yet projection mapping has not crossed the chasm into mainstream use. The largest remaining challenge is that content creation is very difficult and time-consuming. Content for projection mapping is still created via a tedious manual process of warping a 2D video file onto a 3D physical object using existing tools (e.g. Adobe Photoshop) which are not made for defining animated interactive effects on 3D object surfaces. With existing tools, content must be created for each specific display object and cannot be re-used across experiences. For each object the artist wants to animate, the artist must manually create a custom texture for that specific object and warp the texture to the physical object. This limits projection mapped experiences to controlled environments and static scenes. If the artist wants to project onto a different object, they must start from scratch, creating custom content for that object. This manual content creation process is time-consuming, expensive, and does not scale. This thesis explores new methods for creating projection mapping content, with the goal of making projection mapping easier, cheaper, and more scalable. We explore methods for adaptive projection mapping, which enables artists to create content once and have that content adapt based on the color and geometry of the display surface, so it can be re-used on any surface. This thesis is composed of three proof-of-concept prototypes exploring new methods of content creation for projection mapping. IllumiRoom expands video game content beyond the television screen and into the physical world, using a standard video projector to surround a television with projected light; it works in any living room, with the projected content dynamically adapting to the color and geometry of the room. RoomAlive expands on this idea, using multiple projectors to cover an entire living room in input/output pixels and dynamically adapting gaming experiences to fill the entire room. Finally, Projectibles focuses on the physical aspect of projection mapping: it optimizes the display surface color to increase the contrast and resolution of the overall experience, enabling artists to design the physical object along with the virtual content. The proof-of-concept prototypes presented in this thesis are aimed at the not-too-distant future. The projects in this thesis are not theoretical concepts, but fully working prototype systems that demonstrate the practicality of projection mapping for creating immersive experiences. It is the sincere hope of the author that these experiences quickly move out of the lab and into the real world.
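
    The adaptive idea the prototypes share, compensating content for the color of the display surface, can be sketched per pixel. The simple linear model observed = ambient + reflectance * projected is an assumption for illustration, not the method of any specific prototype above.

    import numpy as np

    def compensate(desired, reflectance, ambient):
        """All inputs are linear float images in [0, 1], geometrically
        registered to projector pixels. Returns the clipped projector input."""
        out = (desired - ambient) / np.maximum(reflectance, 1e-3)
        return np.clip(out, 0.0, 1.0)  # values outside [0, 1] are physically unreachable

    desired = np.random.rand(480, 640, 3).astype(np.float32)
    reflect = np.full_like(desired, 0.7)   # stand-in surface albedo
    ambient = np.full_like(desired, 0.05)  # stand-in room light
    print(compensate(desired, reflect, ambient).mean())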

    Intensity Blending of Computer Image Generation-Based Displays

    State-of-the-art combat simulators require a 360-degree field of view, allowing the pilot and radar intercept officer to have the same visibility in the simulator that they would experience in the aircraft. The sky/earth display must be computer-generated and displayed with a minimum of two channels to provide the most realistic display possible. The two display channels come together in the dome, forming an equator that must be as indiscernible to the aircrew as possible. To accomplish this, an algorithm has been developed for controlling the video output that makes the two separate channel displays appear as one continuous 360-degree display.
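
    The blending idea can be sketched as complementary attenuation ramps across the band where the two channels overlap at the equator, pre-compensated for display gamma so the summed light stays constant across the seam. The cosine ramp and gamma value below are common choices assumed for illustration, not taken from the report.

    import numpy as np

    def blend_weights(overlap_px, gamma=2.2):
        """Per-column attenuation for the two channels across the overlap band."""
        t = np.linspace(0.0, 1.0, overlap_px)
        w_a = 0.5 * (1.0 + np.cos(np.pi * t))  # channel A fades out
        w_b = 1.0 - w_a                        # channel B fades in
        # Pre-compensate for display gamma so the light output, not the pixel
        # value, sums to a constant across the seam.
        return w_a ** (1.0 / gamma), w_b ** (1.0 / gamma)

    w_a, w_b = blend_weights(64)
    print(w_a[:3], w_b[:3])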