7 research outputs found

    Method and apparatus for calibrating a tiled display

    A display system that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, one or more cameras capture an image of the display screen. The captured image is processed to identify any non-desirable characteristics, including visible artifacts such as seams, bands, and rings. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal provided to the display so that the non-desirable characteristics are reduced or eliminated. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and other visible artifacts.
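    The capture-then-pre-warp loop described above can be sketched in miniature for the luminance case: measure the screen with a camera, derive a per-pixel gain map, and attenuate the input signal so the displayed result is uniform. This is a minimal illustrative sketch, not the patented method; the grid values, function names, and the choice of the darkest pixel as the uniformity target are all assumptions.

```python
# Minimal sketch: luminance-uniformity correction for a tiled display.
# 'captured' stands in for a camera measurement of the screen; all
# names and values here are illustrative assumptions.

def gain_map(captured, target=None):
    """Per-pixel gain that flattens luminance non-uniformity.

    The darkest measured pixel sets the achievable target, so every
    gain stays <= 1 and no pixel is asked to exceed full drive.
    """
    if target is None:
        target = min(min(row) for row in captured)
    return [[target / v for v in row] for row in captured]

def pre_warp(frame, gains):
    """Attenuate the input signal so the *displayed* image is uniform."""
    return [[p * g for p, g in zip(frow, grow)]
            for frow, grow in zip(frame, gains)]

# Toy 2x2 "camera capture": the right column is 25% brighter,
# e.g. a visible seam between two tiles.
captured = [[80.0, 100.0],
            [80.0, 100.0]]
gains = gain_map(captured)
frame = [[200.0, 200.0],
         [200.0, 200.0]]
corrected = pre_warp(frame, gains)
# The brighter column is driven less, so both columns display
# equal luminance.
```

    In a real system the same idea extends to per-channel gains for color non-uniformity and to a geometric warp for spatial non-uniformity.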

    A Multi-Projector Calibration Method for Virtual Reality Simulators with Analytically Defined Screens

    The geometric calibration of projectors is a demanding task, particularly for the virtual reality simulator industry. Different methods have been developed over recent decades to retrieve the intrinsic and extrinsic parameters of projectors, most of them based on planar homographies and some requiring an extended calibration process. The aim of our research is to design a fast and user-friendly method for multi-projector calibration on analytically defined screens, demonstrated on a virtual reality Formula 1 simulator with a cylindrical screen. The proposed method combines surveying, photogrammetry, and image processing approaches, and has been designed with the spatial restrictions of virtual reality simulators in mind. The method has been validated from a mathematical point of view, and the complete system, currently installed in a shopping mall in Spain, has been tested by different users.
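    The advantage of an analytically defined screen is that projector or viewer rays can be intersected with the screen surface in closed form, rather than relying on per-pixel measured correspondences. The following sketch shows the core geometric step for a cylindrical screen; the radius, coordinate convention, and ray model are illustrative assumptions, not the paper's calibration pipeline.

```python
import math

# Sketch: closed-form intersection of a ray with an infinite vertical
# cylinder x^2 + z^2 = r^2 centred on the y-axis. With an analytic
# screen, each projector pixel's ray maps to a screen point this way.

def ray_cylinder_intersection(origin, direction, radius):
    """Return the nearest forward hit of a ray on the cylinder, or None."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    # Substitute the ray into x^2 + z^2 = r^2 and solve the quadratic in t.
    a = dx * dx + dz * dz
    b = 2.0 * (ox * dx + oz * dz)
    c = ox * ox + oz * oz - radius * radius
    disc = b * b - 4.0 * a * c
    if disc < 0.0 or a == 0.0:
        return None
    t = (-b + math.sqrt(disc)) / (2.0 * a)  # forward hit from inside
    if t < 0.0:
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# A viewer on the cylinder axis looking along +z hits the screen at
# (0, 0, radius).
hit = ray_cylinder_intersection((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), 2.5)
```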

    Roomalive: Magical experiences enabled by scalable, adaptive projector-camera units

    RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge, and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating and self-localizing, and they create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences RoomAlive makes possible and discuss the design challenges of adapting any game to any room.

    Tele-immersive display with live-streamed video.

    Tang Wai-Kwan. Thesis (M.Phil.), Chinese University of Hong Kong, 2001. Includes bibliographical references (leaves 88-95). Abstracts in English and Chinese.
    Contents:
    1. Introduction: Applications; Motivation and Goal; Thesis Outline
    2. Background and Related Work: Panoramic Image Navigation; Image Mosaicing (Image Registration, Image Composition); Immersive Display; Video Streaming (Video Coding, Transport Protocol)
    3. System Design: System Architecture (Video Capture Module, Video Streaming Module, Stitching and Rendering Module, Display Module); Design Issues (Modular Design, Scalability, Workload Distribution)
    4. Panoramic Video Mosaic: Video Mosaic to Image Mosaic (Assumptions, Processing Pipeline); Camera Calibration (Perspective Projection, Distortion, Calibration Procedure); Panorama Generation (Cylindrical and Spherical Panoramas, Homography, Homography Computation, Error Minimization, Stitching Multiple Images, Seamless Composition); Image Mosaic to Video Mosaic (Varying Intensity, Video Frame Management)
    5. Immersive Display: Human Perception System; Creating Virtual Scene; VisionStation (F-Theta Lens, VisionStation Geometry, Sweet Spot Relocation and Projection, Sweet Spot Relocation in Vector Representation)
    6. Video Streaming: Video Compression; Transport Protocol; Latency and Jitter Control; Synchronization
    7. Implementation and Results: Video Capture; Video Streaming (Video Encoding, Streaming Protocol); Implementation Results (Indoor Scene, Outdoor Scene); Evaluation
    8. Conclusion: Summary; Future Directions
    Appendix A: Parallax
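    The homography material in Chapter 4 of this thesis refers to the standard projective mapping used when stitching panorama frames: a 3x3 matrix H maps a pixel (x, y) in one image into the mosaic's reference frame via homogeneous coordinates. A minimal sketch of that mapping step, with an illustrative H (a pure 100-pixel horizontal translation) rather than one estimated from real correspondences:

```python
# Sketch of applying a homography during mosaicing: lift (x, y) to
# homogeneous coordinates, multiply by H, then divide by w.

def apply_homography(H, x, y):
    """Map (x, y) through the 3x3 homography H with the projective divide."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xh / w, yh / w

# Illustrative example: shift the frame 100 px to the right.
H = [[1.0, 0.0, 100.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
print(apply_homography(H, 10.0, 20.0))  # -> (110.0, 20.0)
```

    In the thesis pipeline, H itself would be estimated from matched feature points between overlapping frames and refined by error minimization before composition.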

    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed in order to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.

    Pixel-Aligned Warping for Multiprojector Tiled Displays

    No full text

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution, wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain regarding how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that direct system designers in optimizing the display-camera setup, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs and improved remote observation systems. To achieve this goal, the thesis presents a thorough, systematic investigation of existing literature and previous research to identify the most important factors governing realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies based on a predefined set of research questions. More specifically, the roles of familiarity with the observed place, of the environment characteristics shown to the viewer, and of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices. The main outcomes from the two studies demonstrate that test users experience more realistic observation when natural features, higher-resolution displays, natural illumination, and high image contrast are used in mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when the observed environments induce strong emotions, and depth perception improves in VR when several monocular cues, such as lights and shadows, are combined with binocular depth cues. Based on these results, the investigation presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes offers a substantial improvement for remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, showing that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
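    The eye-adapted HDR idea can be illustrated with a small sketch: instead of tone-mapping the whole HDR frame with one global exposure, the exposure is keyed to the luminance around the viewer's gaze point reported by the eye tracker. The grid representation, window size, and the simple Reinhard-style operator below are illustrative assumptions, not the thesis implementation.

```python
# Sketch: gaze-adapted tone mapping of an HDR luminance grid.
# hdr is a 2D list of scene luminances; gaze is an (x, y) pixel index
# from an eye tracker. All names and the operator are illustrative.

def gaze_adapted_tone_map(hdr, gaze, window=1):
    """Tone-map hdr to [0, 1), adapting exposure to the gaze region."""
    gx, gy = gaze
    # Mean luminance in a small window around the gaze point
    # stands in for the eye's local adaptation level.
    samples = [hdr[y][x]
               for y in range(max(0, gy - window),
                              min(len(hdr), gy + window + 1))
               for x in range(max(0, gx - window),
                              min(len(hdr[0]), gx + window + 1))]
    adapt = sum(samples) / len(samples)
    # Reinhard-style roll-off scaled by the adaptation luminance:
    # values near 'adapt' land mid-range, brighter values compress.
    return [[(v / adapt) / (1.0 + v / adapt) for v in row] for row in hdr]

# Toy 2x2 frame with one bright region; adapting to the dim corner
# keeps the dim pixels visible while the bright pixel rolls off.
hdr = [[1.0, 1.0],
       [1.0, 9.0]]
mapped = gaze_adapted_tone_map(hdr, (0, 0), window=1)
```

    As the gaze moves, the adaptation luminance changes and the whole mapping shifts, which is what couples this operator to an eye-tracked headset.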