5 research outputs found

    Multi-Projector Content Preservation with Linear Filters

    Using aligned overlapping image projectors provides several advantages when compared to a single projector: increased brightness, additional redundancy, and increased pixel density within a region of the screen. Aligning content between projectors is achieved by applying space transformation operations to the desired output. The transformation operations often degrade the quality of the original image due to sampling and quantization. The transformation applied for a given projector is typically done in isolation of all other content-projector transformations. However, it is possible to warp the images with prior knowledge of each other such that they utilize the increase in effective pixel density. This allows for an increase in the perceptual quality of the resulting stacked content. This paper presents a novel method of increasing the perceptual quality within multi-projector configurations. A machine learning approach is used to train a linear filtering based model that conditions the individual projected images on each other.
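The stacking the abstract describes can be sketched with a toy example. This is a minimal illustration, not the paper's trained model: the 3×3 kernels stand in for the learned per-projector linear filters, and the geometric warp is omitted.

```python
import numpy as np
from scipy.signal import convolve2d

def stack_projectors(images, kernels):
    """Sum per-projector images after applying each projector's linear filter.

    `kernels` are placeholders for the learned filters that condition each
    projector's image on the others; here they are hand-picked for clarity.
    """
    out = np.zeros_like(images[0], dtype=float)
    for img, k in zip(images, kernels):
        out += convolve2d(img, k, mode="same", boundary="symm")
    return out

# Two overlapping projectors with identity kernels: the stack is simply
# the brightness sum, matching the "increased brightness" advantage.
imgs = [np.ones((4, 4)), np.ones((4, 4))]
ident = np.zeros((3, 3))
ident[1, 1] = 1.0
stacked = stack_projectors(imgs, [ident, ident])
```

In the actual method the kernels would be trained jointly, so each projector's filter compensates for the sampling loss introduced by the others' warps.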

    A compressive light field projection system

    For about a century, researchers and experimentalists have strived to bring glasses-free 3D experiences to the big screen. Much progress has been made and light field projection systems are now commercially available. Unfortunately, available display systems usually employ dozens of devices making such setups costly, energy inefficient, and bulky. We present a compressive approach to light field synthesis with projection devices. For this purpose, we propose a novel, passive screen design that is inspired by angle-expanding Keplerian telescopes. Combined with high-speed light field projection and nonnegative light field factorization, we demonstrate that compressive light field projection is possible with a single device. We build a prototype light field projector and angle-expanding screen from scratch, evaluate the system in simulation, present a variety of results, and demonstrate that the projector can alternatively achieve super-resolved and high dynamic range 2D image display when used with a conventional screen. MIT Media Lab Consortium; Natural Sciences and Engineering Research Council of Canada (NSERC Postdoctoral Fellowship); National Science Foundation (U.S.) (NSF Grant 0831281).
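The nonnegative factorization the abstract mentions can be illustrated with standard Lee–Seung multiplicative updates. This is a generic sketch, not the paper's solver: here `L` stands in for a flattened target light field and `rank` for the number of time-multiplexed frames.

```python
import numpy as np

def nmf(L, rank, iters=500, eps=1e-9):
    """Nonnegative matrix factorization via Lee-Seung multiplicative updates.

    Multiplicative updates keep W and H elementwise nonnegative, which is
    why NMF suits light emitters: projected intensities cannot be negative.
    """
    rng = np.random.default_rng(0)
    W = rng.random((L.shape[0], rank)) + eps
    H = rng.random((rank, L.shape[1])) + eps
    for _ in range(iters):
        H *= (W.T @ L) / (W.T @ W @ H + eps)   # update frame contents
        W *= (L @ H.T) / (W @ H @ H.T + eps)   # update modulation weights
    return W, H

L = np.abs(np.random.default_rng(1).random((8, 8)))
W, H = nmf(L, rank=8)
rel_err = np.linalg.norm(L - W @ H) / np.linalg.norm(L)
```

In a display context, high-speed presentation of the factor frames lets the eye integrate their product into an approximation of the target light field.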

    A Content Enhancement Framework for Multi-Projector Systems

    Projectors are a convenient technology for displaying content on large, abnormal, or temporary surfaces where mounting other forms of light emitting devices is too impractical or too expensive. Common uses of projectors include movie cinemas, concert halls, 3D model colourization, planetariums, etc. Many of these applications require multiple projectors to either cover the entire display surface, like planetariums, or to achieve the required brightness, like outdoor projection. Aligning the content between projectors is typically required to ensure that overlapping regions between projectors display the same content. Naive approaches to aligning content treat the relationship between the content and a projector independently of all other projectors in the configuration. Aligning content can limit the quality of the superimposed image as high frequency signals are often degraded during the alignment process. Previous works have shown it is possible to improve the perceptual quality of the aligned content by giving each content-to-projector transformation prior knowledge of all projectors in the configuration. However, these works either make theoretical assumptions, require special hardware, severely limit the types of applications their systems work on, or only use qualitative analysis to evaluate their system's performance. In this work, a framework capable of simulating a multi-projector configuration for any number of projectors on a flat surface is proposed. A method of comparing the ideal content with the projected content is developed using the proposed simulation in conjunction with an existing image comparison technique. Different system setups are tested for a two projector configuration. The quality of each configuration is measured using the developed comparison metric across a dataset of natural images.
Finally, the proposed framework is used to train three different models, in an end-to-end fashion, that are capable of improving the perceptual quality of the superimposed image. The first two models are parametric and content independent, while the third model is non-parametric and content dependent. The first model directly integrates with existing interpolation methods used during the content-to-projector alignment. The second model applies a post transformation filtering operation using a set of learned linear convolutional kernels. The third model directly optimizes the projected images to improve the perceptual quality of the superimposed image.
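The high-frequency degradation that alignment introduces can be demonstrated in one dimension. This toy sketch (not the thesis's simulation framework) resamples a signal by half a pixel with bilinear interpolation and then back again, the round trip every content-to-projector warp implicitly performs.

```python
import numpy as np

def half_pixel_round_trip(row):
    """Bilinear resample a 1-D signal by +0.5 px, then by -0.5 px.

    Each step averages adjacent samples, so the round trip acts as a
    low-pass filter: the sampling loss an alignment warp introduces.
    """
    shifted = 0.5 * (row[:-1] + row[1:])   # +0.5 px, bilinear
    back = 0.5 * (shifted[:-1] + shifted[1:])  # -0.5 px, bilinear
    return back

sig = np.array([0.0, 1.0, 0.0, 1.0, 0.0, 1.0])  # highest-frequency signal
out = half_pixel_round_trip(sig)
```

The Nyquist-rate alternation collapses entirely to its mean, which is exactly the kind of loss the thesis's learned models are trained to compensate for.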

    Content-Adaptive Non-Stationary Projector Resolution Enhancement

    For any projection system, one goal will surely be to maximize the quality of projected imagery at a minimized hardware cost, which is considered a challenging engineering problem. Experience in applying different image filters and enhancements to projected video suggests quite clearly that the quality of a projected enhanced video is very much a function of the content of the video itself. That is, to first order, whether the video contains content which is moving as opposed to still plays an important role in the video quality, since the human visual system tolerates much more blur in moving imagery but at the same time is highly sensitive to the flickering and aliasing caused by moving sharp textures. Furthermore, the spatial and statistical characteristics of text and non-text images are quite distinct. We would, therefore, assert that the text-like, moving and background pixels of a given video stream should be enhanced differently using class-dependent video enhancement filters to achieve maximum visual quality. In this thesis, we present a novel text-dependent content enhancement scheme, a novel motion-dependent content enhancement scheme and a novel content-adaptive resolution enhancement scheme based on a text-like / non-text-like classification and a pixel-wise moving / non-moving classification, with the actual enhancement obtained via class-dependent Wiener deconvolution filtering. Given an input image, the text and motion detection methods are used to generate binary masks to indicate the location of the text and moving regions in the video stream. Then enhanced images are obtained by applying a plurality of class-dependent enhancement filters, with text-like regions sharpened more than the background and moving regions sharpened less than the background. Later, one or more resulting enhanced images are combined into a composite output image based on the corresponding masks of the different features.
Finally, a higher resolution projected video stream is produced by controlling one or more projectors to project the plurality of output frame streams in a rapid overlapping way. Experimental results on the test images and videos show that the proposed schemes all offer improved visual quality over projection without enhancement as well as compared to a recent state-of-the-art enhancement method. In particular, the proposed content-adaptive resolution enhancement scheme increases the PSNR value by at least 18.2% and decreases the MSE value by at least 25%.
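The Wiener deconvolution underlying the enhancement filters can be sketched in the frequency domain. This is the textbook formulation, not the thesis's class-dependent variant: the point-spread function and noise-to-signal ratio below are illustrative choices.

```python
import numpy as np

def wiener_deconv(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener deconvolution: F = H* / (|H|^2 + NSR) * G.

    `psf` is assumed centred at the origin and padded to the image size,
    as FFT-based (circular) convolution expects. `nsr` regularizes
    frequencies where the blur response H is near zero.
    """
    H = np.fft.fft2(psf)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + nsr) * G
    return np.real(np.fft.ifft2(F))

# Blur an impulse with a small 4-neighbour kernel, then restore it.
img = np.zeros((16, 16))
img[8, 8] = 1.0
psf = np.zeros((16, 16))
psf[0, 0] = 0.5
psf[0, 1] = psf[1, 0] = psf[0, -1] = psf[-1, 0] = 0.125
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))
restored = wiener_deconv(blurred, psf)
```

A class-dependent scheme would run filters like this with different strengths per mask: stronger for text-like regions, weaker for moving ones.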
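The PSNR and MSE figures quoted above follow the standard definitions, which a short sketch makes concrete (illustrative values only, not the thesis's measurements):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    return np.mean((a - b) ** 2)

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10 * log10(peak^2 / MSE)."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)

ref = np.full((4, 4), 100.0)
noisy = ref + 10.0   # uniform error of 10 -> MSE = 100
score = psnr(ref, noisy)  # about 28.1 dB for 8-bit peak 255
```

Because PSNR is logarithmic in MSE, a 25% MSE reduction and an 18.2% PSNR increase are consistent ways of reporting the same kind of improvement.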

    Remote Visual Observation of Real Places Through Virtual Reality Headsets

    Virtual Reality has always represented a fascinating yet powerful opportunity that has attracted studies and technology developments, especially since the latest release on the market of powerful high-resolution and wide field-of-view VR headsets. While the great potential of such VR systems is common and accepted knowledge, issues remain related to how to design systems and setups capable of fully exploiting the latest hardware advances. The aim of the proposed research is to study and understand how to increase the perceived level of realism and sense of presence when remotely observing real places through VR headset displays, and hence to produce a set of guidelines that give directions to system designers about how to optimize the display-camera setup to enhance performance, focusing on remote visual observation of real places. The outcome of this investigation represents unique knowledge that is believed to be very beneficial for better VR headset designs towards improved remote observation systems. To achieve the proposed goal, this thesis presents a thorough investigation of existing literature and previous research, carried out systematically to identify the most important factors ruling realism, depth perception, comfort, and sense of presence in VR headset observation. Once identified, these factors are further discussed and assessed through a series of experiments and usability studies, based on a predefined set of research questions. More specifically, the role of familiarity with the observed place, the role of the environment characteristics shown to the viewer, and the role of the display used for the remote observation of the virtual environment are further investigated. To gain more insights, two usability studies are proposed with the aim of defining guidelines and best practices.
    The main outcomes from the two studies demonstrate that test users can experience an enhanced realistic observation when natural features, higher resolution displays, natural illumination, and high image contrast are used in Mobile VR. In terms of comfort, simple scene layouts and relaxing environments are considered ideal to reduce visual fatigue and eye strain. Furthermore, sense of presence increases when observed environments induce strong emotions, and depth perception improves in VR when several monocular cues such as lights and shadows are combined with binocular depth cues. Based on these results, this investigation then presents a focused evaluation of the outcomes and introduces an innovative eye-adapted High Dynamic Range (HDR) approach, which the author believes to be a great improvement in the context of remote observation when combined with eye-tracked VR headsets. To this end, a third user study is proposed to compare static HDR and eye-adapted HDR observation in VR, assessing whether the latter can improve realism, depth perception, sense of presence, and in certain cases even comfort. Results from this last study confirmed the author's expectations, proving that eye-adapted HDR and eye tracking should be used to achieve the best visual performance for remote observation in modern VR systems.
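The eye-adapted HDR idea can be sketched as gaze-driven exposure: adapt the tone-mapping key to the luminance around the tracked gaze point. This is a hypothetical illustration, not the thesis's implementation; the window radius and the Reinhard-style curve x/(1+x) are assumed for simplicity.

```python
import numpy as np

def eye_adapted_tonemap(hdr, gaze_yx, radius=2):
    """Tone-map an HDR luminance image relative to the gazed region.

    The adaptation key is the mean luminance in a small window around the
    gaze point (a stand-in for the eye's local adaptation), followed by a
    simple global Reinhard-style curve x / (1 + x).
    """
    y, x = gaze_yx
    patch = hdr[max(0, y - radius):y + radius + 1,
                max(0, x - radius):x + radius + 1]
    key = patch.mean() + 1e-6
    scaled = hdr / key
    return scaled / (1.0 + scaled)

hdr = np.full((8, 8), 0.1)   # dark scene...
hdr[:, 4:] = 10.0            # ...with a bright right half
dark_gaze = eye_adapted_tonemap(hdr, (4, 1))    # looking at the dark half
bright_gaze = eye_adapted_tonemap(hdr, (4, 6))  # looking at the bright half
```

Gazing at the dark half brightens its rendering while compressing the bright half, mimicking how the eye re-adapts as the viewer looks around a high-contrast remote scene.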