
    End-to-end Projector Photometric Compensation

    Full text link
    Projector photometric compensation aims to modify a projector input image so that it compensates for disturbances caused by the appearance of the projection surface. In this paper, for the first time, we formulate the compensation problem as an end-to-end learning problem and propose a convolutional neural network, named CompenNet, to implicitly learn the complex compensation function. CompenNet consists of a UNet-like backbone network and an autoencoder subnet. This architecture encourages rich multi-level interactions between the camera-captured projection surface image and the input image, and thus captures both photometric and environmental information about the projection surface. In addition, visual details and interaction information are carried to deeper layers along the multi-level skip convolution layers. The architecture is of particular importance for the projector compensation task, for which only a small training dataset is available in practice. Another contribution we make is a novel evaluation benchmark that is independent of the system setup and is thus quantitatively verifiable. To the best of our knowledge, no such benchmark was previously available, because conventional evaluation requires the hardware system to actually project the final results. Our key idea, motivated by our end-to-end problem formulation, is to use a reasonable surrogate that avoids the projection step and is therefore setup-independent. Our method is evaluated carefully on the benchmark, and the results show that our end-to-end learning solution outperforms the state of the art both qualitatively and quantitatively by a significant margin.
    Comment: To appear in the 2019 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Source code and dataset are available at https://github.com/BingyaoHuang/compenne
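    The architecture described above can be sketched in a few lines; the following is a minimal PyTorch sketch with illustrative layer sizes, not the authors' exact network (their implementation is in the linked repository).

```python
# Minimal sketch of a CompenNet-style compensation network. Layer sizes
# and names are hypothetical; see the authors' repository for the real one.
import torch
import torch.nn as nn

class CompenNetSketch(nn.Module):
    def __init__(self):
        super().__init__()
        # Shared encoder applied to both the input image and the
        # camera-captured surface image.
        self.enc1 = nn.Sequential(nn.Conv2d(3, 32, 3, 2, 1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv2d(32, 64, 3, 2, 1), nn.ReLU())
        # 1x1 "skip convolutions" that merge surface features into the
        # input-image branch at each level.
        self.skip1 = nn.Conv2d(32, 32, 1)
        self.skip2 = nn.Conv2d(64, 64, 1)
        # Decoder upsamples back to the compensated projector image.
        self.dec1 = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU())
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid())

    def forward(self, x, surface):
        # Encode both images with shared weights.
        x1, s1 = self.enc1(x), self.enc1(surface)
        x2, s2 = self.enc2(x1), self.enc2(s1)
        # Inject surface information at multiple levels.
        h = x2 + self.skip2(s2)
        h = self.dec1(h) + self.skip1(s1)
        return self.dec2(h)

# Usage: net(input_image, surface_image) -> compensated projector input.
```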

    Projector-Based Augmentation

    Get PDF
    Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive and augmented visualizations can be realized in everyday environments – without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution, small field of view, focus constraints, and ergonomic issues, can in many cases be overcome by utilizing projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating local and global radiometric effects, and improving the focus properties of images projected onto everyday surfaces.
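    For a planar everyday surface, the geometric-correction step mentioned above reduces to pre-warping the input by the inverse of a projector-to-surface homography. The following is a minimal OpenCV sketch under that assumption; the point correspondences and file names are placeholders, not values from the chapter.

```python
# Sketch of simple geometric correction for a planar surface: warp the
# input image by the inverse of the projector-to-surface homography so
# the physical projection appears undistorted.
import cv2
import numpy as np

# Corner correspondences: where the projector's image corners land on
# the surface (measured with a calibrated camera in a real system).
proj_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])
surf_pts = np.float32([[40, 25], [1890, 60], [1860, 1050], [20, 1020]])

H, _ = cv2.findHomography(proj_pts, surf_pts)  # projector -> surface
img = cv2.imread("input.png")
# Pre-warp with the inverse homography so projection cancels the distortion.
corrected = cv2.warpPerspective(img, np.linalg.inv(H), (1920, 1080))
cv2.imwrite("prewarped.png", corrected)
```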

    Real-Time Adaptive Radiometric Compensation

    Get PDF
    Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. From the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the input image is not adjusted in its intensities, the compensation image can contain values that lie outside the dynamic range of the projector; these lead to clipping errors and visible artifacts on the surface. In this article, we present a novel algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while preserving as much luminance and contrast as possible. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real time.
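    The per-pixel compensation and the clipping problem it creates can be illustrated with a simple linear surface model. Below is a minimal NumPy sketch assuming a camera-estimated reflectance map and ambient term; the single global scale factor is a crude stand-in for the paper's GPU-based adaptive content adjustment.

```python
# Minimal per-pixel radiometric compensation sketch: solve for the
# projector image P such that reflectance * P + ambient ~= target.
import numpy as np

def compensate(target, reflectance, ambient, scale=1.0):
    """All arrays are HxWx3 floats in [0, 1]; `scale` dims the target
    globally to reduce out-of-range (clipped) compensation values."""
    comp = (scale * target - ambient) / np.maximum(reflectance, 1e-3)
    clipped = np.mean((comp < 0) | (comp > 1))  # fraction of clipped pixels
    return np.clip(comp, 0.0, 1.0), clipped

def adaptive_scale(target, reflectance, ambient, tol=0.01):
    # Crude stand-in for adaptive adjustment: lower the global scale
    # until the clipped fraction falls below a tolerance.
    scale = 1.0
    while scale > 0.1:
        comp, clipped = compensate(target, reflectance, ambient, scale)
        if clipped <= tol:
            return comp, scale
        scale -= 0.05
    return comp, scale
```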

    Hand Held 3D Scanning for Cultural Heritage: Experimenting Low Cost Structure Sensor Scan.

    Get PDF
    In recent years 3D scanning has become an important resource in many fields; in particular, it has played a key role in the study and preservation of Cultural Heritage. Moreover, thanks to the miniaturization of electronic components, it has become possible to produce a new category of 3D scanners, also known as handheld scanners. Handheld scanners combine a relatively low cost with the advantage of portability. The aim of this chapter is two-fold: first, a survey of the most recent 3D handheld scanners is presented. Second, a study on the possibility of employing handheld scanners in the field of Cultural Heritage is conducted. In this investigation, a doorway of the Benedictine Monastery of Catania has been used as a case study for a comparison between a stationary Time-of-Flight scanner, photogrammetry-based 3D reconstruction, and handheld scanning. The study is completed by an evaluation of the quality of the meshes obtained with the three different kinds of technology and a 3D modeling reproduction of the case-study doorway.
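    A mesh-quality comparison like the one described is commonly carried out by sampling each reconstruction and measuring point distances against a reference scan. The following is a minimal Open3D sketch, assuming the Time-of-Flight scan serves as reference and the meshes are already aligned; the file names are placeholders.

```python
# Sketch of a mesh-quality comparison: sample each reconstruction and
# measure nearest-neighbor distances to the reference scan, assuming
# all meshes share a common coordinate frame.
import numpy as np
import open3d as o3d

reference = o3d.io.read_triangle_mesh("tof_scan.ply")
ref_pts = reference.sample_points_uniformly(number_of_points=200_000)

for name in ["photogrammetry.ply", "handheld.ply"]:
    mesh = o3d.io.read_triangle_mesh(name)
    pts = mesh.sample_points_uniformly(number_of_points=200_000)
    # Distance from each sampled point to the nearest reference point.
    d = np.asarray(pts.compute_point_cloud_distance(ref_pts))
    print(f"{name}: mean {d.mean():.4f}, 95th pct {np.percentile(d, 95):.4f}")
```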

    Event-based Simultaneous Localization and Mapping: A Comprehensive Survey

    Full text link
    In recent decades, visual simultaneous localization and mapping (vSLAM) has gained significant interest in both academia and industry. It estimates camera motion and reconstructs the environment concurrently using visual sensors on a moving robot. However, conventional cameras suffer from hardware limitations such as motion blur and low dynamic range, which can degrade performance in challenging scenarios like high-speed motion and high-dynamic-range illumination. Recent studies have demonstrated that event cameras, a new type of bio-inspired visual sensor, offer advantages such as high temporal resolution, high dynamic range, low power consumption, and low latency. This paper presents a timely and comprehensive review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks. The review covers the working principle of event cameras and various event representations for preprocessing event data. It also categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods, with detailed discussions and practical guidance for each approach. Furthermore, the paper evaluates state-of-the-art methods on various benchmarks, highlighting current challenges and future opportunities in this emerging research area. A public repository will be maintained to keep track of the rapid developments in this field at https://github.com/kun150kun/ESLAM-survey
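    Two of the simplest event representations the review covers are polarity-accumulated event frames and time surfaces. Below is a minimal NumPy sketch, assuming events arrive as (x, y, t, polarity) tuples; the tuple format and decay constant are illustrative, not prescribed by the survey.

```python
# Sketch of two common event representations: a polarity-accumulated
# event frame and an exponential time surface. Events are assumed to be
# (x, y, t, p) with pixel coords, timestamp in seconds, and p in {-1, +1}.
import numpy as np

def event_frame(events, h, w):
    frame = np.zeros((h, w), dtype=np.float32)
    for x, y, t, p in events:
        frame[y, x] += p  # net polarity per pixel over the window
    return frame

def time_surface(events, h, w, tau=0.05):
    last = np.full((h, w), -np.inf)
    for x, y, t, p in events:
        last[y, x] = t  # most recent event timestamp per pixel
    t_ref = max(e[2] for e in events)
    return np.exp((last - t_ref) / tau)  # recency-weighted surface in [0, 1]

events = [(10, 12, 0.001, 1), (10, 12, 0.004, -1), (5, 7, 0.006, 1)]
print(event_frame(events, 16, 16).sum(), time_surface(events, 16, 16).max())
```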

    Modeling Color Appearance in Augmented Reality

    Get PDF
    Augmented reality (AR) is a developing technology that is expected to become the next interface between humans and computers. One of the most common designs of AR devices is the optical see-through head-mounted display (HMD). In this design, the virtual content presented on the displays embedded inside the device is optically superimposed on the real world, which makes the virtual content appear transparent. Color appearance in see-through designs of AR is a complicated subject, because it depends on many factors, including the ambient light, the color appearance of the virtual content, and the color appearance of the real background. As in display technology, controlling the color appearance of content is vital for many applications of AR. In this research, color appearance in the see-through design of the augmented reality environment is studied and modeled. Using a bench-top optical mixing apparatus as an AR simulator, objective measurements of mixed colors in AR were performed to study light behavior in the AR environment. Psychophysical color matching experiments were performed to understand color perception in AR. These experiments were performed first for simple 2D stimuli with a single color as both background and foreground, and later for more visually complex stimuli that better represent real content presented in AR. Color perception in the AR environment was compared to color perception on a display, and the two were found to differ. The applicability of the CAM16 color appearance model, one of the most comprehensive current color appearance models, to the AR environment was evaluated. The results showed that CAM16 is not accurate in predicting color appearance in the AR environment. To model color appearance in the AR environment, four approaches were developed using modifications in tristimulus and color appearance spaces; the best performance was achieved by Approach 2, which predicts the tristimulus values of the mixed content from the background and foreground colors.
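    The optical superposition described above is, to first order, additive in tristimulus space, which is the idea behind predicting the mixed color from background and foreground. Below is a minimal NumPy sketch assuming sRGB inputs and the standard D65 sRGB-to-XYZ matrix; the blending weights are illustrative placeholders, not the fitted model from this research.

```python
# Sketch of additive color mixing for optical see-through AR: predict
# the mixed tristimulus values as a weighted sum of the (linearized)
# foreground display light and the background light.
import numpy as np

M_SRGB_TO_XYZ = np.array([[0.4124, 0.3576, 0.1805],
                          [0.2126, 0.7152, 0.0722],
                          [0.0193, 0.1192, 0.9505]])

def srgb_to_xyz(rgb):
    rgb = np.asarray(rgb, dtype=np.float64)
    lin = np.where(rgb <= 0.04045, rgb / 12.92,
                   ((rgb + 0.055) / 1.055) ** 2.4)  # undo sRGB gamma
    return M_SRGB_TO_XYZ @ lin

def mixed_xyz(fg_rgb, bg_rgb, w_fg=0.7, w_bg=0.8):
    # Illustrative weights: on a real device, the optical combiner's
    # reflectance and transmittance would be measured.
    return w_fg * srgb_to_xyz(fg_rgb) + w_bg * srgb_to_xyz(bg_rgb)

print(mixed_xyz([0.2, 0.6, 0.3], [0.5, 0.4, 0.4]))
```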

    Less Light, Better Bite: How Ambient Lighting Influences Taste Perceptions

    Get PDF
    Atmospheric factors within a retail environment provide efficient and effective methods for influencing customer behavior. Drawing on the concept of sensory compensation, this research investigates how ambient lighting influences taste perceptions. Three studies demonstrate that dim lighting enhances taste perceptions. The results of Studies 1a and 1b provide support that low lighting positively influences consumers' perceived taste of single-taste-dimension foods (e.g., sweet). Study 2 shows that the number of taste dimensions stimulated (e.g., sweet vs. sweet and salty) serves as a boundary condition, attenuating the significant effect of dim lighting on taste perceptions.