
    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented in which the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
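    Since each page is planar, the projector-to-camera mapping for a page can be described by a homography. The following minimal sketch (Python with OpenCV and NumPy; the function names, parameters, and the ORB/RANSAC choices are illustrative assumptions, not the paper's implementation) shows the general idea: match features between the image sent to the projector and the camera frame, estimate the page homography, and pre-warp content accordingly.

# Illustrative sketch of homography-based projector-camera page tracking.
import cv2
import numpy as np

def estimate_page_homography(projected_image, camera_frame):
    """Match features between the projector image and the camera frame and
    recover the homography H that maps projector pixels to camera pixels."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp_p, des_p = orb.detectAndCompute(projected_image, None)
    kp_c, des_c = orb.detectAndCompute(camera_frame, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_p, des_c), key=lambda m: m.distance)[:200]
    src = np.float32([kp_p[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_c[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

def prewarp_content(content, H, projector_size):
    """Resample the content so that projecting the result appears undistorted
    on the page: each projector pixel p takes the content value at H*p
    (WARP_INVERSE_MAP treats H as the destination-to-source map)."""
    w, h = projector_size
    return cv2.warpPerspective(content, H, (w, h),
                               flags=cv2.INTER_LINEAR | cv2.WARP_INVERSE_MAP)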

    Detection of Non-Stationary Photometric Perturbations on Projection Screens

    Interfaces based on projection screens have become increasingly popular in recent years, mainly due to the large screen size and resolution they provide, as well as their stereo-vision capabilities. This work presents a local method for real-time detection of non-stationary photometric perturbations in projected images by means of computer vision techniques. The method is based on computing differences between the images in the projector's frame buffer and the corresponding images on the projection screen as observed by the camera. It is robust under spatial variations in the intensity of light emitted by the projector onto the projection surface, as well as under stationary photometric perturbations caused by external factors. Moreover, we describe the experiments carried out to show the reliability of the method.
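    A rough illustration of the frame-buffer-versus-camera comparison described above is sketched below (Python with NumPy). The per-pixel linear response model, the adaptive baseline, and all parameter values are assumptions for illustration, not the authors' formulation; images are assumed to be grey-scale, geometrically registered, and normalized to [0, 1].

# Illustrative sketch: flag only fast (non-stationary) deviations between the
# projector frame buffer and the camera image of the projection screen.
import numpy as np

class PerturbationDetector:
    def __init__(self, gain, offset, alpha=0.02, threshold=0.15):
        # gain/offset: per-pixel photometric calibration, camera ~ gain*fb + offset
        self.gain, self.offset = gain, offset
        self.alpha = alpha          # adaptation rate of the stationary baseline
        self.threshold = threshold  # deviation (normalized intensity) to flag
        self.baseline = None        # slowly adapting residual (stationary part)

    def detect(self, framebuffer, camera):
        expected = self.gain * framebuffer.astype(np.float64) + self.offset
        residual = camera.astype(np.float64) - expected
        if self.baseline is None:
            self.baseline = residual.copy()
        # Only the fast part of the residual counts as non-stationary;
        # stationary effects are gradually absorbed into the baseline.
        mask = np.abs(residual - self.baseline) > self.threshold
        self.baseline += self.alpha * (residual - self.baseline)
        return mask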

    HOLOGRAPHICS: Combining Holograms with Interactive Computer Graphics

    Among all imaging techniques that have been invented throughout the last decades, computer graphics is one of the most successful tools today. Many areas in science, entertainment, education, and engineering would be unimaginable without the aid of 2D or 3D computer graphics. The reason for this success story might be its interactivity, an important property that is still not provided efficiently by competing technologies such as holography. While optical holography and digital holography are limited to presenting non-interactive content, electroholography or computer-generated holograms (CGH) facilitate the computer-based generation and display of holograms at interactive rates [2,3,29,30]. Holographic fringes can be computed either by rendering multiple perspective images and combining them into a stereogram [4], or by simulating the optical interference and calculating the interference pattern [5]; a sketch of the latter approach follows this abstract. Once computed, such a system dynamically visualizes the fringes with a holographic display. Since creating an electrohologram requires processing, transmitting, and storing a massive amount of data, today's computer technology still sets the limits for electroholography. To overcome some of these performance issues, advanced reduction and compression methods have been developed that create truly interactive electroholograms. Unfortunately, most of these holograms are relatively small, have low resolution, and cover only a small color spectrum. However, recent advances in consumer graphics hardware may offer acceleration possibilities that can overcome these limitations [6]. In parallel to the development of computer graphics, and despite their non-interactivity, optical and digital holography have created new fields, including interferometry, copy protection, data storage, holographic optical elements, and display holograms. Display holography in particular has conquered several application domains. Museum exhibits often use optical holograms because they can present 3D objects with almost no loss in visual quality. In contrast to most stereoscopic or autostereoscopic graphics displays, holographic images can provide all depth cues (perspective, binocular disparity, motion parallax, convergence, and accommodation) and can theoretically be viewed simultaneously from an unlimited number of positions. Displaying artifacts virtually removes the need to build physical replicas of the original objects. In addition, optical holograms can be used to make engineering, medical, dental, archaeological, and other recordings for teaching, training, experimentation, and documentation. Archaeologists, for example, use optical holograms to archive and investigate ancient artifacts [7,8]. Scientists can use hologram copies to perform their research without having access to the original artifacts or settling for inaccurate replicas. Optical holograms can store a massive amount of information on a thin holographic emulsion. This technology can record and reconstruct a 3D scene with almost no loss in quality. Natural-color holographic silver halide emulsions with grain sizes of 8 nm are today's state of the art [14]. Today, computer graphics and raster displays offer megapixel resolution and the interactive rendering of megabytes of data. Optical holograms, however, provide terapixel resolution and can present information content in the range of terabytes in real time. Both are dimensions that will not be reached by computer graphics and conventional displays within the next years, even if Moore's law continues to hold.
    Obviously, one has to choose between interactivity and quality when selecting a display technology for a particular application. While some applications require high visual realism and real-time presentation (which cannot be provided by computer graphics), others depend on user interaction (which is not possible with optical and digital holograms). Consequently, holography and computer graphics are being used as tools to solve individual research, engineering, and presentation problems within several domains. Until today, however, these tools have been applied separately. The intention of the project summarized in this chapter is to combine both technologies to create a powerful tool for science, industry, and education. This has been referred to as HoloGraphics. Several possibilities have been investigated that allow merging computer-generated graphics and holograms [1]. The goal is to combine the advantages of conventional holograms (i.e., extremely high visual quality and realism, support for all depth cues and for multiple observers at no computational cost, space efficiency, etc.) with the advantages of today's computer graphics capabilities (i.e., interactivity, real-time rendering, simulation and animation, stereoscopic and autostereoscopic presentation, etc.). The results of these investigations are presented in this chapter.
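    The interference-pattern route mentioned above can be illustrated with a small point-source model. The sketch below (Python with NumPy) is only a brute-force toy: the wavelength, pixel pitch, tilted plane-wave reference, and all names are assumed values for illustration, not the chapter's method, and it ignores the reduction and compression techniques discussed in the text.

# Illustrative sketch: CGH fringes from a handful of object points, obtained by
# superposing spherical object waves and interfering them with a reference wave.
import numpy as np

def cgh_fringes(points, amplitudes, plane_size=(512, 512), pitch=8e-6,
                wavelength=633e-9, ref_angle=0.02):
    """points: (N, 3) object points in metres; z is the distance to the
    hologram plane. Returns the normalized fringe intensity pattern."""
    h, w = plane_size
    k = 2.0 * np.pi / wavelength
    ys, xs = np.mgrid[0:h, 0:w]
    x = (xs - w / 2) * pitch             # metric coordinates on the hologram plane
    y = (ys - h / 2) * pitch
    field = np.zeros((h, w), dtype=np.complex128)
    for (px, py, pz), a in zip(points, amplitudes):
        r = np.sqrt((x - px) ** 2 + (y - py) ** 2 + pz ** 2)
        field += a * np.exp(1j * k * r) / r        # spherical wave from the point
    reference = np.exp(1j * k * np.sin(ref_angle) * x)   # tilted plane wave
    fringes = np.abs(field + reference) ** 2             # recorded intensity
    return fringes / fringes.max()

# Example: three object points roughly 10 cm from the hologram plane.
pts = [(0.0, 0.0, 0.10), (0.002, 0.001, 0.11), (-0.002, -0.001, 0.12)]
pattern = cgh_fringes(pts, amplitudes=[1.0, 0.8, 0.6])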

    Projector-Based Augmentation

    Projector-based augmentation approaches hold the potential of combining the advantages of well-established spatial virtual reality and spatial augmented reality. Immersive, semi-immersive, and augmented visualizations can be realized in everyday environments, without the need for special projection screens and dedicated display configurations. Limitations of mobile devices, such as low resolution, a small field of view, focus constraints, and ergonomic issues, can in many cases be overcome by using projection technology. Thus, applications that do not require mobility can benefit from efficient spatial augmentations. Examples range from edutainment in museums (such as storytelling projections onto natural stone walls in historical buildings) to architectural visualizations (such as augmentations of complex illumination simulations or modified surface materials in real building structures). This chapter describes projector-camera methods and multi-projector techniques that aim at correcting geometric aberrations, compensating for local and global radiometric effects, and improving the focus properties of images projected onto everyday surfaces.
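    The radiometric compensation mentioned in the last sentence can be illustrated with a deliberately simplified, channel-independent per-pixel model (a sketch under assumed calibration images of projected white and black; it is not the chapter's multi-projector pipeline): invert the measured linear response to obtain the image the projector should emit.

# Illustrative sketch: per-pixel radiometric compensation for projection onto a
# textured surface, modelling the observed result as R = M*I + A.
import numpy as np

def compensate(target, response, ambient, eps=1e-3):
    """target, response, ambient: float images in [0, 1] with the same shape.
    response is measured by projecting full white; ambient by projecting black."""
    compensated = (target - ambient) / np.maximum(response - ambient, eps)
    # Values outside [0, 1] cannot be produced by the projector; clipping is the
    # usual fallback (or the target is globally dimmed beforehand).
    return np.clip(compensated, 0.0, 1.0)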

    deForm: An interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch

    We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable interaction surface. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole-hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools through capacitive sensing on top of the input surface. Finally, we motivate our device through a number of sample applications.
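    The three-phase structured light scan at the core of deForm can be illustrated with the standard three-step phase-shifting equations, assuming sinusoidal patterns shifted by 120 degrees (a generic sketch in Python/NumPy, not the paper's code): the wrapped phase encodes the surface deformation, while the average of the three images yields the grey-scale texture.

# Illustrative sketch: decode three 120-degree phase-shifted fringe images into
# wrapped phase (depth cue), texture (albedo), and modulation (signal quality).
import numpy as np

def three_phase_decode(i1, i2, i3):
    i1, i2, i3 = (np.asarray(i, dtype=np.float64) for i in (i1, i2, i3))
    wrapped_phase = np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
    texture = (i1 + i2 + i3) / 3.0                          # grey-scale texture
    modulation = np.sqrt(3.0 * (i1 - i3) ** 2 + (2.0 * i2 - i1 - i3) ** 2) / 3.0
    return wrapped_phase, texture, modulation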

    Material Recognition Meets 3D Reconstruction: Novel Tools for Efficient, Automatic Acquisition Systems

    For decades, the accurate acquisition of geometry and reflectance properties has represented one of the major objectives in computer vision and computer graphics, with many applications in industry, entertainment, and cultural heritage. Reproducing even the finest details of surface geometry and surface reflectance has become a ubiquitous prerequisite in visual prototyping, advertisement, and the digital preservation of objects. However, today's acquisition methods are typically designed for only a rather small range of material types. Furthermore, there is still a lack of accurate reconstruction methods for objects with a more complex surface reflectance behavior beyond diffuse reflectance. In addition to accurate acquisition techniques, the demand for creating large quantities of digital content also pushes the focus towards fully automatic and highly efficient solutions that allow masses of objects to be acquired as fast as possible. This thesis is dedicated to the investigation of basic components that allow an efficient, automatic acquisition process. We argue that such an efficient, automatic acquisition can be realized when material recognition "meets" 3D reconstruction, and we demonstrate that reliably recognizing the materials of the considered object allows for a more efficient geometry acquisition. Therefore, the main objectives of this thesis are the development of novel, robust geometry acquisition techniques for surface materials beyond diffuse surface reflectance, and the development of novel, robust techniques for material recognition. In the context of 3D geometry acquisition, we introduce an improvement of structured light systems that robustly acquires objects ranging from diffuse surface reflectance to specular surface reflectance with a sufficient diffuse component. We demonstrate that the resolution of the reconstruction can be increased significantly for multi-camera, multi-projector structured light systems by using overlaps of patterns projected under different projector poses. As the reconstructions obtained with such triangulation-based techniques still contain high-frequency noise due to inaccurately localized correspondences between images acquired under different viewpoints, we furthermore introduce a novel geometry acquisition technique that complements the structured light system with additional photometric normals and results in significantly more accurate reconstructions. In addition, we present a novel method to acquire the 3D shape of mirroring objects with complex surface geometry. These investigations on 3D reconstruction are accompanied by the development of novel tools for reliable material recognition, which can be used in an initial step to recognize the surface materials present and, hence, to efficiently select the appropriate acquisition techniques based on the classified materials. In the scope of this thesis, we therefore focus on material recognition both for scenarios with controlled illumination, as found in lab environments, and for scenarios with natural illumination, as found in photographs of typical daily-life scenes. Finally, based on the techniques developed in this thesis, we provide novel concepts towards efficient, automatic acquisition systems.
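    The photometric normals mentioned above can be obtained, in the simplest Lambertian setting, by classical photometric stereo. The sketch below (Python with NumPy) assumes distant lights with known directions, grey-scale images, and no shadows or specularities; all names are illustrative and it does not reflect the thesis' actual technique.

# Illustrative sketch: per-pixel least-squares photometric stereo recovering
# albedo-scaled surface normals from K images under K known light directions.
import numpy as np

def photometric_normals(images, light_dirs):
    """images: (K, H, W) grey-scale images, light_dirs: (K, 3) unit vectors."""
    K, H, W = images.shape
    I = images.reshape(K, -1)                      # (K, H*W) intensity matrix
    L = np.asarray(light_dirs, dtype=np.float64)   # (K, 3) light directions
    G, *_ = np.linalg.lstsq(L, I, rcond=None)      # (3, H*W), G = albedo * normal
    albedo = np.linalg.norm(G, axis=0)
    normals = G / np.maximum(albedo, 1e-8)
    return normals.reshape(3, H, W), albedo.reshape(H, W)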

    Cavlectometry: Towards Holistic Reconstruction of Large Mirror Objects

    We introduce a method based on the deflectometry principle for the reconstruction of specular objects exhibiting significant size and geometric complexity. A key feature of our approach is the deployment of a Cave Automatic Virtual Environment (CAVE) as the pattern generator. To unfold the full power of this extraordinary experimental setup, an optical encoding scheme is developed which accounts for the distinctive topology of the CAVE. Furthermore, we devise an algorithm for detecting the object of interest in raw deflectometric images. The segmented foreground is used for single-view reconstruction, while the background is used to estimate the camera pose, which is necessary for calibrating the sensor system. Experiments suggest a significant gain in coverage in single measurements compared to previous methods. To facilitate research on specular surface reconstruction, we will make our data set publicly available.
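    The core geometric relation behind any deflectometric setup, including the CAVE-based one above, is the law of reflection: once calibration tells us which pattern point a camera ray observes via the mirror surface, the surface normal at a hypothesized reflection point is the bisector of the directions towards the camera and towards the pattern point. A minimal sketch (Python with NumPy; names are illustrative, not from the paper):

# Illustrative sketch: surface normal of a specular surface from the law of
# reflection, given the camera center and the observed pattern point.
import numpy as np

def specular_normal(surface_point, camera_center, pattern_point):
    """All arguments are 3D points (array-like); returns the unit surface
    normal at the hypothesized reflection point on the mirror surface."""
    p = np.asarray(surface_point, dtype=np.float64)
    to_camera = np.asarray(camera_center, dtype=np.float64) - p
    to_pattern = np.asarray(pattern_point, dtype=np.float64) - p
    to_camera /= np.linalg.norm(to_camera)
    to_pattern /= np.linalg.norm(to_pattern)
    n = to_camera + to_pattern        # the normal bisects the incoming/outgoing rays
    return n / np.linalg.norm(n)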