
    Capturing and Reconstructing the Appearance of Complex 3D Scenes

    In this thesis, we present our research on new acquisition methods for the reflectance properties of real-world objects. Specifically, we first show a method for acquiring spatially varying densities in volumes of translucent, gaseous material with just a single image. This makes the method applicable to constantly changing phenomena like smoke without the use of high-speed camera equipment. Furthermore, we investigated how two well-known techniques -- synthetic aperture confocal imaging and algorithmic descattering -- can be combined to help in seeing through a translucent medium like fog or murky water. We show that the depth at which we can still see an object embedded in the scattering medium is increased. In a related publication, we show how polarization and descattering based on phase-shifting can be combined for efficient 3D scanning of translucent objects. Normally, subsurface scattering hinders the range estimation by offsetting the peak intensity beneath the surface, away from the point of incidence. With our method, the subsurface scattering is reduced to a minimum and therefore reliable 3D scanning is made possible. Finally, we present a system which recovers the surface geometry and reflectance properties of opaque objects, along with the prevailing lighting conditions at the time of image capture, from just a small number of input photographs. While there exist previous approaches to recover reflectance properties, our system is the first to work on images taken under almost arbitrary, changing lighting conditions. This enables us to use images taken from a community photo collection website.
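Algorithmic descattering is often built on separating direct surface reflection from multiply scattered light under shifted high-frequency illumination patterns. The sketch below shows the classic max/min separation over a stack of such images; it is an illustrative stand-in, not the thesis's exact pipeline, and the function name is an assumption:

```python
import numpy as np

def separate_direct_global(images):
    """Approximate direct/global separation from a stack of images captured
    under shifted high-frequency illumination (~50% duty-cycle patterns).
    direct ~ max - min, global ~ 2 * min per pixel."""
    stack = np.stack(images, axis=0)
    i_max = stack.max(axis=0)
    i_min = stack.min(axis=0)
    direct = i_max - i_min   # light from the first surface bounce
    global_ = 2.0 * i_min    # subsurface / multiple scattering component
    return direct, global_
```

The separation works because a scene point lit by the pattern receives full direct illumination in some shifts and none in others, while the low-frequency global component stays roughly constant across shifts.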

    SBVLC: Secure Barcode-based Visible Light Communication for Smartphones

    2D barcodes have achieved a significant penetration rate in mobile applications, largely due to their extremely low barrier to adoption: almost every camera-enabled smartphone can scan 2D barcodes. As an alternative to NFC technology, 2D barcodes have been increasingly used for security-sensitive mobile applications, including mobile payments and personal identification. However, the security of barcode-based communication in mobile applications has not been systematically studied. Due to their visual nature, 2D barcodes are subject to eavesdropping when displayed on smartphone screens. On the other hand, the fundamental design principles of 2D barcodes make it difficult to add security features. In this paper, we propose SBVLC, a secure system for barcode-based visible light communication (VLC) between smartphones. We formally analyze the security of SBVLC based on geometric models and propose physical security enhancement mechanisms for barcode communication by manipulating screen view angles and leveraging user-induced motions. We then develop three secure data exchange schemes that encode information in barcode streams. These schemes are useful in many security-sensitive mobile applications, including private information sharing, secure device pairing, and contactless payment. SBVLC is evaluated through extensive experiments on both Android and iOS smartphones.
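The geometric analysis hinges on the fact that a screen-displayed barcode is only decodable within a limited viewing angle around the screen normal. A minimal sketch of such a view-angle test follows; the 30° cutoff and function names are illustrative assumptions, not SBVLC's actual model:

```python
import numpy as np

def viewing_angle_deg(screen_normal, observer_pos, screen_pos):
    """Angle (degrees) between the screen normal and the ray to an observer."""
    v = np.asarray(observer_pos, float) - np.asarray(screen_pos, float)
    n = np.asarray(screen_normal, float)
    cos_t = np.dot(n, v) / (np.linalg.norm(n) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_t, -1.0, 1.0)))

def barcode_readable(screen_normal, observer_pos, screen_pos, max_angle_deg=30.0):
    """Crude visibility test: assume a barcode is unreadable beyond a
    cutoff viewing angle (the cutoff value is illustrative)."""
    return viewing_angle_deg(screen_normal, observer_pos, screen_pos) <= max_angle_deg
```

Tilting the screen away from a bystander then corresponds to pushing the bystander's viewing angle past the readable cutoff while keeping the intended receiver inside it.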

    Highlighted depth-of-field photography: Shining light on focus

    We present a photographic method to enhance intensity differences between objects at varying distances from the focal plane. By combining a unique capture procedure with simple image-processing techniques, the detected brightness of an object is decreased in proportion to its degree of defocus. A camera-projector system casts distinct grid patterns onto a scene to generate a spatial distribution of point reflections. These point reflections relay a relative measure of defocus that is utilized in postprocessing to generate a highlighted DOF photograph. Trade-offs between three different projector-processing pairs are analyzed, and a model is developed to help describe a new intensity-dependent depth of field that is controlled by the pattern of illumination. Results are presented for a primary single-snapshot design as well as a scanning method and a comparison method. As an application, automatic matting results are presented. (Supported by the Alfred P. Sloan Foundation.)
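The defocus-proportional attenuation can be related to the thin-lens circle of confusion. The sketch below is a hypothetical illustration of that relationship, not the paper's capture pipeline; the attenuation constant `k` and function names are assumed for illustration:

```python
def circle_of_confusion(obj_dist, focus_dist, focal_len, aperture_diam):
    """Thin-lens blur-circle diameter (same units as inputs) for a point at
    obj_dist when the lens is focused at focus_dist."""
    return (aperture_diam * abs(obj_dist - focus_dist) / obj_dist
            * focal_len / (focus_dist - focal_len))

def highlighted_intensity(base, obj_dist, focus_dist, focal_len,
                          aperture_diam, k=1.0):
    """Attenuate detected brightness in proportion to defocus blur
    (illustrative model: intensity falls as blur grows)."""
    c = circle_of_confusion(obj_dist, focus_dist, focal_len, aperture_diam)
    return base / (1.0 + k * c)
```

In-focus points (zero blur circle) keep their full brightness, while points far from the focal plane are progressively dimmed, which is the qualitative behavior the method exploits.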

    Practical surface light fields

    The rendering of photorealistic surface appearance is one of the main challenges facing modern computer graphics. Image-based approaches have become increasingly important because they can capture the appearance of a wide variety of physical surfaces with complex reflectance behavior. In this dissertation, I focus on surface light fields, an image-based representation of view-dependent and spatially varying appearance. Constructing a surface light field can be a time-consuming and tedious process. The data sizes are quite large, often requiring multiple gigabytes to represent complex reflectance properties. The result can only be viewed after a lengthy post-process is complete, so it can be difficult to determine when the light field is sufficiently sampled. Often, uncertainty about the sampling density leads users to capture many more images than necessary in order to guarantee adequate coverage. To address these problems, I present several approaches to simplify the capture of surface light fields. The first is a "human-in-the-loop" interactive feedback system based on the online SVD. As each image is captured, it is incorporated into the representation in a streaming fashion and displayed to the user. In this way, the user receives direct feedback about the capture process and can use it to improve the sampling. To avoid the problems of discretization and resampling, I use incremental weighted least squares, a form of radial basis function approximation that allows for incremental local construction and fast rendering on graphics hardware. Lastly, I address the limitation of fixed lighting by describing a system that captures the surface light field of an object under synthetic lighting.
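An online SVD lets each newly captured image refine a low-rank appearance model without re-solving from scratch. The following is only a naive recompute-style sketch of such a streaming update (a production system would use a true incremental SVD such as Brand's method, and the function name is an assumption):

```python
import numpy as np

def svd_append_column(U, s, new_col, rank):
    """Fold one new observation column into a truncated SVD.
    Existing data is kept only in compressed form U @ diag(s)."""
    compressed = np.hstack([U * s, new_col.reshape(-1, 1)])
    U2, s2, _ = np.linalg.svd(compressed, full_matrices=False)
    return U2[:, :rank], s2[:rank]  # re-truncate to the working rank
```

Because only the compressed factors are retained, memory stays bounded by the rank rather than by the number of captured images, which is what makes interactive feedback during capture feasible.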

    Optical thin film measurement by interferometric fringe projection and fluorescence stimulated emission

    The main focus of this research is the introduction of a new technique for the metrology of thin liquid films that gives both the profile of the exterior surface and information on the thickness of the film. The proposed approach is based on a fringe projection system with narrow-band laser illumination and a high concentration of fluorescent dye dissolved in the fluid, in order to generate fluorescence emission from the minimum thickness of the film (i.e. the top few microns). The method relies on calculating an interference phase term and the modulation depth of the fringes created by means of a twin-fibre configuration. The characterisation of candidate fluorescent dyes in terms of absorption (related to the depth of penetration of the incident light into the dye) and their fluorescence emission efficiency is presented, and their application in full-field imaging experiments is evaluated. A strength of the proposed technique is its flexibility and versatility, allowing its extension to phase-stepping techniques applied to determine the (fringe) phase map from static and dynamic fluids. Some experiments are carried out using the best dye solution in terms of fluorescence emission and light depth penetration. On the basis of the phase-height relationship obtained during the calibration process, the proposed measurement system is applied to the shape measurement of some static fluids. The profile of the exterior surface of these fluids is investigated by means of the phase-stepping technique, and the resolution of the measurements is estimated. Furthermore, a flow rig set-up based on an inclined system (gravity assisted) is presented in order to test the shape measurement system in the presence of real liquid flows. Different liquid flow thicknesses are processed and analysed. Example data are included from fluid films of known geometry in order to validate the method.
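Phase-stepping recovers the fringe phase from several images taken with known phase shifts. The standard four-step formula with π/2 shifts illustrates the idea; this is the textbook algorithm, not necessarily the exact variant used in the thesis:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Wrapped fringe phase from four intensity images with pi/2 shifts,
    assuming I_k = a + b*cos(phi + k*pi/2):
      I3 - I1 = 2b*sin(phi),  I0 - I2 = 2b*cos(phi)."""
    return np.arctan2(i3 - i1, i0 - i2)
```

The arctangent cancels both the background level `a` and the modulation `b` pixel-wise, so the recovered phase map depends only on the fringe geometry; a calibrated phase-height relationship then converts it to surface shape.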

    Integration of multiple vision systems and toolbox development

    Depending on the required coverage, multiple cameras with different fields of view, positions, and orientations can be employed to form a motion tracking system. Correctly and efficiently designing and setting up a multi-camera vision system presents a technical challenge. This thesis describes the development and application of a toolbox that helps the user design a multi-camera vision system. Using the parameters of the cameras, including their positions and orientations, the toolbox can calculate the volume covered by the system and generate a visualization of it for a given tracking area. The cameras can be repositioned and reoriented using the toolbox to regenerate the visualization of the covered volume. Finally, this thesis describes how to practically implement and achieve a proper multi-camera setup. The integration of multiple cameras for vision system development is based on Svoboda's and Horn's algorithms. In addition, Dijkstra's algorithm is implemented to estimate the tracking error between the master vision system and any of the slave vision systems. The toolbox is evaluated by comparing the calculated and actual covered volumes of a multi-camera system, as well as by its error estimation. The multi-camera vision system design is implemented using the developed toolbox for a virtual fastening operation of an aircraft fuselage in a computer-automated virtual environment (CAVE). --Abstract, page iii
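The coverage computation can be illustrated by testing whether a point falls inside each camera's field of view, modeled here as a simple cone. The conical-FOV simplification and function names are illustrative assumptions, not the toolbox's actual model:

```python
import numpy as np

def point_visible(point, cam_pos, cam_forward, half_fov_deg, max_range):
    """Check whether a 3D point lies inside a camera's conical field of view
    and within its working range."""
    v = np.asarray(point, float) - np.asarray(cam_pos, float)
    dist = np.linalg.norm(v)
    if dist == 0 or dist > max_range:
        return False
    f = np.asarray(cam_forward, float)
    cos_a = np.dot(v, f) / (dist * np.linalg.norm(f))
    return cos_a >= np.cos(np.radians(half_fov_deg))

def coverage_count(point, cameras):
    """Number of cameras (each a (pos, forward, half_fov, range) tuple)
    that see the given point."""
    return sum(point_visible(point, *cam) for cam in cameras)
```

Sampling such a test over a voxel grid of the tracking area yields the covered volume, and repositioning a camera simply re-runs the test with updated parameters.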