
    Laser speckle photography for surface tampering detection

    Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 59-61).
    It is often desirable to detect whether a surface has been touched, even when the changes made to that surface are too subtle to see in a pair of before and after images. To address this challenge, we introduce a new imaging technique that combines computational photography and laser speckle imaging. Without requiring controlled laboratory conditions, our method is able to detect surface changes that would be indistinguishable in regular photographs. It is also mobile and does not need to be present at the time of contact with the surface, making it well suited for applications where the surface of interest cannot be constantly monitored. Our approach takes advantage of the fact that tiny surface deformations cause phase changes in reflected coherent light which alter the speckle pattern visible under laser illumination. We take before and after images of the surface under laser light and can detect subtle contact by correlating the speckle patterns in these images. A key challenge we address is that speckle imaging is very sensitive to the location of the camera, so removing and reintroducing the camera requires high-accuracy viewpoint alignment. To this end, we use a combination of computational rephotography and correlation analysis of the speckle pattern as a function of camera translation. Our technique provides a reliable way of detecting subtle surface contact at a level that was previously only possible under laboratory conditions. With our system, the detection of these subtle surface changes can now be brought into the wild.
    by YiChang Shih.
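    The core detection step, comparing before/after speckle images tile by tile via correlation, can be sketched as below. This is a minimal illustration, not the thesis implementation: the tile size, threshold value, and the plain normalized cross-correlation are assumptions for the sketch.

```python
import math

def ncc(a, b):
    """Normalized cross-correlation of two equal-length intensity lists."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den if den else 0.0

def tampered_tiles(before, after, tile=4, threshold=0.8):
    """Split two aligned speckle images (lists of rows) into tiles and flag
    tiles whose before/after speckle correlation drops below the threshold,
    indicating the surface was deformed there."""
    flagged = []
    rows, cols = len(before), len(before[0])
    for r in range(0, rows, tile):
        for c in range(0, cols, tile):
            pa = [before[i][j] for i in range(r, min(r + tile, rows))
                               for j in range(c, min(c + tile, cols))]
            pb = [after[i][j] for i in range(r, min(r + tile, rows))
                              for j in range(c, min(c + tile, cols))]
            if ncc(pa, pb) < threshold:
                flagged.append((r, c))
    return flagged
```

    Note that this only works after the high-accuracy viewpoint alignment the abstract describes; without it, every tile decorrelates and the whole surface is flagged.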

    Painting-to-3D Model Alignment Via Discriminative Visual Elements

    This paper describes a technique that can reliably align arbitrary 2D depictions of an architectural site, including drawings, paintings and historical photographs, with a 3D model of the site. This is a tremendously difficult task as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, e.g., due to the specific rendering style, drawing error, age, lighting or change of seasons. In addition, we face a hard search problem: the number of possible alignments of the painting to a large 3D model, such as a partial reconstruction of a city, is huge. To address these issues, we develop a new compact representation of complex 3D scenes. The 3D model of the scene is represented by a small set of discriminative visual elements that are automatically learnt from rendered views. Similar to object detection, the set of visual elements, as well as the weights of individual features for each element, are learnt in a discriminative fashion. We show that the learnt visual elements are reliably matched in 2D depictions of the scene despite large variations in rendering style (e.g. watercolor, sketch, historical photograph) and structural changes (e.g. missing scene parts, large occluders) of the scene. We demonstrate an application of the proposed approach to automatic re-photography to find an approximate viewpoint of historical paintings and photographs with respect to a 3D model of the site. The proposed alignment procedure is validated via a human user study on a new database of paintings and sketches spanning several sites. The results demonstrate that our algorithm produces significantly better alignments than several baseline methods.
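    At match time, each learnt visual element acts as a linear detector that is scored against patch descriptors extracted from the 2D depiction. A minimal sketch of that scoring step, assuming descriptors are plain feature vectors (the paper uses HOG-style descriptors) and that each element is a learnt (weights, bias) pair:

```python
def score_element(weights, bias, descriptor):
    """Linear detector score for one visual element on one patch descriptor."""
    return sum(w * d for w, d in zip(weights, descriptor)) + bias

def best_match(element, patches):
    """Return the index and score of the depiction patch that best matches
    one discriminatively learnt visual element from the rendered views."""
    weights, bias = element
    scores = [score_element(weights, bias, p) for p in patches]
    i = max(range(len(scores)), key=scores.__getitem__)
    return i, scores[i]
```

    The full system aggregates many such element matches and verifies them geometrically against the 3D model; that verification stage is omitted here.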

    Computational Re-Photography

    Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
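    The last step, turning an estimated relative camera translation into the two 2D arrows the user follows, can be sketched as below. The axis conventions (x right, y up, z forward, in the reference camera's frame) and the dead-zone tolerance are assumptions for the sketch; the paper's estimation of the translation itself (via feature matching and relative pose) is not shown.

```python
def guidance_arrows(translation, tolerance=0.05):
    """Map an estimated translation from the desired viewpoint to the current
    one into user guidance: one in-image-plane arrow and one dolly arrow.
    `translation` is (tx, ty, tz) in the reference camera's frame."""
    tx, ty, tz = translation
    plane = []
    if tx > tolerance:
        plane.append("move left")
    elif tx < -tolerance:
        plane.append("move right")
    if ty > tolerance:
        plane.append("move down")
    elif ty < -tolerance:
        plane.append("move up")
    dolly = None
    if tz > tolerance:
        dolly = "move backward"
    elif tz < -tolerance:
        dolly = "move forward"
    return plane or ["hold"], dolly
```

    Separating the in-plane arrow from the dolly arrow mirrors the paper's point about zoom/dolly ambiguity: forward/backward motion needs its own explicit cue.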

    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, using only geometric accuracy by itself does not allow evaluating systems that either lack a geometric scene representation or utilize coarse proxy geometry. Examples include light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that is able to analyze the visual quality of any method that can render novel views from input images. One of the key advantages of this approach is that it does not require ground truth geometry. This dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases. (Comment: 10 pages, 12 figures; submitted to ACM Transactions on Graphics for review.)
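    The evaluation loop is hold-out based: each input photograph is withheld in turn, the system renders that viewpoint from the remaining images, and the render is compared with the withheld photo. A minimal sketch of the scoring side, using plain per-pixel RMSE as the image difference (the actual metric used by any given benchmark instance may be a perceptual one):

```python
import math

def rmse(predicted, actual):
    """Root-mean-square error between a rendered novel view and the
    held-out reference photograph (both as flat intensity lists)."""
    n = len(predicted)
    return math.sqrt(sum((p - a) ** 2 for p, a in zip(predicted, actual)) / n)

def novel_view_score(renders, holdouts):
    """Average novel view prediction error over all held-out views."""
    return sum(rmse(r, h) for r, h in zip(renders, holdouts)) / len(renders)
```

    Because the score needs only rendered images and the withheld photos, no ground-truth geometry is required, which is the key advantage the abstract highlights.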

    Location recognition over large time lags

    Would it be possible to automatically associate ancient pictures to modern ones and create fancy cultural heritage city maps? We introduce here the task of recognizing the location depicted in an old photo given modern annotated images collected from the Internet. We present an extensive analysis of different features, looking for the most discriminative and most robust to the image variability induced by large time lags. Moreover, we show that the described task benefits from domain adaptation.

    A transformation-aware perceptual image metric

    Predicting human visual perception has several applications such as compression, rendering, editing and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images indeed gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences, but also on saliency. To this end, we first fit local homographies to a given optical flow field and then convert this field into a field of elementary transformations such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a novel measure of complexity in a flow field. This representation is then used for applications, such as comparison of non-aligned images, where transformations cause threshold elevation, and detection of salient transformations.

    Computational Re-Photography

    Retaking photographs to match historical pictures is of interest to both photographers and researchers: it makes it possible to observe how a site, and possibly its surroundings, develops over time. The difficulty lies in finding the same viewpoint as in the reference picture. This is usually done manually, with photographers relying on estimation and photographic skill, which can be inaccurate and considerably inconvenient for the photographer. The solution presented here is a rephotography algorithm that uses computer vision techniques.

    Transformation-aware Perceptual Image Metric

    Predicting human visual perception has several applications such as compression, rendering, editing, and retargeting. Current approaches, however, ignore the fact that the human visual system compensates for geometric transformations, e.g., we see that an image and a rotated copy are identical. Instead, they will report a large, false-positive difference. At the same time, if the transformations become too strong or too spatially incoherent, comparing two images gets increasingly difficult. Between these two extrema, we propose a system to quantify the effect of transformations, not only on the perception of image differences but also on saliency and motion parallax. To this end, we first fit local homographies to a given optical flow field, and then convert this field into a field of elementary transformations, such as translation, rotation, scaling, and perspective. We conduct a perceptual experiment quantifying the increase of difficulty when compensating for elementary transformations. Transformation entropy is proposed as a measure of complexity in a flow field. This representation is then used for applications, such as comparison of nonaligned images, where transformations cause threshold elevation, detection of salient transformations, and a model of perceived motion parallax. Applications of our approach are a perceptual level-of-detail for real-time rendering and viewpoint selection based on perceived motion parallax.
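    The proposed complexity measure, transformation entropy, can be illustrated on a simplified version of the representation: assume each local flow-field patch has already been assigned one elementary transformation label (the paper works with the fitted homography parameters themselves, so discrete labels are a simplifying assumption here), and compute the Shannon entropy of the resulting distribution.

```python
import math
from collections import Counter

def transformation_entropy(labels):
    """Shannon entropy (in bits) of the distribution of elementary
    transformation types ('translation', 'rotation', 'scaling',
    'perspective') assigned to local patches of a flow field."""
    counts = Counter(labels)
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())
```

    A coherent field (one transformation everywhere) has zero entropy and is easy to compensate for; a spatially incoherent mix of transformations has high entropy, matching the abstract's observation that such fields make image comparison increasingly difficult.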

    Augmented Reality for Construction Site Monitoring and Documentation (Proceedings of the IEEE, Special Issue on Applications of Augmented Reality Environments)

    Augmented Reality allows for an on-site presentation of information that is registered to the physical environment. Applications from civil engineering, which require users to process complex information, are among those which can benefit particularly from such a presentation. In this paper, we describe how to use Augmented Reality (AR) to support monitoring and documentation of construction site progress. For these tasks, the responsible staff usually requires fast and comprehensible access to progress information to enable comparison to the as-built status as well as to as-planned data. Instead of tediously searching and mapping related information to the actual construction site environment, our AR system allows for the access of information right where it is needed. This is achieved by superimposing progress as well as as-planned information onto the user's view of the physical environment. For this purpose, we present an approach that uses aerial 3D reconstruction to automatically capture progress information and a mobile AR client for on-site visualization. Within this paper, we describe in greater detail how to capture the scene in 3D, how to register the AR system within the physical outdoor environment, how to visualize progress information in a comprehensible way in an AR overlay, and how to interact with this kind of information. By implementing such an AR system, we are able to provide an overview of the possibilities and future applications of AR in the construction industry.
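    Once the AR system is registered in the outdoor environment, superimposing as-planned information reduces to projecting 3D model points into the user's view with the tracked camera pose. A minimal pinhole-camera sketch of that overlay step (the pose and intrinsics shown are illustrative placeholders, not values from the paper's system):

```python
def project_point(point, pose, focal, center):
    """Project a 3D as-planned point (world coordinates) into the AR view.
    `pose` is (R, t): a 3x3 rotation (list of rows) and translation mapping
    world coordinates to camera coordinates; `focal`/`center` are the
    camera intrinsics in pixels. Returns pixel (u, v), or None if the
    point lies behind the camera and nothing should be overlaid."""
    R, t = pose
    xc = [sum(R[i][j] * point[j] for j in range(3)) + t[i] for i in range(3)]
    if xc[2] <= 0:
        return None
    u = focal * xc[0] / xc[2] + center[0]
    v = focal * xc[1] / xc[2] + center[1]
    return (u, v)
```

    The registration step the abstract describes is exactly what supplies an accurate (R, t) here; with a poorly registered pose, the overlay drifts off the physical structures.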