
    Virtual Rephotography: Novel View Prediction Error for 3D Reconstruction

    The ultimate goal of many image-based modeling systems is to render photo-realistic novel views of a scene without visible artifacts. Existing evaluation metrics and benchmarks focus mainly on the geometric accuracy of the reconstructed model, which is, however, a poor predictor of visual accuracy. Furthermore, geometric accuracy alone does not allow evaluating systems that either lack a geometric scene representation or use coarse proxy geometry, such as light field or image-based rendering systems. We propose a unified evaluation approach based on novel view prediction error that can analyze the visual quality of any method able to render novel views from input images. A key advantage of this approach is that it does not require ground-truth geometry, which dramatically simplifies the creation of test datasets and benchmarks. It also allows us to evaluate the quality of an unknown scene during the acquisition and reconstruction process, which is useful for acquisition planning. We evaluate our approach on a range of methods including standard geometry-plus-texture pipelines as well as image-based rendering techniques, compare it to existing geometry-based benchmarks, and demonstrate its utility for a range of use cases. Comment: 10 pages, 12 figures; the paper was submitted to ACM Transactions on Graphics for review.
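
    The abstract describes scoring any renderer by how well it predicts withheld input views. Below is a minimal leave-one-out sketch of that idea; the render_fn interface, the validity mask, and the RMSE metric are assumptions made here purely for illustration, not the paper's actual protocol or error measure.

    import numpy as np

    def view_prediction_error(rendered: np.ndarray,
                              held_out_photo: np.ndarray,
                              valid_mask: np.ndarray) -> float:
        """RMSE between a rendered novel view and the held-out reference photo,
        restricted to pixels the renderer actually covered."""
        diff = rendered.astype(np.float64) - held_out_photo.astype(np.float64)
        return float(np.sqrt(np.mean(diff[valid_mask] ** 2)))

    def evaluate(render_fn, photos, poses):
        """Leave-one-out protocol: render each input viewpoint from the remaining
        images and score the prediction against the withheld photograph.
        render_fn(other_images, pose) -> (rendered, valid_mask) is a hypothetical
        interface standing in for whatever reconstruction/rendering method is tested."""
        errors = []
        for i, (photo, pose) in enumerate(zip(photos, poses)):
            others = [p for j, p in enumerate(photos) if j != i]
            rendered, mask = render_fn(others, pose)
            errors.append(view_prediction_error(rendered, photo, mask))
        return float(np.mean(errors))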

    Computational Re-Photography

    Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
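
    A guidance system of this kind needs an estimate of the relative viewpoint between the reference photo and the current camera frame. The sketch below shows one standard way to get such an estimate with OpenCV (ORB matches, essential matrix, pose recovery); it is an illustrative stand-in, not the pipeline the paper itself uses, and K (the camera intrinsics) is assumed to be known.

    import cv2
    import numpy as np

    def relative_viewpoint(reference_gray, current_gray, K):
        """Estimate rotation and translation direction from the current view
        toward the reference view, given grayscale images and intrinsics K."""
        orb = cv2.ORB_create(2000)
        kp1, des1 = orb.detectAndCompute(reference_gray, None)
        kp2, des2 = orb.detectAndCompute(current_gray, None)

        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Robustly fit the essential matrix, then decompose it into R, t.
        E, inliers = cv2.findEssentialMat(pts1, pts2, K,
                                          method=cv2.RANSAC, prob=0.999, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)
        return R, t  # t is a unit direction; scale is unobservable from two views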

    Laser speckle photography for surface tampering detection

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2012. Cataloged from PDF version of thesis. Includes bibliographical references (p. 59-61). It is often desirable to detect whether a surface has been touched, even when the changes made to that surface are too subtle to see in a pair of before and after images. To address this challenge, we introduce a new imaging technique that combines computational photography and laser speckle imaging. Without requiring controlled laboratory conditions, our method is able to detect surface changes that would be indistinguishable in regular photographs. It is also mobile and does not need to be present at the time of contact with the surface, making it well suited for applications where the surface of interest cannot be constantly monitored. Our approach takes advantage of the fact that tiny surface deformations cause phase changes in reflected coherent light, which alter the speckle pattern visible under laser illumination. We take before and after images of the surface under laser light and can detect subtle contact by correlating the speckle patterns in these images. A key challenge we address is that speckle imaging is very sensitive to the location of the camera, so removing and reintroducing the camera requires high-accuracy viewpoint alignment. To this end, we use a combination of computational rephotography and correlation analysis of the speckle pattern as a function of camera translation. Our technique provides a reliable way of detecting subtle surface contact at a level that was previously only possible under laboratory conditions. With our system, the detection of these subtle surface changes can now be brought into the wild. By YiChang Shih, S.M.
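
    The core detection step is correlating before/after speckle images: touched regions decorrelate while untouched regions stay similar. The following is a rough illustrative sketch of that idea using patchwise zero-normalized cross-correlation; the patch size, threshold, and the assumption of already-aligned grayscale images are choices made here, not the thesis's actual alignment and analysis pipeline.

    import numpy as np

    def zncc(a: np.ndarray, b: np.ndarray) -> float:
        """Zero-normalized cross-correlation of two equally sized patches."""
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
        return float((a * b).sum() / denom)

    def contact_map(before: np.ndarray, after: np.ndarray,
                    patch: int = 32, threshold: float = 0.5) -> np.ndarray:
        """Boolean map of patches whose speckle pattern decorrelated (possible contact).
        Assumes 2D grayscale images that are already viewpoint-aligned."""
        h, w = before.shape
        touched = np.zeros((h // patch, w // patch), dtype=bool)
        for i in range(touched.shape[0]):
            for j in range(touched.shape[1]):
                ys, xs = i * patch, j * patch
                c = zncc(before[ys:ys + patch, xs:xs + patch],
                         after[ys:ys + patch, xs:xs + patch])
                touched[i, j] = c < threshold
        return touched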

    Hands on Media History: A New Methodology in the Humanities and Social Sciences


    Painting-to-3D Model Alignment Via Discriminative Visual Elements

    This paper describes a technique that can reliably align arbitrary 2D depictions of an architectural site, including drawings, paintings and historical photographs, with a 3D model of the site. This is a tremendously difficult task, as the appearance and scene structure in the 2D depictions can be very different from the appearance and geometry of the 3D model, e.g., due to the specific rendering style, drawing error, age, lighting or change of seasons. In addition, we face a hard search problem: the number of possible alignments of the painting to a large 3D model, such as a partial reconstruction of a city, is huge. To address these issues, we develop a new compact representation of complex 3D scenes. The 3D model of the scene is represented by a small set of discriminative visual elements that are automatically learnt from rendered views. Similar to object detection, the set of visual elements, as well as the weights of individual features for each element, are learnt in a discriminative fashion. We show that the learnt visual elements are reliably matched in 2D depictions of the scene despite large variations in rendering style (e.g. watercolor, sketch, historical photograph) and structural changes (e.g. missing scene parts, large occluders) of the scene. We demonstrate an application of the proposed approach to automatic re-photography, finding an approximate viewpoint of historical paintings and photographs with respect to a 3D model of the site. The proposed alignment procedure is validated via a human user study on a new database of paintings and sketches spanning several sites. The results demonstrate that our algorithm produces significantly better alignments than several baseline methods.
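
    To make the "discriminative visual element" idea concrete, here is a small sketch in the spirit of mid-level discriminative patches: each element is a linear classifier trained on features of a patch from a rendered view of the 3D model against many negative patches, and is then scored against candidate patches cropped from a painting. HOG features and LinearSVC are illustrative stand-ins; the paper uses its own discriminative learning formulation, and all patches are assumed to be same-sized grayscale arrays.

    import numpy as np
    from skimage.feature import hog
    from sklearn.svm import LinearSVC

    def hog_descr(patch):
        """HOG descriptor of a grayscale patch (all patches must share one size)."""
        return hog(patch, orientations=9, pixels_per_cell=(8, 8),
                   cells_per_block=(2, 2), feature_vector=True)

    def train_element(positive_patch, negative_patches):
        """One visual element = one linear classifier for a rendered-view patch."""
        X = np.vstack([hog_descr(positive_patch)] +
                      [hog_descr(p) for p in negative_patches])
        y = np.array([1] + [0] * len(negative_patches))
        return LinearSVC(C=0.1).fit(X, y)

    def score_candidates(clf, candidate_patches):
        """Decision values for candidate patches cropped from a 2D depiction;
        high scores suggest the element was found in the painting."""
        X = np.vstack([hog_descr(p) for p in candidate_patches])
        return clf.decision_function(X)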

    Speciesism | Ageism | Racism

    Speciesism | Ageism | Racism (SAR) is a generative cinematic artwork stemming from the millennia-old practice of mask making and laying claim to the fundamental richness of diversity. SAR generates sequences of masks from photos of people and animals without bias, imbued meaning or particular intent, leaving all interpretations and assumptions to the audience. SAR is aesthetically rooted in traditional folklore and the worldwide popular art of mask-making, and in the concepts of “loop” and metric montage. Conceptually, SAR thrives in the intersectionality of postcolonial theory, feminist and anti-discrimination studies, as well as animal rights movements, policies and practices. By stripping away the ability to consistently identify species, age, race, gender or sexual orientation, the artwork allows for a disruptive aesthetic appreciation, which confronts the ideology and politics of group superiority. SAR delivers a participatory, hypnotic, rhythmic and generative audio-visual experience, charged with an anti-discriminatory message countering speciesism, ageism and racism. Speciesism | Ageism | Racism can be enjoyed in its on-line pre-calculated version at https://pedroveiga.com/sar-speciesism-ageism-racism/

    Photographic Powers: Helsinki Photomedia 2014

    Helsinki Photomedia is a biennial international conference for photography studies, established in 2012. It was created to fill a void: there was no regular, international forum for photography studies, as Crossroads is for cultural studies or ECREA is for media studies. This was surprising, because there had been a great deal of new activity in the field of photography studies all over the world. In its state of rapid transformation and diversification, photography showed rich cultural potential, and photography research was gaining new importance. Three new refereed journals had been launched since 2008: Photographies, Photography & Culture, and Philosophy of Photography. Books and articles abounded, and the general high tide of photography clearly called for new thinking, new methods and new theories. Helsinki Photomedia started in 2012 with a broad theme: Images in Circulation. Over 140 participants from 23 countries proved that there was a genuine need for a new international venue for presenting and discussing photography research. The three keynote speakers of that conference were Ariella Azoulay, David Bate and Charlotte Cotton. The variety of topics covered in the first conference was impressive. Terms such as 'expanded image', 'Photography 2.0', 'digital ethos' and 'collaborative turn in contemporary photography' appeared in papers and presentations, indicating a need for a general diagnosis of the current shift. Some raised more specific questions about current developments in photography. There were analyses of metadata, affordance, curating, self-publishing, and so on. New modes of research, such as 'artistic research', were important elements of the first conference. Photographies published a special issue on the conference (Vol. 6, Issue 1, 2013). The second Helsinki Photomedia, in March 2014, was run under the theme Photographic Powers. Again there were around 130 participants, some attending for the second time. The keynote speakers were Paul Frosh, Jorge Ribalta and Joanna Zylinska. This publication is based on the papers and presentations delivered at the 2014 conference. After a strict refereeing process, 14 articles were selected for publication. Meanwhile, preparations for the third Helsinki Photomedia conference in March 2016 are well underway. The theme is Photographic Agencies and Materialities, and the keynote speakers are Geoffrey Batchen, Annika von Hausswolff and Liz Wells.

    Generative video art

    Generative art has historically and widely been used for the production of abstract images and animations, with each frame corresponding to a generation or iteration of the generative system, which runs within the aesthetic boundaries defined by its author. But rather than being limited to image or sound synthesis, generative systems can also manipulate video samples and still images from external sources, and include vectors that can be mapped to the concepts of shot, sequence, rhythm and montage. Furthermore, generative systems need not be limited to the visual plane and can also render audio, either through sound synthesis or by manipulating sound samples. And in this case, since the output is a constant and uninterrupted audio-visual stream, can we not speak of generative video art, as it becomes indistinguishable from its modern-day digital video art counterparts? Within this perspective, this article traces the historical roots of generative video art and proposes a theoretical model for generative video art systems as a creative intersection of two artistic genres often seen as disjoint.

    A System for the Spatio-Temporal Evolution of Images

    Nowadays computers, recent mobile phones and photographic cameras have the power to capture moments, and these moments are recorded in photographs. Today an enormous number of photographs are taken all over the world, and with this come problems such as how to organize and display photographs in a way that conveys the feeling and the moment experienced. There is also a search for ways to innovate and to convey an evolutionary perception of the structures or people captured in the photographs. It is in this context that this dissertation is set. Its main objective is to offer a tool that can provide a chronological journey through time, from the past to the present, given a set of images from several periods. The tool, called Evolapse, allows the visualization of a three-dimensional representation of a set of images, making it possible to view images from several periods for the same geographic location. Evolapse automatically computes comparisons between images and establishes relations between them. It also offers a semi-automatic method for creating these relations. All relations, whether established automatically or created with the semi-automatic method, can be edited. The tool was developed within the LX Conventos project and its results are tested on the project website. To test the results, the tool produces descriptive documents that contain all the information needed to rebuild the three-dimensional representation. The LX Conventos project website will be available to the public, giving multiple users the possibility of viewing the results generated by the tool. Fundação da Faculdade de Ciências e Tecnologia da Universidade Nova de Lisboa, reference PTDC/CPCHAT/4703/2012, funded by national funds through FCT/MEC (PIDDAC).
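
    As a rough, hypothetical sketch of the kind of pipeline the abstract describes, image pairs could be scored by feature-match counts, likely same-location pairs kept as relations, and the result exported as a descriptive JSON document; Evolapse's real comparison method, relation model and document format are not specified in the abstract, so every name and format below is an assumption for illustration only.

    import json
    import cv2

    def match_score(img_a, img_b):
        """Number of cross-checked ORB matches between two grayscale images."""
        orb = cv2.ORB_create(1000)
        _, da = orb.detectAndCompute(img_a, None)
        _, db = orb.detectAndCompute(img_b, None)
        if da is None or db is None:
            return 0
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
        return len(matches)

    def build_relations(images, min_matches=50):
        """images: dict name -> grayscale array. Returns candidate same-location relations."""
        names = sorted(images)
        relations = []
        for i, a in enumerate(names):
            for b in names[i + 1:]:
                s = match_score(images[a], images[b])
                if s >= min_matches:
                    relations.append({"from": a, "to": b, "score": s})
        return relations

    def export_descriptive_document(relations, path="relations.json"):
        """Write the relations to a descriptive document (format assumed here)."""
        with open(path, "w", encoding="utf-8") as fh:
            json.dump({"relations": relations}, fh, indent=2)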