
    Statistical analysis and transfer of coarse-grain pictorial style

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2005. Includes bibliographical references (p. 96-103).

    We show that image statistics can be used to analyze and transfer simple notions of the pictorial style of paintings and photographs. We characterize the frequency content of pictorial styles, such as their multi-scale, spatial-variation, and anisotropy properties, using a multi-scale and oriented decomposition, the steerable pyramid. We show that the average of the absolute steerable coefficients as a function of scale characterizes simple notions of "look" or style. We extend this approach to account for image non-stationarity; that is, we capture and transfer the spatial variations of multi-scale content. In addition, we measure the standard deviation of the steerable coefficients across orientation, which characterizes image anisotropy and permits analysis and transfer of oriented structures. We focus on the statistical features that can be transferred. Since we couple analysis and transfer, our statistical model and transfer tools are consistent with the visual effect of pictorial styles. For this reason, our technique leads to more intuitive manipulation and interpolation of pictorial styles. In addition, our statistical model can be used to classify and retrieve images by style.

    by Soonmin Bae. S.M.
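    As a rough illustration of the two statistics described above, the following Python sketch computes a mean-absolute-coefficient-per-scale "look" signature and a cross-orientation standard deviation. It uses first derivatives of Gaussians as a lightweight stand-in for the full steerable pyramid (the oriented response is steered as cos(theta)*Gx + sin(theta)*Gy); the function name and parameters are illustrative, not from the thesis.

    # A minimal sketch, assuming a grayscale float image. Gaussian-derivative
    # filters stand in for the steerable pyramid's oriented band-pass filters;
    # transfer would rescale each scale's coefficients so the input's
    # signature matches the model's before recombining.
    import numpy as np
    from scipy import ndimage

    def style_signature(img, n_scales=4, n_orients=4):
        mean_abs, aniso = [], []
        for s in range(n_scales):
            sigma = 2.0 ** s
            gy = ndimage.gaussian_filter(img, sigma, order=(1, 0))  # d/dy
            gx = ndimage.gaussian_filter(img, sigma, order=(0, 1))  # d/dx
            energies = []
            for theta in np.pi * np.arange(n_orients) / n_orients:
                resp = np.cos(theta) * gx + np.sin(theta) * gy  # steered filter
                energies.append(np.abs(resp).mean())
            mean_abs.append(np.mean(energies))  # "look" as a function of scale
            aniso.append(np.std(energies))      # anisotropy across orientation
        return np.array(mean_abs), np.array(aniso)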

    Computational Re-Photography

    Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
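    The relative-viewpoint computation can be sketched with standard structure-from-motion building blocks. Below is a minimal OpenCV version, assuming a known intrinsic matrix K; the paper's full system adds calibration, real-time tracking, and the two-arrow guidance on top of the recovered pose.

    # A minimal sketch, not the paper's implementation: match features between
    # the reference and current frames, then recover the relative rotation R
    # and translation direction t from the essential matrix.
    import cv2
    import numpy as np

    def relative_pose(ref_gray, cur_gray, K):
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(ref_gray, None)
        k2, d2 = sift.detectAndCompute(cur_gray, None)
        matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(d1, d2, k=2)
        good = [m for m, n in matches if m.distance < 0.75 * n.distance]  # Lowe ratio test
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC, threshold=1.0)
        _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
        return R, t  # the on-screen arrows would be rendered from these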

    Analysis and transfer of photographic viewpoint and appearance

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2009. Cataloged from PDF version of thesis. Includes bibliographical references (p. 123-131).

    To make a compelling photograph, photographers need to carefully choose the subject and composition of a picture, to select the right lens and viewpoint, and to make great efforts with lighting and post-processing to arrange the tones and contrast. Unfortunately, such painstaking work and advanced skill are out of reach for casual photographers. In addition, for professional photographers, it is important to improve workflow efficiency. The goal of our work is to allow users to achieve a faithful viewpoint for rephotography and a particular appearance with ease and speed. To this end, we analyze and transfer properties of a model photo to a new photo. In particular, we transfer the viewpoint of a reference photo to enable rephotography. In addition, we transfer photographic appearance from a model photo to a new input photo. In this thesis, we present two contributions that transfer photographic view and look using model photographs and one contribution that magnifies existing defocus given a single photo. First, we address the challenge of viewpoint matching for rephotography. Our interactive, computer-vision-based technique helps users match the viewpoint of a reference photograph at capture time. Next, we focus on the tonal aspects of photographic look using post-processing. Users just need to provide a pair of photos, an input and a model, and our technique automatically transfers the look from the model to the input. Finally, we magnify defocus given a single image. We analyze the existing defocus in the input image and increase the amount of defocus present in out-of-focus regions. Our computational techniques increase users' performance and efficiency by analyzing and transferring the photographic characteristics of model photographs. We envision that this work will enable cameras and post-processing to embed more computation with a simple and intuitive interaction.

    by Soonmin Bae. Ph.D.
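    As a rough stand-in for the model-based look transfer described above (the thesis's actual method is the two-scale transfer summarized under "Two-scale tone management" below), global histogram matching moves an input's tone distribution toward a model's. File names here are hypothetical.

    # A simplified sketch of look transfer via global histogram matching;
    # requires scikit-image >= 0.19 for the channel_axis argument.
    import numpy as np
    from skimage import io
    from skimage.exposure import match_histograms

    input_img = io.imread("input.jpg")   # hypothetical paths
    model_img = io.imread("model.jpg")
    # Match each color channel's histogram to the model's.
    result = match_histograms(input_img, model_img, channel_axis=-1)
    io.imsave("result.jpg", result.astype(np.uint8))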

    Defocus Magnification (EUROGRAPHICS 2007)

    [Figure 1: (a) input, (b) defocus map, (c) result with magnified defocus. Our technique magnifies defocus given a single image. Our defocus map characterizes blurriness at edges. This enables shallow depth-of-field effects by magnifying existing defocus. The input photo was taken with a Canon PowerShot A80, a point-and-shoot camera with a 7.18 × 5.32 mm sensor and a 7.8 mm lens at f/2.8.]

    A blurry background due to shallow depth of field is often desired for photographs such as portraits, but, unfortunately, small point-and-shoot cameras do not permit enough defocus because of the small diameter of their lenses. We present an image-processing technique that increases the defocus in an image to simulate the shallow depth of field of a lens with a larger aperture. Our technique estimates the spatially-varying amount of blur over the image, and then uses a simple image-based technique to increase defocus. We first estimate the size of the blur kernel at edges and then propagate this defocus measure over the image. Using our defocus map, we magnify the existing blurriness, which means that we blur blurry regions and keep sharp regions sharp. In contrast to more difficult problems such as depth from defocus, we do not require precise depth estimation and do not need to disambiguate textureless regions.
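    The pipeline above (estimate blur at edges, propagate, then magnify) can be approximated in a few lines. A minimal sketch, assuming a grayscale float image in [0, 1]; the blur cue here is a simple two-scale gradient ratio and the propagation is plain smoothing, whereas the paper fits an explicit edge model and propagates its measure more carefully.

    # Sharp edges lose proportionally more gradient energy under extra blur
    # than already-blurry ones, so the ratio of gradient magnitudes at two
    # scales serves as a crude per-pixel sharpness cue.
    import numpy as np
    from scipy import ndimage

    def defocus_map(img, s1=1.0, s2=3.0, eps=1e-6):
        g1 = ndimage.gaussian_gradient_magnitude(img, s1)
        g2 = ndimage.gaussian_gradient_magnitude(img, s2)
        sharpness = g1 / (g2 + eps)                         # high at sharp edges
        sharpness = ndimage.gaussian_filter(sharpness, 10)  # crude propagation
        s = (sharpness - sharpness.min()) / (np.ptp(sharpness) + eps)
        return 1.0 - s                                      # 1 = blurry, 0 = sharp

    def magnify_defocus(img, extra_sigma=8.0):
        dmap = defocus_map(img)
        blurred = ndimage.gaussian_filter(img, extra_sigma)
        # Blur blurry regions further; keep sharp regions sharp.
        return dmap * blurred + (1.0 - dmap) * img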

    Two-scale tone management for photographic look

    [Figure 1: (a) input, (b) sample possible renditions: bright and sharp; gray and highly detailed; and contrasted, smooth, and grainy. This paper describes a technique to enhance photographs. We equip the user with powerful filters that control several aspects of an image such as its tonal balance and its texture. We make it possible for anyone to explore various renditions of a scene in a few clicks. We provide an effective approach to æsthetic choices, easing the creation of compelling pictures.]

    We introduce a new approach to tone management for photographs. Whereas traditional tone-mapping operators target a neutral and faithful rendition of the input image, we explore pictorial looks by controlling visual qualities such as the tonal balance and the amount of detail. Our method is based on a two-scale non-linear decomposition of an image. We modify the different layers based on their histograms and introduce a technique that controls the spatial variation of detail. We introduce a Poisson correction that prevents potential gradient reversal and preserves detail. In addition to directly controlling the parameters, the user can transfer the look of a model photograph to the picture being edited.
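    The two-scale decomposition at the heart of the method can be sketched with a bilateral filter on log-luminance. The minimal version below shows only a global detail gain; the paper additionally transfers layer histograms from a model photo, controls the spatial variation of detail, and applies the Poisson correction. Filter parameters here are illustrative, not the paper's.

    # Base = large-scale tonal balance, detail = texture; the bilateral
    # filter keeps strong edges in the base layer, avoiding halos.
    import cv2
    import numpy as np

    def two_scale_detail(gray_u8, detail_gain=2.5):
        log_lum = np.log10(gray_u8.astype(np.float32) + 1.0)
        base = cv2.bilateralFilter(log_lum, d=9, sigmaColor=0.4, sigmaSpace=8.0)
        detail = log_lum - base                  # edge-preserving split
        out = base + detail_gain * detail        # boost (or attenuate) texture
        return np.clip(10.0 ** out - 1.0, 0, 255).astype(np.uint8)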

    Image-based querying of urban knowledge databases

    We extend recent automated computer vision algorithms to reconstruct the global three-dimensional structure of photos and videos shot at fixed points in outdoor city environments. Mosaics of digital stills and embedded videos are georegistered by matching a few of their 2D features with 3D counterparts in aerial ladar imagery. Once image planes are aligned with world maps, abstract urban knowledge can propagate from the latter into the former. We project geotagged annotations from a 3D map into a 2D video stream and demonstrate that they track buildings and streets in a clip with significant panning motion. We also present an interactive tool which enables users to select city features of interest in video frames and retrieve their geocoordinates and ranges. Implications of this work for future augmented reality systems based upon mobile smart phones are discussed.

    Department of the Air Force (Air Force Contract No. FA8721-05-C-0002)
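    The annotation-projection step reduces to the standard pinhole camera model once georegistration has produced a pose for each frame. A minimal sketch, assuming intrinsics K and a world-to-camera pose (R, t) recovered by the alignment described above.

    # Project a geotagged 3D world point into pixel coordinates.
    import numpy as np

    def project_annotation(point_world, K, R, t):
        p_cam = R @ point_world + t     # world -> camera coordinates
        if p_cam[2] <= 0:
            return None                 # behind the camera; not visible
        uv = K @ (p_cam / p_cam[2])     # perspective divide, then intrinsics
        return uv[:2]                   # (u, v) pixel location for the label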