Computational rim illumination of dynamic subjects using aerial robots
Lighting plays a major role in photography. Professional photographers use elaborate installations to light their subjects and achieve sophisticated styles. However, lighting moving subjects performing dynamic tasks presents significant challenges and requires expensive manual intervention. A skilled assistant may be needed to reposition lights as the subject changes pose or moves, and the extra logistics significantly raise cost and time. The latency introduced as the assistant relights the subject, and the communication required from the photographer to achieve optimal lighting, can mean missing a critical shot.
We present a new approach to lighting dynamic subjects in which an aerial robot equipped with a portable light source illuminates the subject to automatically achieve a desired lighting effect. We focus on rim lighting, a particularly challenging effect to achieve with dynamic subjects, and allow the photographer to specify a required rim width. Our algorithm processes the images from the photographer's camera and provides the motion commands the aerial robot needs to achieve the desired rim width in the resulting photographs. With an indoor setup, we demonstrate a control approach that localizes the aerial robot with respect to the subject and tracks the subject to achieve the necessary motion. In addition to indoor experiments, we perform open-loop outdoor experiments in a realistic photo-shooting scenario to understand lighting ergonomics. Our proof-of-concept results demonstrate the utility of robots in computational lighting.
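As a concrete illustration of the image-driven control loop described above, the sketch below measures the rim width in the photographer's frame and converts the error into an angular correction for the light-carrying robot. The rim-width measurement, the proportional gain, the sign convention, and all function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the image-driven rim-width control idea (hypothetical names,
# a crude rim-width measurement, and a simple proportional controller).
import numpy as np

def measure_rim_width(image: np.ndarray, threshold: float = 0.8) -> float:
    """Approximate rim width in pixels: count bright pixels per image row and
    average over the rows that contain any."""
    bright = image > threshold * image.max()
    widths = bright.sum(axis=1)
    lit_rows = widths[widths > 0]
    return float(lit_rows.mean()) if lit_rows.size else 0.0

def rim_control_step(image: np.ndarray, target_width_px: float, gain: float = 0.02) -> float:
    """Return an angular correction (radians) for the light-carrying robot.
    Which direction widens the rim depends on the scene geometry; the sign
    convention used here is an assumption."""
    error = measure_rim_width(image) - target_width_px
    return -gain * error
```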
Edge-preserving Multiscale Image Decomposition based on Local Extrema
We propose a new model for detail that inherently captures oscillations, a key property that distinguishes textures from individual edges. Inspired by techniques in empirical data analysis and morphological image analysis, we use the local extrema of the input image to extract information about oscillations: we define detail as oscillations between local minima and maxima. Building on the key observation that the spatial scale of oscillations is characterized by the density of local extrema, we develop an algorithm for decomposing images into multiple scales of superposed oscillations.
Current edge-preserving image decompositions assume image detail to be low-contrast variation. Consequently, they apply filters that extract features of increasing contrast as successive layers of detail. As a result, they are unable to distinguish between high-contrast, fine-scale features and edges of similar contrast that are to be preserved. We compare our results with existing edge-preserving image decomposition algorithms and demonstrate exciting applications that are made possible by our new notion of detail.
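A one-dimensional sketch may help make the core smoothing step concrete: local minima and maxima are located, their envelopes interpolated, and the mean of the envelopes taken as the coarse layer, leaving the oscillatory detail as the residual. This is an illustrative reduction of the idea to 1D signals, not the paper's 2D algorithm.

```python
# 1D sketch of one level of the local-extrema decomposition, using SciPy for
# extrema detection and envelope interpolation; illustrative only.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import interp1d

def extrema_smooth(signal: np.ndarray) -> np.ndarray:
    """Average the interpolated envelopes of local minima and maxima."""
    x = np.arange(len(signal))
    maxima = argrelextrema(signal, np.greater)[0]
    minima = argrelextrema(signal, np.less)[0]
    if len(maxima) < 2 or len(minima) < 2:
        return signal.copy()                      # too few extrema to smooth
    upper = interp1d(maxima, signal[maxima], fill_value="extrapolate")(x)
    lower = interp1d(minima, signal[minima], fill_value="extrapolate")(x)
    return 0.5 * (upper + lower)

# One level of detail is the oscillation between the envelopes:
# base = extrema_smooth(signal); detail = signal - base
```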
Understanding camera trade-offs through a Bayesian analysis of light field projections - A revision
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
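The measurement model can be written as a linear projection of the discretized light field plus noise, with reconstruction posed as MAP estimation under a prior. The sketch below assumes a Gaussian prior, which reduces the posterior mean to a Wiener-style linear estimate; the matrix sizes, the identity prior covariance, and the function names are stand-ins, not the paper's actual prior or code.

```python
# Sketch of the linear measurement model y = T x + n, where each row of T is
# one sensor element's inner product with the flattened light field x, and the
# light field is recovered as the posterior mean under a Gaussian prior.
import numpy as np

def map_estimate(T, y, prior_cov, noise_var):
    """Posterior mean of x for y = T x + n, x ~ N(0, prior_cov), n ~ N(0, noise_var I)."""
    A = T @ prior_cov @ T.T + noise_var * np.eye(T.shape[0])
    return prior_cov @ T.T @ np.linalg.solve(A, y)

# Example with illustrative sizes: 64-dimensional light field, 16 sensor elements.
rng = np.random.default_rng(0)
T = rng.normal(size=(16, 64))                 # projection implied by the optics
prior_cov = np.eye(64)                        # stand-in for the light field prior
x_true = rng.normal(size=64)
y = T @ x_true + 0.01 * rng.normal(size=16)   # noisy sensor measurements
x_hat = map_estimate(T, y, prior_cov, 0.01 ** 2)
```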
A User Study Comparing 3D Modeling with Silhouettes and Google SketchUp
We describe a user study comparing 3D Modeling with Silhouettes and Google SketchUp. In the user study, ten users were asked to create 3D models of three different objects, using either 3D Modeling with Silhouettes or Google SketchUp. Ten different users were then asked to rank images of the models produced by the first group. We show that the models made with 3D Modeling with Silhouettes were ranked significantly higher on average than those made with Google SketchUp.
Computational Re-Photography
Rephotographers aim to recapture an existing photograph from the same viewpoint. A historical photograph paired with a well-aligned modern rephotograph can serve as a remarkable visualization of the passage of time. However, the task of rephotography is tedious and often imprecise, because reproducing the viewpoint of the original photograph is challenging. The rephotographer must disambiguate between the six degrees of freedom of 3D translation and rotation, and the confounding similarity between the effects of camera zoom and dolly. We present a real-time estimation and visualization technique for rephotography that helps users reach a desired viewpoint during capture. The input to our technique is a reference image taken from the desired viewpoint. The user moves through the scene with a camera and follows our visualization to reach the desired viewpoint. We employ computer vision techniques to compute the relative viewpoint difference. We guide 3D movement using two 2D arrows. We demonstrate the success of our technique by rephotographing historical images and conducting user studies.
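One generic way to compute such a relative viewpoint difference from the reference image and the live frame is feature matching followed by essential-matrix estimation; the sketch below uses OpenCV for this and is a stand-in for the paper's real-time pipeline, not its actual implementation. K denotes the camera intrinsics matrix.

```python
# Sketch: relative pose between the reference view and the current frame via
# SIFT matching and essential-matrix estimation (OpenCV).
import cv2
import numpy as np

def relative_pose(ref_gray: np.ndarray, cur_gray: np.ndarray, K: np.ndarray):
    """Return rotation R and translation direction t of the current view
    relative to the reference view (t is recovered only up to scale)."""
    sift = cv2.SIFT_create()
    kp1, d1 = sift.detectAndCompute(ref_gray, None)
    kp2, d2 = sift.detectAndCompute(cur_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(d1, d2)
    p1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    p2 = np.float32([kp2[m.trainIdx].pt for m in matches])
    E, _ = cv2.findEssentialMat(p1, p2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, p1, p2, K)
    return R, t
```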
Anisotropic noise
Programmable graphics hardware makes it possible to generate procedural noise textures on the fly for interactive rendering. However, filtering and antialiasing procedural noise involves a tradeoff between aliasing artifacts and loss of detail. In this paper we present a technique, targeted at interactive applications, that provides high-quality anisotropic filtering for noise textures. We generate noise tiles directly in the frequency domain by partitioning the frequency domain into oriented subbands. We then compute weighted sums of the subband textures to accurately approximate noise with a desired spectrum. This allows us to achieve high-quality anisotropic filtering. Our approach is based solely on 2D textures, avoiding the memory overhead of techniques based on 3D noise tiles. We devise a technique to compensate for texture distortions to generate uniform noise on arbitrary meshes. We develop a GPU-based implementation of our technique that achieves rendering performance similar to state-of-the-art algorithms for procedural noise. In addition, it provides anisotropic filtering and achieves superior image quality.
National Science Foundation (U.S.) (CAREER Award 0447561); Microsoft Research (New Faculty Fellowship); Alfred P. Sloan Foundation (Fellowship)
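To illustrate the frequency-domain construction, the sketch below generates a single oriented subband tile: random phases are confined to a wedge-shaped band of frequencies and inverse transformed into a spatial noise tile. The band limits, orientation parameters, and normalization are illustrative choices rather than the paper's exact construction; a noise texture with a target spectrum would then be a weighted sum of such tiles.

```python
# Sketch of one oriented frequency-domain noise subband tile.
import numpy as np

def oriented_noise_tile(size, r_lo, r_hi, theta, half_angle, seed=0):
    rng = np.random.default_rng(seed)
    fy, fx = np.meshgrid(np.fft.fftfreq(size), np.fft.fftfreq(size), indexing="ij")
    radius = np.hypot(fx, fy)
    angle = np.arctan2(fy, fx)
    # Wedge-shaped subband: a radial frequency band limited to a range of orientations.
    ang_diff = np.angle(np.exp(1j * (angle - theta)))
    band = (radius >= r_lo) & (radius < r_hi) & (np.abs(ang_diff) <= half_angle)
    spectrum = band * np.exp(2j * np.pi * rng.random((size, size)))
    tile = np.real(np.fft.ifft2(spectrum))
    return tile / (tile.std() + 1e-12)            # normalize to unit variance
```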
Understanding camera trade-offs through a Bayesian analysis of light field projections
Computer vision has traditionally focused on extracting structure, such as depth, from images acquired using thin-lens or pinhole optics. The development of computational imaging is broadening this scope; a variety of unconventional cameras do not directly capture a traditional image anymore, but instead require the joint reconstruction of structure and image information. For example, recent coded aperture designs have been optimized to facilitate the joint reconstruction of depth and intensity. The breadth of imaging designs requires new tools to understand the tradeoffs implied by different strategies. This paper introduces a unified framework for analyzing computational imaging approaches. Each sensor element is modeled as an inner product over the 4D light field. The imaging task is then posed as Bayesian inference: given the observed noisy light field projections and a new prior on light field signals, estimate the original light field. Under common imaging conditions, we compare the performance of various camera designs using 2D light field simulations. This framework allows us to better understand the tradeoffs of each camera type and analyze their limitations.
Real-Time Volumetric Shadows using 1D Min-Max Mipmaps
Light scattering in a participating medium is responsible for several important effects we see in the natural world. In the presence of occluders, computing single scattering requires integrating the illumination scattered towards the eye along the camera ray, modulated by the visibility towards the light at each point. Unfortunately, incorporating volumetric shadows into this integral, while maintaining real-time performance, remains challenging.
In this paper we present a new real-time algorithm for computing volumetric shadows in single-scattering media on the GPU. This computation requires evaluating the scattering integral over the intersections of camera rays with the shadow map, expressed as a 2D height field. We observe that by applying epipolar rectification to the shadow map, each camera ray only travels through a single row of the shadow map (an epipolar slice), which allows us to find the visible segments by considering only 1D height fields. At the core of our algorithm is the use of an acceleration structure (a 1D min-max mipmap) which allows us to quickly find the lit segments for all pixels in an epipolar slice in parallel. The simplicity of this data structure and its traversal allows for efficient implementation using only pixel shaders on the GPU.
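As a sketch of the acceleration structure (not the GPU shader code), the snippet below builds a 1D min-max mipmap over one epipolar slice of the shadow map. During traversal, a camera-ray segment whose depth range lies entirely above a node's maximum or entirely below its minimum can be classified as wholly lit or wholly shadowed, depending on the depth convention, without descending to the finest level; the construction here is an illustrative CPU version.

```python
# Sketch: 1D min-max mipmap over one epipolar slice (a 1D height field).
import numpy as np

def build_minmax_mipmap(heights: np.ndarray):
    """Return a list of (min, max) arrays, from finest to coarsest level.
    Each coarser entry stores the min and max of a pair of finer entries."""
    levels = [(heights.copy(), heights.copy())]
    lo, hi = levels[0]
    while len(lo) > 1:
        if len(lo) % 2:                        # pad odd lengths
            lo = np.append(lo, lo[-1])
            hi = np.append(hi, hi[-1])
        lo = np.minimum(lo[0::2], lo[1::2])
        hi = np.maximum(hi[0::2], hi[1::2])
        levels.append((lo, hi))
    return levels
```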