
    Shadow Estimation Method for "The Episolar Constraint: Monocular Shape from Shadow Correspondence"

    Recovering shadows is an important step for many vision algorithms. Current approaches that work with time-lapse sequences are limited to simple thresholding heuristics. We show these approaches only work with very careful tuning of parameters, and do not work well for long-term time-lapse sequences taken over the span of many months. We introduce a parameter-free expectation maximization approach which simultaneously estimates shadows, albedo, surface normals, and skylight. This approach is more accurate than previous methods, works over both very short and very long sequences, and is robust to the effects of nonlinear camera response. Finally, we demonstrate that the shadow masks derived through this algorithm substantially improve the performance of sun-based photometric stereo compared to earlier shadow mask estimation methods.
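    A minimal sketch of how such an expectation-maximization loop could be structured, assuming a simplified per-pixel image model I ≈ albedo · (visibility · max(0, n·l) + sky) with known sun directions. The noise level, the fixed sky term, and the update equations are illustrative assumptions, not the authors' formulation:

```python
import numpy as np

def em_shadow_estimation(I, L, n_iters=20, sigma=0.1):
    """Illustrative EM loop for time-lapse shadow estimation.

    I : (T, P) observed brightness of P pixels over T frames
    L : (T, 3) unit sun-direction vectors per frame
    Returns soft shadow probabilities w (T, P), albedo (P,), normals (P, 3).
    This is a simplified sketch, not the paper's exact model.
    """
    T, P = I.shape
    rng = np.random.default_rng(0)
    albedo = I.mean(axis=0)
    normals = rng.normal(size=(P, 3))
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)
    sky = np.full(P, 0.1)  # ambient skylight term (kept fixed in this sketch)

    for _ in range(n_iters):
        # E-step: posterior probability that each (frame, pixel) sample is sunlit
        shading = np.clip(L @ normals.T, 0.0, None)           # (T, P) cosine shading
        ll_sun = -(I - albedo * (shading + sky)) ** 2 / (2 * sigma ** 2)
        ll_shade = -(I - albedo * sky) ** 2 / (2 * sigma ** 2)
        w = 1.0 / (1.0 + np.exp(np.clip(ll_shade - ll_sun, -50, 50)))

        # M-step: weighted least squares for each normal, then closed-form albedo
        for p in range(P):
            A = w[:, p, None] * L
            b = w[:, p] * (I[:, p] / max(albedo[p], 1e-6) - sky[p])
            n_p, *_ = np.linalg.lstsq(A, b, rcond=None)
            normals[p] = n_p / (np.linalg.norm(n_p) + 1e-9)
        shading = np.clip(L @ normals.T, 0.0, None)
        lit = shading + sky
        num = (w * I * lit + (1 - w) * I * sky).sum(axis=0)
        den = (w * lit ** 2 + (1 - w) * sky ** 2).sum(axis=0)
        albedo = num / np.maximum(den, 1e-6)

    return w, albedo, normals
```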

    Superpixel Segmentation of Outdoor Webcams to Infer Scene Structure

    Understanding an outdoor scene’s 3-D structure has applications in several fields, including surveillance and computer graphics. Scene elements’ time-series brightness gives insight into their geometric orientation, and thus the 3-D structure of the overall scene. Previous works have studied the time-series brightness of individual pixels. However, this approach has limitations: individual pixels are often quite noisy, and their time series can require a lot of memory. This thesis explores the use of superpixels to address these issues. Superpixels, an approach to image segmentation, over-segment a scene but attempt to ensure that each segment lies on only one scene element. Applying superpixels to webcams reduces the effect of noise on pixels’ time-series brightness, and conserves memory by reducing the number of pixel “entities”. This thesis explores methods of solving for a superpixel’s surface normal, and demonstrates that the time at which maximum brightness is achieved serves as a basic indicator of geographic orientation.
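    A small sketch of the max-brightness-time cue, using SLIC superpixels from scikit-image as an assumed choice of segmentation algorithm; the thesis' own segmentation and normal-solving steps are not reproduced here:

```python
import numpy as np
from skimage.segmentation import slic

def max_brightness_times(frames, timestamps, n_segments=400):
    """Report the time of day at which each superpixel is brightest.

    frames     : (T, H, W) grayscale webcam frames, roughly aligned
    timestamps : length-T sequence of capture times (e.g. hours since midnight)
    East-facing surfaces tend to peak in the morning, west-facing in the afternoon.
    """
    frames = np.asarray(frames, dtype=np.float64)
    # Segment one reference frame into superpixels
    labels = slic(frames[0], n_segments=n_segments, channel_axis=None)
    n_labels = labels.max() + 1
    counts = np.maximum(np.bincount(labels.ravel(), minlength=n_labels), 1)

    # Mean brightness of each superpixel in each frame: (T, n_labels)
    series = np.stack([
        np.bincount(labels.ravel(), weights=f.ravel(), minlength=n_labels) / counts
        for f in frames
    ])

    peak_frame = series.argmax(axis=0)              # frame index of maximum brightness
    peak_time = np.asarray(timestamps)[peak_frame]  # time of day at that peak
    return labels, peak_time
```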

    Motion denoising with application to time-lapse photography

    Motions can occur over both short and long time scales. We introduce motion denoising, which treats short-term changes as noise, long-term changes as signal, and re-renders a video to reveal the underlying long-term events. We demonstrate motion denoising for time-lapse videos. One of the characteristics of traditional time-lapse imagery is stylized jerkiness, where short-term changes in the scene appear as small and annoying jitters in the video, often obfuscating the underlying temporal events of interest. We apply motion denoising for resynthesizing time-lapse videos showing the long-term evolution of a scene with jerky short-term changes removed. We show that existing filtering approaches are often incapable of achieving this task, and present a novel computational approach to denoise motion without explicit motion analysis. We demonstrate promising experimental results on a set of challenging time-lapse sequences.
    Supported by the United States National Geospatial-Intelligence Agency (NEGI-1582-04-0004), Shell Research, the United States Office of Naval Research Multidisciplinary University Research Initiative (Grant N00014-06-1-0734), and the National Science Foundation (U.S.) (0964004).
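    For contrast, the kind of simple filtering baseline the abstract argues is insufficient can be written in a few lines. A per-pixel temporal median (a hypothetical baseline, not the paper's method) suppresses brief jitter but tends to blur or ghost moving structure instead of re-rendering it:

```python
import numpy as np
from scipy.ndimage import median_filter

def temporal_median_baseline(video, window=9):
    """Per-pixel sliding temporal median over a video.

    video : (T, H, W) or (T, H, W, C) array
    Filters only along the time axis; a size of 1 on the other axes
    leaves the spatial (and colour) dimensions untouched.
    """
    video = np.asarray(video)
    size = (window,) + (1,) * (video.ndim - 1)
    return median_filter(video, size=size)
```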

    ReLiShaft: realistic real-time light shaft generation taking sky illumination into account

    Rendering atmospheric phenomena has its basis in the fields of atmospheric optics and meteorology and is increasingly used in games and movies. Although many researchers have focused on generating and enhancing realistic light shafts, there is still room for improvement in both qualitative and quantitative terms. In this paper, a new technique, called ReLiShaft, is presented to generate realistic light shafts for outdoor rendering. In the first step, a realistic light shaft with respect to the sun position and sky colour at any specific location, date and time is constructed in real time. Then, Hemicube visibility-test radiosity is employed to reveal the effect of the generated sky colour on the environment. Two different methods are considered for indoor and outdoor rendering: ray marching based on epipolar sampling for indoor environments, and filtering on regular epipolar samples with z-partitioning for outdoor environments. Shadow maps and shadow volumes are integrated, taking the computational costs into account. Through this technique, the light shaft colour is adjusted according to the sky colour at any specific location, date and time. The results show different light shaft colours at different times of day, rendered in real time.
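    A minimal single-scattering ray-march sketch in the spirit of the shaft-rendering step described above; the shadow-test callback, scattering coefficients, and sky colour input are assumed placeholders rather than ReLiShaft's actual implementation:

```python
import numpy as np

def march_light_shaft(ray_origin, ray_dir, max_dist, visible, sky_colour,
                      sigma_s=0.02, sigma_t=0.03, n_steps=64):
    """Accumulate in-scattered light along a view ray (single scattering).

    visible(point) : assumed callable backed by a shadow-map / shadow-volume test
    sky_colour     : RGB sun/sky colour for the current location, date and time
    sigma_s, sigma_t : illustrative scattering and extinction coefficients
    """
    step = max_dist / n_steps
    transmittance = 1.0
    radiance = np.zeros(3)
    for i in range(n_steps):
        p = ray_origin + (i + 0.5) * step * ray_dir   # sample point along the ray
        if visible(p):                                 # lit by the sun at this point?
            # In-scattered light, attenuated by what the medium has absorbed so far
            radiance += transmittance * sigma_s * np.asarray(sky_colour) * step
        transmittance *= np.exp(-sigma_t * step)       # extinction over one step
    return radiance
```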