7 research outputs found

    An Efficient Approach to Automatic Generation of Time-lapse Video Sequences

    Time-lapse video sequences have recently become a highly utilised asset for marketing and advertising, particularly within the field of construction and landscape development. However, the manual generation of these videos, at a quality that can be used for marketing purposes, can be quite time-consuming. In this paper, a novel application for generating time-lapse videos is proposed. It automatically selects the optimal frames for time-lapse video generation, enhances these frames by applying a number of image pre-processing and machine learning techniques, such as FAST super-resolution, to improve frame quality, and finally provides an intuitive user interface that allows users to customise the time-lapse video with company branding. The auto-generated time-lapse videos use techniques such as Laplacian filtering and temporal smoothing filtering to determine inactivity within the video sequence, classify day or night and, by use of optical character recognition, remove unwanted artefacts such as the captured video date and time stamp. The results obtained from the proposed approach are comparable to manually produced video sequences, with the advantage of being generated much faster and without requiring specialised video editing skills.
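    A minimal sketch of the kind of frame selection the abstract describes, assuming OpenCV and NumPy. The thresholds, function names and the exponential smoothing used here are illustrative stand-ins, not the authors' implementation.

```python
# Minimal frame-selection sketch (not the paper's implementation).
# Assumes OpenCV (cv2) and NumPy; thresholds are illustrative placeholders.
import cv2
import numpy as np

def laplacian_sharpness(gray):
    """Variance of the Laplacian as a simple sharpness measure."""
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def is_daytime(gray, brightness_thresh=60.0):
    """Crude day/night classification from mean brightness."""
    return gray.mean() > brightness_thresh

def select_frames(video_path, activity_thresh=2.0, smooth_alpha=0.1):
    """Keep sharp, daytime frames whose temporally smoothed
    frame difference indicates activity in the scene."""
    cap = cv2.VideoCapture(video_path)
    prev, smoothed_diff, selected, idx = None, 0.0, [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            diff = float(np.mean(cv2.absdiff(gray, prev)))
            # Exponential temporal smoothing of the activity signal.
            smoothed_diff = smooth_alpha * diff + (1 - smooth_alpha) * smoothed_diff
            if (smoothed_diff > activity_thresh
                    and is_daytime(gray)
                    and laplacian_sharpness(gray) > 50.0):
                selected.append(idx)
        prev = gray
        idx += 1
    cap.release()
    return selected
```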

    Time-Lapse Photometric Stereo and Applications

    This paper presents a technique to recover geometry from time-lapse sequences of outdoor scenes. We build upon photometric stereo techniques to recover approximate shadowing, shading and normal components, allowing us to alter the material and normals of the scene. Previous work in analyzing such images has faced two fundamental difficulties: 1. the illumination in outdoor images consists of time-varying sunlight and skylight, and 2. the motion of the sun is restricted to a near-planar arc through the sky, making surface normal recovery unstable. We develop methods to estimate the reflection component due to skylight illumination. We also show that sunlight directions are usually non-planar, thus making surface normal recovery possible. This allows us to estimate approximate surface normals for outdoor scenes using a single day of data. We demonstrate the use of these surface normals for a number of image editing applications including reflectance, lighting, and normal editing.
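    A minimal Lambertian photometric stereo sketch of the normal-recovery step, assuming NumPy. It does not model the paper's separation of sunlight and skylight or its shadow handling; inputs and variable names are hypothetical.

```python
# Minimal Lambertian photometric stereo (not the paper's full outdoor model,
# which separately accounts for skylight, shadows and shading components).
import numpy as np

def estimate_normals(intensities, sun_dirs):
    """
    intensities: (F, H, W) stack of grayscale frames from one day.
    sun_dirs:    (F, 3) unit sun-direction vectors; they must not be coplanar,
                 otherwise the least-squares system is ill-conditioned.
    Returns per-pixel unit normals (H, W, 3) and albedo (H, W).
    """
    F, H, W = intensities.shape
    I = intensities.reshape(F, -1)                      # (F, H*W)
    # Solve sun_dirs @ g = I for g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(sun_dirs, I, rcond=None)    # (3, H*W)
    albedo = np.linalg.norm(g, axis=0)
    normals = (g / np.maximum(albedo, 1e-8)).T.reshape(H, W, 3)
    return normals, albedo.reshape(H, W)
```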

    Ambient light transfer

    In the following work we present a system for capturing ambient light in a real scene and recreating it in a room equipped with computer-controlled lamps. We capture incident light at one point of a scene with a simple light probe consisting of a camera and a reflective sphere. The acquired environment map is then transferred to a room where an approximated lighting condition is recreated with multiple LED lamps. To achieve this, we first measure the impact each lamp has on the illumination with a light probe and acquire one image per lamp. A linear combination of these images produces a new environment map, which we can recreate inside the room by setting the intensities of the lamps. We employ Quadratic Programming to find the linear combination that best approximates a given environment map. We speed up the optimization process by downsampling the light probe data, which reduces the dimension of our problem. Our method is fast enough for real-time light transfer and works with all types of linearly controlled illuminants. In order to evaluate our method, we designed and constructed an omnidirectional lighting system that can spatially illuminate a room in full colour. We first explore several different configurations of our lighting system, our sampling and our optimization algorithm. We then demonstrate our method's capabilities by capturing static and dynamic ambient light in one location and transferring it into a room.
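    A minimal sketch of the intensity-solving step, assuming NumPy and SciPy. The abstract describes a quadratic program; bounded least squares via lsq_linear is used here as an equivalent stand-in for the box-constrained objective, and the variable names and downsampling factor are illustrative.

```python
# Minimal sketch of solving for lamp intensities (a bounded least-squares
# stand-in for the quadratic program described above). Assumes NumPy/SciPy.
import numpy as np
from scipy.optimize import lsq_linear

def lamp_intensities(lamp_maps, target_map, downsample=8):
    """
    lamp_maps:  (K, H, W, 3) one captured environment map per lamp.
    target_map: (H, W, 3) environment map to recreate in the room.
    Returns K lamp intensities in [0, 1].
    """
    # Downsample the light probe data to shrink the problem, as in the
    # speed-up mentioned in the abstract.
    lamps = lamp_maps[:, ::downsample, ::downsample, :]
    target = target_map[::downsample, ::downsample, :]
    A = lamps.reshape(lamps.shape[0], -1).T      # (pixels * 3, K)
    b = target.ravel()                           # (pixels * 3,)
    # Minimise ||A x - b||^2 subject to 0 <= x <= 1.
    result = lsq_linear(A, b, bounds=(0.0, 1.0))
    return result.x
```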

    Progressively-refined reflectance functions from natural illumination

    In this paper we present a simple, robust and efficient algorithm for estimating reflectance fields (i.e., a description of the transport of light through a scene) for a fixed viewpoint using images of the scene under known natural illumination. Our algorithm treats the scene as a black-box linear system that transforms an input signal (the incident light) into an output signal (the reflected light). The algorithm is hierarchical: it progressively refines the approximation of the reflectance field with an increasing number of training samples until the required precision is reached. Our method relies on a new representation for reflectance fields. This representation is compact, can be progressively refined, and quickly computes the relighting of scenes with complex illumination. Our representation and the corresponding algorithm allow us to efficiently estimate the reflectance fields of scenes with specular, glossy, refractive and diffuse elements. The method also handles soft and hard shadows, inter-reflections, caustics, and subsurface scattering. We verify our algorithm and representation using two measurement setups and several scenes, including an outdoor view of the city of Cambridge.
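    A minimal sketch of relighting with the black-box linear-system view described above, assuming NumPy. The paper's actual representation is hierarchical and compressed rather than a dense matrix; the array shapes and names below are illustrative.

```python
# Minimal relighting sketch: output light = transport matrix * incident light.
# Assumes NumPy; a dense matrix stands in for the paper's compact,
# progressively refined reflectance-field representation.
import numpy as np

def relight(transport, incident):
    """
    transport: (P, L) reflectance field for a fixed viewpoint, mapping
               L incident-illumination coefficients to P pixel values.
    incident:  (L,) coefficients of the novel (e.g. natural) illumination.
    Returns the relit image as a flat array of P pixel values.
    """
    return transport @ incident

# Hypothetical usage: relight a 100x100 view under a random environment.
T = np.random.rand(100 * 100, 64)   # stand-in reflectance field
env = np.random.rand(64)            # stand-in incident illumination
image = relight(T, env).reshape(100, 100)
```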