
    Efficient global illumination for dynamic scenes

    Producing high-quality animations with compelling lighting effects is computationally very expensive when traditional rendering approaches are used, in which each frame is computed separately. Because most of the computation must be restarted from scratch for each frame, much of the work is redundant. Since temporal coherence is typically not exploited, temporal aliasing problems are also more difficult to address. Many small errors in the lighting distribution cannot be perceived by human observers as long as they are coherent in the temporal domain; when this coherence is lost, however, the resulting animations suffer from unpleasant flickering. In this thesis, we propose global illumination and rendering algorithms designed specifically to combat these problems. We achieve this goal by exploiting the temporal coherence of the lighting distribution between subsequent animation frames. Our strategy relies on extending into the temporal domain well-known global illumination and rendering techniques such as density estimation, path tracing, photon mapping, ray tracing, and irradiance caching, which were originally designed to handle static scenes only. Our techniques focus mainly on the computation of indirect illumination, the most expensive part of global illumination modelling.
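The core idea of reusing lighting computation across frames can be illustrated with a toy cache. This is a minimal sketch, not the thesis's actual algorithm: `shade_point`, `render_animation`, and the fixed `lifetime` refresh policy are all hypothetical simplifications standing in for a real temporal irradiance-caching scheme, which would use far more sophisticated invalidation criteria.

```python
def shade_point(p, frame):
    """Stand-in for an expensive indirect-illumination estimate
    (hypothetical; a real renderer would trace secondary rays here)."""
    return p * 0.1 + frame * 0.01

def render_animation(points, num_frames, lifetime=4):
    """Sketch of temporal reuse: each cached record is kept for `lifetime`
    frames and only then recomputed, instead of shading every point from
    scratch in every frame."""
    cache = {}          # point -> (cached value, frame it was computed in)
    recomputed = 0
    frames = []
    for f in range(num_frames):
        image = []
        for p in points:
            value, born = cache.get(p, (None, None))
            if born is None or f - born >= lifetime:  # missing or too old
                value = shade_point(p, f)
                cache[p] = (value, f)
                recomputed += 1
            image.append(value)
        frames.append(image)
    return frames, recomputed
```

With 10 points over 8 frames and a lifetime of 4, only 20 shading evaluations are performed instead of 80; the reused values are identical between refreshes, which is exactly the temporal coherence that suppresses flickering.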

    Hairstyle modelling based on a single image

    Hair is an important feature in forming character appearance in both the film and video game industries. Hair grooming and combing for virtual characters has traditionally been an exclusive task for professional designers because it demands both technical manipulation and artistic inspiration. This manual process is time-consuming, however, and limits the flexibility of customised hairstyle modelling. In addition, virtual hairstyles are hard to manipulate because of the intrinsic shape of hair. The rapid development of related industrial applications demands an intuitive tool that lets non-professional users efficiently create realistic hairstyles. Recently, image-based hair modelling has been investigated for generating realistic hairstyles. This thesis presents Struct2Hair, a framework that robustly captures a hairstyle from a single portrait input. Specifically, 2D hair strands are first traced from the input with the help of image-processing enhancement. A coarse-level 2D hair sketch of the hairstyle is then extracted from the generated 2D strands by clustering. To solve the inherently ill-posed single-view reconstruction problem, a critical hair shape database has been built by analysing an existing hairstyle model database; a critical hair shape is a group of hair strands with similar shape appearance and close spatial location. Once this prior shape knowledge is prepared, a hair shape descriptor (HSD) is introduced to encode the structure of the target hairstyle. The HSD is constructed by retrieving and matching corresponding critical hair shape centres in the database. The full-head hairstyle is reconstructed by uniformly diffusing hair strands on the scalp surface under the guidance of the extracted HSD. The produced results are evaluated and compared with state-of-the-art image-based hair modelling methods.
The findings of this thesis lead to some promising applications, such as blending hairstyles to produce novel hair models, editing hairstyles (adding fringe hair, curling, and cutting/extending), and a case study of bas-relief hair modelling on pre-processed hair images.
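The clustering step that turns traced 2D strands into a coarse hair sketch can be sketched with a toy k-means over per-strand features. This is illustrative only, not the thesis's actual method: the feature choice (centroid plus normalised end-to-end direction) and the deterministic initialisation are assumptions made for the example.

```python
def strand_feature(strand):
    """Map a 2D strand (list of (x, y) points) to a simple feature:
    centroid plus normalised end-to-end direction (a hypothetical choice)."""
    n = len(strand)
    cx = sum(p[0] for p in strand) / n
    cy = sum(p[1] for p in strand) / n
    dx = strand[-1][0] - strand[0][0]
    dy = strand[-1][1] - strand[0][1]
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    return (cx, cy, dx / norm, dy / norm)

def cluster_strands(strands, k, iters=10):
    """Toy k-means over strand features, grouping traced 2D strands into
    a coarse hair sketch. Deterministic init: the first k features."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    feats = [strand_feature(s) for s in strands]
    centres = feats[:k]
    labels = [0] * len(feats)
    for _ in range(iters):
        # Assign each strand to its nearest cluster centre.
        labels = [min(range(k), key=lambda c: dist2(ft, centres[c]))
                  for ft in feats]
        # Move each centre to the mean of its members.
        for c in range(k):
            members = [feats[i] for i, l in enumerate(labels) if l == c]
            if members:
                centres[c] = tuple(sum(m[d] for m in members) / len(members)
                                   for d in range(4))
    return labels
```

Strands that run in the same direction through nearby locations end up in the same cluster, which is the grouping behaviour the abstract describes for building critical hair shapes.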