17 research outputs found

    Expressive rendering of animated hair

    National audience
    Hair simulation is one of the crucial elements of character realism in video games as well as animated movies. It is also one of the most challenging because of its complex nature: a simulation model needs to handle hair-fiber or wisp interactions while preserving the desired rendering style. Intensive work has been done in this field during the past few years. Most authors have tried to render and animate hair as realistically as possible; impressive results have been obtained and computation times have been reduced. Nevertheless, this level of realism is not always what the animator wants. Most animated characters are drawn with a hair model composed of only a few wisps or clumps; in other words, the individual hair fibers are not even accounted for. Little work has been done on animating and rendering non-photorealistic hair for cel characters. The goal of this work is to design an expressive rendering technique for a realistic animation of hair. This project is part of an ANR research program, a joint industrial project with two production studios (Neomis Animation and BeeLight), two other INRIA project-teams (Bipop and Evasion), and a CNRS lab (Institut Jean Le Rond d'Alembert de l'Université Pierre et Marie Curie). The aim of the project is to provide hair rendering and animation tools for movie making. From our discussions with artists at the Neomis studio, it appears that an animator expects realistic hair motion combined with an expressive rendering technique dedicated to animated movies.

    Microscopic modulation and analysis of islets of Langerhans in living zebrafish larvae

    Microscopic analysis of molecules and physiology in living cells and systems is a powerful tool in the life sciences. While in vivo subcellular microscopic analysis of healthy and diseased human organs remains impossible, zebrafish larvae allow studying the pathophysiology of many organs using in vivo microscopy. Here, we review the potential of the larval zebrafish pancreas in the context of islets of Langerhans and Type 1 diabetes. We highlight the match of zebrafish larvae with the expanding toolbox of fluorescent probes that monitor cell identity, fate and/or physiology in real time. Moreover, we address fast and efficient modulation and localization of fluorescence at the subcellular level through fluorescence microscopy, including confocal and light-sheet (single-plane illumination) microscopes tailored to in vivo larval research. These developments make zebrafish larvae an extremely powerful research tool for translational research. We foresee that living larval zebrafish models will replace many cell-line-based studies in understanding the contribution of molecules, organelles and cells to organ pathophysiology in whole organisms.
    ImPhys/Microscopy Instrumentation & Technique

    Progressive Medial Axis Filtration

    No full text
    International audience
    The Scale Axis Transform provides a parametric simplification of the Medial Axis of a 3D shape, which can be seen as a hierarchical description. However, this powerful shape analysis method has a significant computational cost, requiring several minutes for a single scale on a mesh of a few thousand vertices. Moreover, the scale axis can become artificially complex at large scales, introducing new topological structures into the simplified model. In this paper, we propose a progressive medial axis simplification method, inspired by surface optimization techniques, which retains the geometric intuition of the scale axis transform. We compute a hierarchy of simplified medial axes by means of successive edge collapses of the input medial axis. These operations prevent the creation of the artificial tunnels that can occur in the original scale axis transform. As a result, our progressive simplification approach computes the complete hierarchy of scales in a few seconds on typical input medial axes. We show how this variation of the scale axis transform impacts the resulting medial structure.
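    The successive edge-collapse scheme described in the abstract can be sketched as a greedy priority-queue loop over a medial-axis graph of spheres. This is a minimal illustration, not the paper's method: the collapse cost and merge rule below (centre distance plus radius difference, midpoint merge) are hypothetical stand-ins for a scale-axis-inspired error, and no topology checks are performed.

```python
import heapq

def simplify_medial_axis(spheres, edges, target):
    """Greedy edge-collapse of a medial-axis graph.

    spheres: {node_id: (x, y, z, r)}; edges: list of (a, b) pairs.
    Repeatedly collapses the cheapest edge, merging b into a
    (midpoint centre, larger radius), until `target` spheres remain.
    Stale heap entries are skipped lazily; changed costs are not
    re-validated on pop, which is good enough for a sketch.
    """
    adj = {v: set() for v in spheres}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)

    def cost(a, b):
        xa, ya, za, ra = spheres[a]
        xb, yb, zb, rb = spheres[b]
        d = ((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2) ** 0.5
        return d + abs(ra - rb)  # deviation of centres and radii

    heap = [(cost(a, b), a, b) for a, b in edges]
    heapq.heapify(heap)
    alive = set(spheres)

    while len(alive) > target and heap:
        _, a, b = heapq.heappop(heap)
        if a not in alive or b not in alive or b not in adj[a]:
            continue  # stale entry from an earlier collapse
        xa, ya, za, ra = spheres[a]
        xb, yb, zb, rb = spheres[b]
        spheres[a] = ((xa + xb) / 2, (ya + yb) / 2, (za + zb) / 2, max(ra, rb))
        for n in adj[b] - {a}:           # reconnect b's neighbours to a
            adj[n].discard(b)
            adj[n].add(a)
            adj[a].add(n)
        adj[a].discard(b)
        alive.discard(b)
        for n in adj[a]:                 # refresh costs around the merged node
            heapq.heappush(heap, (cost(a, n), a, n))
    return ({v: spheres[v] for v in alive},
            {(a, b) for a in alive for b in adj[a] if a < b})
```

    On a chain of five unit spheres, for example, collapsing down to two nodes leaves a single edge between the two surviving merged spheres.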

    Analysis of a Physically Realistic Film Grain Model, and a Gaussian Film Grain Synthesis Algorithm

    No full text
    International audience
    Film grain is a highly valued characteristic of analog images, so realistic digital film grain synthesis is an important objective for many modern photographers and film-makers. We carry out a theoretical analysis of a physically realistic film grain model, based on a Boolean model, and derive expressions for the expected value and covariance of the film grain texture. We approximate these quantities using a Monte Carlo simulation, and use them to propose a film grain synthesis algorithm based on Gaussian textures. With numerical and visual experiments, we demonstrate the correctness and subjective qualities of the proposed algorithm.
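    The expected value of a stationary Boolean model has a classical closed form: with Poisson grain-centre intensity λ and disk radius r, the covered fraction is E = 1 − exp(−λπr²). The sketch below (hypothetical function names, pure-Python Poisson sampling) checks this identity by Monte Carlo, in the spirit of the paper's simulation-based approximation:

```python
import math
import random

def sample_poisson(mean, rng):
    """Knuth's Poisson sampler (adequate for modest means)."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def estimate_coverage(lam, r, trials=4000, seed=0):
    """Fraction of realisations in which the origin is covered by a
    Boolean model with Poisson(lam) disk centres of radius r.
    Centres are drawn in the square [-r, r]^2, which contains every
    centre whose disk can cover the origin."""
    rng = random.Random(seed)
    area = (2 * r) ** 2
    covered = 0
    for _ in range(trials):
        for _ in range(sample_poisson(lam * area, rng)):
            x, y = rng.uniform(-r, r), rng.uniform(-r, r)
            if x * x + y * y <= r * r:   # this grain covers the origin
                covered += 1
                break
    return covered / trials
```

    For lam = 1 and r = 0.5 the closed form gives 1 − exp(−π/4) ≈ 0.544, and the estimate converges to it as `trials` grows.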

    VoxMorph: 3-Scale Freeform Deformation of Large Voxel Grids

    No full text
    International audience
    We propose VoxMorph, a new interactive freeform deformation tool for high-resolution voxel grids. Our system exploits cages for high-level deformation control. We tackle the scalability issue by introducing a new 3-scale deformation algorithm composed of a high-quality as-rigid-as-possible deformation at coarse scale, a quasi-conformal space deformation at mid-scale, and a new deformation-adaptive local linear technique at fine scale. The first two scales are applied interactively on a visualization envelope, while the complete full-resolution deformation is computed as a post-process after the interactive session, resulting in a high-resolution voxel grid containing the deformed model. We tested our system on various real-world datasets and demonstrate that our approach offers a good balance between performance and quality.
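    Cage-based control of the kind described above can be illustrated, at its very simplest, by trilinear interpolation of a deformed unit-cube cage. This toy sketch stands in for the paper's far richer three-scale pipeline (it has no as-rigid-as-possible or quasi-conformal energy); the function name and cage layout are assumptions:

```python
def trilinear_ffd(p, cage):
    """Deform a point p = (x, y, z) of the unit cube by trilinear
    interpolation of eight displaced cage corners, where
    cage[i][j][k] is the new position of rest corner (i, j, k)."""
    x, y, z = p
    out = [0.0, 0.0, 0.0]
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                # standard trilinear weight of corner (i, j, k)
                w = ((x if i else 1 - x)
                     * (y if j else 1 - y)
                     * (z if k else 1 - z))
                for a in range(3):
                    out[a] += w * cage[i][j][k][a]
    return tuple(out)
```

    With the cage corners at their rest positions the map is the identity; translating every corner translates every interior point by the same amount.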

    Persistence Atlas for Critical Point Variability in Ensembles

    No full text
    International audience
    This paper presents a new approach for the visualization and analysis of the spatial variability of features of interest represented by critical points in ensemble data. Our framework, called Persistence Atlas, enables the visualization of the dominant spatial patterns of critical points, along with statistics regarding their occurrence in the ensemble. The persistence atlas represents each dominant pattern in the geometrical domain in the form of a confidence map for the appearance of critical points. As a by-product, our method also provides 2-dimensional layouts of the entire ensemble, highlighting the main trends at a global level. Our approach is based on the new notion of Persistence Map, a measure of the geometric density of critical points that leverages the robustness to noise of topological persistence to better emphasize salient features. We show how to leverage spectral embedding to represent the ensemble members as points in a low-dimensional Euclidean space, where distances between points measure the dissimilarities between critical point layouts and where statistical tasks, such as clustering, can be easily carried out. Further, we show how the notion of mandatory critical point can be leveraged to estimate, for each cluster, confidence regions for the appearance of critical points. Most of the steps of this framework can be trivially parallelized and we show how to implement them efficiently. Extensive experiments demonstrate the relevance of our approach. The accuracy of the confidence regions provided by the persistence atlas is quantitatively evaluated and compared to a baseline strategy using an off-the-shelf clustering approach. We illustrate the importance of the persistence atlas in a variety of real-life datasets, where clear trends in feature layouts are identified and analyzed. We provide a lightweight VTK-based C++ implementation of our approach that can be used for reproduction purposes.
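    The spectral-embedding step can be sketched with a plain eigendecomposition of a graph Laplacian built from pairwise dissimilarities. This is a generic illustration, not the paper's implementation: the Gaussian affinity and median-bandwidth heuristic below are assumptions standing in for the dissimilarities between critical point layouts.

```python
import numpy as np

def spectral_embed(D, dim=2, sigma=None):
    """Embed n items, given an n-by-n symmetric dissimilarity matrix D,
    as n points in `dim` dimensions, so that standard statistical tasks
    (e.g. clustering) can run on Euclidean coordinates."""
    if sigma is None:
        sigma = np.median(D[D > 0])        # common bandwidth heuristic
    W = np.exp(-(D / sigma) ** 2)          # Gaussian affinity
    L = np.diag(W.sum(axis=1)) - W         # unnormalised graph Laplacian
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    return vecs[:, 1:1 + dim]              # skip the constant eigenvector
```

    On data with two well-separated groups, the first non-trivial eigenvector (the Fiedler vector) places the groups on opposite sides of the embedding, which is what makes downstream clustering easy.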

    Realistic Film Grain Rendering

    No full text
    International audience
    Film grain is the unique texture that results from the silver-halide-based analog photographic process. Film emulsions are made up of microscopic photo-sensitive silver grains, and the fluctuating density of these grains leads to what is known as film grain. This texture is valued by photographers and film directors for its artistic value. We present two implementations of a film grain rendering algorithm based on a physically realistic film grain model. The rendering algorithm uses a Monte Carlo simulation to determine the value of each output rendered pixel. A significant advantage of using this model is that the images can be rendered at any resolution, so that arbitrary zoom factors are possible, even to the point where the individual grains can be observed. We provide a method to choose the best implementation automatically with respect to execution time. The C++ code for this work is available, as well as an online demo: http://dev.ipol.im/~nfaraj/ipol_demo/film_grain/
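    A per-pixel Monte Carlo renderer in this spirit can be sketched for a constant grey level: the grain density is chosen so that the Boolean model's mean coverage reproduces the input value u (u = 1 − exp(−λπr²)), and each output pixel averages coverage tests at jittered sample points. Function and parameter names are illustrative; the published algorithm additionally handles arbitrary input images and deterministic per-cell grain seeding.

```python
import math
import random

def sample_poisson(mean, rng):
    """Knuth's Poisson sampler (adequate for modest means)."""
    L = math.exp(-mean)
    k, p = 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def render_grain_patch(u, out_size=16, zoom=8.0, r=0.15, samples=4, seed=0):
    """Render an out_size x out_size grainy patch of constant grey
    level u (0 < u < 1) at the given zoom factor. Returns rows of
    per-pixel covered fractions in [0, 1]."""
    rng = random.Random(seed)
    lam = -math.log(1.0 - u) / (math.pi * r * r)   # density matching u
    side = out_size / zoom                          # patch extent in input units
    # one realisation of grain centres over the patch, padded by r
    n = sample_poisson(lam * (side + 2 * r) ** 2, rng)
    grains = [(rng.uniform(-r, side + r), rng.uniform(-r, side + r))
              for _ in range(n)]
    img = [[0.0] * out_size for _ in range(out_size)]
    for i in range(out_size):
        for j in range(out_size):
            hit = 0
            for _ in range(samples):
                # jittered sample point inside the pixel, in input units
                x = (j + rng.random()) / zoom
                y = (i + rng.random()) / zoom
                if any((x - gx) ** 2 + (y - gy) ** 2 <= r * r
                       for gx, gy in grains):
                    hit += 1
            img[i][j] = hit / samples
    return img
```

    Because the patch is resampled in continuous input coordinates, any `zoom` value works, down to the point where individual disk-shaped grains become visible; the mean of the rendered patch approaches u as the patch grows.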