180 research outputs found

    Real-Time Glints Rendering with Prefiltered Discrete Stochastic Microfacets

    Many real-life materials have a sparkling appearance; examples include metallic paints, sparkling fabrics, and snow. Simulating these sparkles is important for realistic rendering but expensive. Because sparkles come from small shiny particles reflecting light into a specific direction, they are very challenging for illumination simulation. Existing approaches use a 4-dimensional hierarchy, searching for light-reflecting particles simultaneously in space and direction. This approach is accurate but extremely expensive. A separable model is much faster, but still not suitable for real-time applications. The performance problem is even worse when illumination comes from environment maps, as they require either a large sample count per pixel or prefiltering. Prefiltering is incompatible with existing sparkle models due to their discrete multi-scale representation. In this paper, we present a GPU-friendly, prefiltered model for real-time simulation of sparkles and glints. Our method simulates glints under both environment maps and point light sources in real time, with an added cost of just 10 ms per frame at full high-definition resolution. Editing material properties requires extra computation but remains real-time, also with an added cost of 10 ms per frame.
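    The paper targets a prefiltered GPU model; as a rough illustration of the discrete stochastic microfacet idea it builds on, the sketch below estimates a pixel's glint count from particle density, footprint area, and a normal distribution function, then draws a stable binomial count so sparkles persist across frames. The function names, the Beckmann-style NDF, and the acceptance solid angle are assumptions for illustration, not the paper's actual model.

        import numpy as np

        def glint_count(footprint_area, particle_density, cos_h, roughness,
                        cell_id, seed=0):
            # Probability that one particle's micro-normal falls inside a small
            # acceptance cone around the half vector, using an assumed
            # Beckmann-style normal distribution function.
            d = (np.exp(-(1.0 - cos_h**2) / (roughness**2 * cos_h**2))
                 / (np.pi * roughness**2 * cos_h**4))
            solid_angle = 1e-3  # assumed acceptance cone (steradians)
            p_reflect = min(1.0, d * solid_angle)

            # Expected number of particles inside the pixel footprint.
            n_particles = int(particle_density * footprint_area)

            # Stable draw: seeding by the spatial cell id keeps the same
            # sparkles alive from frame to frame (temporal coherence).
            rng = np.random.default_rng(hash((cell_id, seed)) & 0xFFFFFFFF)
            return rng.binomial(n_particles, p_reflect)

        # Example: a 1 mm^2 footprint with 5000 particles/mm^2, near-specular.
        print(glint_count(1.0, 5000.0, cos_h=0.98, roughness=0.1, cell_id=42))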

    Radiance Scaling for Versatile Surface Enhancement

    We present a novel technique called Radiance Scaling for the depiction of surface shape through shading. It adjusts reflected light intensities in a way that depends on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated with surface feature variations, enhancing surface concavities and convexities. This approach is more versatile than previous methods. First, it produces satisfying results with any kind of material: we demonstrate results obtained with Phong and Ashikhmin BRDFs, cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Second, it imposes no restriction on the lighting environment: it does not require a dense sampling of lighting directions and works even with a single light. Third, it makes it possible to enhance surface shape through the use of precomputed radiance data such as Ambient Occlusion, Prefiltered Environment Maps, or Lit Spheres. Our approach works in real time on modern graphics hardware and is faster than previous techniques.
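    As a minimal illustration of the idea stated above, the sketch below scales per-pixel reflected radiance by a curvature-dependent factor so that intensity variations follow surface concavities and convexities. The particular scaling function and the alpha parameter are assumptions; the paper defines its own scaling function.

        import numpy as np

        def radiance_scaling(radiance, curvature, alpha=0.5):
            # Map curvature into [-1, 1]: convexities positive, concavities negative.
            kappa = np.tanh(curvature)
            # Assumed scaling: brighten already-bright pixels on convexities and
            # darken them on concavities (and vice versa for dark pixels), so
            # shading variations correlate with surface features.
            scale = 1.0 + alpha * kappa * (2.0 * radiance - 1.0)
            return np.clip(radiance * scale, 0.0, 1.0)

        # Example: same mid-bright radiance on a convex vs. a concave point.
        print(radiance_scaling(np.array([0.7, 0.7]), np.array([2.0, -2.0])))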

    Real-time rendering of realistic surface diffraction using low-rank factorisation

    We propose a novel approach for real-time rendering of diffraction effects in surface reflectance in arbitrary environments. Such renderings are usually extremely expensive, as they require computing a convolution at real-time framerates. In the case of diffraction, the diffraction lobes usually have high-frequency details that can only be captured with high-resolution convolution kernels, which makes the computation even more expensive. Our method uses a low-rank factorisation of the diffraction lookup table to approximate a 2D convolution kernel by two simpler low-rank kernels, which allows the convolution to be computed at real-time framerates using two rendering passes. We show realistic renderings in arbitrary environments and achieve performance of 50 to 100 FPS, making it possible to use such a technique in real-time applications such as video games and VR.
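    The low-rank trick itself is standard and easy to illustrate: an SVD of the 2D kernel yields separable rank-1 terms, each of which can be applied as two cheap 1D convolution passes. The sketch below (plain NumPy/SciPy, not the paper's GPU implementation) shows the factorisation and the two-pass evaluation.

        import numpy as np
        from scipy.ndimage import convolve1d

        def low_rank_factorise(kernel_2d, rank=2):
            # SVD: kernel ~= sum_i s_i * outer(u_i, v_i). Keep the top `rank`
            # terms; each (column, row) pair defines a separable 1D x 1D filter.
            u, s, vt = np.linalg.svd(kernel_2d)
            return [(np.sqrt(s[i]) * u[:, i], np.sqrt(s[i]) * vt[i, :])
                    for i in range(rank)]

        def convolve_low_rank(image, pairs):
            # Two 1D passes per pair instead of one full 2D convolution.
            out = np.zeros_like(image, dtype=float)
            for col, row in pairs:
                out += convolve1d(convolve1d(image, row, axis=1), col, axis=0)
            return out

        # Example: a 15x15 kernel that is exactly rank 2, approximated at rank 2.
        x = np.linspace(-2, 2, 15)
        kernel = np.exp(-np.add.outer(x**2, x**2)) * (1 + 0.2 * np.outer(x, x))
        image = np.random.rand(128, 128)
        approx = convolve_low_rank(image, low_rank_factorise(kernel, rank=2))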

    Autonomous Discovery of Motor Constraints in an Intrinsically-Motivated Vocal Learner

    This work introduces new results on the modeling of early vocal development using artificial cognitive architectures and a simulated vocal tract. The problem is addressed using intrinsically motivated learning algorithms for autonomous sensorimotor exploration, a class of algorithms belonging to the family of active learning architectures. The artificial agent is able to autonomously select goals for exploring its own sensorimotor system in regions where its competence to execute intended goals improves. We propose to include a somatosensory system that provides a proprioceptive feedback signal to reinforce learning through the autonomous discovery of motor constraints. Constraints are represented by a somatosensory model that is unknown beforehand to the learner. Both the sensorimotor and the somatosensory systems are modeled using Gaussian mixture models. We argue that an architecture that includes a somatosensory model reduces redundancy in the sensorimotor model and drives the learning process more efficiently than algorithms that take into account only auditory feedback. The role of the proposed system is to predict whether an undesired collision within the vocal tract is likely to occur under a given motor configuration. Compromised motor configurations are thus rejected, guaranteeing that the agent is less prone to violate its own constraints.
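    As a hypothetical illustration of the rejection step described above, the sketch below models the somatosensory feedback as two Gaussian mixture models, one fitted to collision-free motor configurations and one to configurations that caused collisions, and vetoes candidate goals whose likelihood is higher under the collision model. Dimensions, component counts, and the training data are invented for the example.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        N_MOTOR = 7  # hypothetical number of vocal-tract motor parameters
        rng = np.random.default_rng(0)

        # Stand-in exploration history: collision-free vs. colliding samples.
        safe = rng.uniform(0.2, 0.8, size=(200, N_MOTOR))
        collide = rng.uniform(0.0, 1.0, size=(200, N_MOTOR))

        gmm_safe = GaussianMixture(n_components=3, random_state=0).fit(safe)
        gmm_collide = GaussianMixture(n_components=3, random_state=0).fit(collide)

        def collision_likely(motor_config):
            # Compare average log-likelihoods under the two somatosensory models.
            x = np.atleast_2d(motor_config)
            return gmm_collide.score(x) > gmm_safe.score(x)

        # Goal-babbling loop: reject compromised configurations before execution.
        for _ in range(5):
            candidate = rng.uniform(0.0, 1.0, size=N_MOTOR)
            if not collision_likely(candidate):
                pass  # execute on the simulated vocal tract, update the models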

    Deep Appearance Maps

    We propose a deep representation of appearance, i.e., the relation of color, surface orientation, viewer position, material, and illumination. Previous approaches have used deep learning to extract classic appearance representations relating to reflectance model parameters (e.g., Phong) or illumination (e.g., HDR environment maps). We suggest directly representing appearance itself as a network we call a deep appearance map (DAM). This is a 4D generalization over 2D reflectance maps, which hold the view direction fixed. First, we show how a DAM can be learned from images or video frames and later used to synthesize appearance, given new surface orientations and viewer positions. Second, we demonstrate how another network can be used to map from an image or video frames to a DAM network that reproduces this appearance, without using a lengthy optimization such as stochastic gradient descent (learning-to-learn). Finally, we show the example of an appearance estimation-and-segmentation task, mapping from an image showing multiple materials to multiple deep appearance maps.
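    As a minimal sketch of what such a network might look like, the PyTorch snippet below maps a 4D input (projected surface orientation plus view direction) to RGB and fits it to observed samples; the architecture, layer sizes, and input parameterization are assumptions, not the paper's exact network.

        import torch
        import torch.nn as nn

        class DeepAppearanceMap(nn.Module):
            # 4D input: (normal.xy, view.xy) projected onto the image plane.
            def __init__(self, hidden=64):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Linear(4, hidden), nn.ReLU(),
                    nn.Linear(hidden, hidden), nn.ReLU(),
                    nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
                )

            def forward(self, normal_xy, view_xy):
                return self.net(torch.cat([normal_xy, view_xy], dim=-1))

        # Fit to (orientation, view, color) samples gathered from images or
        # video frames; random placeholders stand in for real observations.
        dam = DeepAppearanceMap()
        opt = torch.optim.Adam(dam.parameters(), lr=1e-3)
        n = torch.rand(512, 2) * 2 - 1
        v = torch.rand(512, 2) * 2 - 1
        rgb = torch.rand(512, 3)
        for _ in range(200):
            opt.zero_grad()
            nn.functional.mse_loss(dam(n, v), rgb).backward()
            opt.step()

        # Query the learned appearance for a new orientation/view pair.
        color = dam(torch.zeros(1, 2), torch.zeros(1, 2))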
