
    Extending stochastic resonance for neuron models to general Lévy noise

    A recent paper by Patel and Kosko (2008) demonstrated stochastic resonance (SR) for general feedback continuous and spiking neuron models using additive Lévy noise constrained to have finite second moments. In this brief, we drop this constraint and show that their result extends to general Lévy noise models. We achieve this by showing that “large jump” discontinuities in the noise can be controlled so as to allow the stochastic model to tend to a deterministic one as the noise dissipates to zero. SR then follows by a “forbidden intervals” theorem as in Patel and Kosko's paper.
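
    For context, the following LaTeX sketch states a forbidden-interval condition in the form used in Kosko-style SR results; the exact hypotheses and symbols (threshold θ, subthreshold signal amplitude A, noise mean or location parameter μ) are assumptions based on that line of work, not quotes from this paper:

    ```latex
    % Hedged sketch of a forbidden-interval condition (not quoted from the paper).
    % \theta : neuron firing threshold
    % A      : subthreshold input-signal amplitude (A < \theta)
    % \mu    : noise mean, or Lévy location parameter when second moments do not exist
    % The SR effect is tied to the mutual information I(S,Y) between input S and
    % output Y vanishing as the noise intensity \sigma goes to zero, which happens
    % when \mu lies outside the "forbidden interval":
    \[
      \mu \notin (\theta - A,\; \theta + A)
      \quad\Longrightarrow\quad
      I(S,Y) \to 0 \ \text{ as } \ \sigma \to 0 ,
    \]
    % so some strictly positive noise level maximizes I(S,Y): the SR signature.
    ```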

    Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters being other MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics. Comment: 13 pages, 9 figures, to appear in CGF proceedings of EGSR 202
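
    To make the runtime structure concrete, here is a minimal Python sketch of the relighting dot product described above; the tiny MLP, feature lookup, and array sizes are illustrative stand-ins, not the authors' network or data:

    ```python
    # A minimal sketch (not the authors' code) of the relighting dot product:
    # radiance ~ sum_j T_j(x, reflected dir, material) * L_j over retained Haar terms,
    # with a toy MLP and feature field standing in for the learned ones.
    import numpy as np

    rng = np.random.default_rng(0)
    F, H = 16, 32                                   # feature size, hidden width (illustrative)
    feature_grid = rng.normal(size=(64, F))         # toy stand-in for the tensor-decomposed feature field
    W1 = rng.normal(size=(F + 4, H)) * 0.1
    W2 = rng.normal(size=(H, 1)) * 0.1

    def transport_coeff(x_idx, j, refl_dir, roughness):
        """Predict one Haar transport coefficient T_j with a toy 2-layer MLP."""
        feat = feature_grid[(x_idx + j) % 64]       # lookup keyed by position and wavelet index
        inp = np.concatenate([feat, refl_dir, [roughness]])
        return (np.tanh(inp @ W1) @ W2).item()

    def relight_pixel(x_idx, refl_dir, roughness, env_wavelet, top_k):
        """Dot the predicted transport coefficients with the environment's Haar coefficients."""
        return sum(transport_coeff(x_idx, j, refl_dir, roughness) * env_wavelet[j] for j in top_k)

    env_wavelet = rng.normal(size=256)              # Haar coefficients of the environment map
    print(relight_pixel(5, np.array([0.0, 0.0, 1.0]), 0.3, env_wavelet, range(64)))
    ```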

    A Precomputed Polynomial Representation for Interactive BRDF Editing with Global Illumination

    The ability to interactively edit BRDFs in their final placement within a computer graphics scene is vital to making informed choices for material properties. We significantly extend previous work on BRDF editing for static scenes (with fixed lighting and view) by developing a precomputed polynomial representation that enables interactive BRDF editing with global illumination. Unlike previous precomputation-based rendering techniques, the image is not linear in the BRDF when considering interreflections. We introduce a framework for precomputing a multi-bounce tensor of polynomial coefficients that encapsulates the nonlinear nature of the task. Significant reductions in complexity are achieved by leveraging the low-frequency nature of indirect light. We use a high-quality representation for the BRDFs at the first bounce from the eye, and lower-frequency (often diffuse) versions for further bounces. This approximation correctly captures the general global illumination in a scene, including color-bleeding, near-field object reflections, and even caustics. We adapt Monte Carlo path tracing for precomputing the tensor of coefficients for BRDF basis functions. At runtime, the high-dimensional tensors can be reduced to a simple dot product at each pixel for rendering. We present a number of examples of editing BRDFs in complex scenes, with interactive feedback rendered with global illumination.
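
    As a rough illustration of what the runtime reduction described above might look like, here is a minimal Python sketch (illustrative shapes and names, not the paper's data layout) of a pixel value expressed as a low-order polynomial in the edited BRDF's basis coefficients, with precomputed per-pixel coefficient tensors:

    ```python
    # Hedged sketch: pixel intensity as a polynomial in the edited BRDF's basis
    # coefficients c, with precomputed per-pixel tensors (shapes are illustrative).
    import numpy as np

    def pixel_intensity(T1, T2, c):
        """I(c) ~ sum_j T1[j] c_j  +  sum_{j,k} T2[j,k] c_j c_k   (direct + one interreflection)."""
        return T1 @ c + c @ T2 @ c

    rng = np.random.default_rng(1)
    n_basis = 8                               # BRDF basis functions exposed to the editor
    c = rng.uniform(size=n_basis)             # current (edited) BRDF coefficients
    T1 = rng.normal(size=n_basis)             # precomputed linear (direct) term for this pixel
    T2 = rng.normal(size=(n_basis, n_basis))  # precomputed quadratic (one-bounce) term
    print(pixel_intensity(T1, T2, c))
    ```

    Once the edited coefficients are fixed for a frame, the monomials c_j and c_j c_k can be flattened into a single vector, so the per-pixel work collapses to the dot product mentioned in the abstract.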

    Real-time Cinematic Design Of Visual Aspects In Computer-generated Images

    Creation of visually pleasing images has always been one of the main goals of computer graphics. Two important components are necessary to achieve this goal --- artists who design visual aspects of an image (such as materials or lighting) and sophisticated algorithms that render the image. Traditionally, rendering has been of greater interest to researchers, while the design part has always been deemed secondary. This has led to many inefficiencies, as artists, in order to create a stunning image, are often forced to resort to traditional, creativity-barring pipelines consisting of repeated rendering and parameter tweaking. Our work shifts the attention away from the rendering problem and focuses on design. We propose to combine non-physical editing with real-time feedback and provide artists with efficient ways of designing complex visual aspects such as global illumination or all-frequency shadows. We conform to existing pipelines by inserting our editing components into existing stages, thereby making editing of visual aspects an inherent part of the design process. Many of the examples shown in this work have been, until now, extremely hard to achieve. The non-physical aspect of our work enables artists to express themselves in more creative ways, not limited by the physical parameters of current renderers. Real-time feedback allows artists to immediately see the effects of applied modifications, and compatibility with existing workflows enables easy integration of our algorithms into production pipelines.

    Improving Shape Depiction under Arbitrary Rendering

    Based on the observation that shading conveys shape information through intensity gradients, we present a new technique called Radiance Scaling that modifies the classical shading equations to offer versatile shape depiction functionalities. It works by scaling reflected light intensities depending on both surface curvature and material characteristics. As a result, diffuse shading or highlight variations become correlated to surface feature variations, enhancing concavities and convexities. The first advantage of such an approach is that it produces satisfying results with any kind of material for direct and global illumination: we demonstrate results obtained with Phong and Ashikhmin-Shirley BRDFs, Cartoon shading, sub-Lambertian materials, and perfectly reflective or refractive objects. Another advantage is that there is no restriction on the choice of lighting environment: it works with a single light, area lights, and inter-reflections. Third, it may be adapted to enhance surface shape through the use of precomputed radiance data such as Ambient Occlusion, Prefiltered Environment Maps or Lit Spheres. Finally, our approach works in real-time on modern graphics hardware, making it suitable for any interactive 3D visualization.
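
    A toy Python illustration of the general idea (reflected intensity scaled by a curvature-driven factor); the scaling function below is an assumption for illustration and is not the paper's exact formulation:

    ```python
    # Toy illustration only: scale reflected radiance by a factor driven by signed
    # surface curvature so that convexities and concavities become easier to read.
    import numpy as np

    def radiance_scaling(L, curvature, alpha=0.5):
        """Scale reflected radiance L by a curvature-dependent factor in [1-alpha, 1+alpha].

        curvature: signed curvature estimate in [-1, 1] (negative = concave).
        alpha:     enhancement strength chosen by the user.
        """
        sigma = 1.0 + alpha * np.tanh(curvature)   # >1 on convexities, <1 in concavities
        return sigma * L

    # Example: brighten a highlight on a convex ridge, darken it in a crease.
    print(radiance_scaling(L=0.8, curvature=+0.7), radiance_scaling(L=0.8, curvature=-0.7))
    ```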

    Parallel progressive precomputed radiance transfer

    Precomputed Radiance Transfer (PRT) was introduced as a technique to enable interactive navigation and real-time relighting of rigid scenes under distant environment lighting. Evaluating radiance transport is, however, a computationally very demanding task, which precludes PRT's utilization during the model design phase, since the user must wait for long periods of time before being able to light and navigate within the model. This paper proposes and validates an approach that provides visual feedback to the user as soon as possible within the PRT context. By resorting to parallel processing and progressive refinement, the user is quickly presented with a lower lighting resolution of the virtual model, which is then progressively refined by incrementally increasing the number of incident directions taken into account in transport computations. PRT is, however, a complex algorithm that requires frequent collective communications of huge volumes of data, thus constraining the maximum achievable speedup on a parallel system. This issue is analysed and an alternative workload distribution is proposed and evaluated on a 12-node dual-processor cluster. The final solution ensures a good resource utilization rate, reducing response times from dozens of seconds to a few hundred milliseconds. Funding: Fundação para a Ciência e a Tecnologia, Project SEARCH - SErvices and Advanced Research Computing with HTC/HPC clusters.
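
    A minimal Python sketch of the progressive idea described above, under assumed data structures (per-vertex visibility and BRDF-cosine samples over a set of incident directions); it is not the paper's implementation and ignores the parallel workload distribution:

    ```python
    # Hedged sketch: accumulate transport over batches of incident directions so a
    # coarse relit result is available early and is then progressively refined.
    import numpy as np

    def progressive_transport(visibility, brdf_cos, light, batch=64):
        """Yield successively refined per-vertex radiance as more directions are added.

        visibility, brdf_cos: (n_vertices, n_dirs) samples over incident directions.
        light:                (n_dirs,) environment samples for the same directions.
        """
        n_dirs = light.shape[0]
        acc = np.zeros(visibility.shape[0])
        for start in range(0, n_dirs, batch):
            sl = slice(start, start + batch)
            acc += (visibility[:, sl] * brdf_cos[:, sl]) @ light[sl]
            done = start + min(batch, n_dirs - start)
            # Rescale by the fraction of directions integrated so far so that
            # early previews have roughly the right overall brightness.
            yield acc * (n_dirs / done)

    rng = np.random.default_rng(2)
    V = rng.integers(0, 2, size=(1000, 512)).astype(float)   # toy visibility
    B = rng.uniform(size=(1000, 512))                        # toy BRDF * cosine weights
    L = rng.uniform(size=512)                                # toy environment samples
    for preview in progressive_transport(V, B, L):
        pass  # each preview could be pushed to the display as it becomes available
    ```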

    Locally Adaptive Products for Genuine Spherical Harmonic Lighting

    Precomputed radiance transfer techniques have been broadly used to support complex illumination effects on diffuse and glossy objects. Although the wavelet domain is efficient for handling all-frequency illumination, the spherical harmonics domain is more convenient for interactively changing lights and views on the fly because of its rotational invariance. For interactive lighting, however, the number of coefficients must be limited and high-order coefficients have to be eliminated, so spherical harmonic lighting has in practice been restricted to interactive soft, diffuse lighting. In this paper, we propose a simple but practical filtering solution that uses locally adaptive products of high-order harmonic coefficients within the genuine spherical harmonic lighting framework. Our approach works on the fly in two stages. We first apply multi-level filtering to vertices to determine regions of interest where high orders of harmonics are necessary for high-frequency lighting. The initially determined regions of interest are then refined by filling in incomplete regions while traversing neighboring vertices. Even without relying on graphics hardware, the proposed method can compute high-order products of spherical harmonic lighting for both diffuse and specular lighting.
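
    The following Python sketch illustrates one way such a locally adaptive scheme could look; the energy-based heuristic, thresholds, and array shapes are assumptions for illustration, not the paper's filter:

    ```python
    # Hedged sketch: vertices whose transfer vectors carry significant high-order
    # energy use the full-order spherical-harmonic product, others a cheap low order.
    import numpy as np

    def shade_vertices(transfer, light_sh, low_order=3, full_order=10, tau=0.05):
        """transfer:  (n_vertices, full_order**2) SH transfer coefficients per vertex.
        light_sh:  (full_order**2,) SH coefficients of the environment light."""
        n_low = low_order ** 2
        # High-order energy as a fraction of total energy marks "regions of interest".
        hi_energy = np.sum(transfer[:, n_low:] ** 2, axis=1)
        total = np.sum(transfer ** 2, axis=1) + 1e-12
        roi = hi_energy / total > tau
        out = transfer[:, :n_low] @ light_sh[:n_low]   # cheap low-order product everywhere
        out[roi] = transfer[roi] @ light_sh            # full-order product only where needed
        return out, roi

    rng = np.random.default_rng(3)
    T = rng.normal(size=(5000, 100)) * np.exp(-0.1 * np.arange(100))  # energy decays with order
    Lsh = rng.normal(size=100)
    radiance, roi = shade_vertices(T, Lsh)
    print(roi.mean())   # fraction of vertices promoted to the full-order product
    ```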

    View-dependent precomputed light transport using non-linear Gaussian function approximations

    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, February 2006. Includes bibliographical references (p. 43-46). We propose a real-time method for rendering rigid objects with complex view-dependent effects under distant all-frequency lighting. Existing precomputed light transport approaches can render rich global illumination effects, but high-frequency view-dependent effects such as sharp highlights remain a challenge. We introduce a new representation of the light transport operator based on sums of Gaussians. The non-linear parameters of the representation allow for 1) arbitrary bandwidth, because scale is encoded as a direct parameter; and 2) high-quality interpolation across view and mesh triangles, because we interpolate the average direction of the incoming light, thereby preventing linear cross-fading artifacts. However, fitting the precomputed light transport data to this new representation requires solving a non-linear regression problem that is more involved than traditional linear and non-linear (truncation) approximation techniques. We present a new data fitting method based on optimization that includes energy terms aimed at enforcing good interpolation. We demonstrate that our method achieves high visual quality for a small storage cost and fast rendering time. By Paul Elijah Green. S.M.
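
    A minimal Python sketch of the two ideas in the abstract, under an assumed spherical-Gaussian-style parameterization (the thesis' exact Gaussian model and fitting procedure are not reproduced here): evaluating one lobe of the representation, and interpolating its non-linear parameters (mean direction and scale) instead of cross-fading values:

    ```python
    # Hedged sketch: a Gaussian-style transport lobe over the incoming direction,
    # and interpolation of its non-linear parameters to avoid cross-fading artifacts.
    import numpy as np

    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    def gaussian_lobe(w_i, mean_dir, scale, amp):
        """Spherical-Gaussian-style lobe: amp * exp(scale * (dot(w_i, mean_dir) - 1))."""
        return amp * np.exp(scale * (np.dot(w_i, mean_dir) - 1.0))

    def interp_lobe(a, b, t):
        """Blend the mean directions (renormalized) and lerp scale/amplitude."""
        mean = unit((1 - t) * a["mean"] + t * b["mean"])
        return {"mean": mean,
                "scale": (1 - t) * a["scale"] + t * b["scale"],
                "amp":   (1 - t) * a["amp"]   + t * b["amp"]}

    lobe_a = {"mean": unit([0.0, 0.0, 1.0]),  "scale": 50.0, "amp": 1.0}
    lobe_b = {"mean": unit([0.3, 0.0, 0.95]), "scale": 60.0, "amp": 0.8}
    m = interp_lobe(lobe_a, lobe_b, 0.5)      # lobe for an intermediate view/vertex
    print(gaussian_lobe(unit([0.1, 0.0, 0.99]), m["mean"], m["scale"], m["amp"]))
    ```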