
    W-NINE: a two-stage emulation platform for mobile and wireless systems

    More and more applications and protocols now run on wireless networks. Testing the implementation of such applications and protocols is a real challenge, as the positions of the mobile terminals and environmental effects strongly affect overall performance. Network emulation is often perceived as a good trade-off between experiments on operational wireless networks and discrete-event simulations in tools such as OPNET or ns-2. However, ensuring repeatability and realism in network emulation while taking mobility in a wireless environment into account is very difficult. This paper proposes a network emulation platform, called W-NINE, based on off-line computations preceding online pattern-based traffic shaping. The underlying concepts of repeatability, dynamicity, accuracy and realism are defined in the emulation context. Two simple case studies illustrate the validity of our approach with respect to these concepts.
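
    The two-stage split described in the abstract can be sketched in a few lines: an offline stage turns a mobility scenario into a time-indexed pattern of link conditions, and an online stage replays that pattern as traffic-shaping rules. The sketch below is a minimal illustration of that idea, not W-NINE's actual implementation; the path-loss model, parameter values, and function names are assumptions.

    # Illustrative sketch only (not W-NINE code): stage 1 precomputes a pattern of
    # link conditions from a mobility scenario; stage 2 replays it as traffic shaping.
    import math, random

    def precompute_pattern(positions, step=1.0, tx_range=100.0):
        """Stage 1 (offline): map node position to (time, bandwidth, delay, loss) per step."""
        pattern = []
        for t, (x, y) in enumerate(positions):
            d = math.hypot(x, y)                          # distance to an access point at the origin
            if d > tx_range:
                pattern.append((t * step, 0.0, None, 1.0))           # out of range: everything lost
            else:
                bw = 54e6 * (1.0 - d / tx_range)                     # crude rate-vs-distance model
                delay = 0.001 + d * 1e-6                             # fixed latency + propagation
                loss = min(0.5, (d / tx_range) ** 4)                 # loss grows near the cell edge
                pattern.append((t * step, bw, delay, loss))
        return pattern

    def shape(packet_times, pattern, step=1.0):
        """Stage 2 (online): apply the precomputed pattern to a packet trace."""
        delivered = []
        for ts in packet_times:
            _, bw, delay, loss = pattern[min(int(ts / step), len(pattern) - 1)]
            if bw and random.random() > loss:
                delivered.append(ts + delay)              # dropped packets are simply omitted
        return delivered

    # Example: a terminal walking away from the access point, one packet per second.
    walk = [(5.0 * t, 0.0) for t in range(30)]
    print(len(shape([float(t) for t in range(30)], precompute_pattern(walk))))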

    Learning Object-Centric Neural Scattering Functions for Free-viewpoint Relighting and Scene Composition

    Photorealistic object appearance modeling from 2D images is a long-standing topic in vision and graphics. While neural implicit methods (such as Neural Radiance Fields) have shown high-fidelity view synthesis results, they cannot relight the captured objects. More recent neural inverse rendering approaches have enabled object relighting, but they represent surface properties as simple BRDFs and therefore cannot handle translucent objects. We propose Object-Centric Neural Scattering Functions (OSFs) for learning to reconstruct object appearance from only images. OSFs not only support free-viewpoint object relighting, but can also model both opaque and translucent objects. While accurately modeling subsurface light transport for translucent objects can be highly complex and even intractable for neural methods, OSFs learn to approximate the radiance transfer from a distant light to an outgoing direction at any spatial location. This approximation avoids explicitly modeling complex subsurface scattering, making learning a neural implicit model tractable. Experiments on real and synthetic data show that OSFs accurately reconstruct appearances for both opaque and translucent objects, allowing faithful free-viewpoint relighting as well as scene composition. Project website: https://kovenyu.com/osf/. Journal extension of arXiv:2012.08503. The first two authors contributed equally to this work.
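
    As a rough illustration of the interface such a scattering function exposes, the sketch below evaluates a learned function f(x, omega_in, omega_out) and relights a point by averaging transfer times incident radiance over sampled distant-light directions. The tiny random-weight MLP is a stand-in for a trained network; all shapes and names are assumptions rather than the paper's architecture.

    # Illustrative stand-in for a trained OSF: f(x, omega_in, omega_out) -> RGB transfer.
    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(9, 64)), np.zeros(64)      # input: x (3) + omega_in (3) + omega_out (3)
    W2, b2 = rng.normal(size=(64, 3)), np.zeros(3)       # output: RGB radiance transfer

    def osf(x, w_in, w_out):
        h = np.maximum(np.concatenate([x, w_in, w_out]) @ W1 + b1, 0.0)   # ReLU hidden layer
        return np.maximum(h @ W2 + b2, 0.0)                               # non-negative transfer

    def relight(x, w_out, light_dirs, light_radiance):
        """Monte Carlo relighting: average transfer * incoming radiance over light samples."""
        vals = np.stack([osf(x, d, w_out) * L for d, L in zip(light_dirs, light_radiance)])
        return vals.mean(axis=0)

    # Example: relight one surface point under 128 random distant-light samples.
    dirs = rng.normal(size=(128, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    print(relight(np.zeros(3), np.array([0.0, 0.0, 1.0]), dirs, rng.uniform(size=(128, 3))))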

    Doctor of Philosophy

    Real-time global illumination is the next frontier in real-time rendering. In an attempt to generate realistic images, games have followed the film industry into physically based shading and will soon begin integrating global illumination techniques. Traditional methods require too much memory and too much time to compute for real-time use. With Modular and Delta Radiance Transfer, we precompute a scene-independent, low-frequency basis that allows us to perform complex indirect lighting calculations in a much lower-dimensional subspace, with a reduced memory footprint and real-time execution. The results are then applied as a light map to many different scenes. To improve the low-frequency results, we also introduce a novel screen-space ambient occlusion technique that allows us to generate a smoother result with fewer samples. These three techniques, spanning low and high frequencies, together provide a viable indirect lighting solution that runs in milliseconds on today's hardware, offering a useful new tool for indirect lighting in real-time graphics.
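
    The subspace idea in the abstract reduces per-frame indirect lighting to a couple of small matrix products: project the direct light into a low-frequency basis, apply a precomputed reduced transfer operator, and expand back to light-map values. The sketch below is a minimal, hypothetical illustration of that pipeline; the basis B, the operator T, and all sizes are placeholders, not the dissertation's code.

    # Illustrative sketch of indirect lighting in a reduced basis (all sizes are placeholders).
    import numpy as np

    rng = np.random.default_rng(1)
    n_full, n_reduced = 4096, 32                                 # light-map texels vs. subspace size

    B, _ = np.linalg.qr(rng.normal(size=(n_full, n_reduced)))    # orthonormal low-frequency basis
    T = 0.1 * rng.normal(size=(n_reduced, n_reduced))            # precomputed reduced transfer operator

    def indirect_lightmap(direct):
        """Per frame: project direct light, apply reduced transfer, expand back to the light map."""
        coeffs = B.T @ direct                                    # n_full -> n_reduced
        return B @ (T @ coeffs)                                  # n_reduced -> n_full texel values

    print(indirect_lightmap(rng.uniform(size=n_full)).shape)     # (4096,)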

    Neural Free-Viewpoint Relighting for Glossy Indirect Illumination

    Precomputed Radiance Transfer (PRT) remains an attractive solution for real-time rendering of complex light transport effects such as glossy global illumination. After precomputation, we can relight the scene with new environment maps while changing viewpoint in real-time. However, practical PRT methods are usually limited to low-frequency spherical harmonic lighting. All-frequency techniques using wavelets are promising but have so far had little practical impact. The curse of dimensionality and much higher data requirements have typically limited them to relighting with fixed view or only direct lighting with triple product integrals. In this paper, we demonstrate a hybrid neural-wavelet PRT solution to high-frequency indirect illumination, including glossy reflection, for relighting with changing view. Specifically, we seek to represent the light transport function in the Haar wavelet basis. For global illumination, we learn the wavelet transport using a small multi-layer perceptron (MLP) applied to a feature field as a function of spatial location and wavelet index, with reflected direction and material parameters as additional MLP inputs. We optimize/learn the feature field (compactly represented by a tensor decomposition) and MLP parameters from multiple images of the scene under different lighting and viewing conditions. We demonstrate real-time (512 x 512 at 24 FPS, 800 x 600 at 13 FPS) precomputed rendering of challenging scenes involving view-dependent reflections and even caustics. Comment: 13 pages, 9 figures, to appear in CGF proceedings of EGSR 2023.
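
    The relighting step described above can be sketched as follows: for a shading point, a small MLP conditioned on a feature-field lookup, the reflected direction, material parameters, and a wavelet index predicts Haar-wavelet transport coefficients, and the pixel value is their dot product with the environment map's wavelet coefficients. The sketch below is a hedged stand-in with random weights and a nearest-cell feature grid; the sizes, the top-k coefficient selection, and all names are assumptions, not the paper's implementation.

    # Illustrative stand-in (random weights, assumed sizes), not the paper's implementation.
    import numpy as np

    rng = np.random.default_rng(2)
    n_wavelets, feat_dim = 1024, 16
    feature_grid = rng.normal(size=(8, 8, 8, feat_dim))          # stand-in for the tensor-decomposed field

    W1, b1 = 0.1 * rng.normal(size=(feat_dim + 5, 64)), np.zeros(64)
    W2, b2 = 0.1 * rng.normal(size=(64, 1)), np.zeros(1)

    def transport_coeff(x, refl_dir, roughness, wavelet_idx):
        """MLP(feature(x), reflected direction, material, wavelet index) -> one transport coefficient."""
        i, j, k = np.clip((x + 1.0) * 4.0, 0, 7).astype(int)     # nearest-cell feature lookup
        inp = np.concatenate([feature_grid[i, j, k], refl_dir,
                              [roughness, wavelet_idx / n_wavelets]])
        h = np.maximum(inp @ W1 + b1, 0.0)
        return (h @ W2 + b2).item()

    def shade(x, refl_dir, roughness, env_wavelet_coeffs, k=64):
        """Relight using only the k strongest environment wavelet coefficients."""
        top = np.argsort(-np.abs(env_wavelet_coeffs))[:k]
        return sum(transport_coeff(x, refl_dir, roughness, i) * env_wavelet_coeffs[i] for i in top)

    print(shade(np.zeros(3), np.array([0.0, 0.0, 1.0]), 0.2, rng.normal(size=n_wavelets)))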

    Neural Precomputed Radiance Transfer

    Recent advances in neural rendering indicate immense promise for architectures that learn light transport, allowing efficient rendering of global illumination effects once such methods are trained. The training phase of these methods can be seen as a form of pre-computation, which has a long-standing history in computer graphics. In particular, Precomputed Radiance Transfer (PRT) achieves real-time rendering by freezing some variables of the scene (geometry, materials) and encoding the distribution of others, allowing interactive rendering at runtime. We adopt the same configuration as PRT – global illumination of static scenes under dynamic environment lighting – and investigate different neural network architectures, inspired by the design principles and theoretical analysis of PRT. We introduce four different architectures, and show that those based on knowledge of light transport models and PRT-inspired principles improve the quality of global illumination predictions at equal training time and network size, without the need for high-end ray-tracing hardware.
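
    For reference, the PRT configuration the paper adopts (static scene, dynamic environment lighting) reduces per-frame relighting to a dot product between a precomputed per-vertex transfer vector and the environment map's spherical-harmonic coefficients; a neural variant replaces the stored vectors with a network's prediction. The sketch below illustrates only that classical baseline computation, with assumed sizes.

    # Classical PRT relighting baseline with assumed sizes (illustration only).
    import numpy as np

    rng = np.random.default_rng(3)
    n_vertices, n_sh = 10_000, 16                      # 4 spherical-harmonic bands -> 16 coefficients

    transfer = rng.normal(size=(n_vertices, n_sh))     # precomputed (or network-predicted) transfer
    env_sh = rng.normal(size=n_sh)                     # changes every frame with the environment map

    radiance = transfer @ env_sh                       # relighting: one matrix-vector product per frame
    print(radiance.shape)                              # (10000,)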

    A line-binned treatment of opacities for the spectra and light curves from neutron star mergers

    The electromagnetic observations of GW170817 were able to dramatically increase our understanding of neutron star mergers beyond what we learned from gravitational waves alone. These observations provided insight on all aspects of the merger, from the nature of the gamma-ray burst to the characteristics of the ejected material. The ejecta of neutron star mergers are expected to produce such electromagnetic transients, called kilonovae or macronovae. Characteristics of the ejecta include large velocity gradients, relative to supernovae, and the presence of heavy r-process elements, which pose significant challenges to the accurate calculation of radiative opacities and radiation transport. For example, these opacities include a dense forest of bound-bound features arising from near-neutral lanthanide and actinide elements. Here we investigate the use of fine-structure, line-binned opacities that preserve the integral of the opacity over frequency. Advantages of this area-preserving approach over the traditional expansion-opacity formalism include the ability to pre-calculate opacity tables that are independent of the type of hydrodynamic expansion and that eliminate the computational expense of calculating opacities within radiation-transport simulations. Tabular opacities are generated for all 14 lanthanides as well as a representative actinide element, uranium. We demonstrate that spectral simulations produced with the line-binned opacities agree well with results produced with the more accurate continuous Monte Carlo Sobolev approach, as well as with the commonly used expansion-opacity formalism. Additional investigations illustrate the convergence of opacity with respect to the number of included lines, and elucidate sensitivities to different atomic physics approximations, such as fully and semi-relativistic approaches. Comment: 27 pages, 22 figures. arXiv admin note: text overlap with arXiv:1702.0299
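
    The area-preserving binning described above can be sketched directly: each bound-bound line's frequency-integrated opacity is deposited into the bin containing its line-centre frequency and divided by the bin width, so the frequency integral of the binned opacity equals the summed line strengths. The sketch below uses a synthetic line list and arbitrary units purely for illustration.

    # Illustrative sketch with a synthetic line list and arbitrary units.
    import numpy as np

    rng = np.random.default_rng(4)
    nu_edges = np.linspace(1e14, 1e15, 257)                       # 257 edges defining 256 frequency bins [Hz]
    line_nu = rng.uniform(1e14, 1e15, size=100_000)               # line-centre frequencies
    line_strength = rng.lognormal(mean=0.0, sigma=2.0, size=line_nu.size)   # integrated line opacities

    def line_binned_opacity(nu_edges, line_nu, line_strength):
        """Deposit each line's integrated strength into its bin, then divide by the bin width."""
        idx = np.clip(np.searchsorted(nu_edges, line_nu) - 1, 0, len(nu_edges) - 2)
        binned = np.bincount(idx, weights=line_strength, minlength=len(nu_edges) - 1)
        return binned / np.diff(nu_edges)                         # opacity per unit frequency in each bin

    kappa = line_binned_opacity(nu_edges, line_nu, line_strength)
    # Area preservation: the integral of the binned opacity equals the total line strength.
    print(np.isclose(np.sum(kappa * np.diff(nu_edges)), line_strength.sum()))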