
    Line sampling in participating media

    Participating media, such as fog, fire, dust, and smoke, surround us in our daily life. Rendering participating media efficiently has always been a challenging task in physically based rendering. Line sampling has recently been derived as an alternative to point sampling for direct lighting. Since line sampling takes visibility into account, it can reduce variance for the same render time compared to point sampling. We leverage the benefits of line sampling in the context of evaluating direct lighting in participating media. We express the direct lighting as a three-dimensional integral and perform line sampling along any one of its dimensions. We show how to apply multiple importance sampling (MIS) with common point samples, improving the robustness of the estimators. We also show the drawbacks of MIS between different line directions. We demonstrate the efficiency of line sampling in participating media and discuss the results under different setups, such as different phase functions and occluder sizes.
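    The combination step can be illustrated with the standard MIS balance heuristic. The sketch below is not the paper's estimator; the 1D integrand, the two sampling densities, and the sample counts are illustrative assumptions, with one strategy standing in for point samples and the other for an alternative strategy.

```python
import numpy as np

# Minimal sketch of the MIS balance heuristic combining two sampling
# strategies for a 1D integral; integrand and PDFs are illustrative only.
rng = np.random.default_rng(0)

def f(x):                      # toy "direct lighting" integrand on [0, 1]
    return np.exp(-4.0 * x) * (1.0 + 0.5 * np.sin(8.0 * x))

# Strategy A: uniform samples, pdf_a(x) = 1 on [0, 1].
# Strategy B: linear pdf_b(x) = 2 * (1 - x), favouring small x where f is large.
def pdf_a(x): return np.ones_like(x)
def pdf_b(x): return 2.0 * (1.0 - x)

n_a, n_b = 64, 64
xa = rng.uniform(0.0, 1.0, n_a)
xb = 1.0 - np.sqrt(1.0 - rng.uniform(0.0, 1.0, n_b))  # inverse-CDF sample of pdf_b

def balance_weight(x, n_self, pdf_self, n_other, pdf_other):
    # w_i(x) = n_i p_i(x) / sum_j n_j p_j(x)
    return n_self * pdf_self(x) / (n_self * pdf_self(x) + n_other * pdf_other(x))

est = (np.sum(balance_weight(xa, n_a, pdf_a, n_b, pdf_b) * f(xa) / pdf_a(xa)) / n_a
     + np.sum(balance_weight(xb, n_b, pdf_b, n_a, pdf_a) * f(xb) / pdf_b(xb)) / n_b)
print("MIS estimate:", est)
```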

    The self-coherent camera as a focal plane fine phasing sensor

    Direct imaging of Earth-like exoplanets requires high contrast imaging capability and high angular resolution. Primary mirror segmentation is a key technological solution for large-aperture telescopes because it opens the path toward significantly increasing the angular resolution. The segments are kept aligned by an active optics system that must reduce segment misalignments below tens of nm RMS to achieve the high optical quality required for astronomical science programs. The development of cophasing techniques is mandatory for the next generation of space- and ground-based segmented telescopes, which both share the need for increasing spatial resolution. We propose a new focal plane cophasing sensor that exploits the scientific image of a coronagraphic instrument to simultaneously retrieve piston and tip-tilt misalignments. The self-coherent camera phasing sensor (SCC-PS) combines the SCC properties with segmented telescope architectures through adapted segment misalignment estimators and image processing. An overview of the system architecture, and a thorough performance and sensitivity analysis, including closed-loop efficiency, are presented by means of numerical simulations. The SCC-PS simultaneously estimates piston and tip-tilt misalignments and corrects them in closed-loop operation. The SCC-PS does not require any a priori knowledge of the signal at the segment boundaries or any dedicated optical path. It has a moderate sensitivity to misalignments, virtually none to pupil shear, and is insensitive to segment gaps and edge effects. Primary mirror phasing can be achieved with a bright natural guide star. The SCC-PS is a noninvasive concept and an efficient phasing sensor from the image domain. It is an attractive candidate for segment cophasing at the instrument level or alternatively at the telescope level, as usually envisioned in current space- and ground-based observatories. Comment: 10 pages, 9 figures. Accepted for publication in Astronomy & Astrophysics.
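    For intuition only, the following sketch shows a generic integrator-style closed loop driving segment piston residuals down from noisy measurements; it is not the SCC-PS estimator, and the segment count, loop gain, and sensor noise level are assumed values.

```python
import numpy as np

# Schematic closed-loop correction of segment pistons; NOT the SCC-PS
# estimator, only an illustrative integrator loop with an assumed noisy sensor.
rng = np.random.default_rng(1)
n_segments = 36
piston = rng.normal(0.0, 50.0, n_segments)    # initial piston errors (nm, assumed)
gain, noise_nm = 0.5, 2.0                     # loop gain and sensor noise (assumed)

for iteration in range(20):
    measurement = piston + rng.normal(0.0, noise_nm, n_segments)  # hypothetical sensor
    piston -= gain * measurement                                  # apply correction
    print(f"iter {iteration:2d}: residual RMS = {np.std(piston):6.2f} nm")
```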

    Statistical computation with kernels

    Modern statistical inference has seen a tremendous increase in the size and complexity of models and datasets. As such, it has become reliant on advanced computational tools for implementation. A first canonical problem in this area is the numerical approximation of integrals of complex and expensive functions. Numerical integration is required for a variety of tasks, including prediction, model comparison and model choice. A second canonical problem is that of statistical inference for models with intractable likelihoods. These include models with intractable normalisation constants, or models which are so complex that their likelihood cannot be evaluated, but from which data can be generated. Examples include large graphical models, as well as many models in imaging or spatial statistics. This thesis proposes to tackle these two problems using tools from the kernel methods and Bayesian non-parametrics literature. First, we analyse a well-known algorithm for numerical integration called Bayesian quadrature, and provide consistency and contraction rates. The algorithm is then assessed on a variety of statistical inference problems, and extended in several directions in order to reduce its computational requirements. We then demonstrate how the combination of reproducing kernels with Stein's method can lead to computational tools which can be used with unnormalised densities, including numerical integration and approximation of probability measures. We conclude by studying two minimum distance estimators derived from kernel-based statistical divergences which can be used for unnormalised and generative models. In each instance, the tractability provided by reproducing kernels and their properties allows us to provide easily implementable algorithms whose theoretical foundations can be studied in depth.
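    As a small illustration of the first topic, the sketch below computes a Bayesian quadrature estimate of a 1D integral over [0, 1] with a squared-exponential kernel, for which the kernel mean embedding has a closed form; the integrand, lengthscale, and node placement are illustrative assumptions rather than the thesis's experiments.

```python
import numpy as np
from scipy.special import erf

# Minimal sketch of Bayesian quadrature for the integral of f over [0, 1]
# under the uniform measure, with a squared-exponential kernel. The integrand,
# lengthscale, and node design below are assumed for illustration.
def f(x):
    return np.sin(3.0 * x) + x ** 2

l = 0.2                                    # kernel lengthscale (assumed)
x = np.linspace(0.0, 1.0, 12)              # quadrature nodes (assumed design)

def k(a, b):                               # squared-exponential kernel matrix
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * l ** 2))

# Kernel mean embedding z_i = \int_0^1 k(x, x_i) dx, available in closed form.
z = l * np.sqrt(np.pi / 2.0) * (erf((1.0 - x) / (np.sqrt(2.0) * l))
                                + erf(x / (np.sqrt(2.0) * l)))

K = k(x, x) + 1e-10 * np.eye(len(x))       # jitter for numerical stability
weights = np.linalg.solve(K, z)            # Bayesian quadrature weights
estimate = weights @ f(x)                  # posterior mean of the integral
print("BQ estimate:", estimate, " reference:", (1 - np.cos(3.0)) / 3.0 + 1.0 / 3.0)
```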

    Analysis of Sample Correlations for Monte Carlo Rendering

    Modern physically based rendering techniques critically depend on approximating integrals of high dimensional functions representing radiant light energy. Monte Carlo based integrators are the method of choice for complex scenes and effects. These integrators work by sampling the integrand at sample point locations. The distribution of these sample points determines convergence rates and noise in the final renderings. The characteristics of such distributions can be uniquely represented in terms of correlations of sampling point locations. Hence, it is essential to study these correlations to understand and adapt sample distributions for low error in integral approximation. In this work, we aim at providing a comprehensive and accessible overview of the techniques developed over the last decades to analyze such correlations, relate them to error in integrators, and understand when and how to use existing sampling algorithms for effective rendering workflows.
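    One common tool from this line of work is the periodogram (empirical power spectrum) of a point set, whose radial profile summarizes the sample correlations; the sketch below uses an i.i.d. random point set and an arbitrary frequency grid purely for illustration.

```python
import numpy as np

# Minimal sketch: empirical power spectrum (periodogram) of a 2D point set,
# a standard way to visualise sample correlations; the point set and the
# frequency grid below are illustrative choices.
rng = np.random.default_rng(2)
n = 256
points = rng.uniform(0.0, 1.0, (n, 2))          # i.i.d. random samples (white noise)

freqs = np.arange(-32, 33)                      # integer frequency grid
fx, fy = np.meshgrid(freqs, freqs)
phase = -2.0j * np.pi * (fx[..., None] * points[:, 0] + fy[..., None] * points[:, 1])
spectrum = np.abs(np.exp(phase).sum(axis=-1)) ** 2 / n   # periodogram P(k)

# Radial average over frequency magnitude, often plotted against |k|.
radius = np.hypot(fx, fy).round().astype(int)
radial = np.bincount(radius.ravel(), spectrum.ravel()) / np.bincount(radius.ravel())
print(radial[:10])
```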

    Hierarchical Variance Reduction Techniques for Monte Carlo Rendering

    Ever since the first three-dimensional computer graphics appeared half a century ago, the goal has been to model and simulate how light interacts with materials and objects to form an image. The ultimate goal is photorealistic rendering, where the created images reach a level of accuracy that makes them indistinguishable from photographs of the real world. There are many applications: visualization of products and architectural designs yet to be built, special effects, computer-generated films, virtual reality, and video games, to name a few. However, the problem has proven tremendously complex; the illumination at any point is described by a recursive integral to which a closed-form solution seldom exists. Instead, computer simulation and Monte Carlo methods are commonly used to statistically estimate the result. This introduces undesirable noise, or variance, and a large body of research has been devoted to finding ways to reduce the variance. I continue along this line of research, and present several novel techniques for variance reduction in Monte Carlo rendering, as well as a few related tools. The research in this dissertation focuses on using importance sampling to pick a small set of well-distributed point samples. As the primary contribution, I have developed the first methods to explicitly draw samples from the product of distant high-frequency lighting and complex reflectance functions. By sampling the product, low noise results can be achieved using a very small number of samples, which is important to minimize the rendering times. Several different hierarchical representations are explored to allow efficient product sampling. In the first publication, the key idea is to work in a compressed wavelet basis, which allows fast evaluation of the product. Many of the initial restrictions of this technique were removed in follow-up work, allowing higher-resolution uncompressed lighting and avoiding precomputation of reflectance functions. My second main contribution is to present one of the first techniques to take the triple product of lighting, visibility and reflectance into account to further reduce the variance in Monte Carlo rendering. For this purpose, control variates are combined with importance sampling to solve the problem in a novel way. A large part of the technique also focuses on analysis and approximation of the visibility function. To further refine the above techniques, several useful tools are introduced. These include a fast, low-distortion map to represent (hemi)spherical functions, a method to create high-quality quasi-random points, and an optimizing compiler for analyzing shaders using interval arithmetic. The latter automatically extracts bounds for importance sampling of arbitrary shaders, as opposed to using a priori known reflectance functions. In summary, the work presented here takes the field of computer graphics one step further towards making photorealistic rendering practical for a wide range of uses. By introducing several novel Monte Carlo methods, more sophisticated lighting and materials can be used without increasing the computation times. The research is aimed at domain-specific solutions to the rendering problem, but I believe that much of the new theory is applicable in other parts of computer graphics, as well as in other fields.
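    The core idea of product sampling can be sketched with two tabulated 1D factors standing in for lighting and reflectance: build a discrete distribution proportional to their pointwise product and sample it by inverse-CDF lookup. This is only the underlying idea, not the wavelet-based hierarchical method of the dissertation; the factors, bin count, and toy visibility term below are assumptions.

```python
import numpy as np

# Minimal sketch of product importance sampling from two tabulated factors
# (stand-ins for lighting L and reflectance f); the wavelet/hierarchical
# machinery of the dissertation is not reproduced here.
rng = np.random.default_rng(3)
bins = 1024
theta = (np.arange(bins) + 0.5) / bins * np.pi / 2       # bin centres on [0, pi/2]

L = 1.0 + 10.0 * np.exp(-80.0 * (theta - 0.3) ** 2)      # "lighting": bright peak (assumed)
f = np.cos(theta) ** 8                                    # "reflectance": glossy lobe (assumed)

product = L * f
pdf = product / product.sum()                             # discrete pdf over bins
cdf = np.cumsum(pdf)

n = 16
u = rng.uniform(0.0, 1.0, n)
idx = np.searchsorted(cdf, u)                             # inverse-CDF sampling of the product
samples = theta[idx]

# Estimate of the integral of L * f * V over [0, pi/2] using product sampling,
# where visibility V is a toy term not included in the sampling density.
V = (samples < 0.6).astype(float)                         # toy visibility (assumed)
dtheta = (np.pi / 2) / bins
estimate = np.mean(product[idx] * V * dtheta / pdf[idx])
print("product-sampling estimate:", estimate)
print("reference (dense sum):    ", np.sum(product * (theta < 0.6) * dtheta))
```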

    Physically Based Rendering of Synthetic Objects in Real Environments
