    An Integral geometry based method for fast form-factor computation

    Monte Carlo techniques have been widely used in rendering algorithms for local integration, for example to compute the contribution of one patch to the luminance of another. In this paper we propose an algorithm based on integral geometry in which Monte Carlo is applied globally. We give results of an implementation to validate the proposition, and we study the error and the complexity of the technique.
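
    A minimal sketch of the global-lines idea the abstract describes: instead of sampling directions locally at each patch, lines are drawn globally (here by joining two uniform random points on a bounding sphere, a classical construction from integral geometry), and a form factor is estimated from co-intersection counts. The two-parallel-squares scene and all names are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Scene: two parallel unit squares at z = 0 and z = 1. A "global line"
# joins two uniform random points on a sphere enclosing the scene,
# which (for a sphere) yields the uniform line density of integral
# geometry.
def sphere_points(n, center, radius):
    v = rng.normal(size=(n, 3))
    return center + radius * v / np.linalg.norm(v, axis=1, keepdims=True)

def hits_square(p, d, z):
    """Which lines p + t*d cross the unit square [0,1]^2 at height z?"""
    t = (z - p[:, 2]) / d[:, 2]
    x = p[:, 0] + t * d[:, 0]
    y = p[:, 1] + t * d[:, 1]
    return (x >= 0) & (x <= 1) & (y >= 0) & (y <= 1)

center, radius, n_lines = np.array([0.5, 0.5, 0.5]), 2.0, 500_000
a = sphere_points(n_lines, center, radius)
d = sphere_points(n_lines, center, radius) - a
h0 = hits_square(a, d, 0.0)
h1 = hits_square(a, d, 1.0)

# Among lines crossing patch 0, the fraction that also crosses patch 1
# estimates the form factor F_01; a single batch of global lines
# serves every patch pair at once, which is the appeal of the method.
print("estimated F_01:", (h0 & h1).sum() / h0.sum())
```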

    Wavelet encoding and variable resolution progressive transmission

    Progressive transmission is a method of transmitting and displaying imagery in stages of successively improving quality. The subsampled lowpass image representations generated by a wavelet transformation suit this purpose well, but for best results the order of presentation is critical. Candidate data for transmission are best selected using dynamic prioritization criteria generated from image contents and viewer guidance. We show that wavelets are not only suitable but superior when used to encode data for progressive transmission at non-uniform resolutions. This application does not preclude additional compression by quantization of highpass coefficients, which, on the contrary, yields superior image approximations at low data rates.
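
    The core claim, that wavelet coefficients can be sent in a priority order and the image refined as they arrive, can be illustrated with a one-level Haar transform. The magnitude-based priority below is a simple stand-in for the paper's content- and viewer-driven criteria; all names and the random test image are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def haar2d(a):
    """One level of a 2D orthonormal Haar transform."""
    h = (a[:, ::2] + a[:, 1::2]) / np.sqrt(2)   # row lowpass
    g = (a[:, ::2] - a[:, 1::2]) / np.sqrt(2)   # row highpass
    return ((h[::2] + h[1::2]) / np.sqrt(2), (h[::2] - h[1::2]) / np.sqrt(2),
            (g[::2] + g[1::2]) / np.sqrt(2), (g[::2] - g[1::2]) / np.sqrt(2))

def ihaar2d(ll, lh, hl, hh):
    """Inverse of haar2d."""
    h = np.empty((ll.shape[0] * 2, ll.shape[1]))
    g = np.empty_like(h)
    h[::2], h[1::2] = (ll + lh) / np.sqrt(2), (ll - lh) / np.sqrt(2)
    g[::2], g[1::2] = (hl + hh) / np.sqrt(2), (hl - hh) / np.sqrt(2)
    a = np.empty((h.shape[0], h.shape[1] * 2))
    a[:, ::2], a[:, 1::2] = (h + g) / np.sqrt(2), (h - g) / np.sqrt(2)
    return a

image = rng.random((64, 64))
ll, lh, hl, hh = haar2d(image)

# Progressive schedule: the lowpass band goes first, then highpass
# coefficients in decreasing order of magnitude.
bands = {"lh": lh, "hl": hl, "hh": hh}
queue = sorted(((abs(v), name, idx) for name, b in bands.items()
                for idx, v in np.ndenumerate(b)), reverse=True)

recv = {name: np.zeros_like(b) for name, b in bands.items()}
for frac in (0.05, 0.25, 1.0):
    for _, name, idx in queue[: int(frac * len(queue))]:
        recv[name][idx] = bands[name][idx]
    approx = ihaar2d(ll, recv["lh"], recv["hl"], recv["hh"])
    print(f"{frac:4.0%} of highpass sent, MSE = {np.mean((approx - image) ** 2):.2e}")
```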

    Towards interactive global illumination effects via sequential Monte Carlo adaptation

    This paper presents a novel method that effectively combines control variates and importance sampling in a sequential Monte Carlo context while handling general single-bounce global illumination effects. The radiance estimates computed during rendering are cached in an adaptive per-pixel structure that defines dynamic predicate functions for both variance-reduction techniques and guarantees well-behaved PDFs, yielding continually increasing efficiency at a marginal computational overhead.
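
    A toy sketch of the combination the abstract describes, on a single "pixel" with a 1D integrand: a cached piecewise-constant model of the integrand serves simultaneously as an importance-sampling density (floored so the PDF stays well-behaved, as the abstract requires) and as a control variate with a known integral, and is re-fitted after each sample batch. The integrand, bin structure, and blend-style update are assumptions for illustration, not the paper's per-pixel data structure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for one pixel's single-bounce integrand on [0, 1].
f = lambda x: np.exp(-8 * (x - 0.7) ** 2) + 0.1

n_bins, n_iters, batch = 16, 8, 256
edges = np.linspace(0, 1, n_bins + 1)
h = np.full(n_bins, 0.5)              # cached piecewise-constant model of f

for it in range(n_iters):
    # Importance sampling from a PDF proportional to the (floored) cache.
    w = np.maximum(h, 1e-2)
    pdf_bins = w / (w.sum() / n_bins)          # piecewise-constant density
    bins = rng.choice(n_bins, size=batch, p=w / w.sum())
    x = edges[bins] + rng.random(batch) / n_bins
    p = pdf_bins[bins]

    # Control variate: h has a known integral (its mean, since the bins
    # are uniform), so integral(f) = integral(h) + E[(f(x) - h(x)) / p(x)].
    fx, hx = f(x), h[bins]
    est = h.mean() + np.mean((fx - hx) / p)

    # Sequential adaptation: refresh the cache from the samples just drawn.
    for b in range(n_bins):
        m = bins == b
        if m.any():
            h[b] = 0.5 * h[b] + 0.5 * fx[m].mean()

    print(f"iter {it}: estimate = {est:.4f}")

xs = np.linspace(0, 1, 100_001)
print("dense reference:", f(xs).mean())
```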

    Speculative Approximations for Terascale Analytics

    Model calibration is a major challenge faced by the plethora of statistical analytics packages that are increasingly used in Big Data applications. Identifying the optimal model parameters is a time-consuming process that has to be executed from scratch for every dataset/model combination even by experienced data scientists. We argue that the incapacity to evaluate multiple parameter configurations simultaneously and the lack of support to quickly identify sub-optimal configurations are the principal causes. In this paper, we develop two database-inspired techniques for efficient model calibration. Speculative parameter testing applies advanced parallel multi-query processing methods to evaluate several configurations concurrently. The number of configurations is determined adaptively at runtime, while the configurations themselves are extracted from a distribution that is continuously learned following a Bayesian process. Online aggregation is applied to identify sub-optimal configurations early in the processing by incrementally sampling the training dataset and estimating the objective function corresponding to each configuration. We design concurrent online aggregation estimators and define halting conditions to stop the execution both accurately and promptly. We apply the proposed techniques to distributed gradient descent optimization -- batch and incremental -- for support vector machines and logistic regression models. We implement the resulting solutions in GLADE PF-OLA -- a state-of-the-art Big Data analytics system -- and evaluate their performance over terascale-size synthetic and real datasets. The results confirm that as many as 32 configurations can be evaluated concurrently almost as fast as one, while sub-optimal configurations are detected accurately in as little as a $1/20^{\text{th}}$ fraction of the time.
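
    The two ingredients, speculative evaluation of several configurations at once and early termination of sub-optimal ones from sampled loss estimates, can be sketched on a small logistic-regression problem. The step-size grid, the sampled-loss estimator, and the crude drop threshold are all illustrative assumptions; GLADE PF-OLA's actual multi-query machinery and statistical halting conditions are far more involved.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic logistic-regression task (a stand-in for a terascale set).
n, d = 20_000, 10
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_true))).astype(float)

def sampled_loss(w, m=500):
    """Online-aggregation-style loss estimate from a small data sample."""
    idx = rng.integers(0, n, m)
    z = X[idx] @ w
    return np.mean(np.logaddexp(0, -z) + (1 - y[idx]) * z)

# Speculative testing: several step sizes advance on shared mini-batches.
configs = {eta: np.zeros(d) for eta in (1e-4, 1e-3, 1e-2, 1e-1)}
active = set(configs)

for step in range(1, 401):
    idx = rng.integers(0, n, 32)
    for eta in active:
        w = configs[eta]
        g = X[idx].T @ (1 / (1 + np.exp(-X[idx] @ w)) - y[idx]) / 32
        configs[eta] = w - eta * g
    if step % 100 == 0:
        # Crude halting rule: drop configurations whose estimated loss
        # lags well behind the current best (the paper uses proper
        # statistical estimators and halting conditions here).
        losses = {eta: sampled_loss(configs[eta]) for eta in active}
        best = min(losses.values())
        active = {eta for eta in active if losses[eta] < best + 0.2}
        print(f"step {step}: active step sizes = {sorted(active)}")
```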

    Langevin and Hamiltonian based Sequential MCMC for Efficient Bayesian Filtering in High-dimensional Spaces

    Nonlinear non-Gaussian state-space models arise in numerous applications in statistics and signal processing. In this context, one of the most successful and popular approximation techniques is the Sequential Monte Carlo (SMC) algorithm, also known as particle filtering. Nevertheless, this method tends to be inefficient when applied to high-dimensional problems. In this paper, we focus on another class of sequential inference methods, namely the Sequential Markov Chain Monte Carlo (SMCMC) techniques, which represent a promising alternative to SMC methods. After providing a unifying framework for the class of SMCMC approaches, we propose novel efficient strategies based on the principle of Langevin diffusion and Hamiltonian dynamics in order to cope with the increasing number of high-dimensional applications. Simulation results show that the proposed algorithms achieve significantly better performance compared to existing algorithms.
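
    A minimal sketch of the Langevin ingredient: a Metropolis-adjusted Langevin (MALA) kernel of the kind that can serve as the refinement move of an SMCMC filter step, applied here to a single-step Gaussian filtering posterior where the answer is known in closed form. The step size, iteration counts, and toy model are assumptions, not the paper's algorithms.

```python
import numpy as np

rng = np.random.default_rng(4)

def mala_kernel(x, log_target, grad_log_target, eps, n_steps):
    """Metropolis-adjusted Langevin moves: the gradient drifts proposals
    toward high-density regions, which is what keeps acceptance rates
    workable as the dimension grows."""
    lp, g = log_target(x), grad_log_target(x)
    for _ in range(n_steps):
        prop = x + 0.5 * eps**2 * g + eps * rng.normal(size=x.shape)
        lp_p, g_p = log_target(prop), grad_log_target(prop)
        # log q(prop | x) and log q(x | prop), up to a shared constant.
        fwd = -np.sum((prop - x - 0.5 * eps**2 * g) ** 2) / (2 * eps**2)
        bwd = -np.sum((x - prop - 0.5 * eps**2 * g_p) ** 2) / (2 * eps**2)
        if np.log(rng.random()) < lp_p - lp + bwd - fwd:
            x, lp, g = prop, lp_p, g_p
    return x

# Toy single-step filtering posterior in 50 dimensions: prior N(0, I)
# and observation y = x + N(0, I) noise, so the posterior mean is y/2.
dim = 50
y = rng.normal(size=dim) + 1.0
log_target = lambda x: -0.5 * np.sum(x**2) - 0.5 * np.sum((y - x) ** 2)
grad_log_target = lambda x: -x + (y - x)

x, chain = np.zeros(dim), []
for _ in range(500):
    x = mala_kernel(x, log_target, grad_log_target, eps=0.3, n_steps=5)
    chain.append(x)
print("posterior mean error:", np.linalg.norm(np.mean(chain, axis=0) - y / 2))
```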

    Efficient Many-Light Rendering of Scenes with Participating Media

    We present several approaches based on virtual lights that aim to capture light transport without compromising quality, while preserving the elegance and efficiency of many-light rendering. By reformulating the integration scheme, we obtain two numerically efficient techniques: one tailored specifically for interactive, high-quality lighting on surfaces, and one for handling scenes with participating media.
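
    For context, a bare-bones many-light gather: radiance at a shading point is accumulated over virtual point lights through a clamped geometry term (the clamping that tames the 1/r² singularity is exactly the bias that reformulations of the integration scheme try to avoid). The scene setup and names are illustrative, and visibility is assumed to be 1.

```python
import numpy as np

rng = np.random.default_rng(5)

# Virtual point lights: positions, normals and flux, e.g. deposited
# along light paths in a preprocessing pass (fabricated here).
n_vpl = 256
vpl_pos = rng.random((n_vpl, 3)) * np.array([4.0, 4.0, 2.0])
vpl_nrm = np.tile([0.0, 0.0, -1.0], (n_vpl, 1))
vpl_flux = np.full(n_vpl, 1.0 / n_vpl)

def gather(x, n, clamp=2.0):
    """Diffuse radiance at point x (normal n) from all VPLs, with the
    usual clamping of the geometry term to avoid fireflies near VPLs."""
    d = vpl_pos - x
    r2 = np.einsum("ij,ij->i", d, d)
    w = d / np.sqrt(r2)[:, None]                 # directions to the VPLs
    cos_x = np.clip(w @ n, 0.0, None)            # cosine at the receiver
    cos_l = np.clip(-np.einsum("ij,ij->i", w, vpl_nrm), 0.0, None)
    G = np.minimum(cos_x * cos_l / r2, clamp)    # clamped geometry term
    # Visibility is assumed 1 here; a real renderer traces shadow rays.
    return np.sum(vpl_flux * G) / np.pi

x = np.array([2.0, 2.0, 0.0])
print("radiance:", gather(x, np.array([0.0, 0.0, 1.0])))
```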

    Probabilistic Framework for Sensor Management

    A probabilistic sensor management framework is introduced, which maximizes the utility of sensor systems with many different sensing modalities by dynamically configuring the sensor system in the most beneficial way. For this purpose, techniques from stochastic control and Bayesian estimation are combined such that long-term effects of possible sensor configurations and stochastic uncertainties resulting from noisy measurements can be incorporated into the sensor management decisions.
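
    A one-step (myopic) illustration of the selection principle, assuming linear-Gaussian models so a Kalman filter gives the covariance effect of each sensor configuration in closed form; the framework in the paper additionally reasons over long horizons. The dynamics, sensor models, and noise levels below are made up for the example.

```python
import numpy as np

def posterior_cov(P, H, R):
    """Kalman covariance update for measurement model (H, R)."""
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)
    return (np.eye(P.shape[0]) - K @ H) @ P

# Constant-velocity target; two sensing modalities with different
# noise levels. At each step the manager predicts forward, then picks
# the configuration promising the largest uncertainty reduction.
A = np.array([[1.0, 1.0], [0.0, 1.0]])      # dynamics
Q = 0.01 * np.eye(2)                        # process noise
sensors = {
    "position": (np.array([[1.0, 0.0]]), np.array([[0.5]])),
    "velocity": (np.array([[0.0, 1.0]]), np.array([[0.1]])),
}

P = np.eye(2)                               # current state covariance
for step in range(5):
    P = A @ P @ A.T + Q                     # time update
    best = min(sensors,
               key=lambda s: np.trace(posterior_cov(P, *sensors[s])))
    P = posterior_cov(P, *sensors[best])    # measurement update
    print(f"step {step}: chose {best}, trace(P) = {np.trace(P):.4f}")
```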

    Fast learning rates in statistical inference through aggregation

    We develop minimax optimal risk bounds for the general learning task consisting in predicting as well as the best function in a reference set $\mathcal{G}$, up to the smallest possible additive term, called the convergence rate. When the reference set is finite and $n$ denotes the size of the training data, we provide minimax convergence rates of the form $C\left(\frac{\log|\mathcal{G}|}{n}\right)^v$ with tight evaluation of the positive constant $C$ and with exact $0 < v \le 1$, the latter value depending on the convexity of the loss function and on the level of noise in the output distribution. The risk upper bounds are based on a sequential randomized algorithm which, at each step, concentrates on functions having both low risk and low variance with respect to the previous step's prediction function. Our analysis puts forward the links between the probabilistic and worst-case viewpoints and allows us to obtain risk bounds unachievable with the standard statistical learning approach. One of the key ideas of this work is to use probabilistic inequalities with respect to appropriate (Gibbs) distributions on the prediction function space, instead of with respect to the distribution generating the data. The risk lower bounds are based on refinements of the Assouad lemma that take into account, in particular, the properties of the loss function. Our key example to illustrate the upper and lower bounds is the $L_q$-regression setting, for which an exhaustive analysis of the convergence rates is given as $q$ ranges in $[1; +\infty[$. Published in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics; DOI: 10.1214/08-AOS623.
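
    The sequential randomized algorithm works with Gibbs (exponential-weight) distributions over the reference set; the following sketch runs such a scheme with squared loss over a finite set of constant predictors, using the Gibbs mean as the aggregate. The temperature, data model, and expert set are illustrative choices, not the paper's tuned constants.

```python
import numpy as np

rng = np.random.default_rng(7)

# Finite reference set G of predictors (here: constant functions).
G = np.linspace(-2, 2, 41)                  # each expert's prediction
n = 2000
y = 0.7 + rng.normal(scale=0.3, size=n)     # data; best expert is ~0.7

eta = 2.0                                   # inverse temperature
cum_loss = np.zeros_like(G)                 # running loss of each expert
risk = 0.0
for t in range(n):
    # Gibbs distribution over G given the losses seen so far
    # (shifted by the minimum for numerical stability).
    w = np.exp(-eta * (cum_loss - cum_loss.min()))
    w /= w.sum()
    pred = w @ G                            # aggregated prediction
    risk += (pred - y[t]) ** 2
    cum_loss += (G - y[t]) ** 2             # update expert losses

print("average squared loss of aggregate:", risk / n)
print("best single expert in hindsight  :", np.min(cum_loss) / n)
```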

    Globally Adaptive Control Variate for Robust Numerical Integration

    Many methods in computer graphics require the integration of functions on low- to middle-dimensional spaces. However, no available method can handle all possible integrands accurately and rapidly. This paper presents a robust numerical integration method able to handle arbitrary non-singular scalar- or vector-valued functions defined on low- to middle-dimensional spaces. Our method combines control variates, globally adaptive subdivision, and Monte Carlo estimation to achieve fast and accurate computation of any non-singular integral. The runtime is linear with respect to the standard deviation, whereas standard Monte Carlo methods are quadratic. We additionally show through numerical tests that our method is extremely stable in terms of computation time and memory footprint, confirming its robustness. We demonstrate our method on a participating-media voxelization application, which requires the computation of several million integrals for complex media.
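
    A 1D sketch of the three combined ingredients: a control variate with a closed-form integral (here the linear interpolant of the integrand over each region), globally adaptive subdivision driven by a per-region error indicator, and Monte Carlo estimation of the residual. The integrand, split rule, and budget are illustrative assumptions, not the paper's formulation.

```python
import heapq
import numpy as np

rng = np.random.default_rng(8)

f = lambda x: np.sin(10 * x) ** 2 + x        # a non-singular 1D integrand

def estimate(a, b, n=64):
    """MC estimate of the integral of f over [a, b] using the linear
    interpolant of f as control variate (its integral is exact), plus
    an error indicator from the residual's sample deviation."""
    x = a + (b - a) * rng.random(n)
    cv = f(a) + (f(b) - f(a)) * (x - a) / (b - a)
    resid = f(x) - cv
    cv_int = 0.5 * (f(a) + f(b)) * (b - a)   # closed-form CV integral
    val = cv_int + resid.mean() * (b - a)
    err = resid.std() * (b - a) / np.sqrt(n)
    return err, val

# Globally adaptive subdivision: always split the region whose
# residual carries the largest estimated error.
heap = [(-estimate(0.0, 1.0)[0], 0.0, 1.0)]
leaves = {}
for _ in range(50):
    _, a, b = heapq.heappop(heap)
    leaves.pop((a, b), None)                 # the region is being split
    m = 0.5 * (a + b)
    for lo, hi in ((a, m), (m, b)):
        err, val = estimate(lo, hi)
        leaves[(lo, hi)] = val
        heapq.heappush(heap, (-err, lo, hi))

print("adaptive CV estimate:", sum(leaves.values()))
xs = np.linspace(0.0, 1.0, 100_001)
print("dense reference     :", f(xs).mean())
```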