    Reducing variance in univariate smoothing

    A variance reduction technique in nonparametric smoothing is proposed: at each point of estimation, form a linear combination of a preliminary estimator evaluated at nearby points, with the coefficients specified so that the asymptotic bias remains unchanged. The nearby points are chosen to maximize the variance reduction. We study in detail the case of univariate local linear regression. While the new estimator retains many advantages of the local linear estimator, it has appealing asymptotic relative efficiencies. Bandwidth selection rules are available by a simple constant-factor adjustment of those for local linear estimation. A simulation study indicates that the finite-sample relative efficiency often matches the asymptotic relative efficiency for moderate sample sizes. This technique is very general and has a wide range of applications.
    Comment: Published at http://dx.doi.org/10.1214/009053606000001398 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
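
    A minimal Python sketch of the idea, assuming a Gaussian kernel and an illustrative symmetric three-point combination (the offset `delta` and weights `c` below are placeholders, not the paper's variance-optimal choice): symmetric points with weights summing to one leave the leading-order bias of the local linear estimator unchanged, while the averaging lowers variance.

```python
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear regression estimate m_hat(x0) with a Gaussian kernel."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)         # kernel weights
    X = np.column_stack([np.ones_like(x), x - x0])
    XtW = X.T * w                                  # weighted design
    beta = np.linalg.solve(XtW @ X, XtW @ y)
    return beta[0]                                 # intercept = estimate at x0

def vr_local_linear(x0, x, y, h, delta=0.5, c=(0.25, 0.5, 0.25)):
    """Variance-reduced estimate: combine the preliminary estimator at
    x0 - delta*h, x0, x0 + delta*h with coefficients summing to one,
    so the leading-order bias is unchanged."""
    pts = (x0 - delta * h, x0, x0 + delta * h)
    return sum(ci * local_linear(p, x, y, h) for ci, p in zip(c, pts))
```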

    Kernel density classification and boosting: an L2 analysis

    Kernel density estimation is a commonly used approach to classification. However, most of the theoretical results for kernel methods apply to estimation per se and not necessarily to classification. In this paper we show that when estimating the difference between two densities, the optimal smoothing parameters are increasing functions of the sample size of the complementary group, and we provide a small simulation study which examines the relative performance of kernel density methods when the final goal is classification. A relative newcomer to the classification portfolio is “boosting”, and this paper proposes an algorithm for boosting kernel density classifiers. We note that boosting is closely linked to a previously proposed method of bias reduction in kernel density estimation and indicate how it will enjoy similar properties for classification. We show that boosting kernel classifiers reduces the bias whilst only slightly increasing the variance, with an overall reduction in error. Numerical examples and simulations are used to illustrate the findings, and we also suggest further areas of research.
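
    A schematic sketch of boosting a two-class kernel density classifier, assuming Gaussian kernels; the multiplicative up-weighting of misclassified points below is an AdaBoost-style rule, not necessarily the exact update proposed in the paper.

```python
import numpy as np

def kde(x_eval, x, w, h):
    """Weighted Gaussian kernel density estimate at the points x_eval."""
    z = (x_eval[:, None] - x[None, :]) / h
    k = np.exp(-0.5 * z ** 2) / np.sqrt(2 * np.pi)
    return (k * w).sum(axis=1) / (w.sum() * h)

def boost_kdc(x0, x1, h0, h1, n_rounds=3):
    """Boosted kernel density classifier for samples x0 (class 0) and
    x1 (class 1): each round, training points that the current density
    estimates misclassify get their kernel weights increased."""
    w0, w1 = np.ones(len(x0)), np.ones(len(x1))
    for _ in range(n_rounds):
        d0 = kde(x0, x1, w1, h1) - kde(x0, x0, w0, h0)  # > 0: x0 point misclassified
        d1 = kde(x1, x0, w0, h0) - kde(x1, x1, w1, h1)  # > 0: x1 point misclassified
        w0 *= np.exp(np.clip(d0, 0.0, 1.0))             # up-weight mistakes
        w1 *= np.exp(np.clip(d1, 0.0, 1.0))
    # decision function: positive values favor class 1
    return lambda g: kde(g, x1, w1, h1) - kde(g, x0, w0, h0)
```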

    Progressive Transient Photon Beams

    In this work we introduce a novel algorithm for transient rendering in participating media. Our method is consistent, robust, and able to generate animations of time-resolved light transport featuring complex caustic light paths in media. We base our method on the observation that the spatial continuity of photon beams provides increased coverage of the temporal domain, and generalize photon beams to the transient state. We extend the steady-state beam radiance estimates to include the temporal domain. Then, we develop a progressive version of spatio-temporal density estimation that converges to the correct solution with finite memory requirements by iteratively averaging several realizations of independent renders with a progressively reduced kernel bandwidth. We derive the optimal convergence rates accounting for space and time kernels, and demonstrate our method against previous consistent transient rendering methods for participating media.
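
    A compact sketch of the progressive averaging loop, assuming a caller-supplied `render_pass(r, t)` that returns one independent kernel-density render; the bandwidth-shrink rule mirrors progressive photon mapping, and splitting it evenly between the spatial and temporal kernels is a placeholder rather than the paper's derived optimum.

```python
import numpy as np

def progressive_estimate(render_pass, n_iters, r0, t0, alpha=2/3):
    """Average independent renders while shrinking the spatial (r) and
    temporal (t) kernel bandwidths, so the estimate converges with
    finite memory: each pass is folded into a running mean, then discarded."""
    avg, r, t = None, r0, t0
    for i in range(n_iters):
        img = render_pass(r, t)                     # one independent realization
        avg = img if avg is None else avg + (img - avg) / (i + 1)
        shrink = (i + alpha) / (i + 1)              # in (0, 1), tends to 1
        r *= np.sqrt(shrink)                        # reduce spatial bandwidth
        t *= np.sqrt(shrink)                        # reduce temporal bandwidth
    return avg
```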

    A Bistochastic Nonparametric Estimator

    We explore the relevance of adopting a bistochastic nonparametric estimator. This estimator has two main implications. First, the estimator reduces variability according to the robust criterion of second-order stochastic (and Lorenz) dominance. This is a universally accepted criterion in risk and welfare economics, which expands the applicability of nonparametric estimation in economics, for instance to the measurement of economic discrimination. Second, the bistochastic estimator produces smaller errors than do positive-weights nonparametric estimators, in terms of the bias-variance trade-off. This result is verified in a general simulation exercise. The improvement is due to a significant reduction in boundary bias, which makes the estimator useful in empirical applications. Finally, consistency, preservation of the mean value, and multidimensional extension are some other useful properties of this estimator.
    Keywords: nonparametric estimation, second-order stochastic dominance, bistochastic estimator
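
    One way to realize a bistochastic smoother, assuming Sinkhorn balancing of a symmetric Gaussian kernel matrix (the paper's exact construction may differ): a doubly stochastic weight matrix preserves the sample mean of y, and the fitted values are dominated by the data in the second-order stochastic sense.

```python
import numpy as np

def bistochastic_smoother(x, y, h, n_sinkhorn=100):
    """Smooth y with a weight matrix made (approximately) doubly
    stochastic by alternately normalizing the rows and columns of a
    Gaussian kernel matrix (Sinkhorn balancing)."""
    W = np.exp(-0.5 * ((x[:, None] - x[None, :]) / h) ** 2)
    for _ in range(n_sinkhorn):
        W /= W.sum(axis=1, keepdims=True)   # rows sum to one
        W /= W.sum(axis=0, keepdims=True)   # columns sum to one
    return W @ y                            # fitted (smoothed) values
```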

    A Kernel-Based Calculation of Information on a Metric Space

    Kernel density estimation is a technique for approximating probability distributions. Here, it is applied to the calculation of mutual information on a metric space. This is motivated by the problem in neuroscience of calculating the mutual information between stimuli and spiking responses; the space of these responses is a metric space. It is shown that kernel density estimation on a metric space resembles the k-nearest-neighbor approach. This approach is applied to a toy dataset designed to mimic electrophysiological data.
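
    A hedged sketch of the computation, assuming a discrete stimulus label per response and access only to the pairwise response distances (so only the metric is used); the Gaussian kernel on distances and the leave-one-out normalization here are illustrative choices, not the paper's exact estimator.

```python
import numpy as np

def kernel_mi(dist, labels, h):
    """Kernel-based mutual information (in bits) between a discrete
    stimulus label and responses in a metric space, given the n x n
    matrix of pairwise response distances `dist`."""
    n = len(labels)
    K = np.exp(-0.5 * (dist / h) ** 2)          # kernel applied to distances
    np.fill_diagonal(K, 0.0)                    # leave-one-out
    p_r = K.sum(axis=1) / (n - 1)               # unconditional density at each response
    mi = 0.0
    for s in np.unique(labels):
        m = labels == s
        p_s = m.mean()                          # P(stimulus = s)
        p_r_s = K[:, m].sum(axis=1) / max(m.sum() - 1, 1)  # density given s
        ok = m & (p_r_s > 0) & (p_r > 0)        # responses elicited by s
        mi += p_s * np.mean(np.log2(p_r_s[ok] / p_r[ok]))
    return mi
```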