177 research outputs found

    Optimal Sup-norm Rates and Uniform Inference on Nonlinear Functionals of Nonparametric IV Regression

    This paper makes several important contributions to the literature on nonparametric instrumental variables (NPIV) estimation and inference on a structural function h_0 and its functionals. First, we derive sup-norm convergence rates for computationally simple sieve NPIV (series 2SLS) estimators of h_0 and its derivatives. Second, we derive a lower bound that describes the best possible (minimax) sup-norm rates of estimating h_0 and its derivatives, and show that the sieve NPIV estimator can attain the minimax rates when h_0 is approximated via a spline or wavelet sieve. Our optimal sup-norm rates surprisingly coincide with the optimal root-mean-squared rates for severely ill-posed problems, and are only a logarithmic factor slower than the optimal root-mean-squared rates for mildly ill-posed problems. Third, we use our sup-norm rates to establish uniform Gaussian process strong approximations and score bootstrap uniform confidence bands (UCBs) for collections of nonlinear functionals of h_0 under primitive conditions, allowing for mildly and severely ill-posed problems. Fourth, as applications, we obtain the first asymptotic pointwise and uniform inference results for plug-in sieve t-statistics of exact consumer surplus (CS) and deadweight loss (DL) welfare functionals under low-level conditions when demand is estimated via sieve NPIV. Our real-data application of UCBs to exact CS and DL functionals of gasoline demand reveals interesting patterns and is applicable to other markets.
    Comment: This paper is a major extension of Sections 2 and 3 of our Cowles Foundation Discussion Paper CFDP1923, Cemmap Working Paper CWP56/13 and arXiv preprint arXiv:1311.0412 [math.ST]. Section 3 of the previous version of this paper (dealing with data-driven choice of the sieve dimension) is currently being revised as a separate paper.
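The series 2SLS mechanics behind a sieve NPIV estimator can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the function name `sieve_npiv`, the polynomial sieve, and the exogenous toy check are all assumptions made for the example. The estimator projects the sieve basis for the endogenous regressor onto the instrument space and then runs least squares on the projection.

```python
import numpy as np

def sieve_npiv(y, psi, b):
    """Series 2SLS (hypothetical sketch): regress the sieve basis psi(X)
    on the instrument basis b(W), then regress y on the fitted values."""
    # First stage: projection of each column of psi onto span(b)
    Pb = b @ np.linalg.pinv(b.T @ b) @ b.T
    psi_hat = Pb @ psi
    # Second stage: least squares of y on the projected basis
    c = np.linalg.pinv(psi_hat.T @ psi) @ (psi_hat.T @ y)
    return c  # sieve coefficients; h_hat(x) = psi(x) @ c

# Sanity check: with exogenous X (instruments = regressors),
# sieve NPIV reduces to ordinary series least squares.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 500)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(500)
psi = np.vander(x, 4, increasing=True)  # cubic polynomial sieve
c = sieve_npiv(y, psi, psi)
h_hat = psi @ c
```

In the exogenous special case the projection is the identity on the sieve space, so the coefficients coincide with series OLS; the ill-posedness discussed in the abstract only bites when the instrument basis differs from the regressor basis.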

    Non-equispaced B-spline wavelets

    This paper has three main contributions. The first is the construction of wavelet transforms from B-spline scaling functions defined on a grid of non-equispaced knots. The new construction extends the equispaced, biorthogonal, compactly supported Cohen-Daubechies-Feauveau wavelets, and is based on the factorisation of wavelet transforms into lifting steps. The second and third contributions are new insights on how to use these and other wavelets in statistical applications. The second contribution is related to the bias of a wavelet representation: it is investigated how the fine scaling coefficients should be derived from the observations. In the context of equispaced data, it is common practice to simply take the observations as fine scale coefficients. It is argued in this paper that this is not acceptable for non-interpolating wavelets on non-equidistant data. Finally, the third contribution is the study of the variance in a non-orthogonal wavelet transform in a new framework, replacing the numerical condition as a measure for non-orthogonality. By controlling the variances of the reconstruction from the wavelet coefficients, the new framework allows us to design wavelet transforms on irregular point sets with a focus on their use for smoothing or other applications in statistics.
    Comment: 42 pages, 2 figures.
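A single predict step of the lifting factorisation on non-equispaced knots can be sketched as follows. This is a minimal illustrative sketch, not the paper's construction: the function name and the choice of plain linear prediction with no update step are assumptions made for the example; B-spline constructions use higher-order prediction plus update steps for stability.

```python
import numpy as np

def lifting_forward(x, f):
    """One forward lifting (predict) step on non-equispaced data:
    split samples into even/odd knots, predict each odd sample by
    linear interpolation between its even neighbours (weights depend
    on the irregular knot spacing), and keep the prediction error as
    the detail coefficient. Hypothetical sketch only."""
    xe, xo = x[::2], x[1::2]
    fe, fo = f[::2], f[1::2]
    # interpolation weights from the knot positions, not from indices
    left, right = xe[:len(xo)], xe[1:len(xo) + 1]
    w = (xo - left) / (right - left)
    detail = fo - ((1 - w) * fe[:len(xo)] + w * fe[1:len(xo) + 1])
    return xe, fe, detail

# On exactly linear data the predictor is exact, so every detail vanishes.
x = np.sort(np.random.default_rng(1).uniform(0, 1, 9))
f = 3.0 * x + 2.0
xe, coarse, detail = lifting_forward(x, f)
```

The point of making the weights depend on knot positions is precisely the non-equispaced extension: with equispaced knots, w is identically 1/2 and the classical lifting step is recovered.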

    Wavelet and its Applications

    Ph.D. thesis (Doctor of Philosophy).

    Multi-level Monte Carlo Finite Element method for elliptic PDEs with stochastic coefficients

    In Monte Carlo methods, quadrupling the sample size halves the error. In simulations of stochastic partial differential equations (SPDEs), the total work is the sample size times the solution cost of an instance of the partial differential equation. A Multi-level Monte Carlo method is introduced which allows, in certain cases, the overall work to be reduced to that of discretizing one instance of the deterministic PDE. The model problem is an elliptic equation with stochastic coefficients. Multi-level Monte Carlo errors and work estimates are given both for the mean of the solutions and for higher moments. The overall complexity of computing mean fields as well as k-point correlations of the random solution is proved to be of log-linear complexity in the number of unknowns of a single Multi-level solve of the deterministic elliptic problem. Numerical examples complete the theoretical analysis.
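The telescoping-sum estimator at the heart of Multi-level Monte Carlo can be sketched as follows. This is a hypothetical illustration: the paper's finite element levels are replaced by an artificial scalar "discretisation" with a fake bias so that the mechanics stay visible, and the sampler interface is an assumption made for the example.

```python
import numpy as np

def mlmc_estimate(sampler, n_samples):
    """Multilevel Monte Carlo telescoping estimator:
    E[P_L] = E[P_0] + sum_{l=1}^{L} E[P_l - P_{l-1}].
    sampler(l, n) returns n samples of the level-l correction
    (or of P_0 when l == 0); both levels of a correction must be
    driven by the SAME randomness -- that coupling is what makes
    the per-level variance shrink and the cost estimate work."""
    return sum(sampler(l, n).mean() for l, n in enumerate(n_samples))

# Toy problem: E[Z^2] = 1 for Z ~ N(0,1). We fake the level-l bias as
# P_l = Z^2 * (1 - 2**-(l+1)), so corrections P_l - P_{l-1} shrink with l
# and most samples can be spent on the cheap coarse level.
rng = np.random.default_rng(42)

def sampler(l, n):
    z2 = rng.standard_normal(n) ** 2  # shared randomness across levels
    if l == 0:
        return z2 * (1 - 0.5)
    return z2 * ((1 - 2.0 ** -(l + 1)) - (1 - 2.0 ** -l))

est = mlmc_estimate(sampler, [100_000, 20_000, 5_000, 1_000])
```

With four levels the estimator targets 1 - 2**-4 = 0.9375; the geometrically decaying sample sizes mirror the work balancing that yields the log-linear complexity claimed in the abstract.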

    Universal Scalable Robust Solvers from Computational Information Games and fast eigenspace adapted Multiresolution Analysis

    We show how the discovery of robust scalable numerical solvers for arbitrary bounded linear operators can be automated as a Game Theory problem by reformulating the process of computing with partial information and limited resources as that of playing underlying hierarchies of adversarial information games. When the solution space is a Banach space B endowed with a quadratic norm ‖·‖, the optimal measure (mixed strategy) for such games (e.g. the adversarial recovery of u ∈ B, given partial measurements [φ_i, u] with φ_i ∈ B*, using relative error in the ‖·‖-norm as a loss) is a centered Gaussian field ξ solely determined by the norm ‖·‖, whose conditioning (on measurements) produces optimal bets. When measurements are hierarchical, the process of conditioning this Gaussian field produces a hierarchy of elementary bets (gamblets). These gamblets generalize the notion of wavelets and Wannier functions in the sense that they are adapted to the norm ‖·‖ and induce a multi-resolution decomposition of B that is adapted to the eigensubspaces of the operator defining the norm ‖·‖. When the operator is localized, we show that the resulting gamblets are localized both in space and frequency, and we introduce the Fast Gamblet Transform (FGT) with rigorous accuracy and (near-linear) complexity estimates. As the FFT can be used to solve and diagonalize arbitrary PDEs with constant coefficients, the FGT can be used to decompose a wide range of continuous linear operators (including arbitrary continuous linear bijections from H^s_0 to H^{-s} or to L^2) into a sequence of independent linear systems with uniformly bounded condition numbers, leading to O(N polylog N) solvers and eigenspace-adapted Multiresolution Analysis (resulting in near-linear-complexity approximation of all eigensubspaces).
    Comment: 142 pages, 14 figures. Presented at AFOSR (Aug 2016), DARPA (Sep 2016), IPAM (Apr 3, 2017), Hausdorff (April 13, 2017) and ICERM (June 5, 2017).
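In finite dimensions, the conditioning step that produces gamblets is ordinary Gaussian conditioning. The sketch below is a toy, assumption-laden illustration, not the paper's hierarchical construction: the discrete Laplacian covariance, the block-average measurements, and the name `gamblets` are all invented for the example.

```python
import numpy as np

def gamblets(K, Phi):
    """Elementary bets from conditioning a centered Gaussian field with
    covariance K on linear measurements Phi: the conditional mean given
    Phi @ u is Psi @ (Phi @ u), where the columns of
    Psi = K Phi^T (Phi K Phi^T)^{-1} are the gamblets.
    Toy finite-dimensional sketch only."""
    return K @ Phi.T @ np.linalg.inv(Phi @ K @ Phi.T)

# Toy operator: 1D Dirichlet Laplacian on n interior grid points;
# its inverse plays the role of the covariance defined by the norm.
n = 32
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
K = np.linalg.inv(A)
# Coarse measurements: local averages over blocks of 4 points
Phi = np.kron(np.eye(n // 4), np.full((1, 4), 0.25))
Psi = gamblets(K, Phi)
# Defining property: gamblets are biorthogonal to the measurements,
# so measuring a gamblet expansion returns the coarse coefficients.
check = Phi @ Psi  # identity on the coarse level
```

The operator-adapted localization emphasized in the abstract would show up here as rapid off-diagonal decay of the columns of Psi; this toy only exhibits the conditioning identity itself.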

    Recognition of Occluded Object Using Wavelets

    Ph.D. thesis (Doctor of Philosophy).

    Wavelet Theory

    The wavelet is a powerful mathematical tool that plays an important role in science and technology. This book looks at some of the most creative and popular applications of wavelets, including biomedical signal processing, image processing, communication signal processing, Internet of Things (IoT), acoustical signal processing, financial market data analysis, energy and power management, and COVID-19 pandemic measurements and calculations. The editor's personal interests lie in applying the wavelet transform to identify time-domain changes in signals and their corresponding frequency components, and in improving power amplifier behavior.

    Data compression and harmonic analysis

    In this paper we review some recent interactions between harmonic analysis and data compression. The story goes back, of course, to Shannon.