    The fast wavelet X-ray transform

    The wavelet X-ray transform computes one-dimensional wavelet transforms along lines in Euclidean space in order to perform a directional time-scale analysis of functions of several variables. A fast algorithm is proposed which executes this transformation starting from values given on a Cartesian grid that represent the underlying function. The algorithm involves a rotation step and wavelet analysis/synthesis steps. The number of computations required is of the same order as the number of data involved. The analysis/synthesis steps are executed by the pyramid algorithm, which is known to have this computational advantage. The rotation step makes use of a wavelet interpolation scheme, whose cost stays within the same order thanks to the localization of the wavelets. The rotation step is executed in an optimal way by means of quasi-interpolation methods using (bi-)orthogonal wavelets.
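
    As a rough illustration of the two-step structure described above (and not the paper's algorithm itself), the sketch below rotates a sampled 2-D function and then runs the fast 1-D pyramid transform along each grid line; `scipy.ndimage.rotate` spline resampling stands in for the paper's wavelet quasi-interpolation scheme, and PyWavelets supplies the pyramid algorithm.

```python
# Illustrative sketch of the wavelet X-ray transform's structure. Hypothetical
# simplification: cubic-spline resampling replaces the paper's wavelet-based
# quasi-interpolation in the rotation step.
import numpy as np
import pywt                       # PyWavelets: pyramid algorithm for the 1-D DWT
from scipy.ndimage import rotate  # stand-in for the wavelet interpolation scheme

def wavelet_xray(grid, angle_deg, wavelet="db2", level=3):
    """1-D wavelet coefficients along all parallel lines at a given angle.

    grid      : 2-D array sampling the underlying function on a Cartesian grid
    angle_deg : direction of the family of lines, in degrees
    Returns one multilevel coefficient list per line (row of the rotated grid).
    """
    # Rotation step: resample so the chosen direction becomes the row axis.
    rotated = rotate(grid, angle_deg, reshape=False, order=3, mode="nearest")
    # Analysis step: the fast (pyramid) 1-D transform costs O(n) per line,
    # so the whole n-by-n grid is handled in O(n^2) operations.
    return [pywt.wavedec(row, wavelet, level=level) for row in rotated]

if __name__ == "__main__":
    x = np.linspace(-1.0, 1.0, 128)
    f = np.exp(-8.0 * (x[None, :] ** 2 + x[:, None] ** 2))  # sample function
    coeffs = wavelet_xray(f, angle_deg=30.0)
    print(len(coeffs), "lines analysed; coarsest band length:", len(coeffs[0][0]))
```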

    Well-Distributed Sequences: Number Theory, Optimal Transport, and Potential Theory

    The purpose of this dissertation is to examine various ways of measuring how uniformly distributed a sequence of points on compact manifolds and finite combinatorial graphs can be, providing bounds and novel explicit algorithms for picking extremely uniform points, as well as connecting disparate branches of mathematics such as Number Theory and Optimal Transport. Chapter 1 sets the stage by introducing some of the fundamental ideas and results that are used throughout the thesis: we develop and establish Weyl's Theorem, the definition of discrepancy, LeVeque's Inequality, the Erdős-Turán Inequality, the Koksma-Hlawka Inequality, and Schmidt's Theorem on Irregularities of Distribution. Chapter 2 introduces the Monge-Kantorovich transport problem, with special emphasis on the Benamou-Brenier Formula (from 2000) and Peyre's inequality (from 2018). Chapter 3 explores Peyre's Inequality in further depth, considering how specific bounds on the Wasserstein distance between a point measure and the uniform measure may be obtained from it, in particular in terms of the Green's function of the Laplacian on a manifold. We also show how a smoothing procedure, propagating probability mass under the heat equation, yields stronger bounds on transport distance using well-known properties of the heat equation. In Chapter 4, we turn to the primary question of the thesis: how to select points on a space so that they are as uniformly distributed as possible. We consider several diverse approaches: an ergodic approach iterating functions with good mixing properties; a dyadic approach introduced in a 1975 theorem of Kakutani on proportional splittings of intervals; and a completely novel potential-theoretic approach, assigning energy to point configurations and greedily minimizing the total potential arising from pairwise point interactions. Such energy minimization questions are certainly not new in the static setting--the physicist Thomson posed the question of how to minimize the potential of electrons on a sphere as far back as 1904. However, a greedy approach to uniform distribution via energy minimization is novel, particularly through the lens of Wasserstein distance, and yields provably Wasserstein-optimal point sequences using the Green's function of the Laplacian as the energy function on manifolds of dimension at least 3 (dimension 2 loses at most a square-root-log factor from the optimal bound). We connect this to known results of Graham, Pausinger, and Proinov regarding best possible uniform bounds on the Wasserstein 2-distance of point sequences in the unit interval. We also present many open questions and conjectures on the optimal asymptotic bounds for the total energy of point configurations and on the growth of the total energy function as points are added, motivated by numerical investigations that display remarkably well-behaved qualities in the dynamical system induced by greedy minimization. In Chapter 5, we consider specific point sequences and bound the transport distance from the point measure they generate to the uniform measure. We provide provably optimal rates for the van der Corput sequence, the Kronecker sequence, regular grids, and the measures induced by quadratic residues in a field of prime order. We also prove an upper bound for higher-degree monomial residues in fields of prime order, and conjecture it to be optimal.
In Chapter 6, we consider numerical integration error bounds over Lipschitz functions, asking how closely we can estimate the integral of a function by averaging its values at finitely many points. This is a rather classical question that was answered completely by Bakhvalov in 1959 and has since become a standard example ('the easiest case, which is perfectly understood'). Somewhat surprisingly, perhaps, we show that the result is not sharp and improve it in two ways: by refining the function space and by proving that these results hold uniformly along a subsequence. These bounds refine existing results that were widely considered to be optimal, and we show the intimate connection between transport distance and integration error. Our results are new even for the classical discrete grid. In Chapter 7, we study the case of finite graphs: we show that the fundamental question underlying this thesis can also be meaningfully posed on finite graphs, where it leads to a fascinating combinatorial problem. We show that the philosophy introduced in Chapter 4 can be meaningfully adapted, and we obtain a potential-theoretic algorithm that produces such a sequence on graphs. Using spectral techniques, we obtain empirically strong bounds on the 1-Wasserstein distance between measures on subsets of vertices and the uniform measure, which for graphs of large diameter are much stronger than the trivial diameter bound.
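
    Chapter 5's sequences are easy to experiment with. The following sketch (an illustration written for this summary, not code from the thesis) generates the base-2 van der Corput sequence and computes the exact 1-Wasserstein distance between its empirical measure and the uniform measure on [0,1], using the one-dimensional identity $W_1 = \int_0^1 |F_n(t) - t|\,dt$ for the empirical CDF $F_n$; the printed values of $n \cdot W_1$ give a feel for the decay rate the thesis quantifies.

```python
# Van der Corput sequence and its exact W1 distance to the uniform measure
# on [0,1] (illustration only, not code from the thesis).
def van_der_corput(n, base=2):
    """First n points: reverse the base-b digits of i = 1..n across the radix point."""
    out = []
    for i in range(1, n + 1):
        q, denom, x = i, 1, 0.0
        while q:
            q, r = divmod(q, base)
            denom *= base
            x += r / denom
        out.append(x)
    return out

def w1_to_uniform(points):
    """W1 = integral_0^1 |F_n(t) - t| dt, integrated exactly on each step of F_n."""
    def seg(a, b, c):  # closed-form integral of |c - t| over [a, b]
        if c <= a:
            return ((b - c) ** 2 - (a - c) ** 2) / 2
        if c >= b:
            return ((c - a) ** 2 - (c - b) ** 2) / 2
        return ((c - a) ** 2 + (b - c) ** 2) / 2
    s = sorted(points)
    n = len(s)
    bps = [0.0] + s + [1.0]
    # On (s_k, s_{k+1}) the empirical CDF is the constant k/n.
    return sum(seg(bps[k], bps[k + 1], k / n) for k in range(n + 1))

for n in (16, 64, 256, 1024):
    w = w1_to_uniform(van_der_corput(n))
    print(f"n={n:5d}  W1={w:.6f}  n*W1={n * w:.3f}")
```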

    zkQMC: Zero-Knowledge Proofs For (Some) Probabilistic Computations Using Quasi-Randomness

    We initiate research into efficiently embedding probabilistic computations in probabilistic proofs by introducing techniques for capturing Monte Carlo methods and Las Vegas algorithms in zero knowledge, and by exploring several potential applications of these techniques. We design and demonstrate a technique for proving the integrity of certain randomized computations, such as uncertainty quantification methods, in non-interactive zero knowledge (NIZK) by replacing conventional randomness with low-discrepancy sequences. This technique, known as the Quasi-Monte Carlo (QMC) method, functions as a form of weak algorithmic derandomization to efficiently produce adversary-resistant worst-case uncertainty bounds for the results of Monte Carlo simulations. The adversarial resistance provided by this approach allows the integrity of results to be verified in both interactive and non-interactive zero knowledge without additional statistical or cryptographic assumptions. To test these techniques, we design a custom domain-specific language and implement an associated compiler toolchain that builds zkSNARK gadgets for expressing QMC methods. We demonstrate the power of this technique by using the framework to benchmark zkSNARKs for various examples in statistics and physics. Using $N$ samples, our framework produces zkSNARKs for numerical integration problems of dimension $d$ with $O\left(\frac{(\log N)^d}{N}\right)$ worst-case error bounds. Additionally, we prove a new result using discrepancy theory to efficiently and soundly estimate the output of computations with uncertain data, with an $O\left(d\,\frac{\log N}{\sqrt[d]{N}}\right)$ worst-case error bound. Finally, we show how this work can be applied more generally to allow zero-knowledge proofs to capture a subset of decision problems in $\mathsf{BPP}$, $\mathsf{RP}$, and $\mathsf{ZPP}$.
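
    As a self-contained illustration of the core substitution (deterministic low-discrepancy points in place of random samples), the sketch below estimates a $d$-dimensional integral with an unscrambled Sobol sequence via SciPy's `scipy.stats.qmc` module; this is not the authors' DSL or zkSNARK toolchain, only the underlying QMC method. Because the point set is deterministic, a verifier can regenerate it exactly, and the Koksma-Hlawka inequality bounds the worst-case error by the $O\left(\frac{(\log N)^d}{N}\right)$ term quoted above.

```python
# Quasi-Monte Carlo integration sketch (illustrative only; not the zkQMC
# toolchain described in the paper).
import numpy as np
from scipy.stats import qmc  # SciPy >= 1.7 ships low-discrepancy generators

d = 2

def f(x):
    """Separable test integrand on [0,1]^d with known integral (2/pi)^d."""
    return np.prod(np.sin(np.pi * x), axis=1)

exact = (2.0 / np.pi) ** d

for m in (6, 10, 14):
    # Unscrambled Sobol points are fully deterministic, so a verifier can
    # regenerate exactly the same N = 2^m points and recheck the estimate.
    sobol = qmc.Sobol(d=d, scramble=False)
    pts = sobol.random_base2(m=m)
    est = f(pts).mean()
    # Koksma-Hlawka: |error| <= V(f) * D*_N, with D*_N = O((log N)^d / N).
    print(f"N=2^{m:<2d}  estimate={est:.6f}  |error|={abs(est - exact):.2e}")
```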

    Semiparametric regression analysis with missing response at random

    We develop inference tools in a semiparametric partially linear regression model with missing response data. A class of estimators is defined that includes as special cases a semiparametric regression imputation estimator, a marginal average estimator, and a (marginal) propensity score weighted estimator. We show that every estimator in the class is asymptotically normal. The three special estimators have the same asymptotic variance, and they achieve the semiparametric efficiency bound in the homoskedastic Gaussian case. We show that the jackknife method can be used to consistently estimate the asymptotic variance. Our model and estimators are defined with a view to avoiding the curse of dimensionality, which severely limits the applicability of existing methods. The empirical likelihood method is also developed. It is shown that when missing responses are imputed using the semiparametric regression method, the empirical log-likelihood is asymptotically a scaled chi-square variable. An adjusted empirical log-likelihood ratio, which is asymptotically standard chi-square, is obtained. A bootstrap empirical log-likelihood ratio is also derived, and its distribution is used to approximate that of the imputed empirical log-likelihood ratio. A simulation study is conducted to compare the adjusted and bootstrap empirical likelihoods with the normal-approximation-based method in terms of coverage accuracy and average length of confidence intervals. Based on biases and standard errors, a simulation comparison is also made between the proposed estimators and related estimators.
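
    To make the imputation idea concrete, here is a minimal numerical sketch under simplifying assumptions: the model is taken to be fully linear rather than partially linear and semiparametric, and the missingness probability depends only on the covariate (missing at random). It fits the regression on complete cases, imputes each missing response by its fitted value, averages the completed data to estimate E[Y], and attaches a delete-one jackknife standard error as the abstract suggests.

```python
# Regression-imputation estimate of a mean with responses missing at random
# (simplified sketch: fully linear model, not the paper's semiparametric fit).
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)     # true E[Y] = 1 + 2*0.5 = 2
observed = rng.uniform(size=n) < 0.3 + 0.6 * x  # MAR: depends on x only
X = np.column_stack([np.ones(n), x])

def imputation_estimate(idx):
    """Fit on complete cases, impute missing y by fitted values, average."""
    Xi, yi, obs = X[idx], y[idx], observed[idx]
    beta, *_ = np.linalg.lstsq(Xi[obs], yi[obs], rcond=None)
    return np.where(obs, yi, Xi @ beta).mean()

theta = imputation_estimate(np.arange(n))

# Delete-one jackknife for the variance, refitting the regression each time.
loo = np.array([imputation_estimate(np.delete(np.arange(n), i))
                for i in range(n)])
se = np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))
print(f"estimate of E[Y] = {theta:.3f} +/- {1.96 * se:.3f} (95% CI)")
```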