75 research outputs found

    A survey of information-based complexity

    We survey some recent results in information-based complexity. We focus on the worst case setting and also indicate some average case results.

    Average Case Tractability of Non-homogeneous Tensor Product Problems

    We study d-variate approximation problems in the average case setting with respect to a zero-mean Gaussian measure. Our interest is focused on measures having the structure of a non-homogeneous linear tensor product, where the covariance kernel is a product of univariate kernels. We consider the normalized average error of algorithms that use finitely many evaluations of arbitrary linear functionals. The information complexity is defined as the minimal number n(h,d) of such evaluations needed for the error in the d-variate case to be at most h. The growth of n(h,d) as a function of h^{-1} and d depends on the eigenvalues of the covariance operator and determines whether a problem is tractable or not. Four types of tractability are studied, and for each of them we find necessary and sufficient conditions in terms of the eigenvalues of the univariate kernels. We illustrate our results by considering approximation problems related to products of Korobov kernels characterized by weights g_k and smoothness parameters r_k. We assume that the weights are non-increasing and the smoothness parameters are non-decreasing. Furthermore, they may be related; for instance, g_k = g(r_k) for some non-increasing function g. In particular, we show that the approximation problem is strongly polynomially tractable, i.e., n(h,d) ≤ C h^{-p} for all d and 0 < h < 1, where C and p are independent of h and d, iff liminf_{k→∞} |ln g_k| / ln k > 1. For the other types of tractability we also give necessary and sufficient conditions in terms of the sequences g_k and r_k.
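The strong polynomial tractability criterion above has a simple numerical illustration. A minimal sketch (the helper `spt_ratios` and the weight choice g_k = k^{-1.5} are illustrative assumptions, not taken from the paper):

```python
import math

def spt_ratios(g, ks):
    """Return |ln g_k| / ln k for a weight sequence g evaluated at indices ks.
    The abstract's criterion: strong polynomial tractability holds iff the
    liminf of this ratio (as k grows) exceeds 1."""
    return [abs(math.log(g(k))) / math.log(k) for k in ks]

# Illustrative weights g_k = k^(-1.5): the ratio is identically 1.5 > 1,
# so such a weight sequence would satisfy the criterion.
ratios = spt_ratios(lambda k: k ** -1.5, range(2, 10))
```

For g_k = k^{-beta} the ratio is exactly beta, which makes the threshold at 1 easy to see.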

    Lattice rule algorithms for multivariate approximation in the average case setting

    We study multivariate approximation for continuous functions in the average case setting. The space of d-variate continuous functions is equipped with the zero-mean Gaussian measure whose covariance function is the reproducing kernel of a weighted Korobov space with smoothness parameter α > 1 and weights γ_{d,j} for j = 1, 2, …, d. The weight γ_{d,j} moderates the behavior of functions with respect to the jth variable, and small γ_{d,j} means that functions depend weakly on the jth variable. We study lattice rule algorithms which approximate the Fourier coefficients of a function based on function values at lattice sample points. The generating vector for these lattice points is constructed by the component-by-component algorithm, and it is tailored to the approximation problem. Our main interest is when d is large, and we study tractability and strong tractability of multivariate approximation. That is, we want to reduce the initial average case error by a factor ε using a number of function values that is polynomial in ε^{-1} and d in the case of tractability, and polynomial in ε^{-1} alone in the case of strong tractability. Necessary and sufficient conditions for tractability and strong tractability are obtained by applying known general tractability results for the class of arbitrary linear functionals and for the class of function values. Strong tractability holds for the two classes in the average case setting iff sup_{d≥1} ∑_{j=1}^{d} γ_{d,j}^s < ∞ for some positive s < 1, and tractability holds iff sup_{d≥1} ∑_{j=1}^{d} γ_{d,j}^t / log(d+1) < ∞ for some positive t < 1. The previous results for the class of function values have been non-constructive. We provide a construction in this paper and prove tractability and strong tractability error bounds for lattice rule algorithms. This paper can be viewed as a continuation of our previous paper, where we studied multivariate approximation for weighted Korobov spaces in the worst case setting. Many technical results from that paper are also useful for the average case setting. The exponents of ε^{-1} and d corresponding to our error bounds are not sharp. However, for α close to 1 and for slowly decaying weights, we obtain almost the minimal exponent of ε^{-1}. We also compare the results from the worst case and the average case settings in weighted Korobov spaces.
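The component-by-component construction mentioned above can be sketched for a weighted Korobov kernel with α = 2, where the kernel sum over nonzero frequencies has the closed form 2π²(x² − x + 1/6). The naive O(d·n²) search below is a minimal illustration only (fast CBC implementations in the literature use FFTs); the prime n = 7 and the weights are arbitrary choices, not from the paper:

```python
import math

def omega(x):
    # Korobov kernel term for smoothness alpha = 2:
    # sum over h != 0 of exp(2*pi*i*h*x) / |h|^2 = 2*pi^2*(x^2 - x + 1/6)
    return 2.0 * math.pi ** 2 * (x * x - x + 1.0 / 6.0)

def cbc(n, gammas):
    """Component-by-component search for a generating vector z of an n-point
    rank-1 lattice rule in a weighted Korobov space with alpha = 2.
    Greedy: fixes one component of z at a time by minimizing the squared
    worst-case error.  Naive O(d * n^2) version; n should be prime."""
    prod = [1.0] * n        # running products over components chosen so far
    z = []
    for gamma in gammas:
        best = None         # (error, candidate, column)
        for cand in range(1, n):   # n prime: every candidate is admissible
            col = [1.0 + gamma * omega((k * cand % n) / n) for k in range(n)]
            err = sum(p * c for p, c in zip(prod, col)) / n - 1.0
            if best is None or err < best[0]:
                best = (err, cand, col)
        z.append(best[1])
        prod = [p * c for p, c in zip(prod, best[2])]
    return z

# Illustrative run: a 7-point rule in d = 2 with weights (1.0, 0.5).
z = cbc(7, [1.0, 0.5])
```

The greedy structure is the point: each component of z is fixed once and never revisited, which is what makes the construction feasible for large d.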

    Finite-order weights imply tractability of multivariate integration

    Multivariate integration of high dimension s occurs in many applications. In many such applications, for example in finance, integrands can be well approximated by sums of functions of just a few variables. In this situation the superposition (or effective) dimension is small, and we can model the problem with finite-order weights, where the weights describe the relative importance of each distinct group of variables up to a given order (the order being the number of variables in a group), and ignore all groups of variables of higher order. In this paper we consider multivariate integration for the anchored and unanchored (non-periodic) Sobolev spaces equipped with finite-order weights. Our main interest is tractability and strong tractability of QMC algorithms in the worst-case setting. That is, we want to find how the minimal number of function values needed to reduce the initial error by a factor ε depends on s and ε^{-1}. If there is no dependence on s, and only polynomial dependence on ε^{-1}, we have strong tractability, whereas with polynomial dependence on both s and ε^{-1} we have tractability. We show that for the anchored Sobolev space we have strong tractability for arbitrary finite-order weights, whereas for the unanchored Sobolev space we have tractability for all bounded finite-order weights. In both cases, the dependence on ε^{-1} is quadratic. We can improve the dependence on ε^{-1} at the expense of polynomial dependence on s. For finite-order weights, we may achieve almost linear dependence on ε^{-1} with a polynomial dependence on s whose degree is proportional to the order of the weights. We show that these tractability bounds can be achieved by shifted lattice rules with generators computed by the component-by-component (CBC) algorithm. The computed lattice rules depend on the weights. Similar bounds can also be achieved by well-known low discrepancy sequences, such as the Halton, Sobol and Niederreiter sequences, which do not depend on the weights. We prove that these classical low discrepancy sequences lead to error bounds with almost linear dependence on n^{-1} and polynomial dependence on s. We present explicit worst-case error bounds for shifted lattice rules and for the Niederreiter sequence. Better tractability and error bounds are possible for finite-order weights, and even for general weights if they satisfy certain conditions. We present conditions on general weights that guarantee tractability and strong tractability of multivariate integration.
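A randomly shifted rank-1 lattice rule of the kind analyzed above can be sketched in a few lines. The generating vector (1, 55) with n = 89 (a classical Fibonacci lattice) and the test integrand are illustrative choices, not taken from the paper:

```python
import random

def shifted_lattice_rule(f, z, n, shift=None):
    """Approximate the integral of f over the unit cube [0,1]^d with an
    n-point rank-1 lattice rule: points k*z/n mod 1, plus a random shift."""
    d = len(z)
    if shift is None:
        shift = [random.random() for _ in range(d)]  # one uniform shift per axis
    total = 0.0
    for k in range(n):
        x = [((k * zj) / n + sj) % 1.0 for zj, sj in zip(z, shift)]
        total += f(x)
    return total / n

# Example: the Fibonacci lattice (n = 89, z = (1, 55)) applied to
# f(x) = (x_1 + 1/2)(x_2 + 1/2), whose exact integral over [0,1]^2 is 1.
approx = shifted_lattice_rule(lambda x: (x[0] + 0.5) * (x[1] + 0.5), [1, 55], 89)
```

Averaging over independent random shifts gives an unbiased estimate of the integral together with a practical error estimate, which is one reason shifted rules are used in practice.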

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes in the form of a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification and Information-Based Complexity. Comment: 37 pages.

    The method of minimal B-errors for large systems of linear equations with an arbitrary matrix

    No full text
    The author introduces a minimal B-error algorithm for iterative techniques for solving the matrix equation Ax + b = 0, following the general concepts introduced by G. H. Golub and R. S. Varga (1961). (MR0468134)
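The exact iteration and the matrix B are not specified in this review entry. As a hedged illustration only, the sketch below implements a minimal-residual-type step for Ax + b = 0, which corresponds to minimizing the B-norm of the error for the particular choice B = AᵀA:

```python
def minimal_residual(A, b, x0=None, tol=1e-10, max_iter=1000):
    """Iterate on A x + b = 0: step along the current residual with the
    length that minimizes the Euclidean norm of the next residual, i.e.
    the B-norm of the error with B = A^T A.  Pure-Python illustration."""
    n = len(b)
    x = list(x0) if x0 is not None else [0.0] * n

    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(max_iter):
        r = [-(av + bi) for av, bi in zip(matvec(x), b)]   # residual of Ax + b = 0
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        Ar = matvec(r)
        # step length minimizing || A (x + alpha r) + b ||
        alpha = sum(a * ri for a, ri in zip(Ar, r)) / sum(a * a for a in Ar)
        x = [xi + alpha * ri for xi, ri in zip(x, r)]
    return x

# Illustrative 2x2 system: 4x + y - 1 = 0, x + 3y - 2 = 0.
solution = minimal_residual([[4.0, 1.0], [1.0, 3.0]], [-1.0, -2.0])
```

For this symmetric positive definite example the exact solution is (1/11, 7/11); the iteration drives the residual below the tolerance in a handful of steps.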

    Remarks on Bairstow's method

    No full text

    A modification of the alternating direction method and its relation to von Neumann's method

    No full text