55,411 research outputs found

    Rank-1 lattice rules for multivariate integration in spaces of permutation-invariant functions: Error bounds and tractability

    We study multivariate integration of functions that are invariant under permutations (of subsets) of their arguments. We find an upper bound for the $n$th minimal worst case error and show that under certain conditions, it can be bounded independently of the number of dimensions. In particular, we study the application of unshifted and randomly shifted rank-1 lattice rules in such a problem setting. We derive conditions under which multivariate integration is polynomially or strongly polynomially tractable with the Monte Carlo rate of convergence $O(n^{-1/2})$. Furthermore, we prove that those tractability results can be achieved with shifted lattice rules and that the shifts are indeed necessary. Finally, we show the existence of rank-1 lattice rules whose worst case error on the permutation- and shift-invariant spaces converges with (almost) optimal rate. That is, we derive error bounds of the form $O(n^{-\lambda/2})$ for all $1 \leq \lambda < 2\alpha$, where $\alpha$ denotes the smoothness of the spaces. Keywords: Numerical integration, Quadrature, Cubature, Quasi-Monte Carlo methods, Rank-1 lattice rules. Comment: 26 pages; minor changes due to reviewer's comments; the final publication is available at link.springer.co
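    As a concrete illustration of the quadrature rules discussed above, the following sketch evaluates a randomly shifted rank-1 lattice rule on the unit cube. The generating vector and the (permutation-invariant) test integrand are placeholders, not taken from the paper; in practice the generating vector would be obtained by a component-by-component construction tailored to the target space.

```python
import numpy as np

def shifted_lattice_rule(f, z, n, n_shifts=8, rng=None):
    """Randomly shifted rank-1 lattice rule on [0,1]^d.

    f        : integrand mapping an (n, d) array of points to n values
    z        : generating vector of length d (illustrative choice below)
    n        : number of lattice points per shift
    n_shifts : independent random shifts, giving an unbiased estimate
               and a crude error indicator
    """
    rng = np.random.default_rng(rng)
    z = np.asarray(z, dtype=np.int64)
    d = z.size
    i = np.arange(n).reshape(-1, 1)              # 0, 1, ..., n-1
    base = (i * z.reshape(1, -1) / n) % 1.0      # unshifted lattice points frac(i z / n)
    estimates = []
    for _ in range(n_shifts):
        shift = rng.random(d)                    # uniform shift Delta in [0,1)^d
        pts = (base + shift) % 1.0               # shifted points frac(i z / n + Delta)
        estimates.append(f(pts).mean())
    estimates = np.asarray(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(n_shifts)

# Placeholder smooth, permutation-invariant integrand and an illustrative
# generating vector (not the construction analyzed in the paper).
f = lambda x: np.prod(1.0 + 0.1 * (x - 0.5), axis=1)
est, err = shifted_lattice_rule(f, z=[1, 182667, 469891, 498753], n=2**13)
```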

    A quasi-Monte Carlo Method for an Optimal Control Problem Under Uncertainty

    We study an optimal control problem under uncertainty, where the target function is the solution of an elliptic partial differential equation with random coefficients, steered by a control function. The robust formulation of the optimization problem is stated as a high-dimensional integration problem over the stochastic variables. It is well known that carrying out a high-dimensional numerical integration of this kind using a Monte Carlo method has a notoriously slow convergence rate; meanwhile, a faster rate of convergence can potentially be obtained by using sparse grid quadratures, but these lead to discretized systems that are non-convex due to the involvement of negative quadrature weights. In this paper, we instead analyze the application of a quasi-Monte Carlo method, which retains the desirable convexity structure of the system and has a faster convergence rate than ordinary Monte Carlo methods. In particular, we show that under moderate assumptions on the decay of the input random field, the error rate obtained by using a specially designed, randomly shifted rank-1 lattice quadrature rule is essentially inversely proportional to the number of quadrature nodes. The overall discretization error of the problem, consisting of the dimension truncation error, the finite element discretization error, and the quasi-Monte Carlo quadrature error, is derived in detail. We assess the theoretical findings in numerical experiments.
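    Schematically, the error decomposition described above is a three-term split; only the quasi-Monte Carlo rate is stated in the abstract, so the dimension truncation and finite element contributions are written here as generic terms rather than with the paper's explicit exponents:

\[
\bigl|\text{exact value} - \text{computed value}\bigr|
\;\lesssim\;
\underbrace{E_{\mathrm{trunc}}(s)}_{\text{dimension truncation}}
\;+\;
\underbrace{E_{\mathrm{FE}}(h)}_{\text{finite element discretization}}
\;+\;
\underbrace{E_{\mathrm{QMC}}(n)}_{\text{lattice quadrature}},
\qquad
E_{\mathrm{QMC}}(n) = O\!\left(n^{-1+\delta}\right) \ \text{for every } \delta > 0 .
\]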

    Simple Monte Carlo and the Metropolis algorithm

    We study the integration of functions with respect to an unknown density. Information is available as oracle calls to the integrand and to the non-normalized density function. We are interested in analyzing the integration error of optimal algorithms (or the complexity of the problem), with emphasis on the variability of the weight function. For a corresponding large class of problem instances we show that the complexity grows linearly in the variability, and that the simple Monte Carlo method provides an almost optimal algorithm. Under additional geometric restrictions (mainly log-concavity) on the density functions, we establish that a suitable adaptive local Metropolis algorithm is almost optimal and outperforms any non-adaptive algorithm.
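    Both baseline methods admit a compact sketch under one natural reading of the abstract: simple Monte Carlo as a self-normalized estimator with uniform draws weighted by the non-normalized density, and the Metropolis algorithm as a random-walk chain targeting that density (the adaptive, local variant analyzed in the paper is not reproduced here). The domain, the density rho, the integrand f, and the step size below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_mc(f, rho, d, n):
    """Simple Monte Carlo on [0,1]^d: uniform samples, self-normalized
    weighting by the non-normalized density rho."""
    x = rng.random((n, d))
    w = rho(x)
    return np.sum(w * f(x)) / np.sum(w)

def metropolis(f, rho, d, n, step=0.1, burn=1000):
    """Random-walk Metropolis chain targeting rho (up to normalization),
    followed by a plain ergodic average of f along the chain."""
    x = rng.random(d)
    rx = rho(x[None, :])[0]
    total = 0.0
    for k in range(n + burn):
        y = (x + step * rng.normal(size=d)) % 1.0    # wrapped (hence symmetric) proposal
        ry = rho(y[None, :])[0]
        if rng.random() < min(1.0, ry / rx):         # Metropolis accept/reject step
            x, rx = y, ry
        if k >= burn:
            total += f(x[None, :])[0]
    return total / n

# Illustrative log-concave non-normalized density and integrand.
rho = lambda x: np.exp(-20.0 * np.sum((x - 0.5) ** 2, axis=1))
f = lambda x: np.sum(x, axis=1)
print(simple_mc(f, rho, d=3, n=50_000), metropolis(f, rho, d=3, n=50_000))
```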

    BER Performance of IM/DD FSO System with OOK using APD Receiver

    In this paper, the performance of an intensity modulation with direct detection (IM/DD) free-space optical (FSO) system using on-off keying (OOK) and an avalanche photodiode (APD) receiver is investigated. The gamma-gamma model is used to describe the effect of atmospheric turbulence, since it provides good agreement over a wide range of atmospheric conditions. In addition, the same FSO system with equal gain combining applied at reception is analyzed. After theoretical derivation of the expression for the bit error rate (BER), numerical integration with a previously specified relative calculation error is performed. Numerical results are presented and confirmed by Monte Carlo simulations. The effects of the FSO link and receiver parameters on the BER performance are discussed. The results illustrate that the optimal APD gain, in the minimum-BER sense, depends considerably on the link distance, atmospheric turbulence strength, and receiver temperature. In addition, the value of this optimal gain is slightly different when spatial diversity is applied compared with single-channel reception.
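    A Monte Carlo check of the kind mentioned above can be sketched as follows: the gamma-gamma irradiance is generated as the product of two gamma variates, and the average BER is the mean of a conditional Q-function. The full APD shot-noise and thermal-noise model from the paper is collapsed here into a single effective SNR scale, and all parameter values are illustrative, so this is only a structural sketch, not the paper's BER expression.

```python
import numpy as np
from scipy.stats import norm

def ber_ook_gamma_gamma(alpha, beta, snr_scale, n=10**6, rng=None):
    """Monte Carlo estimate of the average BER for IM/DD OOK over a
    gamma-gamma turbulence channel.

    The irradiance is I = X * Y with X ~ Gamma(alpha, 1/alpha) and
    Y ~ Gamma(beta, 1/beta), so that E[I] = 1. The conditional BER is
    modeled as Q(snr_scale * I); the APD gain and noise terms of the
    paper are absorbed into snr_scale (a simplifying assumption).
    """
    rng = np.random.default_rng(rng)
    x = rng.gamma(shape=alpha, scale=1.0 / alpha, size=n)   # large-scale fading
    y = rng.gamma(shape=beta, scale=1.0 / beta, size=n)     # small-scale fading
    irradiance = x * y
    return norm.sf(snr_scale * irradiance).mean()           # Q(u) = P(N(0,1) > u)

# Illustrative turbulence parameters and effective SNR scale.
print(ber_ook_gamma_gamma(alpha=4.0, beta=1.9, snr_scale=3.0))
```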

    Multilevel Double Loop Monte Carlo and Stochastic Collocation Methods with Importance Sampling for Bayesian Optimal Experimental Design

    An optimal experimental set-up maximizes the value of data for statistical inferences and predictions. The efficiency of strategies for finding optimal experimental set-ups is particularly important for experiments that are time-consuming or expensive to perform. For instance, when the experiments are modeled by partial differential equations (PDEs), multilevel methods have been proven to dramatically reduce the computational complexity of their single-level counterparts when estimating expected values. For a setting where PDEs can model experiments, we propose two multilevel methods for estimating a popular design criterion known as the expected information gain in simulation-based Bayesian optimal experimental design. The expected information gain criterion is of a nested expectation form, and only a handful of multilevel methods have been proposed for problems of this form. We propose a Multilevel Double Loop Monte Carlo (MLDLMC) method, which is a multilevel strategy with Double Loop Monte Carlo (DLMC), and a Multilevel Double Loop Stochastic Collocation (MLDLSC) method, which performs a high-dimensional integration by deterministic quadrature on sparse grids. For both methods, the Laplace approximation is used for importance sampling, which significantly reduces the computational work of estimating the inner expectations. The optimal values of the method parameters are determined by minimizing the average computational work, subject to satisfying the desired error tolerance. The computational efficiencies of the methods are demonstrated by estimating the expected information gain for Bayesian inference of the fiber orientation in composite laminate materials from an electrical impedance tomography experiment. MLDLSC performs better than MLDLMC when the regularity of the quantity of interest, with respect to the additive noise and the unknown parameters, can be exploited.
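    For reference, the single-level Double Loop Monte Carlo (DLMC) estimator that both multilevel methods build on can be sketched as below. The Laplace-based importance sampling and the multilevel correction structure described above are omitted, the inner evidence is estimated with plain prior samples, and the toy linear-Gaussian model at the end is purely illustrative.

```python
import numpy as np

def eig_dlmc(sample_prior, sample_data, log_likelihood, n_outer, n_inner, rng=None):
    """Nested (double loop) Monte Carlo estimate of the expected information
    gain  EIG = E_{theta, y}[ log p(y|theta) - log p(y) ].

    sample_prior(rng)        -> one prior draw theta
    sample_data(theta, rng)  -> one synthetic observation y ~ p(y|theta)
    log_likelihood(y, theta) -> log p(y|theta)
    """
    rng = np.random.default_rng(rng)
    gains = []
    for _ in range(n_outer):
        theta = sample_prior(rng)
        y = sample_data(theta, rng)
        # Inner loop: log p(y) ~= log-mean-exp of likelihoods at fresh prior draws
        inner_ll = np.array([log_likelihood(y, sample_prior(rng))
                             for _ in range(n_inner)])
        log_evidence = np.logaddexp.reduce(inner_ll) - np.log(n_inner)
        gains.append(log_likelihood(y, theta) - log_evidence)
    return float(np.mean(gains))

# Toy linear-Gaussian model (all names and parameters illustrative).
sample_prior = lambda rng: rng.normal(0.0, 1.0)
sample_data = lambda theta, rng: theta + 0.1 * rng.normal()
log_likelihood = lambda y, theta: (-0.5 * ((y - theta) / 0.1) ** 2
                                   - np.log(0.1 * np.sqrt(2 * np.pi)))
print(eig_dlmc(sample_prior, sample_data, log_likelihood, n_outer=200, n_inner=200))
```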