
    06391 Abstracts Collection -- Algorithms and Complexity for Continuous Problems

    From 24.09.06 to 29.09.06, the Dagstuhl Seminar 06391 "Algorithms and Complexity for Continuous Problems" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. During the seminar, participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

    Comparing different sampling schemes for approximating the integrals involved in the semi-Bayesian optimal design of choice experiments.

    In conjoint choice experiments, the semi-Bayesian D-optimality criterion is often used to compute efficient designs. The traditional way to compute this criterion, which involves multi-dimensional integrals over the prior distribution, is to use pseudo-Monte Carlo samples. However, other sampling approaches are available. Examples are the quasi-Monte Carlo approach (randomized Halton sequences, modified Latin hypercube sampling and extensible shifted lattice points with Baker's transformation), the Gaussian-Hermite quadrature approach and a method using spherical-radial transformations. Not much is known in general about which sampling scheme performs best in constructing efficient choice designs. In this study, we compare the performance of these approaches under various scenarios and try to identify the most efficient sampling scheme for each situation. Keywords: Conjoint choice design; Pseudo-Monte Carlo; Quasi-Monte Carlo; Gaussian-Hermite quadrature; Spherical-radial transformation.
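
    As a rough illustration of the trade-off studied in this abstract, the sketch below compares a pseudo-Monte Carlo estimate with a scrambled-Halton quasi-Monte Carlo estimate of an expectation over a normal prior. The integrand g() is a hypothetical stand-in: the actual semi-Bayesian D-error involves the information matrix of a multinomial logit model, which is not reproduced here.

```python
# Sketch: pseudo-Monte Carlo vs. quasi-Monte Carlo (scrambled Halton) for a
# prior expectation E_beta[g(beta)], beta ~ N(mu, sigma^2 I).
# The integrand g() is a toy stand-in, NOT the actual semi-Bayesian D-error.
import numpy as np
from scipy.stats import norm, qmc

rng = np.random.default_rng(42)
d, n = 2, 1024                          # prior dimension, number of samples
mu, sigma = np.array([0.5, -0.5]), 0.3  # illustrative prior parameters

def g(beta):
    # Hypothetical smooth integrand standing in for the D-error of a design.
    return np.exp(-np.sum(beta**2, axis=1))

# Pseudo-Monte Carlo: draw directly from the prior.
beta_pmc = mu + sigma * rng.standard_normal((n, d))
est_pmc = g(beta_pmc).mean()

# Quasi-Monte Carlo: scrambled Halton points mapped through the prior's inverse CDF.
halton = qmc.Halton(d=d, scramble=True, seed=42)
u = halton.random(n)                    # low-discrepancy points in (0, 1)^d
beta_qmc = mu + sigma * norm.ppf(u)
est_qmc = g(beta_qmc).mean()

print(f"PMC estimate: {est_pmc:.5f}")
print(f"QMC estimate: {est_qmc:.5f}")
```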

    Construction of lattice rules for multiple integration based on a weighted discrepancy

    High-dimensional integrals arise in a variety of areas, including quantum physics, the physics and chemistry of molecules, statistical mechanics and, more recently, financial applications. In order to approximate multidimensional integrals, one may use Monte Carlo methods, in which the quadrature points are generated randomly, or quasi-Monte Carlo methods, in which the points are generated deterministically. One particular class of quasi-Monte Carlo methods for multivariate integration is represented by lattice rules. The lattice rules constructed throughout this thesis allow good approximations to integrals of functions belonging to certain weighted function spaces. These function spaces were proposed as an explanation of why integrals in many variables appear to be successfully approximated although the standard theory indicates that the number of quadrature points required for reasonable accuracy would be astronomical because of the large number of variables. The purpose of this thesis is to contribute theoretical results regarding the construction of lattice rules for multiple integration. We consider both lattice rules for integrals over the unit cube and lattice rules suitable for integrals over Euclidean space. The research reported throughout the thesis is devoted to finding the generating vector required to produce lattice rules that have what is termed a low weighted discrepancy. In simple terms, the discrepancy is a measure of the uniformity of the distribution of the quadrature points or, in other settings, a worst-case error. One of the assumptions used in these weighted function spaces is that the variables are arranged in decreasing order of importance, and the assignment of weights in this situation results in so-called product weights. In other applications it is rather the importance of groups of variables that matters; this situation is modelled by using function spaces in which the weights are general. In the weighted settings mentioned above, the quality of the lattice rules is assessed by the weighted discrepancy mentioned earlier. Under appropriate conditions on the weights, the lattice rules constructed here produce a convergence rate of the error that ranges from O(n^{-1/2}) to the (believed) optimal O(n^{-1+δ}) for any δ > 0, with the involved constant independent of the dimension.
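
    For readers unfamiliar with the construction, a minimal sketch of a rank-1 lattice rule follows. The generating vector is purely illustrative, not one produced by the weighted-discrepancy searches studied in the thesis; the test integrand has exact integral 1 over the unit cube.

```python
# Sketch of a rank-1 lattice rule: points x_i = frac(i * z / n), i = 0..n-1.
# The generating vector z below is purely illustrative, not an optimized one.
import numpy as np

n = 1021                               # number of points (a prime)
z = np.array([1, 306, 269, 483, 512])  # illustrative generating vector
d = len(z)

i = np.arange(n).reshape(-1, 1)
points = (i * z / n) % 1.0             # n x d lattice points in [0, 1)^d

def f(x):
    # Test integrand with exact integral 1 over the unit cube:
    # product over coordinates of (1 + 0.5 * (x_j - 0.5)).
    return np.prod(1.0 + 0.5 * (x - 0.5), axis=1)

approx = f(points).mean()
print(f"lattice-rule estimate: {approx:.6f} (exact value: 1.0)")
```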

    The AEP algorithm for the fast computation of the distribution of the sum of dependent random variables

    We propose a new algorithm to compute numerically the distribution function of the sum of d dependent, non-negative random variables with given joint distribution. Comment: Published at http://dx.doi.org/10.3150/10-BEJ284 in the Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
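
    The AEP algorithm itself is a deterministic geometric decomposition and is not reproduced here. Purely as a point of reference for the problem being solved, the hedged sketch below estimates the distribution function of a dependent, non-negative sum by plain Monte Carlo, with lognormal margins coupled through an illustrative Gaussian correlation structure.

```python
# Sketch (NOT the AEP algorithm): plain Monte Carlo estimate of
# P(X_1 + ... + X_d <= s) for dependent, non-negative random variables.
# Here the X_j are lognormal, with dependence induced by correlated normals.
import numpy as np

rng = np.random.default_rng(0)
d, n = 3, 200_000
corr = np.full((d, d), 0.4) + 0.6 * np.eye(d)    # illustrative correlation matrix
L = np.linalg.cholesky(corr)

z = rng.standard_normal((n, d)) @ L.T            # correlated standard normals
x = np.exp(z)                                    # lognormal, non-negative margins
s_total = x.sum(axis=1)

for s in (3.0, 5.0, 10.0):
    print(f"P(S <= {s:4.1f}) ~ {(s_total <= s).mean():.4f}")
```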

    A Tool for Custom Construction of QMC and RQMC Point Sets

    We present LatNet Builder, a software tool to find good parameters for lattice rules, polynomial lattice rules, and digital nets in base 2, for quasi-Monte Carlo (QMC) and randomized quasi-Monte Carlo (RQMC) sampling over the s-dimensional unit hypercube. The selection criteria are figures of merit that give different weights to different subsets of coordinates. They are upper bounds on the worst-case error (for QMC) or variance (for RQMC) for integrands rescaled to have a norm of at most one in certain Hilbert spaces of functions. We summarize the various Hilbert spaces, discrepancies, types of weights, figures of merit, types of constructions, and search methods supported by LatNet Builder. We briefly discuss its organization and provide simple illustrations of what it can do. Funding: NSERC Discovery Grant, IVADO Grant, Corps des Mines Stipend, ERDF, ESF, EXP. 2019/0043.
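
    The RQMC variance that these figures of merit bound can be assessed empirically with independent random shifts of a lattice point set. A minimal sketch of that idea follows; the generating vector is illustrative and is not one searched for by LatNet Builder, whose own interface is not shown here.

```python
# Sketch: randomly shifted lattice rule (RQMC). m independent random shifts of
# the same rank-1 lattice give unbiased replicates, from which the variance of
# the RQMC estimator can be assessed. The generating vector is illustrative.
import numpy as np

rng = np.random.default_rng(7)
n, m = 521, 25                       # points per replicate, number of shifts
z = np.array([1, 182, 95, 362])      # illustrative generating vector
d = len(z)
base = (np.arange(n).reshape(-1, 1) * z / n) % 1.0

def f(x):
    # Smooth test integrand with exact integral 1 over [0, 1)^d.
    return np.prod(1.0 + (x - 0.5), axis=1)

estimates = []
for _ in range(m):
    shift = rng.random(d)
    shifted = (base + shift) % 1.0   # one randomly shifted replicate
    estimates.append(f(shifted).mean())

estimates = np.array(estimates)
print(f"RQMC estimate: {estimates.mean():.6f}")
print(f"std. error:    {estimates.std(ddof=1) / np.sqrt(m):.2e}")
```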

    Robust modeling and planning of radio-frequency identification network in logistics under uncertainties

    To achieve a higher coverage rate, lower reading interference, and better cost efficiency of a radio-frequency identification network in logistics under uncertainties, a novel robust radio-frequency identification network planning model is built and a robust particle swarm optimization is proposed. In the planning model, coverage is established by referring to the probabilistic sensing model of a sensor with uncertain sensing range; reading interference is calculated by a concentric map-based Monte Carlo method; cost efficiency is described by the number of readers. In the robust particle swarm optimization, a sampling method whose sampling size varies with the iterations is put forward to improve robustness within a limited sampling budget. In particular, the exploitation speed in the early phase of the optimization is quickened by a smaller expected sampling size, while the exploitation precision in the late phase is ensured by a larger expected sampling size. Simulation results show that, compared with the other three methods, the planning solution obtained by this work is more conducive to enhancing the coverage rate and reducing interference and cost.
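
    A minimal sketch of the sampling idea is given below: a standard particle swarm loop in which a noisy fitness is averaged over a sample whose size grows with the iteration count, so that early iterations are cheap and late iterations are precise. The noisy objective is a hypothetical stand-in, not the RFID coverage/interference/cost model from the paper.

```python
# Sketch: PSO with an iteration-dependent sampling size for a noisy fitness.
# Small expected sampling size early (fast exploitation), larger size late
# (precise exploitation). The objective below is a toy stand-in.
import numpy as np

rng = np.random.default_rng(1)

def noisy_fitness(x, n_samples):
    # Hypothetical objective: sphere function observed under additive noise,
    # averaged over n_samples noisy evaluations.
    noise = rng.normal(0.0, 0.5, size=n_samples)
    return float(np.mean(np.sum(x**2) + noise))

dim, n_particles, n_iters = 4, 20, 60
pos = rng.uniform(-5, 5, (n_particles, dim))
vel = np.zeros((n_particles, dim))
pbest, pbest_val = pos.copy(), np.full(n_particles, np.inf)
gbest, gbest_val = None, np.inf
w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

for t in range(n_iters):
    n_samples = 5 + int(45 * t / (n_iters - 1))   # sampling size grows with t
    for k in range(n_particles):
        val = noisy_fitness(pos[k], n_samples)
        if val < pbest_val[k]:
            pbest_val[k], pbest[k] = val, pos[k].copy()
        if val < gbest_val:
            gbest_val, gbest = val, pos[k].copy()
    r1 = rng.random((n_particles, dim))
    r2 = rng.random((n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel

print("best position found:", np.round(gbest, 3))
print("best (noisy) fitness:", round(gbest_val, 3))
```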

    Dependence concepts and selection criteria for lattice rules

    Lemieux recently proposed a new approach that studies randomized quasi-Monte Carlo through dependence concepts. By analyzing the dependence structure of a rank-1 lattice, Lemieux proposed a copula-based criterion with which we can find a "good generator" for the lattice. One drawback of the criterion is that it assumes that a given function can be well approximated by a bilinear function. It is not clear if this assumption holds in general. In this thesis, we assess the validity and robustness of the copula-based criterion. We do this by working with bilinear functions, some practical problems such as Asian option pricing, and perfectly non-bilinear functions. We use the quasi-regression technique to study how bilinear a given function is. Besides assessing the validity of the bilinear assumption, we propose the bilinear regression based (BR) criterion, which combines quasi-regression and the copula-based criterion. We extensively test the two criteria by comparing them to other well-known criteria, such as the spectral test, through numerical experiments. We find that the copula criterion can reduce the error size by a factor of 2 when the function is bilinear. We also find that the copula-based criterion shows competitive results even when a given function does not satisfy the bilinear assumption. We also see that our newly introduced BR criterion is competitive compared to well-known criteria.
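
    As a rough illustration of what "how bilinear a function is" can mean, the sketch below fits a bilinear surrogate (basis 1, x, y, xy) by least squares on random points and reports the fraction of variance it explains. This is a simplified stand-in for the quasi-regression procedure, not the thesis's actual method.

```python
# Sketch: measure how well a 2-D function is approximated by a bilinear model
# a + b*x + c*y + d*x*y, as a rough proxy for the bilinearity assumption.
# This is a simplified stand-in for quasi-regression, not the thesis's method.
import numpy as np

rng = np.random.default_rng(3)

def r_squared_bilinear(func, n=4096):
    x, y = rng.random(n), rng.random(n)
    design = np.column_stack([np.ones(n), x, y, x * y])   # bilinear basis
    values = func(x, y)
    coef, *_ = np.linalg.lstsq(design, values, rcond=None)
    residual = values - design @ coef
    return 1.0 - residual.var() / values.var()

print("bilinear function  :",
      round(r_squared_bilinear(lambda x, y: 1 + 2 * x - y + 3 * x * y), 3))
print("non-bilinear func. :",
      round(r_squared_bilinear(lambda x, y: np.sin(6 * np.pi * x) * np.sin(6 * np.pi * y)), 3))
```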

    Methods of Russian Patent Analysis

    The article presents a method for extracting predicate-argument constructions that characterize the composition of the structural elements of inventions and the relationships between them. The extracted structures are converted into a domain ontology and used in prior-art patent search and in the information support of automated invention. An analysis of existing natural language processing (NLP) tools with respect to the processing of Russian-language patents has been carried out. A new method for extracting structured data from patents has been proposed; it takes into account the specific features of patent texts and is based on shallow parsing and sentence segmentation. The value of the F1 metric is 63% for a rigorous (strict) estimate of data extraction and 79% for a lax estimate. The results obtained suggest that the proposed method is promising.
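
    For reference, the F1 values quoted above combine precision and recall in the usual way. The sketch below computes strict and lax F1 from hypothetical counts of extracted structures; the counts are made up for illustration and are not taken from the article.

```python
# Sketch: strict vs. lax F1 for an extraction task. Under the strict estimate an
# extracted structure must match the gold annotation exactly; under the lax
# estimate a partial overlap also counts. All counts below are hypothetical.
def f1(true_pos, extracted, gold):
    precision = true_pos / extracted
    recall = true_pos / gold
    return 2 * precision * recall / (precision + recall)

extracted, gold = 180, 200               # hypothetical totals
strict_matches, lax_matches = 120, 150   # exact vs. exact-or-partial matches

print(f"strict F1: {f1(strict_matches, extracted, gold):.2%}")
print(f"lax F1:    {f1(lax_matches, extracted, gold):.2%}")
```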