17 research outputs found

    Equitable $(d,m)$-edge designs

    The paper addresses the design of experiments for classifying the input factors of a multivariate function into negligible, linear and other (non-linear/interaction) factors. We give constructive procedures for completing the definition of the clustered designs proposed by Morris (1991), which become defined for an arbitrary number of input factors and any desired cluster multiplicity. Our work is based on a representation of subgraphs of the hypercube by polynomials that allows formal verification of the designs' properties. The ability to generate these designs in a systematic manner opens new perspectives for the characterisation of the behaviour of the function's derivatives over the input space, which may offer increased discrimination.
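    As a rough illustration of the elementary-effects screening that these designs serve, below is a minimal sketch of plain Morris-style screening with one-factor-at-a-time trajectories; it is not the clustered design construction of the paper, and the toy function, number of levels and number of trajectories are illustrative choices.

```python
import numpy as np

def morris_elementary_effects(f, d, r=10, levels=4, rng=None):
    """Crude Morris screening: r random one-at-a-time trajectories in [0,1]^d.

    Returns mu_star (mean absolute elementary effect) and sigma (standard
    deviation of the effects) per factor: small mu_star suggests a negligible
    factor, large sigma a non-linear or interacting one.
    """
    rng = np.random.default_rng(rng)
    delta = levels / (2 * (levels - 1))      # standard Morris step size
    grid = np.arange(levels) / (levels - 1)  # admissible levels in [0, 1]
    effects = np.empty((r, d))
    for t in range(r):
        x = rng.choice(grid[grid <= 1 - delta], size=d)  # random start point
        fx = f(x)
        for j in rng.permutation(d):         # move one factor at a time
            x_new = x.copy()
            x_new[j] += delta
            f_new = f(x_new)
            effects[t, j] = (f_new - fx) / delta
            x, fx = x_new, f_new
    return np.abs(effects).mean(axis=0), effects.std(axis=0)

# toy run: factor 0 linear, factor 1 non-linear, factor 2 negligible
mu_star, sigma = morris_elementary_effects(
    lambda x: 2.0 * x[0] + x[1] ** 3, d=3, r=20, rng=0)
print(mu_star, sigma)
```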

    Model-free spatial interpolation and error prediction for survey data acquired by mobile platforms

    The paper proposes a new randomized cross-validation (CV) criterion specially designed for data acquired over non-uniformly scattered designs, such as the linear transect surveys typical of environmental observation. Randomized CV enables a robust parameterization of interpolation algorithms, in a manner completely driven by the data and free of any modelling assumptions. The method resorts to tools and concepts from computational geometry, in particular the Yao graph determined by the set of sampled sites: it randomly chooses the hold-out sets such that they reflect, statistically, the geometry of the design with respect to the unobserved points of the area where the observations are to be extrapolated, minimizing biases due to the particular geometry of the design. Numerical results on real environmental datasets illustrate the impact of randomized cross-validation, showing that it leads to interpolated fields with smaller error at a much lower computational load.
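    The selection logic behind randomized CV can be sketched as follows: candidate interpolation parameters are scored by the prediction error over many random hold-out sets. The sketch below uses uniform random hold-outs and a simple inverse-distance-weighted interpolator as stand-ins; the paper's geometry-aware hold-out selection via the Yao graph is not reproduced here, and all names (`idw_predict`, the exponent `p`) are illustrative.

```python
import numpy as np

def idw_predict(xs, ys, xq, p):
    """Inverse-distance-weighted prediction of values ys at query points xq."""
    d = np.linalg.norm(xq[:, None, :] - xs[None, :, :], axis=-1)
    w = 1.0 / np.maximum(d, 1e-12) ** p
    return (w @ ys) / w.sum(axis=1)

def randomized_cv_score(xs, ys, p, n_splits=50, holdout=0.1, rng=None):
    """Mean squared hold-out error over random hold-out sets.

    NOTE: uniform random hold-outs only; the paper instead draws hold-out
    sets that mimic the geometry of the design (via the Yao graph).
    """
    rng = np.random.default_rng(rng)
    n = len(xs)
    k = max(1, int(holdout * n))
    errs = []
    for _ in range(n_splits):
        held = rng.choice(n, size=k, replace=False)
        kept = np.setdiff1d(np.arange(n), held)
        pred = idw_predict(xs[kept], ys[kept], xs[held], p)
        errs.append(np.mean((pred - ys[held]) ** 2))
    return float(np.mean(errs))

# choose the IDW exponent by randomized CV on synthetic transect-like data
rng = np.random.default_rng(1)
xs = np.column_stack([np.linspace(0, 1, 200), 0.01 * rng.standard_normal(200)])
ys = np.sin(6 * xs[:, 0])
best_p = min([1.0, 2.0, 4.0], key=lambda p: randomized_cv_score(xs, ys, p, rng=2))
print("selected exponent:", best_p)
```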

    Extending the Morris method: identification of the interaction graph using cycle-equitable designs

    The paper presents designs that allow detection of mixed effects when performing preliminary screening of the inputs of a scalar function of $d$ input factors, in the spirit of Morris' Elementary Effects approach. We introduce the class of $(d,c)$-cycle equitable designs as those that enable computation of exactly $c$ second-order effects on all possible pairs of input factors. Using these designs, we propose a fast mixed-effects screening method that enables efficient identification of the interaction graph of the input variables. The design definition is formally supported by the establishment of an isometry between subgraphs of the unit cube $Q_d$ equipped with the Manhattan metric and a set of polynomials in $(X_1,\ldots,X_d)$ on which a convenient inner product is defined. We present systems of equations that recursively define these $(d,c)$-cycle equitable designs for generic values of $c \geq 1$, from which direct algorithmic implementations are derived. Application cases illustrate the use of the proposed designs for estimating the interaction graph of specific functions.
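    The second-order effect measured on one 4-cycle of the hypercube is the mixed difference of f along two coordinates; a minimal sketch under that reading (the recursive design construction itself is not reproduced, and the toy function is illustrative):

```python
import numpy as np

def mixed_effect(f, x, i, j, delta=1.0):
    """Second-order mixed difference of f on the 4-cycle spanned by
    coordinates i and j at base point x:
        f(x + d e_i + d e_j) - f(x + d e_i) - f(x + d e_j) + f(x),
    normalized by delta**2. A non-zero value signals an interaction
    between factors i and j."""
    ei = np.zeros_like(x); ei[i] = delta
    ej = np.zeros_like(x); ej[j] = delta
    return (f(x + ei + ej) - f(x + ei) - f(x + ej) + f(x)) / delta ** 2

# toy function with a single interaction, between factors 0 and 1
f = lambda x: x[0] * x[1] + 3.0 * x[2]
x0 = np.zeros(3)
print(mixed_effect(f, x0, 0, 1))  # ~1: edge (0,1) in the interaction graph
print(mixed_effect(f, x0, 0, 2))  # ~0: no edge (0,2)
```

    Looping mixed_effect over all pairs (i, j) and keeping the pairs with non-negligible values yields an estimate of the interaction graph.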

    Learning Probabilistic Models of Contours


    Bayesian Local Kriging

    We consider the problem of constructing metamodels for computationally expensive simulation codes; that is, we construct interpolators/predictors of function values (responses) from a finite collection of evaluations (observations). We use Gaussian process (GP) modeling and kriging, and combine a Bayesian approach, based on a finite set of GP models, with the use of localized covariances indexed by the point where the prediction is made. Rather than postulating a generative model for the unknown function, our approach lets the covariance functions depend on the prediction site, which provides enough flexibility to accommodate arbitrary nonstationary observations. Contrary to kriging prediction with plug-in parameter estimates, the resulting Bayesian predictor is constructed explicitly, without requiring any numerical optimization, and locally adjusts the weights given to the different models according to the data variability in each neighborhood. The predictor inherits the smoothness properties of the covariance functions that are used, and its superiority over plug-in kriging (sometimes called the empirical best linear unbiased predictor) is illustrated on various examples, including the reconstruction of an oceanographic field over a large region from a small number of observations. Supplementary materials for this article are available online.
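    The flavour of the predictor can be sketched as follows: each covariance in a small dictionary of GP models yields a kriging mean, and the means are combined with weights proportional to each model's marginal likelihood. This version is global rather than localized (the paper indexes the covariances by the prediction site), and the covariance dictionary and all names are illustrative.

```python
import numpy as np

def sq_exp(ls):
    """Squared-exponential covariance with length-scale ls (1-D inputs)."""
    return lambda a, b: np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def bayes_kriging_predict(x, y, xq, length_scales, noise=1e-6):
    """Mixture-of-GP predictor: kriging mean under each candidate covariance,
    averaged with weights proportional to the marginal likelihood of y.
    Simplified, non-localized sketch of the Bayesian local kriging idea."""
    preds, logls = [], []
    for ls in length_scales:
        k = sq_exp(ls)
        K = k(x, x) + noise * np.eye(len(x))
        L = np.linalg.cholesky(K)
        alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
        preds.append(k(xq, x) @ alpha)            # kriging mean for this model
        logls.append(-0.5 * y @ alpha - np.log(np.diag(L)).sum())
    w = np.exp(np.array(logls) - max(logls))
    w /= w.sum()                                  # posterior model weights
    return w @ np.array(preds)

x = np.linspace(0, 1, 15)
y = np.sin(8 * x)
xq = np.linspace(0, 1, 5)
print(bayes_kriging_predict(x, y, xq, length_scales=[0.05, 0.2, 1.0]))
```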

    Sparse spline-based shape models


    Adaptive design criteria motivated by a plug-in percentile estimator

    Increasingly complex numerical models are involved in a variety of modern engineering applications, ranging from the evaluation of environmental risks to the optimisation of sophisticated industrial processes. The study of climate change is an extremely well-known example, while current uses in other domains, such as pharmaceutics (so-called in silico experiments), aeronautics or even cosmetics, are less well known to the general public. These models allow the prediction of a number of variables of interest for a given configuration of the factors that potentially affect them. Complex models depend in general on a large number of such factors, and their execution time may range from a couple of hours to several days. In many cases, collectively falling in the domain of risk analysis, the interest is in identifying how often, under what conditions, or how strongly a certain phenomenon may happen. In addition to the numerical model that predicts the variable of interest, it is then necessary to define a probabilistic structure on the set of its input factors, most often using a frequentist approach. "How often" then requires the evaluation of the probability of occurrence of the event of interest, while "how strongly" implies the determination of the set of the most extreme possible situations. In the former case we face a problem of estimation of an exceedance probability, while the latter is usually referred to as percentile estimation. For instance, in a study of the risk of flooding in a given coastal region, in the first case we want to estimate the probability α that a certain level of inundation η will not be exceeded, while in the second we are interested in the inundation level η that, with probability α, is not exceeded. In the context of the current planetary concern
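    For a cheap stand-in model, the two estimation targets named above reduce to elementary Monte Carlo estimates; a minimal sketch (the paper's subject, adaptive designs driven by a plug-in percentile estimator for expensive models, is not reproduced, and `model` and the input distribution are placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# stand-in for the expensive numerical model (e.g. predicted inundation level)
model = lambda x: x[:, 0] + 0.5 * np.sin(4.0 * x[:, 1])

# probabilistic structure on the input factors: here uniform on [0, 1]^2
x = rng.random((100_000, 2))
eta = model(x)

# "how often": exceedance probability of a fixed level
threshold = 1.2
p_exceed = np.mean(eta > threshold)

# "how strongly": level not exceeded with probability alpha (alpha-percentile)
alpha = 0.95
q_alpha = np.quantile(eta, alpha)

print(f"P(eta > {threshold}) ~ {p_exceed:.4f}")
print(f"{alpha:.0%} percentile of eta ~ {q_alpha:.3f}")
```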