
    Model selection via Bayesian information capacity designs for generalised linear models

    The first investigation is made of designs for screening experiments where the response variable is approximated by a generalised linear model. A Bayesian information capacity criterion is defined for the selection of designs that are robust to the form of the linear predictor. For binomial data and logistic regression, the effectiveness of these designs for screening is assessed through simulation studies using all-subsets regression and model selection via maximum penalised likelihood and a generalised information criterion. For Poisson data and log-linear regression, similar assessments are made using maximum likelihood and the Akaike information criterion for minimally-supported designs that are constructed analytically. The results show that effective screening, that is, high power with moderate type I error rate and false discovery rate, can be achieved through suitable choices for the number of design support points and experiment size. Logistic regression is shown to present a more challenging problem than log-linear regression. Some areas for future work are also indicated.
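
    A minimal sketch of the flavour of such a criterion, not the paper's actual construction: a pseudo-Bayesian D-type information capacity for logistic regression that averages the log-determinant of the Fisher information over prior draws of the parameters and over a set of candidate submodels of the linear predictor. The design, the submodels and the N(0, 1) prior below are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def logistic_information(X, beta):
            # Fisher information X' W X for logistic regression, W = diag(p(1 - p))
            p = 1.0 / (1.0 + np.exp(-(X @ beta)))
            return (X * (p * (1.0 - p))[:, None]).T @ X

        def information_capacity(design, submodels, prior_draws):
            # average log-det information over candidate submodels and prior draws
            vals = []
            for cols in submodels:
                Xs = design[:, cols]
                for beta in prior_draws:
                    sign, logdet = np.linalg.slogdet(logistic_information(Xs, beta[cols]))
                    vals.append(logdet if sign > 0 else -np.inf)
            return np.mean(vals)

        # illustrative 8-run candidate design in two factors, intercept included
        levels = (-1.0, 1.0)
        design = np.array([[1.0, a, b] for a in levels for b in levels] * 2)
        submodels = [[0, 1], [0, 2], [0, 1, 2]]               # assumed forms of the linear predictor
        prior_draws = rng.normal(0.0, 1.0, size=(100, 3))     # assumed N(0, 1) prior on each term

        print(information_capacity(design, submodels, prior_draws))

    In this spirit, competing designs would be ranked by the averaged criterion, so that a single design can perform reasonably whichever submodel turns out to be adequate.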

    Design Issues for Generalized Linear Models: A Review

    Generalized linear models (GLMs) have been used quite effectively in the modeling of a mean response under nonstandard conditions, where discrete as well as continuous data distributions can be accommodated. The choice of design for a GLM is a very important task in the development and building of an adequate model. However, one major problem that handicaps the construction of a GLM design is its dependence on the unknown parameters of the fitted model. Several approaches have been proposed in the past 25 years to solve this problem. These approaches, however, have provided only partial solutions that apply in only some special cases, and the problem, in general, remains largely unresolved. The purpose of this article is to focus attention on the aforementioned dependence problem. We provide a survey of various existing techniques dealing with the dependence problem. This survey includes discussions concerning locally optimal designs, sequential designs, Bayesian designs and the quantile dispersion graph approach for comparing designs for GLMs.

    Comment: Published at http://dx.doi.org/10.1214/088342306000000105 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
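
    The dependence problem surveyed in this review can be seen directly in the Fisher information of a GLM: for logistic regression the information is X'WX with W = diag{p_i(1 - p_i)}, so any D-type criterion, and hence the ranking of designs, changes with the unknown parameters. A small illustrative check (all designs and parameter values below are assumed, not taken from the article):

        import numpy as np

        def logistic_logdet_info(X, beta):
            # log det of the Fisher information X' W X, W = diag(p(1 - p))
            p = 1.0 / (1.0 + np.exp(-(X @ beta)))
            M = (X * (p * (1.0 - p))[:, None]).T @ X
            return np.linalg.slogdet(M)[1]

        # two candidate 4-run designs for a single factor with intercept
        X1 = np.array([[1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [1.0, 1.0]])
        X2 = np.array([[1.0, -1.0], [1.0, -0.3], [1.0, 0.3], [1.0, 1.0]])

        # the ranking of the two designs depends on the assumed slope
        for slope in (0.5, 3.0):
            beta = np.array([0.0, slope])
            print(slope, logistic_logdet_info(X1, beta), logistic_logdet_info(X2, beta))

    Locally optimal, sequential, Bayesian and quantile-dispersion-graph approaches are different ways of coping with exactly this dependence.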

    Designs for generalized linear models with random block effects via information matrix approximations

    The selection of optimal designs for generalized linear mixed models is complicated by the fact that the Fisher information matrix, on which most optimality criteria depend, is computationally expensive to evaluate. Our focus is on the design of experiments for likelihood estimation of parameters in the conditional model. We provide two novel approximations that substantially reduce the computational cost of evaluating the information matrix by complete enumeration of response outcomes, or Monte Carlo approximations thereof: (i) an asymptotic approximation which is accurate when there is strong dependence between observations in the same block; (ii) an approximation via Kriging interpolators. For logistic random intercept models, we show how interpolation can be especially effective for finding pseudo-Bayesian designs that incorporate uncertainty in the values of the model parameters. The new results are used to provide the first evaluation of the efficiency, for estimating conditional models, of optimal designs from closed-form approximations to the information matrix derived from marginal models. It is found that correcting for the marginal attenuation of parameters in binary-response models yields much improved designs, typically with very high efficiencies. However, in some experiments exhibiting strong dependence, designs for marginal models may still be inefficient for conditional modelling. Our asymptotic results provide some theoretical insights into why such inefficiencies occur.
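
    A rough sketch of why the exact information matrix is expensive and how a simulation-based stand-in looks, assuming a logistic random intercept model with the variance component treated as known: the score of the marginal log-likelihood is obtained by finite differences, the random intercept is integrated out by Gauss-Hermite quadrature, and the information for the fixed effects is estimated as a Monte Carlo average of score outer products. This is only an illustration of the cost the paper's approximations are designed to avoid, not the authors' method.

        import numpy as np
        from numpy.polynomial.hermite import hermgauss

        rng = np.random.default_rng(2)
        nodes, weights = hermgauss(20)          # Gauss-Hermite rule for the random intercept

        def marginal_loglik(beta, sigma, Xb, yb):
            # log marginal likelihood of one block, random intercept integrated out by quadrature
            u = np.sqrt(2.0) * sigma * nodes                       # nodes rescaled for N(0, sigma^2)
            p = 1.0 / (1.0 + np.exp(-(Xb @ beta + u[:, None])))    # (n_nodes, block size)
            lik = np.prod(np.where(yb == 1, p, 1.0 - p), axis=1)   # P(y_b | u) at each node
            return np.log(np.sum(weights * lik) / np.sqrt(np.pi))

        def mc_information(beta, sigma, blocks, n_sims=500, eps=1e-5):
            # Monte Carlo estimate of the information for the fixed effects: E[score score'],
            # with the score obtained by central finite differences of the marginal log-likelihood
            k = beta.size
            info = np.zeros((k, k))
            for _ in range(n_sims):
                score = np.zeros(k)
                for Xb in blocks:
                    u = rng.normal(0.0, sigma)
                    pr = 1.0 / (1.0 + np.exp(-(Xb @ beta + u)))
                    yb = (rng.random(Xb.shape[0]) < pr).astype(float)
                    for j in range(k):
                        e = np.zeros(k)
                        e[j] = eps
                        score[j] += (marginal_loglik(beta + e, sigma, Xb, yb)
                                     - marginal_loglik(beta - e, sigma, Xb, yb)) / (2.0 * eps)
                info += np.outer(score, score)
            return info / n_sims

        # illustrative design: two blocks of four runs, single factor plus intercept
        blocks = [np.array([[1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [1.0, 1.0]]),
                  np.array([[1.0, -1.0], [1.0, 1.0], [1.0, 1.0], [1.0, -1.0]])]
        print(np.linalg.slogdet(mc_information(np.array([0.0, 1.0]), 1.0, blocks))[1])

    Repeating such an evaluation inside a design search, and for many prior draws, is what makes cheap asymptotic or interpolation-based approximations attractive.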

    The approximate coordinate exchange algorithm for Bayesian optimal design of experiments

    Optimal Bayesian experimental design typically involves maximising the expectation, with respect to the joint distribution of parameters and responses, of some appropriately chosen utility function. This objective function is usually not available in closed form and the design space can be of high dimensionality. The approximate coordinate exchange algorithm is proposed for this maximisation problem where a Gaussian process emulator is used to approximate the objective function. The algorithm can be used for arbitrary utility functions meaning we can consider fully Bayesian optimal design. It can also be used for those utility functions that result in pseudo-Bayesian designs such as the popular Bayesian D-optimality. The algorithm is demonstrated on a range of examples.
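
    A stripped-down sketch of the coordinate-exchange-with-emulator idea, assuming a noisy Monte Carlo approximation of a pseudo-Bayesian D-criterion for a logistic model as the utility and scikit-learn's GP regressor as the emulator; the acceptance step and other refinements of the published algorithm are omitted, and all priors and dimensions are illustrative.

        import numpy as np
        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF

        rng = np.random.default_rng(3)

        def mc_utility(design, n_draws=50):
            # noisy Monte Carlo approximation of a pseudo-Bayesian D-criterion (logistic model)
            X = np.column_stack([np.ones(len(design)), design])
            vals = []
            for beta in rng.normal(0.0, 1.0, size=(n_draws, X.shape[1])):   # assumed prior
                p = 1.0 / (1.0 + np.exp(-(X @ beta)))
                M = (X * (p * (1.0 - p))[:, None]).T @ X
                vals.append(np.linalg.slogdet(M)[1])
            return np.mean(vals)

        def approximate_coordinate_exchange(design, n_cycles=3, n_train=8):
            # cycle over design coordinates; for each, evaluate the noisy utility at a few
            # training values, fit a one-dimensional GP emulator over [-1, 1], and move the
            # coordinate to the emulator's maximiser
            grid = np.linspace(-1.0, 1.0, 201).reshape(-1, 1)
            train_x = np.linspace(-1.0, 1.0, n_train).reshape(-1, 1)
            for _ in range(n_cycles):
                for i in range(design.shape[0]):
                    for j in range(design.shape[1]):
                        train_y = []
                        for v in train_x.ravel():
                            design[i, j] = v
                            train_y.append(mc_utility(design))
                        gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2)
                        gp.fit(train_x, train_y)
                        design[i, j] = grid[np.argmax(gp.predict(grid)), 0]
            return design

        start = rng.uniform(-1.0, 1.0, size=(8, 2))     # illustrative 8-run, two-factor design
        improved = approximate_coordinate_exchange(start.copy())
        print(mc_utility(start), mc_utility(improved))

    The emulator keeps the number of expensive utility evaluations per coordinate small, which is what makes the approach workable for fully Bayesian utilities.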

    Block designs for experiments with non-normal response

    Many experiments measure a response that cannot be adequately described by a linear model with normally distributed errors and are often run in blocks of homogeneous experimental units. We develop the first methods of obtaining efficient block designs for experiments with an exponential family response described by a marginal model fitted via Generalized Estimating Equations. This methodology is appropriate when the blocking factor is a nuisance variable as, for example, occurs in industrial experiments. A D-optimality criterion is developed for finding designs robust to the values of the marginal model parameters and applied using three strategies: unrestricted algorithmic search, use of minimum-support designs, and blocking of an optimal design for the corresponding Generalized Linear Model. Designs obtained from each strategy are critically compared and shown to be much more efficient than designs that ignore the blocking structure. The designs are compared for a range of values of the intra-block working correlation and for exchangeable, autoregressive and nearest neighbor structures. An analysis strategy is developed for a binomial response that allows estimation from experiments with sparse data, and its effectiveness demonstrated. The design strategies are motivated and demonstrated through the planning of an experiment from the aeronautics industry.
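
    A minimal sketch of the kind of quantity such a D-criterion works with, assuming a marginal logistic model fitted by GEE with an exchangeable working correlation: the information is the sum over blocks of X_b' A_b^(1/2) R^(-1) A_b^(1/2) X_b. The paper's criterion is made robust by averaging over parameter values; the sketch below evaluates it at a single assumed parameter vector, and the two blockings compared are illustrative.

        import numpy as np

        def exchangeable_corr(m, alpha):
            # exchangeable working correlation matrix for a block of size m
            return (1.0 - alpha) * np.eye(m) + alpha * np.ones((m, m))

        def gee_logdet_info(blocks, beta, alpha):
            # log det of sum_b X_b' A_b^(1/2) R^{-1} A_b^(1/2) X_b for a marginal logistic model
            k = beta.size
            M = np.zeros((k, k))
            for Xb in blocks:
                mu = 1.0 / (1.0 + np.exp(-(Xb @ beta)))
                Ah = np.diag(np.sqrt(mu * (1.0 - mu)))          # A_b^(1/2)
                Rinv = np.linalg.inv(exchangeable_corr(Xb.shape[0], alpha))
                M += Xb.T @ Ah @ Rinv @ Ah @ Xb
            return np.linalg.slogdet(M)[1]

        # illustrative comparison: two blockings of the same eight runs, single factor + intercept
        design_a = [np.array([[1.0, -1.0], [1.0, -1.0], [1.0, 1.0], [1.0, 1.0]])] * 2
        design_b = [np.array([[1.0, -1.0], [1.0, -1.0], [1.0, -1.0], [1.0, -1.0]]),
                    np.array([[1.0, 1.0], [1.0, 1.0], [1.0, 1.0], [1.0, 1.0]])]
        beta = np.array([0.0, 1.0])
        for alpha in (0.0, 0.5):
            print(alpha, gee_logdet_info(design_a, beta, alpha), gee_logdet_info(design_b, beta, alpha))

    Comparing such values across a range of working correlations is the spirit of the comparisons reported in the paper.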

    Robust designs for Poisson regression models

    We consider the problem of how to construct robust designs for Poisson regression models. An analytical expression is derived for robust designs for first-order Poisson regression models where uncertainty exists in the prior parameter estimates. Given certain constraints in the methodology, it may be necessary to extend the robust designs for implementation in practical experiments. With these extensions, our methodology constructs designs which perform similarly, in terms of estimation, to current techniques, and offers the solution in a more timely manner. We further apply this analytic result to cases where uncertainty exists in the linear predictor. The application of this methodology to practical design problems such as screening experiments is explored. Given the minimal prior knowledge that is usually available when conducting such experiments, it is recommended to derive designs robust across a variety of systems. However, incorporating such uncertainty into the design process can be a computationally intense exercise. Hence, our analytic approach is explored as an alternative.
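
    To make the computational burden concrete, here is a sketch of the numerical route that the paper's analytic results aim to shortcut: a robust (pseudo-Bayesian) D-criterion for a first-order log-linear Poisson model, averaged over draws from an assumed prior on the parameters. The candidate designs and the uniform prior are illustrative assumptions, not results from the paper.

        import numpy as np

        rng = np.random.default_rng(4)

        def poisson_logdet_info(X, beta):
            # log det of the Fisher information X' W X for a log-linear Poisson model, W = diag(mu)
            mu = np.exp(X @ beta)
            return np.linalg.slogdet((X * mu[:, None]).T @ X)[1]

        def robust_criterion(X, prior_draws):
            # average the log-det information over draws from a prior on beta
            return np.mean([poisson_logdet_info(X, b) for b in prior_draws])

        def design(points):
            # design matrix with intercept for two factors
            pts = np.array(points, dtype=float)
            return np.column_stack([np.ones(len(pts)), pts])

        # illustrative 6-run candidate designs in two factors
        d_corner = design([(1, 1), (1, 1), (1, -1), (1, -1), (-1, 1), (-1, 1)])
        d_spread = design([(1, 1), (1, 0), (0, 1), (1, -1), (-1, 1), (0, 0)])

        # assumed uniform prior on (intercept, beta1, beta2)
        prior_draws = rng.uniform([0.0, 0.5, 0.5], [1.0, 2.0, 2.0], size=(200, 3))
        print(robust_criterion(d_corner, prior_draws), robust_criterion(d_spread, prior_draws))

    Averaging over many prior draws inside a design search quickly becomes expensive, which is the motivation for an analytic alternative.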

    Bayesian sequential D-D optimal model-robust designs.

    Alphabetic optimal design theory assumes that the model for which the optimal design is derived is known. However, in real-life applications this assumption may not be credible, as models are rarely known in advance. Therefore, optimal designs derived under the classical approach may be the best design but for the wrong assumed model. In this paper, we extend Neff's (1996) Bayesian two-stage approach to design experiments for the general linear model when initial knowledge of the model is poor. A Bayesian optimality procedure that works well under model uncertainty is used in the first stage, and the second-stage design is then generated from an optimality procedure that incorporates the improved model knowledge from the first stage. In this way, a Bayesian D-D optimal model-robust design is developed. Results show that the Bayesian D-D optimal design is superior in performance to the classical one-stage D-optimal and the one-stage Bayesian D-optimal designs. We also investigate through a simulation study the ratio of sample sizes for the two stages and the minimum sample size desirable in the first stage.

    Keywords: Applications; D-D optimality; Knowledge; Model; Two-stage procedure; Posterior probabilities.
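
    A compact sketch of the flavour of a two-stage, model-robust D procedure for a linear model, not the paper's actual algorithm: stage-one runs are scored by a D-type criterion averaged over candidate models with prior weights; after stage-one data arrive, the model weights are updated (here by a BIC approximation, a stand-in for the posterior model probabilities used in the paper) and candidate stage-two augmentations are scored with the updated weights. All models, priors, parameter values and data below are assumed for illustration.

        import numpy as np

        rng = np.random.default_rng(5)

        MODELS = [[0, 1], [0, 2], [0, 1, 2], [0, 1, 2, 3]]   # candidate column subsets (3 = interaction)

        def expand(points):
            # design matrix with intercept, two main effects and their interaction
            x = np.array(points, dtype=float)
            return np.column_stack([np.ones(len(x)), x[:, 0], x[:, 1], x[:, 0] * x[:, 1]])

        def weighted_logdet(X, weights):
            # model-robust D-type criterion: weighted average of log det(X_m' X_m)
            return sum(w * np.linalg.slogdet(X[:, m].T @ X[:, m])[1]
                       for w, m in zip(weights, MODELS))

        def bic_weights(X, y):
            # model weights from a BIC approximation (stand-in for posterior model probabilities)
            n = len(y)
            bics = []
            for m in MODELS:
                coef, *_ = np.linalg.lstsq(X[:, m], y, rcond=None)
                res = y - X[:, m] @ coef
                bics.append(n * np.log(np.mean(res ** 2)) + len(m) * np.log(n))
            w = np.exp(-0.5 * (np.array(bics) - min(bics)))
            return w / w.sum()

        # stage 1: replicated 2x2 factorial, scored with equal prior model weights
        stage1 = expand([(-1, -1), (-1, 1), (1, -1), (1, 1)] * 2)
        prior = np.full(len(MODELS), 1.0 / len(MODELS))
        print("stage 1:", weighted_logdet(stage1, prior))

        # simulate stage-1 data (main-effects model assumed true) and update the weights
        y1 = stage1[:, [0, 1, 2]] @ np.array([1.0, 2.0, -1.0]) + rng.normal(0.0, 0.5, len(stage1))
        post = bic_weights(stage1, y1)

        # stage 2: score candidate augmentations with the updated weights
        for extra in ([(0, 0), (0, 0)], [(1, 1), (-1, -1)]):
            X12 = np.vstack([stage1, expand(extra)])
            print(extra, weighted_logdet(X12, post))

    The point of the second stage is visible here: whichever augmentation scores best depends on how the stage-one data have shifted the model weights.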