3,081 research outputs found

    Morality of Anesthesia and Analgesia in Childbirth


    Turning the Table on Assessment: The Grantee Perception Report

    This book chapter describes the origins of the GPR, illustrates lessons learned, and provides examples of changes made by foundations that have used this tool. It also reports on some of the broadly applicable insights gained from CEP's large-scale surveys of grantees. (This material is excerpted from the Grantmakers for Effective Organizations (GEO) book, A Funder's Guide to Organizational Assessment.)

    Bayesian Analysis of Road Accidents: A General Framework for the Multinomial Case

    The detection of dangerous road sites is usually performed using empirical methods which focus on observed accident frequencies and/or proportions of accidents with a given feature. The most widely used detection tools have an empirical Bayes (EB) background. The EB approaches rely on the comparison of frequencies and/or proportions of accidents at a given site with the amounts that would normally occur at similar sites. Currently, analytical techniques for accident proportions describe the number of accidents with a given feature using a binomial distribution. This paper extends to the multinomial case the general EB technique that we recently suggested to analyze road accident proportions. Our proposed approach is a full-information Bayes method that allows for both deterministic and random heterogeneity as well as spatial correlation among the sites under investigation. The technique can also be used to analyze accident frequencies. An empirical example based on accident data taken from the Québec City database will serve to demonstrate its usefulness.

    [Translated from the French abstract:] The detection of dangerous road accident sites is usually carried out using empirical Bayes methods applied to accident rates and/or proportions of accidents occurring under given conditions. These methods compare the observed rates and proportions with those that normally occur in a set of road sites considered similar. Existing approaches rely on binomial distributions. In this article, we describe a general full-information methodology for analysing how dangerous road sites are, based on multinomial distributions. The proposed Bayesian technique simultaneously handles deterministic and random heterogeneity, as well as the spatial correlation attributable to the proximity or similar environment of the sites under study. Our methodological framework encompasses commonly used Bayesian approaches that study the proportions of accidents involving a given feature. The properties and usefulness of the new method are demonstrated with a simple example based on accident data from Québec City.
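    The core empirical-Bayes idea of the abstract above — comparing a site's accident-type proportions with what would normally occur at similar sites — can be sketched with a conjugate multinomial-Dirichlet model. This is a simplified illustration, not the authors' full-information method; the `prior_strength` pseudo-count standing in for information pooled from similar sites is an assumption of the sketch:

    ```python
    import numpy as np

    def posterior_proportions(counts, prior_mean, prior_strength):
        """Posterior mean of accident-type proportions at one site.

        counts: observed accidents of each type at the site.
        prior_mean: proportions expected at 'similar' sites.
        prior_strength: pseudo-count weight given to the prior.

        With a Dirichlet(prior_strength * prior_mean) prior and multinomial
        counts, the posterior is Dirichlet(alpha + counts); its mean shrinks
        the raw site proportions toward the similar-site pattern.
        """
        counts = np.asarray(counts, dtype=float)
        alpha = prior_strength * np.asarray(prior_mean, dtype=float)
        post = alpha + counts
        return post / post.sum()

    # A site with 20 accidents (12 rear-end, 5 side, 3 other) judged against
    # a hypothetical network-wide pattern of (0.40, 0.35, 0.25).
    p = posterior_proportions([12, 5, 3], [0.40, 0.35, 0.25], prior_strength=30)
    ```

    The posterior mean lands between the raw site proportion (0.60 for the first type) and the network norm (0.40): a site would be flagged as dangerous for a feature when its posterior proportion remains well above the norm even after this shrinkage.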

    Identification Robust Confidence Sets Methods for Inference on Parameter Ratios and their Application to Estimating Value-of-Time

    The problem of constructing confidence set estimates for parameter ratios arises in a variety of econometric contexts; these include value-of-time estimation in transportation research and inference on elasticities given several model specifications. Even when the model under consideration is identifiable, parameter ratios involve a possibly discontinuous parameter transformation that becomes ill-behaved as the denominator parameter approaches zero. More precisely, the parameter ratio is not identified over the whole parameter space: it is locally almost unidentified or (equivalently) weakly identified over a subset of the parameter space. It is well known that such situations can strongly affect the distributions of estimators and test statistics, leading to the failure of standard asymptotic approximations, as shown by Dufour. Here, we provide explicit solutions for projection-based simultaneous confidence sets for ratios of parameters when the joint confidence set is obtained through a generalized Fieller approach. A simulation study for a ratio of slope parameters in a simple binary probit model shows that the coverage rate of the Fieller confidence interval is immune to weak identification, whereas the confidence interval based on the delta-method performs poorly, even when the sample size is large. The procedures are examined in illustrative empirical models, with a focus on choice models. Keywords: confidence interval; generalized Fieller's theorem; delta-method; weak identification; ratio of parameters.
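    The contrast the abstract draws can be made concrete. For a ratio ρ = β₁/β₂ with estimates (b₁, b₂) and covariance entries (v₁₁, v₂₂, v₁₂), the Fieller set collects every ρ₀ not rejected by a test of H₀: β₁ − ρ₀β₂ = 0, which reduces to a quadratic inequality in ρ₀ and can be unbounded when β₂ is weakly identified. A minimal sketch with illustrative numbers (not the paper's probit application):

    ```python
    import numpy as np

    def fieller_interval(b1, b2, v11, v22, v12, z=1.96):
        """Fieller confidence set for b1/b2: all rho satisfying
        (b1 - rho*b2)^2 <= z^2 * (v11 - 2*rho*v12 + rho^2*v22)."""
        a = b2**2 - z**2 * v22              # leading quadratic coefficient
        b = -2.0 * (b1 * b2 - z**2 * v12)
        c = b1**2 - z**2 * v11
        disc = b**2 - 4.0 * a * c
        if a > 0 and disc >= 0:             # denominator well identified: bounded interval
            r = np.sqrt(disc)
            return ((-b - r) / (2 * a), (-b + r) / (2 * a))
        return None                         # unbounded set: b2 too close to zero

    def delta_interval(b1, b2, v11, v22, v12, z=1.96):
        """Delta-method interval for b1/b2; always finite, hence
        potentially misleading under weak identification."""
        rho = b1 / b2
        se = np.sqrt(v11 / b2**2 - 2 * b1 * v12 / b2**3 + b1**2 * v22 / b2**4)
        return (rho - z * se, rho + z * se)
    ```

    When the denominator estimate is many standard errors from zero, the two intervals nearly coincide; as its variance grows, the Fieller set widens and eventually becomes the whole real line, while the delta interval stays deceptively finite — exactly the coverage failure the simulation study documents.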

    Random Covariance Heterogeneity in Discrete Choice Models

    The area of discrete choice modelling has developed rapidly in recent years. In particular, continuing refinements of the Generalised Extreme Value (GEV) model family have permitted the representation of increasingly complex patterns of substitution and parallel advances in estimation capability have led to the increased use of model forms requiring simulation in estimation and application. One model form especially, namely the Mixed Multinomial Logit (MMNL) model, is being used ever more widely. Aside from allowing for random variations in tastes across decision-makers in a Random Coefficients Logit (RCL) framework, this model additionally allows for the representation of inter-alternative correlation as well as heteroscedasticity in an Error Components Logit (ECL) framework, enabling the model to approximate any Random Utility model arbitrarily closely. While the various developments discussed above have led to gradual gains in modelling flexibility, little effort has gone into the development of model forms allowing for a representation of heterogeneity across respondents in the correlation structure in place between alternatives. Such correlation heterogeneity is however possibly a crucial factor in the variation of choice-making behaviour across decision-makers, given the potential presence of individual-specific terms in the unobserved part of utility of multiple alternatives. To the authors' knowledge, there has so far only been one application of a model allowing for such heterogeneity, by Bhat (1997). In this Covariance NL model, the logsum parameters themselves are a function of socio-demographic attributes of the decision-makers, such that the correlation heterogeneity is explained with the help of these attributes. While the results by Bhat show the presence of statistically significant levels of covariance heterogeneity, the improvements in terms of model performance are almost negligible. 
While it is possible to interpret this as a lack of covariance heterogeneity in the data, another explanation is possible. It is clearly imaginable that a major part of the covariance heterogeneity cannot be explained in a deterministic fashion, either due to data limitations, or because of the presence of actual random variation, in a situation analogous to the case of random taste heterogeneity that cannot be explained in a deterministic fashion. In this paper, we propose two different ways of modelling such random variations in the correlation structure across individuals. The first approach is based on the use of an underlying GEV structure, while the second approach consists of an extension of the ECL model. In the former approach, the choice probabilities are given by integration of underlying GEV choice probabilities, such as Nested Logit, over the assumed distribution of the structural parameters. In the most basic specification, the structural parameters are specified as simple random variables, where appropriate choices of statistical distributions and/or mathematical transforms guarantee that the resulting structural parameters fall into the permissible range of values. Several extensions are then discussed in the paper that allow for a mixture of random and deterministic variations in the correlation structure. In an ECL model, correlation across alternatives is introduced with the help of normally distributed error-terms with a mean of zero that are shared by alternatives that are closer substitutes for each other, with the extent of correlation being determined by the estimates of the standard deviations of the error-components. The extension of this model to a structure allowing for random covariance heterogeneity is again divided into two parts. 
In the first approach, correlation is assumed to vary purely randomly; this is obtained through simple integration over the distribution of the standard deviations of the error-terms, in addition to the integration over the distribution of the error-components given a specific draw for the standard deviations. The second extension is similar to the one used in the GEV case, with the standard deviations being composed of a deterministic term and a random term, either as a pure deviation, or in the form of random coefficients in the parameterisation of the distribution of the standard deviations. We next show that our Covariance GEV (CGEV) model generalises all existing GEV model structures, while the Covariance ECL (CECL) model can theoretically approximate all RUM models arbitrarily closely. Although this also means that the CECL model can closely replicate the behaviour of the CGEV model, there are some differences between the two models, which can be related to the differences in the underlying error-structure of the base models (GEV vs ECL). The CECL model has the advantage of implicitly allowing for heteroscedasticity, although this is also possible with the CGEV model, by adding appropriate error-components, leading to an EC-CGEV model. In terms of estimation, the CECL model has a run-time advantage for basic nesting structures, when the number of error-components, and hence dimensions of integration, is low enough not to counteract the gains made by being based on a more straightforward integrand (MNL vs advanced GEV). However, in more complicated structures, this advantage disappears, in a situation that is analogous to the case of Mixed GEV models compared to ECL models. A final disadvantage of the CECL model structure comes in the form of an additional set of identification conditions. The paper presents applications of these model structures to both cross-sectional and panel datasets from the field of travel behaviour analysis. 
The applications illustrate the gains in model performance that can be obtained with our proposed structures when compared to models governed by a homogeneous covariance structure assumption. As expected, the gains in performance are more important in the case of data with repeated observations for the same individual, where the notion of individual-specific substitution patterns applies more directly. The applications also confirm the slight differences between the CGEV and CECL models discussed above. The paper concludes with a discussion of how the two structures can be extended to allow for random taste heterogeneity. The resulting models thus allow for random variations in choice behaviour both in the evaluation of measured attributes and in the correlation across alternatives in the unobserved utility terms. This further increases the flexibility of the two model structures, and their potential for analysing complex behaviour in transport and other areas of research.
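    The first (GEV-based) construction described in this abstract — averaging Nested Logit probabilities over a distribution of the structural (logsum) parameter, with a transform keeping each draw in its permissible range — can be sketched as follows. This is a toy three-alternative example with one two-alternative nest; the logistic transform of a normal draw is one of several admissible choices, not the paper's specification:

    ```python
    import numpy as np

    def nested_logit_probs(V, nests, lam):
        """Nested Logit choice probabilities.
        V: utilities; nests: list of index lists partitioning the
        alternatives; lam: logsum parameter per nest, in (0, 1]."""
        V = np.asarray(V, dtype=float)
        probs = np.zeros_like(V)
        logsums = [lam[m] * np.log(np.sum(np.exp(V[idx] / lam[m])))
                   for m, idx in enumerate(nests)]
        denom = np.sum(np.exp(logsums))
        for m, idx in enumerate(nests):
            p_nest = np.exp(logsums[m]) / denom       # upper-level choice of nest
            within = np.exp(V[idx] / lam[m])          # lower-level choice in nest
            probs[idx] = p_nest * within / within.sum()
        return probs

    def mixed_nl_probs(V, nests, mu, sigma, n_draws=2000, seed=0):
        """Covariance-heterogeneity sketch: integrate the NL probabilities
        over a random logsum parameter lam = 1/(1 + exp(-(mu + sigma*xi))),
        xi ~ N(0, 1), so every draw stays in (0, 1)."""
        rng = np.random.default_rng(seed)
        acc = np.zeros(len(V))
        for xi in rng.standard_normal(n_draws):
            lam_val = 1.0 / (1.0 + np.exp(-(mu + sigma * xi)))
            acc += nested_logit_probs(V, nests, [lam_val, 1.0])
        return acc / n_draws

    # Alternatives 0 and 1 share a nest (closer substitutes); 2 stands alone.
    p = mixed_nl_probs([0.5, 0.3, 0.0], nests=[[0, 1], [2]], mu=0.0, sigma=1.0)
    ```

    With sigma = 0 this collapses to an ordinary Nested Logit; a significant sigma is the random covariance heterogeneity the paper argues a deterministic Covariance NL parameterisation may miss.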

    Secure information capacity of photons entangled in many dimensions

