98 research outputs found

    Inferring the Latent Incidence of Inefficiency from DEA Estimates and Bayesian Priors

    Data envelopment analysis (DEA) is among the most popular empirical tools for measuring cost and productive efficiency. Because DEA is a linear programming technique, establishing formal statistical properties for outcomes is difficult. We show that the incidence of inefficiency within a population of Decision Making Units (DMUs) is a latent variable, with DEA outcomes providing only noisy sample-based categorizations of inefficiency. We then use a Bayesian approach to infer an appropriate posterior distribution for the incidence of inefficient DMUs based on a random sample of DEA outcomes and a prior distribution on the incidence of inefficiency. The methodology applies to both finite and infinite populations and to sampling DMUs with and without replacement, and it accounts for the noise in the DEA characterization of inefficiency within a coherent Bayesian approach to the problem. The result is an appropriately up-scaled, noise-adjusted inference regarding the incidence of inefficiency in a population of DMUs.

    Keywords: Data Envelopment Analysis, latent inefficiency, Bayesian inference, Beta priors, posterior incidence of inefficiency
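The core Bayesian update described in the abstract can be sketched with a conjugate Beta prior on the incidence of inefficiency. This is a minimal illustration only: it treats the DEA flags as if they were noiseless, whereas the paper's contribution is precisely the adjustment for noise in the DEA categorization, which is omitted here. The sample counts are hypothetical.

```python
def beta_binomial_posterior(k, n, a=1.0, b=1.0):
    """Posterior Beta parameters for the incidence of inefficiency,
    given k of n sampled DMUs flagged inefficient and a Beta(a, b)
    prior. Simplified: the paper's noise adjustment is not modelled."""
    return a + k, b + (n - k)

def beta_mean(a, b):
    """Posterior mean of a Beta(a, b) distribution."""
    return a / (a + b)

# Hypothetical sample: 12 of 40 sampled DMUs flagged inefficient by DEA,
# under a uniform Beta(1, 1) prior on the incidence of inefficiency.
a_post, b_post = beta_binomial_posterior(k=12, n=40)
posterior_mean = beta_mean(a_post, b_post)  # shrinks 12/40 toward the prior mean 1/2
```

With a uniform prior, the posterior mean is (k + 1)/(n + 2), a mild shrinkage of the raw DEA proportion toward one half.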

    Measuring the Quality of Life across Countries: A Sensitivity Analysis of Well-being Indices

    Keywords: quality of life, domains of quality of life, Borda rule, principal components analysis, well-being indices

    A Mixture Model of Consumers' Intended Purchase Decisions for Genetically Modified Foods

    A finite probability mixture model is used to analyze the existence of multiple market segments for a pre-market good. The approach has at least two principal benefits. First, the model is capable of identifying likely market segments and their differentiating characteristics. Second, the model can be used to estimate the discount different consumer groups require to purchase the good. The model is illustrated using stated preference survey data collected on consumer responses to the potential introduction in Norway of bread made with genetically modified wheat.
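A finite mixture of this kind is typically fit with the EM algorithm. The sketch below, under assumptions not taken from the paper, uses a two-segment mixture of independent Bernoullis over several yes/no survey items; the paper's model also estimates the discount each segment requires, which is not modelled here.

```python
import math
import random

def em_bernoulli_mixture(data, n_iter=200, seed=0):
    """EM for a two-segment finite mixture of independent Bernoullis.
    data: list of respondents, each a list of 0/1 answers to J items.
    Returns (segment weights pi, per-segment item probabilities theta).
    Illustrative sketch only, not the paper's specification."""
    rng = random.Random(seed)
    J = len(data[0])
    pi = [0.5, 0.5]
    theta = [[rng.uniform(0.25, 0.75) for _ in range(J)] for _ in range(2)]
    for _ in range(n_iter):
        # E-step: posterior segment responsibilities for each respondent
        resp = []
        for x in data:
            logw = []
            for k in range(2):
                ll = math.log(pi[k])
                for j in range(J):
                    p = min(max(theta[k][j], 1e-9), 1 - 1e-9)  # avoid log(0)
                    ll += x[j] * math.log(p) + (1 - x[j]) * math.log(1 - p)
                logw.append(ll)
            m = max(logw)
            e = [math.exp(v - m) for v in logw]
            s = sum(e)
            resp.append([v / s for v in e])
        # M-step: update segment weights and item probabilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            for j in range(J):
                theta[k][j] = sum(r[k] * x[j] for r, x in zip(resp, data)) / nk
    return pi, theta
```

On cleanly separated responses the two recovered segments converge to the distinct answer profiles, mirroring how such a model identifies likely market segments and their differentiating characteristics.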

    A Minimum Power Divergence Class of CDFs and Estimators for the Binary Choice Model

    This paper uses information theoretic methods to introduce a new class of probability distributions and estimators for competing explanations of the data in the binary choice model. No explicit parameterization of the function connecting the data to the Bernoulli probabilities is stated in the specification of the statistical model. A large class of probability density functions emerges, including the conventional logit model. The new class of statistical models and estimators requires minimal a priori model structure and non-sample information, and provides a range of model and estimator extensions. An empirical example is included to illustrate the applicability of these methods.
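The "power divergence" in the estimator's name refers to the Cressie-Read family of divergence measures. As background, not as the paper's estimator, the family between two discrete distributions can be sketched as follows; the limit λ → 0 recovers the Kullback-Leibler divergence, and λ = 1 gives half the Pearson chi-squared statistic.

```python
import math

def power_divergence(p, q, lam):
    """Cressie-Read power divergence I_lam(p || q) between two discrete
    distributions p and q (lists of probabilities on the same support).
    lam -> 0 recovers Kullback-Leibler divergence; lam = 1 gives half
    the Pearson chi-squared statistic."""
    if abs(lam) < 1e-12:
        # KL limit, with the convention 0 * log(0/q) = 0
        return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return sum(pi * ((pi / qi) ** lam - 1) for pi, qi in zip(p, q)) / (lam * (lam + 1))
```

Minimizing such a divergence subject to data-based moment constraints, rather than fixing a parametric link a priori, is the general idea behind estimators of this type.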

    An Information-Theoretic Approach to Estimating Willingness To Pay for River Recreation Site Attributes

    This study applies an information theoretic econometric approach in the form of a new maximum likelihood-minimum power divergence (ML-MPD) semi-parametric binary response estimator to analyze dichotomous contingent valuation data. The ML-MPD method estimates the underlying behavioral decision process leading to a person’s willingness to pay for river recreation site attributes. Empirical choice probabilities, willingness to pay measures for recreation site attributes, and marginal effects of changes in some explanatory variables are estimated. For comparison purposes, a Logit model is also implemented. A Wald test of the symmetric logistic distribution underlying the Logit model is rejected at the 0.01 level in favor of the ML-MPD distribution model. Moreover, based on several goodness-of-fit measures, we find that the ML-MPD is superior to the Logit model. Our results also demonstrate the potential for substantially overstating the precision of the estimates and associated inferences when the imposition of unknown structural information is not explicitly accounted for in the model. The ML-MPD model provides more intuitively reasonable and defensible results regarding the valuation of river recreation than the Logit model.

    Normalized entropy aggregation for inhomogeneous large-scale data

    The relationship between information theory, statistics, and maximum entropy was established as early as the 1950s, following the work of Kullback, Leibler, Lindley, and Jaynes. However, applications were restricted to very specific domains, and it was not until recently that the convergence of information processing, data analysis, and inference demanded the foundation of a new scientific area, commonly referred to as Info-Metrics. As huge amounts of information and large-scale data have become available, the term "big data" has come to refer to the many kinds of challenges presented in its analysis: many observations, many variables (or both), limited computational resources, different time regimes, or multiple sources. In this work, we consider one particular aspect of big data analysis: the presence of inhomogeneities, which compromises the use of the classical framework in regression modelling. A new approach is proposed, based on introducing the concepts of info-metrics into the analysis of inhomogeneous large-scale data. The framework of information-theoretic estimation methods is presented, along with some information measures. In particular, the normalized entropy is tested in aggregation procedures, and some simulation results are presented.
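The normalized entropy mentioned in the abstract is, in its standard form, the Shannon entropy of a discrete distribution scaled by its maximum value ln(K), so that it lies in [0, 1]. A minimal sketch, assuming this standard definition rather than any variant specific to the paper:

```python
import math

def normalized_entropy(p):
    """Shannon entropy of a discrete distribution p (list of K
    probabilities summing to 1) divided by its maximum ln(K),
    so the result lies in [0, 1]. Uses the convention 0*log(0) = 0."""
    K = len(p)
    h = -sum(pi * math.log(pi) for pi in p if pi > 0)
    return h / math.log(K)
```

A uniform distribution attains the maximum value 1, while a degenerate distribution concentrated on one outcome gives 0; values in between quantify how informative an estimate is relative to total uncertainty.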

    Mathematical statistics for economics and business

    No full text

