
    COOPER-framework: A Unified Standard Process for Non-parametric Projects

    Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models, such as Data Envelopment Analysis, were ever considered simple push-button technologies, this is impossible when many variables are available or when data have to be compiled from several sources. This paper introduces the ‘COOPER-framework’, a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the necessary steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis, while the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly.
    Keywords: DEA, non-parametric efficiency, unified standard process, COOPER-framework.

    Errors in Survey Based Quality Evaluation Variables in Efficiency Models of Primary Care Physicians

    Efficiency analyses in the health care sector are often criticised for not incorporating quality variables. The definition of quality of primary health care has many aspects, and it is inevitably also a question of the patients’ perception of the services received. This paper uses variables derived from patient evaluation surveys as measures of the quality of the production of health care services. It uses statistical tests to judge whether such measures have a significant impact on the use of resources in various Data Envelopment Analysis (DEA) models. As the use of survey data implies that the quality variables are measured with error, the assumptions underlying a DEA model are not strictly fulfilled. This paper focuses on ways of correcting for biases that might result from the violation of selected assumptions. Firstly, any selection bias in the patient mix of each physician is controlled for by regressing the patient evaluation responses on the patient characteristics. The corrected quality evaluation variables are entered as outputs in the DEA model, and model specification tests indicate that, out of 25 different quality variables, only waiting time has a systematic impact on the efficiency results. Secondly, the effect on the efficiency estimates of the remaining sampling error in the patient sample for each physician is accounted for by constructing confidence intervals based on resampling. Finally, as an alternative approach to including the quality variables in the DEA model, a regression model finds different variables significant, but not always with a trade-off between quality and quantity.
    Keywords: DEA; Health economics; Quality; Patient evaluation; Efficiency; Errors in variables; Resampling; Bootstrap; Selection bias; Sampling error
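
    As an aside on mechanics, the DEA models referred to above amount to one linear program per physician (DMU). Below is a minimal sketch of an input-oriented CCR envelopment model in Python, with quality measures simply entered as extra output columns; the function and data names are hypothetical illustrations, not the paper's implementation.

        # Minimal input-oriented CCR DEA, one LP per DMU (sketch, not the
        # paper's code). Quality variables would enter as columns of Y.
        import numpy as np
        from scipy.optimize import linprog

        def ccr_input_oriented(X, Y):
            """X: (n, m) inputs, Y: (n, s) outputs; returns a score per DMU."""
            n, m = X.shape
            s = Y.shape[1]
            scores = np.empty(n)
            for o in range(n):
                # decision vector: [theta, lambda_1, ..., lambda_n]
                c = np.r_[1.0, np.zeros(n)]                    # minimise theta
                A_in = np.hstack([-X[o].reshape(m, 1), X.T])   # sum_j l_j x_ij <= theta x_io
                A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # sum_j l_j y_rj >= y_ro
                res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                              b_ub=np.r_[np.zeros(m), -Y[o]],
                              bounds=[(None, None)] + [(0, None)] * n)
                scores[o] = res.fun
            return scores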

    Robustness of Non-parametric Measurement of Efficiency and Risk Aversion

    This paper examines the performance of a risk-adjusted non-parametric approach to measuring efficiency and risk aversion. Prior work is extended to the case where agent behavior is motivated by expected utility maximization. Results indicate the approach significantly outperforms traditional efficiency measurement methods when applied to risk-averse agents.
    Keywords: Risk and Uncertainty

    Conditional Nonparametric Frontier Models for Convex and Nonconvex Technologies: A Unifying Approach

    The explanation of productivity differentials is very important for identifying the economic conditions that create inefficiency and for improving managerial performance. In the literature, two main approaches have been developed: one-stage and two-stage approaches. Daraio and Simar (2003) propose a fully nonparametric methodology based on conditional FDH and conditional order-m frontiers without any convexity assumption on the technology. On the one hand, convexity has always been assumed in mainstream production theory and general equilibrium; on the other hand, in many empirical applications the convexity assumption can be reasonable and sometimes natural. Led by these considerations, in this paper we propose a unifying approach to introduce external-environmental variables in nonparametric frontier models for convex and nonconvex technologies. Developing further the work of Daraio and Simar (2003), we introduce a conditional DEA estimator, i.e., an estimator of a production frontier of DEA type conditioned on some external-environmental variables which are neither inputs nor outputs under the control of the producer. A robust version of this conditional estimator is also proposed. These various measures of efficiency also provide indicators of convexity. Illustrations through simulated and real-data (mutual funds) examples are reported.
    Keywords: Convexity, External-Environmental Factors, Production Frontier, Nonparametric Estimation, Robust Estimation.
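
    To make the conditioning idea concrete, the sketch below implements a plain conditional FDH estimator in Python: the reference set for each unit is restricted to peers whose environmental variable z lies within a bandwidth h of the evaluated unit (a crude uniform kernel). This is only a minimal illustration of the conditioning step; the paper's conditional DEA estimator additionally convexifies the reference set, and all names here are hypothetical.

        # Input-oriented conditional FDH (sketch): peers must dominate in
        # outputs and be similar in the environmental variable z.
        import numpy as np

        def conditional_fdh(X, Y, z, h):
            """X: (n, m) inputs, Y: (n, s) outputs, z: (n,) environment."""
            n = X.shape[0]
            scores = np.empty(n)
            for o in range(n):
                # the evaluated unit always qualifies, so the mask is never empty
                mask = (np.abs(z - z[o]) <= h) & np.all(Y >= Y[o], axis=1)
                # input contraction implied by each dominating, comparable peer
                scores[o] = np.max(X[mask] / X[o], axis=1).min()
            return scores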

    Performance of small and medium enterprises and the impact of environmental variables: evidence from Vietnam

    This thesis is developed from a real-life application: the performance evaluation of small and medium-sized enterprises (SMEs) in Vietnam. The thesis presents two main methodological developments on the evaluation of the impact of dichotomous environmental variables on technical efficiency. Taking selection bias into account, the thesis proposes a revised frontier separation approach for the seminal Data Envelopment Analysis (DEA) model developed by Charnes, Cooper, and Rhodes (1981). The revised frontier separation approach is based on nearest-neighbour propensity score matching, pairing treated SMEs with their counterfactuals on the propensity score. The thesis also develops an order-m frontier conditioned on the propensity score, building on the conditional order-m approach proposed by Cazals, Florens, and Simar (2002) and advocated by Daraio and Simar (2005). This development allows the conditional order-m approach to be applied with a dichotomous environmental variable while accounting for the self-selection problem of impact evaluation. Monte Carlo style simulations were built to examine the effectiveness of these developments. The methodological developments of the thesis are applied in empirical studies to evaluate the impact of training programmes on the performance of food processing SMEs and the impact of exporting on the technical efficiency of textile and garment SMEs in Vietnam. The analysis shows that training programmes have no significant impact on the technical efficiency of food processing SMEs. Moreover, the analysis confirms the conclusion of the export literature that exporters self-select into the sector. The thesis finds no significant impact of exporting activities on the technical efficiency of textile and garment SMEs; however, a large bias is eliminated by the proposed approach. The results of the empirical studies contribute to the understanding of the impact of different environmental variables on the performance of SMEs and help policy makers design proper policies to support the development of Vietnamese SMEs.
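
    The matching step described above can be sketched in a few lines of Python: fit a propensity model on the SMEs' characteristics, then pair each treated unit with the untreated unit nearest in propensity score. Function and variable names are hypothetical and scikit-learn is assumed; the thesis's own procedure is not reproduced here.

        # Nearest-neighbour propensity-score matching (sketch).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.neighbors import NearestNeighbors

        def match_on_propensity(covariates, treated):
            """treated: boolean array; returns (treated indices, matched control indices)."""
            model = LogisticRegression(max_iter=1000).fit(covariates, treated)
            score = model.predict_proba(covariates)[:, 1].reshape(-1, 1)
            controls = np.where(~treated)[0]
            nn = NearestNeighbors(n_neighbors=1).fit(score[controls])
            _, idx = nn.kneighbors(score[treated])
            return np.where(treated)[0], controls[idx.ravel()]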

    Robust DEA efficiency scores: A probabilistic/combinatorial approach

    In this paper we propose robust efficiency scores for the scenario in which the specification of the inputs/outputs to be included in the DEA model is modelled with a probability distribution. This probabilistic approach allows us to obtain three different robust efficiency scores: the Conditional Expected Score, the Unconditional Expected Score and the Expected Score under the Maximum Entropy principle. The calculation of the three efficiency scores involves the resolution of an exponential number of linear problems. The algorithm presented in this paper solves over 200 million linear problems in an affordable time when considering up to 20 inputs/outputs and 200 DMUs. The proposed approach is illustrated with an application to the assessment of professional tennis players.
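
    The expectation over specifications can be sketched as follows, assuming (purely as an illustration) independent inclusion probabilities for each candidate input and output; the paper's actual distributions and its algorithm for taming the exponential number of LPs are not reproduced here. dea_score(o, X, Y) stands for any routine returning the efficiency of DMU o under the given specification.

        # Expected efficiency score over all non-empty input/output subsets
        # (brute-force sketch; the paper uses a far more scalable algorithm).
        import itertools
        import numpy as np

        def expected_score(o, X, Y, p_in, p_out, dea_score):
            m, s = X.shape[1], Y.shape[1]
            total = weight = 0.0
            for ins in itertools.product([0, 1], repeat=m):
                for outs in itertools.product([0, 1], repeat=s):
                    if not any(ins) or not any(outs):
                        continue  # need at least one input and one output
                    prob = (np.prod([p if k else 1 - p for k, p in zip(ins, p_in)])
                            * np.prod([p if k else 1 - p for k, p in zip(outs, p_out)]))
                    total += prob * dea_score(o, X[:, np.array(ins, bool)],
                                              Y[:, np.array(outs, bool)])
                    weight += prob
            return total / weight  # conditioned on a non-empty specification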

    Multi-Factor Policy Evaluation and Selection in the One-Sample Situation

    Firms nowadays need to make decisions under fast information obsolescence. In this paper I deal with one class of decision problems in this situation, called “one-sample” problems: we have finitely many options and one sample of the multiple criteria used to evaluate those options. I develop evaluation procedures based on bootstrapping DEA (Data Envelopment Analysis) and the related decision-making methods. This paper improves the bootstrap procedure proposed by Simar and Wilson (1998) and shows how to exploit information from bootstrap outputs for decision-making.
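
    A naive version of the bootstrap mechanics is sketched below: resample the DMUs with replacement, re-score every original option against each resampled reference set, and read confidence intervals off the empirical percentiles. This is only a bare illustration; the smoothed bootstrap of Simar and Wilson (1998) and the paper's improvement of it involve considerably more care. dea_score(x0, y0, Xref, Yref) stands for any routine scoring one unit against a reference set.

        # Naive bootstrap percentile intervals for DEA scores (sketch).
        import numpy as np

        def bootstrap_scores(X, Y, dea_score, B=1000, seed=0):
            rng = np.random.default_rng(seed)
            n = X.shape[0]
            draws = np.empty((B, n))
            for b in range(B):
                idx = rng.integers(0, n, size=n)       # resample reference set
                for o in range(n):
                    draws[b, o] = dea_score(X[o], Y[o], X[idx], Y[idx])
            return np.percentile(draws, [2.5, 97.5], axis=0)  # 95% bands per DMU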

    Robust semi-parametric inference for two-stage production models: A beta regression approach

    Data envelopment analysis (DEA) is a non-parametric mathematical tool used to assess the relative efficiency of productive units. In studies of productive efficiency, it is common to employ two-stage semi-parametric procedures to determine whether exogenous factors of interest affect the performance of productive units. However, some of these procedures, particularly those based on conventional statistical inference, generate inconsistent estimates when dealing with incoherent data-generating processes. This inconsistency arises because the efficiency scores are limited to the unit interval, and the estimated scores often exhibit serial correlation and have limited observations. To address this inconsistency, several strategies have been suggested, the best known being an algorithm based on a parametric bootstrap procedure using the truncated normal distribution and its regression model. In this work, we present a modification of this algorithm that utilizes the beta distribution and its regression structure. The beta model allows for better accommodation of asymmetry in the data distribution. Our proposed algorithm introduces inferential characteristics that are superior to the original algorithm, resulting in a more statistically coherent data-generating process and improving the consistency property. We have conducted computational experiments that demonstrate the improved results achieved by our proposal.
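
    For illustration, a minimal beta-regression second stage can be run with statsmodels (version 0.13 or later is assumed for BetaModel). Scores equal to one must first be squeezed off the boundary; the Smithson-Verkuilen transform used below is a common choice, not necessarily the paper's.

        # Beta-regression second stage on DEA scores (sketch).
        import statsmodels.api as sm
        from statsmodels.othermod.betareg import BetaModel

        def beta_second_stage(scores, Z):
            """scores: array of DEA scores in (0, 1]; Z: environmental covariates."""
            n = len(scores)
            y = (scores * (n - 1) + 0.5) / n     # map (0, 1] strictly into (0, 1)
            return BetaModel(y, sm.add_constant(Z)).fit()

        # results = beta_second_stage(scores, Z); print(results.summary())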

    An Alternative Approach to Reduce Dimensionality in Data Envelopment Analysis

    Principal component analysis reduces dimensionality; however, uncorrelated components imply the existence of variables with weights of opposite signs, which complicates their use in data envelopment analysis. To overcome the problems due to signs, a modification to the component axes is proposed and verified using Monte Carlo simulations.
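
    A generic version of the sign problem and one workaround can be sketched as follows: orient each principal axis so that most loadings are positive, then translate the component scores into the positive orthant so they can serve as DEA inputs or outputs. This is a standard workaround for illustration only, not the specific axis modification the paper proposes.

        # PCA scores made usable for DEA (sketch of a generic workaround).
        import numpy as np
        from sklearn.decomposition import PCA

        def pca_for_dea(X, n_components):
            pca = PCA(n_components=n_components)
            scores = pca.fit_transform(X)
            flip = np.where(pca.components_.sum(axis=1) < 0, -1.0, 1.0)
            scores *= flip                         # orient axes consistently
            scores += 1.0 - scores.min(axis=0)     # shift so every value >= 1
            return scores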