
    A Monte Carlo Study of Old and New Frontier Methods for Efficiency Measurement

    This study presents the results of an extensive Monte Carlo experiment comparing different methods of efficiency analysis. In addition to traditional parametric-stochastic and nonparametric-deterministic methods, recently developed robust nonparametric-stochastic methods are considered. The experimental design comprises a wide variety of situations with different returns-to-scale regimes, substitution elasticities, and outlying observations. As the results show, the new robust nonparametric-stochastic methods should not be used without cross-checking against other methods such as stochastic frontier analysis or data envelopment analysis; these latter methods appear quite robust in the experiments.
    Keywords: Monte Carlo experiment, efficiency measurement, nonparametric stochastic methods
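A Monte Carlo experiment of this kind rests on a simulated data-generating process with known efficiency. A minimal sketch, assuming a single-input Cobb-Douglas frontier with half-normal inefficiency and normal noise; the functional form, parameter values, and distributions are illustrative assumptions, not the study's actual experimental design:

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_frontier_sample(n=100, beta=0.6, sigma_u=0.3, sigma_v=0.1):
    """Single-input Cobb-Douglas frontier sample with half-normal
    inefficiency u and two-sided normal noise v (illustrative DGP)."""
    x = rng.uniform(1.0, 10.0, n)            # input levels
    u = np.abs(rng.normal(0.0, sigma_u, n))  # inefficiency, u >= 0
    v = rng.normal(0.0, sigma_v, n)          # measurement noise
    log_y = beta * np.log(x) - u + v         # log output below the frontier
    true_eff = np.exp(-u)                    # technical efficiency in (0, 1]
    return x, np.exp(log_y), true_eff

x, y, eff = simulate_frontier_sample()
```

Because the true efficiencies are known by construction, each estimator's scores can be compared against `true_eff` across replications.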

    Environmental factors in frontier estimation - A Monte Carlo analysis

    We compare three recently developed frontier estimators, namely the conditional DEA (Daraio and Simar, 2005; 2007b), the latent class SFA (Greene, 2005; Orea and Kumbhakar, 2004), and the StoNEZD approach (Johnson and Kuosmanen, 2011), by means of Monte Carlo simulation. We focus on their ability to identify production frontiers and efficiency rankings in the presence of environmental factors. Our simulations match features of real-life datasets and cover a wide range of scenarios with variations in sample size, distribution of noise and inefficiency, as well as in the distributions, intensity, and number of environmental variables. Our results provide insight into the finite-sample properties of the estimators, while also identifying estimator-specific characteristics. Overall, the latent class approach is found to perform best, although in many cases StoNEZD shows a similar performance. The performance of conditional DEA is most often inferior.
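One generic way an environmental factor can enter such a simulation is by scaling the spread of the inefficiency term. The sketch below assumes that mechanism; it is one illustrative choice among the distributions and intensities the paper varies, not its actual design:

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate_with_environment(n=200, delta=0.5):
    """Single-input DGP in which an environmental variable z raises
    expected inefficiency (an assumed, illustrative mechanism)."""
    x = rng.uniform(1.0, 10.0, n)
    z = rng.uniform(0.0, 1.0, n)                  # environmental factor
    u = np.abs(rng.normal(0.0, 0.2 + delta * z))  # z scales inefficiency
    v = rng.normal(0.0, 0.1, n)
    y = np.exp(0.5 * np.log(x) - u + v)
    return x, z, y, np.exp(-u)

x, z, y, eff = simulate_with_environment()
```

Under this setup, units with a higher z are less efficient on average, so an estimator that conditions on z correctly should outperform one that ignores it.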

    Analyzing the accuracy of variable returns to scale data envelopment analysis models

    The data envelopment analysis (DEA) model is extensively used to estimate efficiency, but no study has determined which DEA model delivers the most precise estimates. To address this issue, we advance the Monte Carlo simulation-based data generation process proposed by Kohl and Brunner (2020). The developed process generates an artificial dataset using the Translog production function (instead of the commonly used Cobb-Douglas) to construct well-behaved scenarios under variable returns to scale (VRS). Using different VRS DEA models, we compute DEA efficiency scores with artificially generated decision-making units (DMUs). We employ five performance indicators followed by a benchmark value and ranking, as well as statistical hypothesis tests, to evaluate the quality of the efficiency estimates. The procedure allows us to determine which parameters negatively or positively influence the quality of the DEA estimates. It also enables us to identify which DEA model performs best over a wide range of scenarios. In contrast to the widely applied BCC (Banker-Charnes-Cooper) model, we find that the Assurance Region (AR) and Slacks-Based Measurement (SBM) DEA models perform better. Thus, we endorse the use of AR and SBM models for DEA applications under the VRS regime.
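Translog-based data generation can be sketched as follows; the coefficient values here are assumptions chosen only to keep the frontier well-behaved, not those of the Kohl and Brunner process:

```python
import numpy as np

rng = np.random.default_rng(0)

def translog_frontier(x1, x2, b0=0.2, b1=0.45, b2=0.4,
                      b11=-0.03, b22=-0.03, b12=0.02):
    """Log of frontier output under a two-input Translog technology."""
    l1, l2 = np.log(x1), np.log(x2)
    return (b0 + b1 * l1 + b2 * l2
            + 0.5 * b11 * l1**2 + 0.5 * b22 * l2**2 + b12 * l1 * l2)

n = 50
x1 = rng.uniform(1.0, 10.0, n)
x2 = rng.uniform(1.0, 10.0, n)
u = np.abs(rng.normal(0.0, 0.2, n))        # inefficiency per DMU
y = np.exp(translog_frontier(x1, x2) - u)  # observed output on or below frontier
```

Setting the second-order coefficients b11, b22, and b12 to zero collapses this to the Cobb-Douglas special case, which is what makes the Translog the more flexible choice.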

    On the need for reform of the Portuguese judicial system - does data envelopment analysis assessment support it?

    The Portuguese judicial system has attracted considerable criticism in recent years, and demands for reforms have gained prominence. By using the Data Envelopment Analysis technique and focusing on the performance of 223 Portuguese first instance courts during the period 2007 to 2011, this research has found evidence that supports some of this criticism and justifies the calls for reforms, better performance and accountability of the judicial system. In particular, our results reveal a sector with considerable scope for improvement: less than 16 percent of the 223 courts analysed made efficient use of their resources in each year, and only one third of the courts were considered efficient in at least one of the five years assessed. Whilst the results suggest that improvement can be achieved with better case management, scale factors also seem to play an important role in explaining inefficiency, with most of the inefficient courts being smaller than optimal and with smaller courts being, on average, less efficient than larger ones. The existence of a statistically significant relationship between courts' efficiency and size was confirmed by the Mann-Whitney test. These results indicate considerable scope for improvement and suggest that some of the planned reforms are timely and well targeted. However, the results also suggest that efficiency gains from matching peers' best practices are not enough to sustainably reduce the prevailing judicial backlog and length of court proceedings in a considerable number of courts. Major changes in the capacity and/or functioning of the Portuguese judicial system might also be required.
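The size-efficiency comparison relies on the Mann-Whitney rank test. A self-contained sketch on synthetic scores (the court data themselves are not reproduced here; the normal-approximation p-value below assumes no ties):

```python
import numpy as np
from math import erf, sqrt

def mann_whitney_u(a, b):
    """Mann-Whitney U statistic for sample a, with a two-sided
    normal-approximation p-value (no tie correction)."""
    combined = np.concatenate([a, b])
    ranks = combined.argsort().argsort() + 1.0   # ranks 1..n, ties assumed absent
    n1, n2 = len(a), len(b)
    u1 = ranks[:n1].sum() - n1 * (n1 + 1) / 2.0
    z = (u1 - n1 * n2 / 2.0) / sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    p = 2.0 * (1.0 - 0.5 * (1.0 + erf(abs(z) / sqrt(2.0))))
    return u1, p

rng = np.random.default_rng(1)
small_courts = rng.beta(2, 3, 60)   # hypothetical scores, lower on average
large_courts = rng.beta(4, 2, 60)   # hypothetical scores, higher on average
u_stat, p_value = mann_whitney_u(small_courts, large_courts)
```

Because the test is rank-based, it makes no distributional assumption about the efficiency scores, which is why it suits bounded DEA scores better than a t-test.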

    Nonparametric Stochastic Production Frontier: An Analysis of the Efficiency of the State Judiciary

    The objective of this article is to estimate and model the technical efficiency of the judiciaries of the Brazilian states. To that end, a nonparametric stochastic production frontier is estimated from a sample of data published by the National Council of Justice of Brazil. In estimating the production frontier, the inputs are the number of magistrates, the number of court staff, the number of cases in progress, and operating and investment expenditures; the output is the number of sentences handed down. In modelling efficiency, the regressors are the proportion of criminal cases, the proportion of electronic cases, the proportion of judges in special courts, and the average workload of magistrates. All estimated nonparametric regressions show the existence of nonlinear relations. The workload of magistrates exhibits a U-shaped quadratic relation with efficiency; the proportion of judges in special courts reaches maximum efficiency at 25%; the proportion of criminal cases reduces efficiency; and electronic cases are statistically insignificant.
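Nonlinear relations like the U-shape can be recovered without imposing a functional form. The sketch below uses Nadaraya-Watson kernel regression on synthetic data as a generic stand-in; it is not necessarily the article's exact estimator:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, h=0.5):
    """Nadaraya-Watson kernel regression with a Gaussian kernel and
    bandwidth h -- a generic nonparametric smoother."""
    d = (x_eval[:, None] - x_train[None, :]) / h
    w = np.exp(-0.5 * d**2)
    return (w * y_train).sum(axis=1) / w.sum(axis=1)

rng = np.random.default_rng(3)
workload = rng.uniform(-2.0, 2.0, 300)             # centred synthetic regressor
response = workload**2 + rng.normal(0, 0.1, 300)   # true U-shaped relation plus noise
grid = np.array([-1.5, 0.0, 1.5])
fit = nadaraya_watson(workload, response, grid)
```

The fitted curve dips in the middle and rises at both ends, recovering the quadratic shape without ever specifying a polynomial.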

    Dealing with endogeneity in data envelopment analysis applications

    Although endogeneity is frequently present in economic production processes, it tends to be overlooked when practitioners apply data envelopment analysis (DEA). In this paper we address this issue in two ways. First, we provide a simple statistical heuristic procedure that enables practitioners to identify the presence of endogeneity in an empirical application. Second, we propose the use of an instrumental input DEA (II-DEA) as a potential tool to address this problem and thus improve DEA estimations. A Monte Carlo experiment confirms that the proposed II-DEA approach outperforms standard DEA in finite samples in the presence of high positive endogeneity. To illustrate our theoretical findings, we perform an empirical application on the education sector.
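The flavour of such a screening heuristic can be illustrated on synthetic data: an input is suspect when it co-moves with the efficiency term. The mechanism and magnitudes below are assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200

u = np.abs(rng.normal(0.0, 0.3, n))               # inefficiency
eff = np.exp(-u)                                  # true technical efficiency
x_exog = rng.uniform(1.0, 10.0, n)                # input unrelated to efficiency
x_endog = rng.uniform(1.0, 10.0, n) + 8.0 * eff   # positively endogenous input

corr_exog = np.corrcoef(x_exog, eff)[0, 1]        # near zero
corr_endog = np.corrcoef(x_endog, eff)[0, 1]      # clearly positive
```

A practitioner with some proxy for efficiency (e.g. preliminary DEA scores) could screen inputs this way before deciding whether an instrumental input is warranted.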

    A decision support system for assessing management interventions in a mental health ecosystem: The case of Bizkaia (Basque Country, Spain)

    Evidence-informed strategic planning is a top priority in Mental Health (MH) due to the burden associated with this group of disorders and its societal costs. However, MH systems are highly complex, and decision support tools should follow a systems-thinking approach that incorporates expert knowledge. The aim of this paper is to introduce a new Decision Support System (DSS) to improve knowledge of the health ecosystem, resource allocation and management in regional MH planning. The Efficient Decision Support-Mental Health (EDeS-MH) is a DSS that integrates an operational model to assess the Relative Technical Efficiency (RTE) of small health areas, a Monte Carlo simulation engine, a fuzzy inference engine prototype, and basic statistics as well as system stability and entropy indicators. The stability indicator assesses the sensitivity of the model results to data variations (derived from structural changes); the entropy indicator assesses the inner uncertainty of the results. RTE is multidimensional: it is evaluated using 15 variable combinations called scenarios, each designed by experts in MH planning and carrying its own meaning based on different types of care. Three management interventions on the MH system in Bizkaia were analysed using key performance indicators of service availability, placement capacity in day care, health-care workforce capacity, and resource-utilisation data of hospital and community care. The potential impact of these interventions was assessed at both local and system levels. The system reacts positively to the proposals, with a slight increase in efficiency and stability (and a corresponding decrease in entropy). However, depending on the scenario analysed, the RTE, stability and entropy statistics can behave positively, neutrally or negatively. Using this information, decision makers can design new specific interventions and policies.
    EDeS-MH has been tested and face-validated in a real management situation in the Bizkaia MH system. The present research is framed within the REFINEMENT Spain project (Project PI15/01986), funded by the Carlos III Health Institute (http://www.isciii.es/)
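An entropy indicator of this kind can be illustrated with a histogram-based Shannon entropy normalized to [0, 1]; this is an assumed stand-in for the paper's own formula, shown only to convey the reading of entropy as inner uncertainty:

```python
import numpy as np

def normalized_entropy(samples, bins=10):
    """Shannon entropy of a histogram of Monte Carlo results,
    normalized by log(bins) so the value lies in [0, 1]."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                            # drop empty bins (0 * log 0 := 0)
    return float(-(p * np.log(p)).sum() / np.log(bins))

rng = np.random.default_rng(9)
stable = rng.normal(0.8, 0.01, 1000)        # tightly clustered simulation results
uncertain = rng.uniform(0.0, 1.0, 1000)     # widely dispersed simulation results
```

Dispersed simulation output yields entropy near 1, while clustered output yields a lower value, so a decrease in entropy after an intervention signals more certain results.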

    Which estimator to measure local governments' cost efficiency? The case of Spanish municipalities

    We analyse overall cost efficiency in Spanish local governments during the crisis period (2008-2015). To this end, we first consider some of the most popular nonparametric methods to evaluate local government efficiency, data envelopment analysis and free disposal hull, as well as recent proposals, namely the order-m partial frontier and the nonparametric estimator proposed by Kneip et al. (Econom Theory 24(6):1663-1697, 2008). Second, to compare the four methods and choose the most appropriate one for our particular context and dataset (local government cost efficiency in Spain), we carry out an experiment via Monte Carlo simulations and discuss the relative performance of the efficiency scores under various scenarios. Our results suggest that no single approach is suitable for all efficiency analyses. We find that for our sample of 1846 Spanish local governments, average cost efficiency would have been between 0.5417 and 0.7543 during the period 2008-2015, suggesting that Spanish local governments could have achieved the same level of local outputs with between roughly 25% and 46% fewer resources.
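Of the estimators compared, free disposal hull is the simplest to sketch: with one input and one output, a unit's input-oriented FDH score benchmarks it against every observed unit producing at least as much. A minimal illustration, not the paper's implementation:

```python
import numpy as np

def fdh_input_efficiency(x, y):
    """Input-oriented FDH scores for single-input, single-output data:
    score_o = min over {j : y_j >= y_o} of x_j / x_o, i.e. each unit is
    compared only with observed units that dominate its output."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    scores = np.empty(len(x))
    for o in range(len(x)):
        dominating = y >= y[o]               # units with at least unit o's output
        scores[o] = np.min(x[dominating]) / x[o]
    return scores

scores = fdh_input_efficiency([2.0, 4.0, 3.0, 6.0], [1.0, 2.0, 2.0, 2.0])
# The unit with x=4, y=2 scores 3/4 = 0.75: another unit produces the
# same output from input 3, so input could shrink by a quarter.
```

Unlike DEA, FDH requires no convexity assumption and no linear program, which is why its frontier hugs the observed data more closely.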