855 research outputs found

    Families of Linear Efficiency Programs based on Debreu's Loss Function

    Gerard Debreu introduced a well-known radial efficiency measure which he called a “coefficient of resource utilization.” He derived this scalar from a much less well-known “dead loss” function that characterizes the monetary value sacrificed to inefficiency, and which is to be minimized subject to a normalization condition. We use Debreu's loss function, together with a variety of normalization conditions, to generate several popular families of linear efficiency programs. Our methodology can also be employed to generate entirely new families of linear efficiency programs.
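
    One schematic way to connect the two objects mentioned in the abstract (the notation below is assumed for exposition, not taken from the paper): write the loss at prices p as the value of the gap between the observed resource vector z and an efficient reference point, and minimize it subject to a price normalization; with a radial (proportional) reference, the minimized loss collapses to one minus a radial coefficient.

        \[
        L(p) \;=\; p^{\top}\!\bigl(z - z^{\min}\bigr), \qquad
        \min_{p}\; L(p) \quad \text{s.t.} \quad p^{\top} z = 1 ,
        \]
        \[
        z^{\min} = \rho\, z \;\Longrightarrow\; L(p) = 1 - \rho ,
        \]
        with \(\rho\) playing the role of the radial coefficient of resource utilization.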

    Robust optimization in data envelopment analysis: extended theory and applications.

    Performance evaluation of decision-making units (DMUs) via data envelopment analysis (DEA) is confronted with multiple conflicting objectives, complex alternatives and significant uncertainties. Visualizing the risk of uncertainties in the data used in the evaluation process is crucial to understanding the need for cutting-edge solution techniques for organizational decisions. A greater management concern is to have techniques and practical models that can evaluate operations and support decisions that are not only optimal but also consistent with a changing environment. Motivated by the widespread need to mitigate the risk of uncertainties in performance evaluation, this thesis focuses on finding robust and flexible evaluation strategies for the ranking and classification of DMUs. It studies performance measurement with the DEA tool and addresses the uncertainties in data via the robust optimization technique. The thesis develops new models in robust data envelopment analysis with applications to management science, pursued in four research thrusts. In the first thrust, a robust counterpart optimization with nonnegative decision variables is proposed, which is then used to formulate new budget-of-uncertainty-based robust DEA models. The proposed model is shown to save computational cost for robust optimization solutions to operations research problems involving only positive decision variables. The second research thrust studies the duality relations of models within the worst-case and best-case approach in the input–output orientation framework. A key contribution is the design of a classification scheme that utilizes the conservativeness and the risk preference of the decision maker. In the third thrust, a new robust DEA model based on ellipsoidal uncertainty sets is proposed, which is further extended to the additive model and compared with imprecise additive models. The final thrust applies modelling techniques, including goal programming, robust optimization and data envelopment analysis, to a transportation problem where the concerns are the efficiency of the transport network, uncertainties in the demand and supply of goods, and a compromise solution to the multiple conflicting objectives of the decision maker. Several numerical examples and real-world applications explore and demonstrate the applicability of the developed models and their relevance to management decisions. Applications include the robust evaluation of banking efficiency in Europe, in particular Germany and Italy. Given the proposed models and their applications, the efficiency analysis explored in this research corresponds to the practical framework of industrial and organizational decision making and further advances the course of robust management decisions.
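
    For context, the budget-of-uncertainty idea mentioned in the first thrust is usually traced to the Bertsimas–Sim robust counterpart. A generic textbook form for a single constraint \(\sum_j \tilde a_j x_j \le b\), with \(\tilde a_j \in [\bar a_j - \hat a_j,\ \bar a_j + \hat a_j]\) and at most \(\Gamma\) coefficients deviating simultaneously, is (this is not the thesis's specific DEA formulation):

        \[
        \sum_j \bar a_j x_j \;+\; z\,\Gamma \;+\; \sum_j p_j \;\le\; b, \qquad
        z + p_j \;\ge\; \hat a_j\, y_j \;\;\forall j, \qquad
        -y_j \le x_j \le y_j \;\;\forall j, \qquad
        z,\; p_j,\; y_j \;\ge\; 0 .
        \]

    When the decision variables are known to be nonnegative, the linearization variables y_j can be fixed to x_j and eliminated, which is plausibly the kind of simplification the nonnegative-variable counterpart in the first thrust exploits.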

    The role of multiplier bounds in fuzzy data envelopment analysis

    The non-Archimedean epsilon ε is commonly considered as a lower bound for the dual input weights and output weights in multiplier data envelopment analysis (DEA) models. The value of ε can be effectively used to differentiate between strongly and weakly efficient decision making units (DMUs). The problem of weak dominance particularly occurs when the reference set is fully or partially defined in terms of fuzzy numbers. In this paper, we propose a new four-step fuzzy DEA method to re-shape weakly efficient frontiers, along with revisiting the efficiency scores of DMUs by perturbing the weakly efficient frontier. This approach eliminates the non-zero slacks in fuzzy DEA while keeping the strongly efficient frontiers unaltered. In comparing our proposed algorithm with an existing method in the recent literature, we show three important flaws in their approach that our method addresses. Finally, we present a numerical example in banking with a combination of crisp and fuzzy data to illustrate the efficacy and advantages of the proposed approach.
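
    For reference, the role the abstract assigns to the non-Archimedean epsilon is the standard one in the multiplier (CCR) model, shown here in its textbook input-oriented form (not taken from this paper), where epsilon bounds every weight away from zero:

        \[
        \max_{u,v}\ \sum_{r} u_r\, y_{ro}
        \quad \text{s.t.} \quad
        \sum_{i} v_i\, x_{io} = 1, \qquad
        \sum_{r} u_r\, y_{rj} - \sum_{i} v_i\, x_{ij} \le 0 \;\;\forall j, \qquad
        u_r \ge \varepsilon,\; v_i \ge \varepsilon .
        \]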

    Solving DEA models in a single optimization stage: Can the non-Archimedean infinitesimal be replaced by a small finite epsilon?

    Single-stage DEA models aim to assess the input or output radial efficiency of a decision making unit and potential mix inefficiency in a single optimization stage. This is achieved by incorporating the sum of input and output slacks, multiplied by a small (theoretically non-Archimedean infinitesimal) value epsilon, in the envelopment model or, equivalently, by using this value as the lower bound on the input and output weights in the dual multiplier model. When this approach is used, it is common practice to select a very small value for epsilon. This is based on the expectation that, for a sufficiently small epsilon, the radial efficiency and optimal slacks obtained by solving the single-stage model should be approximately equal to their true values obtained by the two separate optimization stages. However, as is well known, selecting a small epsilon may lead to significant computational inaccuracies. In this paper we prove that there exists a threshold value, referred to as the effective bound, such that, if epsilon is smaller than this bound, the solution to the single-stage program is not approximate but precise (exactly the same as in the two-stage approach), provided there are no computational errors.
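
    A minimal sketch of the single-stage, input-oriented envelopment model the abstract describes, written with SciPy under assumed conventions (X is an m-by-n input matrix, Y an s-by-n output matrix, columns indexing the n DMUs); this is not the authors' code, and the value of eps is illustrative only:

        # Single-stage input-oriented CCR envelopment model:
        #   min  theta - eps * (sum of input and output slacks)
        #   s.t. sum_j lambda_j x_ij + s_minus_i = theta * x_io   (inputs)
        #        sum_j lambda_j y_rj - s_plus_r  = y_ro           (outputs)
        #        lambda, s_minus, s_plus >= 0
        import numpy as np
        from scipy.optimize import linprog

        def ccr_single_stage(X, Y, o, eps=1e-6):
            m, n = X.shape
            s, _ = Y.shape
            nvar = 1 + n + m + s            # [theta, lambdas, input slacks, output slacks]

            c = np.zeros(nvar)
            c[0] = 1.0                      # minimise theta ...
            c[1 + n:] = -eps                # ... minus eps times the slacks

            # Input rows: -theta*x_io + sum_j lambda_j x_ij + s_minus_i = 0
            A_in = np.zeros((m, nvar))
            A_in[:, 0] = -X[:, o]
            A_in[:, 1:1 + n] = X
            A_in[:, 1 + n:1 + n + m] = np.eye(m)

            # Output rows: sum_j lambda_j y_rj - s_plus_r = y_ro
            A_out = np.zeros((s, nvar))
            A_out[:, 1:1 + n] = Y
            A_out[:, 1 + n + m:] = -np.eye(s)

            A_eq = np.vstack([A_in, A_out])
            b_eq = np.concatenate([np.zeros(m), Y[:, o]])

            bounds = [(None, None)] + [(0, None)] * (n + m + s)   # theta free, rest >= 0
            res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
            return res.x[0], res.x[1 + n:]  # radial efficiency and optimal slacks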

    Consistent weight restrictions in data envelopment analysis

    It has recently been shown that the incorporation of weight restrictions in models of data envelopment analysis (DEA) may induce free or unlimited production of output vectors in the underlying production technology, which is expressly disallowed by standard production assumptions. This effect may either result in an infeasible multiplier model with weight restrictions or remain undetected by normal efficiency computations. The latter is potentially troubling because, even if the efficiency scores appear unproblematic, they may still be assessed in an erroneous model of production technology. Two approaches to testing the existence of free and unlimited production have recently been developed: computational and analytical. While the latter is more straightforward than the former, its application is limited to unlinked weight restrictions. In this paper we develop several new analytical conditions for a larger class of unlinked and linked weight restrictions.
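
    To make the distinction concrete, the examples below are illustrative and not drawn from the paper: unlinked weight restrictions involve only input weights or only output weights, whereas linked restrictions tie input and output weights together.

        \[
        \text{unlinked:}\quad \alpha \;\le\; \frac{v_1}{v_2} \;\le\; \beta,
        \qquad\qquad
        \text{linked:}\quad u_1 - \gamma\, v_1 \;\le\; 0 .
        \]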

    An Evaluation of Cross-Efficiency Methods, Applied to Measuring Warehouse Performance

    In this paper, the method and practice of cross-efficiency calculation are discussed. The main methods proposed in the literature are tested not on a set of artificial data but on a realistic sample of input-output data of European warehouses. The empirical results show the limited role that increased automation investment and larger warehouse size play in improving productive performance. The reason is the existence of decreasing returns to scale in the industry, resulting in sub-optimal scales and inefficiencies, regardless of the operational performance of the facilities. From the methodological perspective, and based on a multidimensional metric which considers the capability of the various methods to rank warehouses, their ease of implementation, and their robustness to sensitivity analyses, we conclude that the classic Sexton et al. (1986) method is superior to recently proposed, more sophisticated methods.
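
    As a point of reference, a bare-bones sketch of simple (averaged) cross-efficiency with the multiplier CCR model solved by SciPy is given below; it omits the secondary-goal step that distinguishes the aggressive and benevolent variants discussed in the cross-efficiency literature, and it is not the code used in the paper:

        # Simple cross-efficiency: score every DMU with every other DMU's optimal CCR weights.
        import numpy as np
        from scipy.optimize import linprog

        def ccr_weights(X, Y, o):
            """Multiplier CCR model for DMU o; X is (m, n) inputs, Y is (s, n) outputs."""
            m, n = X.shape
            s, _ = Y.shape
            c = np.concatenate([np.zeros(m), -Y[:, o]])                 # maximise u.y_o
            A_eq = np.concatenate([X[:, o], np.zeros(s)]).reshape(1, -1)
            b_eq = [1.0]                                                # normalisation v.x_o = 1
            A_ub = np.hstack([-X.T, Y.T])                               # u.y_j - v.x_j <= 0, all j
            b_ub = np.zeros(n)
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                          bounds=[(0, None)] * (m + s), method="highs")
            return res.x[:m], res.x[m:]                                 # (v, u)

        def cross_efficiency(X, Y):
            n = X.shape[1]
            E = np.zeros((n, n))
            for k in range(n):                   # weights chosen by DMU k ...
                v, u = ccr_weights(X, Y, k)
                for j in range(n):               # ... applied to every DMU j
                    E[k, j] = (u @ Y[:, j]) / (v @ X[:, j])
            return E.mean(axis=0)                # average cross-efficiency per DMU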

    A computationally efficient procedure for data envelopment analysis.

    This thesis is the final outcome of a project carried out for the UK's Department for Education and Skills (DfES). They were interested in finding a fast algorithm for solving a Data Envelopment Analysis (DEA) model to compare the relative efficiency of 13216 primary schools in England based on 9 input-output factors. The standard approach for solving a DEA model comparing n units (such as primary schools) based on m factors requires solving 2n linear programming (LP) problems, each with m constraints and at least n variables. With m = 9 and n = 13216, this was proving to be difficult. The research reported in this thesis describes both theoretical and practical contributions to achieving faster computational performance. First, we establish that in analysing any unit t only against some critically important units, which we call generators, we can either (a) complete its efficiency analysis, or (b) find a new generator. This is an important contribution to the theory of solution procedures of DEA. It leads to our new Generator Based Algorithm (GBA), which solves only n LPs of maximum size m × k, where k is the number of generators. As k is a small percentage of n, GBA significantly improves computational performance on large datasets. Further, GBA is capable of solving all the commonly used DEA models, including important extensions of the basic models such as weight-restricted models. In broad outline, the thesis describes four themes. First, it provides a comprehensive critical review of the extant literature on the computational aspects of DEA. Second, the thesis introduces the new computationally efficient algorithm GBA. It solves the practical problem in 105 seconds. The commercial software used by the DfES, at best, took more than an hour and often took 3 to 5 hours, making it impractical for model development work. Third, the thesis presents results of comprehensive computational tests involving GBA, Jose Dula's BuildHull (the best available DEA algorithm in the literature) and the standard approach. Dula's published result showing that BuildHull consistently outperforms the standard approach is confirmed by our experiments. It is also shown that GBA is consistently better than BuildHull and is a viable tool for solving large-scale DEA problems. An interesting by-product of this work is a new closed-form solution to the important practical problem of finding strictly positive factor weights, without explicit weight restrictions, for what are known in the DEA literature as "extreme-efficient units". To date, the only other methods for achieving this require solving additional LPs or a pair of Mixed Integer Linear Programs.
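
    Read purely from the abstract's description, the generator-based loop has roughly the following shape; the three helper functions are hypothetical placeholders standing in for the thesis's actual LP and tests, not its implementation:

        # Schematic outline of a generator-based DEA assessment (a reading of the abstract only).
        def generator_based_dea(units, solve_lp_against_generators, is_conclusive, extract_new_generator):
            generators = []                       # the "critically important units" found so far
            scores = {}
            for t in units:
                while True:
                    # LP of size at most m x len(generators): unit t assessed only against generators
                    result = solve_lp_against_generators(t, generators)
                    if is_conclusive(result):     # case (a): the efficiency of t is settled
                        scores[t] = result
                        break
                    # case (b): the LP reveals a new generator; enlarge the set and re-assess
                    generators.append(extract_new_generator(result))
            return scores, generators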

    Efficiency of Infrastructure: The Case of Container Ports

    This paper gauges efficiency in container ports. Using non-parametric methods, we estimate efficiency frontiers based on information from 86 ports across the world. Three attractive features of the method are: 1) it is based on an aggregated measure of efficiency despite the existence of multiple inputs; 2) it does not assume particular input-output functional relationships; and 3) it does not rely on a priori peer selection to construct the benchmark. Results show that the most inefficient ports use inputs in excess of 20 to 40 percent. Since infrastructure costs represent about 40 percent of total maritime transport costs, these could be reduced by 12 percent by moving from the inefficient extreme of the distribution to the efficient one. Keywords: Container Ports, Efficiency Frontiers, Non-Parametric Methods.