3 research outputs found

    A multiobjective optimization approach to compute the efficient frontier in data envelopment analysis

    Data envelopment analysis is a linear programming-based operations research technique for performance measurement of decision-making units. In this paper, we investigate data envelopment analysis from a multiobjective point of view to compute both the efficient extreme points and the efficient facets of the technology set simultaneously. We introduce a dual multiobjective linear programming formulation of data envelopment analysis in terms of input and output prices and propose a procedure based on objective-space algorithms for multiobjective linear programmes to compute the efficient frontier. We show that, using our algorithm, the efficient extreme points and facets of the technology set can be computed without solving any optimization problems. We conduct computational experiments demonstrating that the algorithm can compute the efficient frontier within seconds to a few minutes for real-world data envelopment analysis instances. For large-scale artificial data sets, our algorithm is faster than computing the efficiency scores of all decision-making units via linear programming.
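    For orientation, a standard scalar DEA model and its dual in terms of prices can be written as follows; the paper's multiobjective formulation generalises this pairing, and the notation (X, Y, x_o, y_o, theta, lambda, u, v) is ours, not the paper's. For a unit with data (x_o, y_o) among n units with input matrix X and output matrix Y, the input-oriented envelopment problem is

        \min_{\theta, \lambda} \; \theta \quad \text{s.t.} \quad X\lambda \le \theta x_o, \quad Y\lambda \ge y_o, \quad \lambda \ge 0,

    whose LP dual is the multiplier form

        \max_{u, v} \; u^{\top} y_o \quad \text{s.t.} \quad v^{\top} x_o = 1, \quad u^{\top} Y - v^{\top} X \le 0, \quad u, v \ge 0,

    in which v and u act as the input and output prices the abstract refers to; loosely, facets of the efficient frontier correspond to price vectors (u, v) that are optimal for the units lying on them.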

    A computationally efficient procedure for data envelopment analysis.

    This thesis is the final outcome of a project carried out for the UK's Department for Education and Skills (DfES). The DfES wanted a fast algorithm for solving a Data Envelopment Analysis (DEA) model to compare the relative efficiency of 13216 primary schools in England based on 9 input-output factors. The standard approach for solving a DEA model comparing n units (such as primary schools) on m factors requires solving 2n linear programming (LP) problems, each with m constraints and at least n variables. With m = 9 and n = 13216, this was proving difficult.

    The research reported in this thesis makes both theoretical and practical contributions to faster computational performance. First, we establish that by analysing any unit t against only some critically important units - we call them generators - we can either (a) complete its efficiency analysis or (b) find a new generator. This is an important contribution to the theory of solution procedures for DEA. It leads to our new Generator Based Algorithm (GBA), which solves only n LPs of maximum size (m × k), where k is the number of generators. As k is a small percentage of n, GBA significantly improves computational performance on large datasets. Further, GBA can solve all the commonly used DEA models, including important extensions of the basic models such as weight-restricted models.

    In broad outline, the thesis describes four themes. First, it provides a comprehensive critical review of the extant literature on the computational aspects of DEA. Second, it introduces the new computationally efficient algorithm GBA, which solves the practical problem in 105 seconds; the commercial software used by the DfES took more than an hour at best and often 3 to 5 hours, making it impractical for model development work. Third, the thesis presents results of comprehensive computational tests involving GBA, Jose Dula's BuildHull - the best available DEA algorithm in the literature - and the standard approach. Dula's published result that BuildHull consistently outperforms the standard approach is confirmed by our experiments, and GBA is shown to be consistently better than BuildHull and a viable tool for solving large-scale DEA problems. Fourth, an interesting by-product of this work is a new closed-form solution to the important practical problem of finding strictly positive factor weights, without explicit weight restrictions, for what the DEA literature calls "extreme-efficient units". To date, the only other methods for achieving this require solving additional LPs or a pair of Mixed Integer Linear Programs.
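    To make the baseline concrete, the sketch below implements the standard approach's per-unit LP - the input-oriented CCR envelopment model - with scipy.optimize.linprog. The function name and toy data are ours, not the thesis's, and the thesis's count of 2n LPs also includes a second-phase slack-maximisation LP per unit, which this sketch omits.

        import numpy as np
        from scipy.optimize import linprog

        def ccr_efficiency(X, Y, o):
            """Input-oriented CCR efficiency of unit o.
            X is (m, n) inputs, Y is (s, n) outputs; columns are units."""
            (m, n), (s, _) = X.shape, Y.shape
            # Decision vector z = [theta, lambda_1, ..., lambda_n]; minimise theta.
            c = np.concatenate(([1.0], np.zeros(n)))
            A_in = np.hstack((-X[:, [o]], X))            # X @ lam <= theta * x_o
            A_out = np.hstack((np.zeros((s, 1)), -Y))    # Y @ lam >= y_o
            b_ub = np.concatenate((np.zeros(m), -Y[:, o]))
            bounds = [(None, None)] + [(0, None)] * n    # theta free, lambda >= 0
            res = linprog(c, A_ub=np.vstack((A_in, A_out)), b_ub=b_ub,
                          bounds=bounds, method="highs")
            return res.fun                               # optimal theta in (0, 1]

        # The standard approach scores every unit separately: n LPs, each with
        # m + s rows and n + 1 columns; this is what becomes slow at n = 13216.
        rng = np.random.default_rng(0)
        X = rng.uniform(1.0, 10.0, size=(5, 60))         # 5 inputs, 60 toy units
        Y = rng.uniform(1.0, 10.0, size=(4, 60))         # 4 outputs
        scores = [ccr_efficiency(X, Y, o) for o in range(X.shape[1])]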
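    The generator idea reads naturally as a column-generation scheme: price unit t out against the current generators only, then either certify the resulting score against the full data set or discover a violating unit, which becomes a new generator. The sketch below is one plausible instantiation of that reading; it is not the thesis's actual GBA, whose entry tests and generator-selection rules are specified in the thesis itself, and multiplier_lp and efficiency_with_generators are hypothetical names.

        import numpy as np
        from scipy.optimize import linprog

        def multiplier_lp(X, Y, t, ref):
            """max u @ y_t  s.t.  v @ x_t = 1, u @ y_j <= v @ x_j for j in ref,
            u, v >= 0; decision vector z = [u (s entries), v (m entries)]."""
            m, s = X.shape[0], Y.shape[0]
            c = np.concatenate((-Y[:, t], np.zeros(m)))        # maximise u @ y_t
            A_eq = np.concatenate((np.zeros(s), X[:, t]))[None, :]
            A_ub = np.hstack((Y[:, ref].T, -X[:, ref].T))      # one row per j in ref
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(len(ref)),
                          A_eq=A_eq, b_eq=[1.0],
                          bounds=[(0, None)] * (s + m), method="highs")
            return -res.fun, res.x[:s], res.x[s:]

        def efficiency_with_generators(X, Y, t, generators, tol=1e-9):
            """Score unit t against a small reference set, growing it on demand."""
            ref = sorted(set(generators) | {t})   # include t so the LP is bounded
            while True:
                score, u, v = multiplier_lp(X, Y, t, ref)
                violation = u @ Y - v @ X         # check prices against ALL units
                j = int(np.argmax(violation))
                if violation[j] <= tol:           # case (a): analysis complete
                    return score, ref
                ref.append(j)                     # case (b): found a new generator

        # Example: score unit 0 starting from an empty generator set.
        rng = np.random.default_rng(0)
        X = rng.uniform(1.0, 10.0, size=(5, 60))
        Y = rng.uniform(1.0, 10.0, size=(4, 60))
        score0, gens = efficiency_with_generators(X, Y, 0, generators=[])

    Each LP here has len(ref) rows rather than n, mirroring the abstract's point that the LPs stay small because k, the number of generators, is a small percentage of n.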