    Defuzzification of groups of fuzzy numbers using data envelopment analysis

    Defuzzification is a critical process in the implementation of fuzzy systems that converts fuzzy numbers into crisp representations. Few researchers have focused on cases where the crisp outputs must satisfy a set of relationships present in the original crisp data, which makes the crisp outputs mathematically dependent on one another. Furthermore, the fuzzy numbers may exist as a group. Therefore, the primary aim of this thesis is to develop a method for defuzzifying groups of fuzzy numbers based on the Charnes, Cooper, and Rhodes (CCR) Data Envelopment Analysis (DEA) model, with a modified Center of Gravity (COG) method as the objective function. The constraints represent the relationships, together with additional restrictions on the allowable crisp outputs and their dependency property. This yields crisp values that preserve the relationships and/or properties of the original crisp data. Compared with a Linear Programming (LP)-based model, the proposed CCR-DEA model is more efficient and can also defuzzify non-linear fuzzy numbers with accurate solutions. Moreover, the crisp outputs obtained by the proposed method are the nearest points to the fuzzy numbers when the outputs are independent, and the best nearest points when they are dependent. In conclusion, the proposed CCR-DEA defuzzification method can produce either dependent crisp outputs with preserved relationships or independent crisp outputs without any relationship. It is a general method for defuzzifying groups of, or individual, fuzzy numbers under the assumption of convexity, with linear or non-linear membership functions and relationships.
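
    The thesis's full CCR-DEA formulation is not given in the abstract, but the Center of Gravity method it modifies is standard. The sketch below is a minimal discrete COG defuzzifier for a single triangular fuzzy number (assuming a < b < c); the function names and the example triangle are illustrative, not taken from the thesis. In the thesis, a COG-style objective of this kind is embedded in a CCR-DEA model whose constraints encode the required relationships among the crisp outputs.

```python
# Discrete Center of Gravity (COG) defuzzification of a triangular fuzzy
# number (a, b, c): COG = integral(x * mu(x)) dx / integral(mu(x)) dx,
# approximated by sampling the support [a, c]. Illustrative sketch only.
import numpy as np

def triangular_mu(x, a, b, c):
    """Membership of a triangular fuzzy number with peak at b (a < b < c)."""
    left = (x - a) / (b - a)
    right = (c - x) / (c - b)
    return np.clip(np.minimum(left, right), 0.0, 1.0)

def cog_defuzzify(a, b, c, n=10001):
    x = np.linspace(a, c, n)
    mu = triangular_mu(x, a, b, c)
    return float(np.sum(x * mu) / np.sum(mu))

print(cog_defuzzify(1.0, 2.0, 4.0))  # ~2.333, i.e. (a + b + c) / 3
```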

    Complete Closest-Target Based Directional FDH Measures of Efficiency in DEA

    In this paper, we aim to overcome three major shortcomings of the FDH (Free Disposal Hull) directional distance function by developing two new complete FDH measures of efficiency, named Linear CDFDH and Fractional CDFDH. To accomplish this, we integrate the concepts of similarity and the FDH directional distance function. We prove that the proposed measures are translation invariant and unit invariant. In addition, we present effective enumeration algorithms to compute them. Our proposed measures have several practical advantages: (a) they provide the closest Pareto-efficient observed targets, (b) they incorporate the decision maker’s preference information into the efficiency analysis, and (c) they are flexible in computer programming. We illustrate the newly developed approach with a real-world data set.
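
    The Linear and Fractional CDFDH measures are not spelled out in the abstract, so the sketch below shows only the classical FDH directional distance function they build on, computed by the kind of enumeration the paper advocates. The variable names are assumptions, and all direction components gx, gy are taken to be strictly positive.

```python
# FDH directional distance by enumeration: over all observed units j that
# dominate (x0, y0), take the largest feasible step beta along (-gx, +gy):
#   beta_j = min( min_i (x0_i - x_j,i)/gx_i , min_r (y_j,r - y0_r)/gy_r )
#   beta*  = max_j beta_j
import numpy as np

def fdh_directional_distance(X, Y, x0, y0, gx, gy):
    """X: (n, m) inputs, Y: (n, s) outputs of observed DMUs; gx, gy > 0."""
    best = None
    for xj, yj in zip(X, Y):
        if np.all(xj <= x0) and np.all(yj >= y0):  # j dominates (x0, y0)
            ratios = np.concatenate([(x0 - xj) / gx, (yj - y0) / gy])
            beta_j = ratios.min()
            best = beta_j if best is None else max(best, beta_j)
    return best  # None if no observed unit dominates (x0, y0)

# Toy data: one input, one output, three observed units.
X = np.array([[2.0], [4.0], [3.0]])
Y = np.array([[2.0], [3.0], [1.0]])
print(fdh_directional_distance(X, Y, np.array([4.0]), np.array([1.0]),
                               np.array([1.0]), np.array([1.0])))  # 1.0
```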

    A multiobjective optimization approach to compute the efficient frontier in data envelopment analysis

    Data envelopment analysis is a linear programming-based operations research technique for performance measurement of decision-making units. In this paper, we investigate data envelopment analysis from a multiobjective point of view to compute both the efficient extreme points and the efficient facets of the technology set simultaneously. We introduce a dual multiobjective linear programming formulation of data envelopment analysis in terms of input and output prices and propose a procedure based on objective space algorithms for multiobjective linear programmes to compute the efficient frontier. We show that, using our algorithm, the efficient extreme points and facets of the technology set can be computed without solving any optimization problems. We conduct computational experiments to demonstrate that the algorithm can compute the efficient frontier within seconds to a few minutes of computation time for real-world data envelopment analysis instances. For large-scale artificial data sets, our algorithm is faster than computing the efficiency scores of all decision-making units via linear programming.
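
    For contrast with the paper's objective-space algorithm, which avoids solving an optimization problem per unit, here is the conventional unit-by-unit approach: one input-oriented CCR envelopment LP per decision-making unit, sketched with scipy.optimize.linprog on made-up data.

```python
# Input-oriented CCR efficiency of DMU k, solved as a single LP:
#   min theta  s.t.  X^T lam <= theta * x_k,  Y^T lam >= y_k,  lam >= 0
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """X: (n, m) inputs, Y: (n, s) outputs; returns theta* for DMU k."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(n + 1)
    c[0] = 1.0                                     # minimise theta
    A_in = np.hstack([-X[k].reshape(m, 1), X.T])   # X^T lam - theta*x_k <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])    # -Y^T lam <= -y_k
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[k]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# Three DMUs, two inputs, one output (made-up data).
X = np.array([[2.0, 3.0], [4.0, 1.0], [4.0, 4.0]])
Y = np.array([[1.0], [1.0], [1.0]])
print(round(ccr_efficiency(X, Y, 2), 3))  # 0.625: DMU 2 is dominated
```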

    Practical robust optimization techniques and improved inverse planning of HDR brachytherapy


    Applications of simulation and optimization techniques in optimizing room and pillar mining systems

    The goal of this research was to apply simulation and optimization techniques to mine design and production sequencing problems in room and pillar (R&P) mines. The specific objectives were to: (1) apply Discrete Event Simulation (DES) to determine the optimal width of coal R&P panels under specific mining conditions; (2) investigate whether the shuttle car fleet size used to mine a particular panel width is optimal in different segments of the panel; (3) test the hypothesis that binary integer linear programming (BILP) can be used to account for mining risk in R&P long-range mine production sequencing; and (4) test the hypothesis that heuristic pre-processing can increase the computational efficiency of branch-and-cut solutions to the BILP problem of R&P mine sequencing. A DES model of an existing R&P mine was built that is capable of evaluating the effect of variable panel width on the unit cost and productivity of the mining system. For the system and operating conditions evaluated, the results showed that a 17-entry panel is optimal and that, for the 17-entry panel studied, four shuttle cars per continuous miner are optimal for 80% of the defined mining segments, with three shuttle cars optimal for the other 20%. The research successfully incorporated risk management into the R&P production sequencing problem, modeling it as a BILP with block aggregation to minimize computational complexity. Three pre-processing algorithms based on generating problem-specific cutting planes were developed and used to investigate whether heuristic pre-processing can increase computational efficiency. Although the implemented pre-processing algorithms improved computational efficiency in some instances, the overall computation times were higher due to the high cost of generating the cutting planes.
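
    The dissertation's BILP sequencing model is not reproduced in the abstract. As a flavour of the technique, the sketch below solves a toy block-selection problem with a capacity limit and precedence constraints, using scipy.optimize.milp (SciPy 1.9+); all numbers and the precedence list are invented.

```python
# Toy binary integer LP: pick mining blocks to maximise value, subject to a
# tonnage capacity and precedence (block j may be mined only if block i is).
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

value = np.array([5.0, 4.0, 3.0, 7.0])   # per-block value (invented)
tons = np.array([2.0, 1.0, 2.0, 3.0])    # per-block tonnage (invented)
capacity = 5.0                           # mining capacity for the period
precedence = [(0, 1), (1, 3)]            # (i, j): block j needs block i first

cons = [LinearConstraint(tons, -np.inf, capacity)]   # capacity limit
for i, j in precedence:                              # x_j - x_i <= 0
    row = np.zeros(len(value))
    row[j], row[i] = 1.0, -1.0
    cons.append(LinearConstraint(row, -np.inf, 0.0))

# milp minimises, so negate the values; all variables are binary.
res = milp(c=-value, constraints=cons,
           integrality=np.ones(len(value)), bounds=Bounds(0, 1))
print(res.x, -res.fun)  # blocks 0, 1, 2 are mined: value 12.0
```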

    Essays on building and evaluating two-stage DEA models of efficiency and effectiveness

    Researchers are not consistent in their choice of input and output variables when using two-stage data envelopment analysis (DEA) models to measure efficiency and effectiveness. This inconsistency has resulted in many different two-stage DEA models of efficiency and effectiveness for the financial industry. In this dissertation, I extend the statistical method from my MASc thesis (Attarwala, 2016) with additional features, documented in Chapter 2 (pages 4 and 5). The method evaluates efficiency and effectiveness models in the banking industry and relies on the semi-strong version of the efficient market hypothesis (EMH), which is motivated by the wisdom of crowds, discussed in Section 2.2.2. Previously (Attarwala, 2016), I found that the two-stage DEA model of Kumar and Gulati (2010) is not consistent with the semi-strong EMH for Indian and American banks. In this dissertation, using the improved statistical method, I show that the model of Kumar and Gulati (2010) is also not consistent with the semi-strong EMH for banks in Brazil, Canada, China, India, Japan, Mexico, South Korea, and the USA from 2000 to 2017. I address the question of whether a universal two-stage DEA model of efficiency and effectiveness exists by building a variable selection framework that automatically generates two-stage DEA models using the improved statistical method and a genetic search (GS) algorithm. The framework finds the best universal two-stage DEA model consistent with the semi-strong EMH for banks in these eight countries over 2000-2017. I then investigate the causal relationship between (a) the quantitative measures of efficiency and effectiveness from the best two-stage DEA model generated by the framework and (b) Tobin’s Q ratio, a financial market-based measure of bank performance. This not only provides bank managers with a reasonable proxy for measuring efficiency and effectiveness, but also addresses whether acting on these input and output variables improves bank performance in the financial market. Finally, I set up an optimization problem and find an optimal path from the two-stage DEA model of Kumar and Gulati (2010) to the best model found by the variable selection framework; this path provides a set of actionable items for converting a two-stage DEA model that is not consistent with the semi-strong EMH into one that is.
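
    The variable selection framework depends on the dissertation's statistical test, which the abstract does not reproduce. The sketch below shows only the genetic-search skeleton over binary variable masks; the fitness function here is a placeholder that rewards two arbitrary variables, whereas the dissertation scores a candidate model by its consistency with the semi-strong EMH.

```python
# Genetic search over binary masks: each mask switches candidate variables
# in or out of a model. Selection keeps the best half; children are built by
# one-point crossover plus bit-flip mutation. Placeholder fitness only.
import random

random.seed(0)
N_VARS = 8           # number of candidate variables (placeholder count)
POP, GENS = 20, 40

def fitness(mask):
    # Placeholder objective: keep variables 1 and 5, penalise model size.
    return mask[1] + mask[5] - 0.1 * sum(mask)

def mutate(mask, p=0.1):
    return [b ^ (random.random() < p) for b in mask]

def crossover(a, b):
    cut = random.randrange(1, N_VARS)
    return a[:cut] + b[cut:]

pop = [[random.randint(0, 1) for _ in range(N_VARS)] for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[: POP // 2]                      # keep the best half
    pop = elite + [mutate(crossover(*random.sample(elite, 2)))
                   for _ in range(POP - len(elite))]

best = max(pop, key=fitness)
print(best, fitness(best))  # converges to a mask keeping variables 1 and 5
```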

    Operational Research: Methods and Applications

    Throughout its history, Operational Research has evolved to include a variety of methods, models and algorithms that have been applied to a wide range of contexts. This encyclopedic article consists of two main sections: methods and applications. The first aims to summarise the up-to-date knowledge and provide an overview of the state-of-the-art methods and key developments in the various subdomains of the field. The second offers a wide-ranging list of areas where Operational Research has been applied. The article is meant to be read in a nonlinear fashion and used as a point of reference or first port of call for a diverse pool of readers: academics, researchers, students, and practitioners. The entries within the methods and applications sections are presented in alphabetical order. The authors dedicate this paper to the victims of the 2023 Turkey/Syria earthquake. We sincerely hope that advances in OR will play a role in minimising the pain and suffering caused by this and future catastrophes.