202 research outputs found
COOPER-framework: A Unified Standard Process for Non-parametric Projects
Practitioners assess the performance of entities in increasingly large and complicated datasets. If non-parametric models such as Data Envelopment Analysis were ever considered simple push-button technologies, this is no longer possible when many variables are available or when data have to be compiled from several sources. This paper introduces the "COOPER-framework", a comprehensive model for carrying out non-parametric projects. The framework consists of six interrelated phases: Concepts and objectives, On structuring data, Operational models, Performance comparison model, Evaluation, and Result and deployment. Each phase describes the steps a researcher should examine for a well-defined and repeatable analysis. The COOPER-framework provides the novice analyst with guidance, structure and advice for a sound non-parametric analysis; the more experienced analyst benefits from a checklist that ensures important issues are not forgotten. In addition, the use of a standardized framework makes non-parametric assessments more reliable, more repeatable, more manageable, faster and less costly.
Keywords: DEA, non-parametric efficiency, unified standard process, COOPER-framework.
A bi-objective weighted model for improving the discrimination power in MCDEA
This is the author accepted manuscript. The final version is available from Elsevier via the DOI in this record.
Lack of discrimination power and poor weight dispersion remain major issues in Data Envelopment Analysis (DEA). Since the initial multiple criteria DEA (MCDEA) model developed in the late 1990s, only goal programming approaches, namely GPDEA-CCR and GPDEA-BCC, have been introduced for solving these problems in a multi-objective framework. We found the GPDEA models to be invalid and demonstrate that our proposed bi-objective multiple criteria DEA (BiO-MCDEA) outperforms the GPDEA models in discrimination power and weight dispersion, while requiring less computational code. An application to energy dependency among 25 European Union member countries further illustrates the efficacy of our approach. © 2013 Elsevier B.V. All rights reserved.
An allocation Malmquist index with an application in the China securities industry
This paper proposes an allocation Malmquist index inspired by work on the non-parametric cost Malmquist index. We first show how to decompose the cost Malmquist index into the input-oriented Malmquist index and the allocation Malmquist index. An application to corporate management in the China securities industry, with a panel data set of 40 securities companies over the period 2005–2011, shows the practicality of the proposed model.
A spatiotemporal Data Envelopment Analysis (S-T DEA) approach: the need to assess evolving units
One of the major challenges in measuring efficiency in terms of resources and outcomes is assessing the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time series datasets, DEA models by construction form the reference set for inefficient units (lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach, which captures both the spatial and temporal dimensions through a multi-objective programming model. In the first stage, DEA is solved iteratively, admitting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed into a multi-objective mixed integer linear programming model, which filters peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
Chance-constrained cost efficiency in data envelopment analysis model with random inputs and outputs
The file attached to this record is the author's final peer reviewed version. The Publisher's final version can be found by following the DOI link.
Data envelopment analysis (DEA) is a well-known non-parametric technique primarily used to estimate radial efficiency under a set of mild assumptions regarding the production possibility set and the production function. The technical efficiency measure can be complemented with consistent radial metrics for cost, revenue and profit efficiency in DEA, but only in the setting with known input and output prices. In many real applications of performance measurement, such as the evaluation of utilities, banks and supply chain operations, the input and/or output data are often stochastic and linked to exogenous random variables. It is known from standard results in stochastic programming that rankings of stochastic functions are biased if expected values are used for key parameters. In this paper, we propose economic efficiency measures for stochastic data with known input and output prices. We transform the stochastic economic efficiency models into a deterministic equivalent non-linear form that can be simplified to a deterministic program with quadratic constraints. An application to a cost-minimizing planning problem of a state government in the US is presented to illustrate the applicability of the proposed framework.
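The DEA models discussed above all build on the basic envelopment linear program. As background, a minimal input-oriented CCR sketch on invented toy data (illustrative only; this is the standard deterministic model, not the paper's stochastic extension):

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR efficiency of DMU o.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Solves: min theta s.t. sum_j lam_j x_j <= theta * x_o,
                            sum_j lam_j y_j >= y_o, lam >= 0."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0  # minimize theta
    # input constraints: sum_j lam_j X[j, i] - theta * X[o, i] <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # output constraints: -sum_j lam_j Y[j, r] <= -Y[o, r]
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.fun

# toy data: 4 DMUs, 2 inputs, 1 unit output each (invented, not from the paper)
X = np.array([[2.0, 3.0], [4.0, 2.0], [4.0, 8.0], [6.0, 6.0]])
Y = np.array([[1.0], [1.0], [1.0], [1.0]])
scores = [ccr_efficiency(X, Y, o) for o in range(4)]
```

DMUs 0 and 1 lie on the frontier (score 1), while DMUs 2 and 3 are dominated by convex combinations of them.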
Big data optimization in electric power systems: a review
There are different definitions of big data; the most common refers to three to five characteristics, namely volume, velocity, variety, value, and veracity (Laney, 2001). Volume can range from terabytes to petabytes, exabytes, and zettabytes. Velocity describes how fast the data are retrieved and processed, whether batch or streaming. Variety covers structured, semi-structured, and unstructured data (Laney, 2001; Zikopoulos and Eaton, 2011). Veracity concerns the integrity and disorderliness of data, while value refers to how much value we derive from analyzing data (Zicari et al., 2016).

Electrical power systems are networks of components arranged to supply, transfer, and use electric power. In power systems, models are used to predict and characterize operations. As data volumes grow, however, powerful optimization algorithms for information processing are needed to learn these models and to solve large-scale optimization problems. Any optimization problem involves a real-valued function to be maximized or minimized by systematically choosing input values from an allowed set. The richness and quantity of large data sets offer the potential to enhance statistical learning performance, but require smart models that exploit latent low-dimensional structure for effective data separation.

This chapter reviews the most recent scientific articles related to large and big data optimization in power systems. Optimization issues such as logistics in power systems, and techniques including nonsmooth, nonconvex, and unconstrained large-scale optimization, are presented. After a brief review of big data, a scientometric analysis is applied using the keywords "big data" and "power system". In addition, keyword analysis, network visualization, journal mapping, and bibliographic coupling analysis are carried out to chart the body of work on big data in power system problems. The most common and useful techniques for large-scale optimization in power systems are also reviewed. At the end of the chapter, metaheuristic techniques for big data optimization are surveyed to show the substantial effort that has gone into big data optimization in power systems, and some perspectives on big data optimization are systematically highlighted.
Assessing the Relative Performance of Nurses Using Data Envelopment Analysis Matrix (DEAM)
Assessing employee performance is one of the most important issues in healthcare management services. Because of their direct relationship with patients, nurses are among the most influential hospital staff and play a vital role in providing healthcare services. In this paper, a novel Data Envelopment Analysis Matrix (DEAM) approach is proposed for assessing the performance of nurses based on relative efficiency. The proposed model consists of five input variables (type of employment, work experience, training hours, working hours and overtime hours) and eight output variables (the hours each nurse spends on each of eight activities, including documentation, medical instructions, wound care and patient drainage, laboratory sampling, assessment and control care, follow-up and counseling and para-clinical measures, attendance during visiting and discharge suction), tested on 30 nurses from the heart department of a hospital in Iran. After determining the relative efficiency of each nurse with the DEA model, the nurses' performance was evaluated in a DEAM format. As a result, the nurses were divided into four groups: superstars, potential stars, those who need effective training, and question marks. Finally, based on the proposed approach, we draw some recommendations for policy makers to improve and maintain the performance of each of these groups. The proposed approach provides a practical framework with which hospital managers can assess the relative efficiency of nurses, and plan and take steps to improve the quality of healthcare delivery.
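The abstract names the four DEAM groups but not the axes or cut-offs of the matrix. A hypothetical sketch of such a two-dimensional classification, assuming each nurse receives a DEA efficiency score and a second "potential" score (the variable names, the group-to-quadrant mapping, and the 0.75 cut-off are all invented for illustration):

```python
def deam_group(efficiency, potential, cutoff=0.75):
    """Hypothetical DEAM quadrant rule; the paper's actual axes,
    thresholds, and group assignments are not given in the abstract."""
    if efficiency >= cutoff and potential >= cutoff:
        return "superstar"
    if efficiency < cutoff and potential >= cutoff:
        return "potential star"
    if efficiency >= cutoff and potential < cutoff:
        return "question mark"
    return "needs training"

# invented example scores for four nurses
groups = [deam_group(e, p)
          for e, p in [(0.9, 0.8), (0.5, 0.9), (0.9, 0.4), (0.3, 0.2)]]
```

Each nurse falls into exactly one quadrant, which is what lets the paper attach a distinct policy recommendation to each group.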
The state of the art development of AHP (1979-2017): A literature review with a social network analysis
Although many papers describe the evolution of the analytic hierarchy process (AHP), most adopt a subjective approach. This paper examines the pattern of development of the AHP research field using social network analysis and scientometrics, and identifies its intellectual structure. The objectives are: (i) to trace the pattern of development of AHP research; (ii) to identify the patterns of collaboration among authors; (iii) to identify the most important papers underpinning the development of AHP; and (iv) to discover recent areas of interest. We analyse two types of networks: social networks, that is, co-authorship networks, and cognitive mapping or the network of disciplines affected by AHP. Our analyses are based on 8441 papers published between 1979 and 2017, retrieved from the ISI Web of Science database. To provide a longitudinal perspective on the pattern of evolution of AHP, we analyse these two types of networks during the three periods 1979–1990, 1991–2001 and 2002–2017. We provide some basic statistics on AHP journals and researchers, review the main topics and applications of integrated AHPs and provide direction for future research by highlighting some open questions
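A co-authorship network like the one analysed in this study can be built by counting, for each paper, every pair of authors that appears together. A minimal sketch with invented author lists (not data from the 8441-paper Web of Science corpus):

```python
from collections import Counter
from itertools import combinations

# toy paper list: each entry is the author list of one paper (hypothetical)
papers = [
    ["Saaty", "Vargas"],
    ["Saaty", "Forman"],
    ["Vargas", "Saaty", "Forman"],
]

# co-authorship network: one weighted edge per author pair,
# incremented each time the pair co-authors a paper
edges = Counter()
for authors in papers:
    for a, b in combinations(sorted(authors), 2):
        edges[(a, b)] += 1
```

The resulting weighted edge list is the input that network-analysis tools use to compute collaboration patterns such as the ones reported in the paper.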