
    An extended multiple criteria data envelopment analysis model

    Several researchers have adapted data envelopment analysis (DEA) models to deal with two inter-related problems: weak discriminating power and unrealistic weight distribution. The former problem arises in applications of DEA where decision-makers seek a complete ranking of units, while the latter refers to situations in which the basic DEA model rates units 100% efficient on account of irrational input and/or output weights and an insufficient number of degrees of freedom. Simultaneously improving discriminating power and yielding a more reasonable dispersion of input and output weights remains a challenge for DEA and multiple criteria DEA (MCDEA) models. This paper puts emphasis on weight restrictions to boost discriminating power as well as to generate realistic weight dispersion in MCDEA when a priori information about the weights is not available. To this end, we modify very recent MCDEA models in the literature by determining an optimum lower bound for input and output weights. The contribution of this paper is sevenfold: first, we show that a larger lower bound on weights often improves discriminating power and yields realistic weights in MCDEA models because it imposes more weight restrictions; second, a sensitivity-analysis procedure is designed to define stability for the weights of each evaluation criterion; third, we extend a weighted MCDEA model to three evaluation criteria based on the maximum lower bound for input and output weights; fourth, we develop a super-efficiency model for efficient units under the proposed MCDEA model; fifth, we extend an epsilon-based minsum BCC-DEA model to pursue our research objectives under variable returns to scale (VRS); sixth, we present a simulation study to statistically analyse weight dispersion and rankings across five different methods using non-parametric tests; and seventh, we demonstrate the applicability of the proposed models with an application to European Union member countries.
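
    As a point of reference for the weight restrictions discussed above, the sketch below is a minimal illustration, not the paper's extended MCDEA formulation: it solves the classic CCR multiplier-form DEA model with a uniform lower bound eps on every input and output weight. The function name and the SciPy-based setup are assumptions made purely for illustration.

```python
# A minimal sketch (not the paper's MCDEA model): CCR multiplier-form DEA
# with a uniform lower bound `eps` on all input and output weights,
# showing how weight restrictions enter the LP.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o, eps=1e-6):
    """Input-oriented CCR efficiency of unit `o` with weights bounded below by eps.

    X : (n, m) array of inputs, Y : (n, s) array of outputs.
    Decision variables are stacked as [v_1..v_m, u_1..u_s].
    """
    n, m = X.shape
    s = Y.shape[1]
    # Objective: maximise u . y_o  ->  minimise -u . y_o
    c = np.concatenate([np.zeros(m), -Y[o]])
    # Normalisation constraint: v . x_o = 1
    A_eq = np.concatenate([X[o], np.zeros(s)]).reshape(1, -1)
    b_eq = np.array([1.0])
    # For every unit j: u . y_j - v . x_j <= 0
    A_ub = np.hstack([-X, Y])
    b_ub = np.zeros(n)
    # Lower bound on every weight; the paper studies how large this bound can be.
    bounds = [(eps, None)] * (m + s)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return -res.fun  # efficiency score in (0, 1]
```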

    A computationally efficient procedure for data envelopment analysis.

    This thesis is the final outcome of a project carried out for the UK's Department for Education and Skills (DfES). They were interested in a fast algorithm for solving a Data Envelopment Analysis (DEA) model to compare the relative efficiency of 13216 primary schools in England based on 9 input-output factors. The standard approach for solving a DEA model comparing n units (such as primary schools) on m factors requires solving 2n linear programming (LP) problems, each with m constraints and at least n variables. At m = 9 and n = 13216, this was proving to be difficult. The research reported in this thesis describes both theoretical and practical contributions to achieving faster computational performance. First, we establish that by analysing any unit t only against some critically important units - we call them generators - we can either (a) complete its efficiency analysis, or (b) find a new generator. This is an important contribution to the theory of solution procedures for DEA. It leads to our new Generator-Based Algorithm (GBA), which solves only n LPs of maximum size (m x k), where k is the number of generators. As k is a small percentage of n, GBA significantly improves computational performance on large datasets. Further, GBA is capable of solving all the commonly used DEA models, including important extensions of the basic models such as weight-restricted models. In broad outline, the thesis describes four themes. First, it provides a comprehensive critical review of the extant literature on the computational aspects of DEA. Second, the thesis introduces the new computationally efficient algorithm GBA, which solves the practical problem in 105 seconds; the commercial software used by the DfES took, at best, more than an hour and often 3 to 5 hours, making it impractical for model development work. Third, the thesis presents results of comprehensive computational tests involving GBA, Jose Dula's BuildHull - the best available DEA algorithm in the literature - and the standard approach. Dula's published result showing that BuildHull consistently outperforms the standard approach is confirmed by our experiments. It is also shown that GBA is consistently better than BuildHull and is a viable tool for solving large-scale DEA problems. An interesting by-product of this work is a new closed-form solution to the important practical problem of finding strictly positive factor weights, without explicit weight restrictions, for what are known in the DEA literature as "extreme-efficient units". To date, the only other methods for achieving this require solving additional LPs or a pair of Mixed Integer Linear Programs.
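
    To make the computational setting concrete, the following sketch is an assumed illustration, not the thesis's GBA implementation: it solves the input-oriented CCR envelopment LP for a single unit t against an arbitrary reference subset. The standard approach uses all n units as the reference set; a generator-based scheme restricts it to a small set of generators.

```python
# A minimal sketch, not the thesis's GBA code: input-oriented CCR envelopment
# LP for unit `t`, evaluated only against the reference subset `ref`
# (all units in the standard approach, a small set of generators otherwise).
import numpy as np
from scipy.optimize import linprog

def envelopment_efficiency(X, Y, t, ref):
    """X : (n, m) inputs, Y : (n, s) outputs, ref : indices of reference units."""
    m, s = X.shape[1], Y.shape[1]
    k = len(ref)
    # Variables: [theta, lambda_1, ..., lambda_k]; minimise theta.
    c = np.concatenate([[1.0], np.zeros(k)])
    # sum_j lambda_j x_j <= theta * x_t   ->  -theta * x_t + X_ref^T lambda <= 0
    A_in = np.hstack([-X[t].reshape(m, 1), X[ref].T])
    # sum_j lambda_j y_j >= y_t           ->  -Y_ref^T lambda <= -y_t
    A_out = np.hstack([np.zeros((s, 1)), -Y[ref].T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.concatenate([np.zeros(m), -Y[t]])
    bounds = [(None, None)] + [(0, None)] * k
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.fun  # theta* = 1 means unit t is efficient relative to `ref`
```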

    Safety and Reliability - Safe Societies in a Changing World

    The contributions cover a wide range of methodologies and application areas for safety and reliability that contribute to safe societies in a changing world, including: foundations of risk and reliability assessment and management; mathematical methods in reliability and safety; risk assessment; risk management; system reliability; uncertainty analysis; digitalization and big data; prognostics and system health management; occupational safety; accident and incident modeling; maintenance modeling and applications; simulation for safety and reliability analysis; dynamic risk and barrier management; organizational factors and safety culture; human factors and human reliability; resilience engineering; structural reliability; natural hazards; security; and economic analysis in risk management.

    A comparison of the CAR and DAGAR spatial random effects models with an application to diabetics rate estimation in Belgium

    When hierarchically modelling an epidemiological phenomenon on a finite collection of sites in space, one must always take a latent spatial effect into account in order to capture the correlation structure that links the phenomenon to the territory. In this work, we compare two autoregressive spatial models that can be used for this purpose: the classical CAR model and the more recent DAGAR model. Unlike the former, the latter has a desirable property: its ρ parameter can be naturally interpreted as the average neighbor pair correlation and, in addition, this parameter can be directly estimated when the effect is modelled using a DAGAR rather than a CAR structure. As an application, we model the diabetics rate in Belgium in 2014 and show the adequacy of these models in predicting the response variable when no covariates are available.
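
    For readers unfamiliar with the CAR specification, a minimal sketch of its precision matrix is given below. It is a generic illustration built on assumed inputs (a binary adjacency matrix and a proper-CAR parameterisation), not the paper's Belgian analysis.

```python
# A minimal sketch (not the paper's analysis): precision matrix of a proper
# CAR random effect, Q = tau * (D - rho * W), where W is a 0/1 adjacency
# matrix, D its diagonal of neighbour counts, and rho the spatial dependence
# parameter discussed above.
import numpy as np

def car_precision(W, rho=0.9, tau=1.0):
    """Proper CAR precision matrix for a symmetric adjacency matrix W with zero diagonal."""
    D = np.diag(W.sum(axis=1))
    return tau * (D - rho * W)

# Example: three sites on a line (1-2-3).
W = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
Q = car_precision(W)
cov = np.linalg.inv(Q)  # implied covariance of the latent spatial effect
```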

    A Statistical Approach to the Alignment of fMRI Data

    Multi-subject functional Magnetic Resonance Imaging (fMRI) studies are critical: because anatomical and functional structure varies across subjects, image alignment is necessary. We define a probabilistic model to describe functional alignment. By imposing a prior distribution, such as the matrix von Mises-Fisher distribution, on the orthogonal transformation parameter, anatomical information is embedded in the estimation of the parameters, i.e., combinations of spatially distant voxels are penalized. Real applications show an improvement in the classification and interpretability of the results compared to various functional alignment methods.
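
    As background, the sketch below shows the classical orthogonal Procrustes alignment that underlies many functional alignment methods. It is only an assumed, simplified reference point: the paper's approach instead places a matrix von Mises-Fisher prior on the orthogonal transformation and estimates it within a probabilistic model.

```python
# A minimal sketch, not the paper's Bayesian estimator: orthogonal Procrustes
# alignment, which finds the orthogonal matrix R minimising
# ||X_subj @ R - X_ref||_F and maps one subject into a reference space.
import numpy as np

def procrustes_align(X_subj, X_ref):
    """X_subj, X_ref : (time, voxels) arrays for one subject and the reference."""
    U, _, Vt = np.linalg.svd(X_subj.T @ X_ref, full_matrices=False)
    R = U @ Vt          # orthogonal transformation over voxels
    return X_subj @ R   # subject data expressed in the reference space
```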