
    The role of multiplier bounds in fuzzy data envelopment analysis

    The non-Archimedean epsilon ε is commonly used as a lower bound for the dual input and output weights in multiplier data envelopment analysis (DEA) models. The value of ε can be used effectively to differentiate between strongly and weakly efficient decision-making units (DMUs). The problem of weak dominance arises in particular when the reference set is fully or partially defined in terms of fuzzy numbers. In this paper, we propose a new four-step fuzzy DEA method that reshapes weakly efficient frontiers and revisits the efficiency scores of DMUs by perturbing the weakly efficient frontier. This approach eliminates the non-zero slacks in fuzzy DEA while keeping the strongly efficient frontiers unaltered. In comparing our proposed algorithm to an existing method in the recent literature, we identify three important flaws in that approach which our method addresses. Finally, we present a numerical example in banking with a combination of crisp and fuzzy data to illustrate the efficacy and advantages of the proposed approach.
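
    For orientation, the following is a minimal sketch of the crisp multiplier (CCR) model with the non-Archimedean lower bound ε on the weights, not the paper's fuzzy four-step method; it assumes NumPy/SciPy, and the function name and data layout are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def ccr_multiplier_efficiency(X, Y, j0, eps=1e-6):
        """Input-oriented CCR multiplier model for DMU j0.

        X: (n_dmus, m) inputs, Y: (n_dmus, s) outputs.
        eps is the non-Archimedean lower bound on all weights.
        Returns u*.y_j0 under the weight bounds u, v >= eps.
        """
        n, m = X.shape
        s = Y.shape[1]
        # variables z = [u_1..u_s, v_1..v_m]
        c = np.concatenate([-Y[j0], np.zeros(m)])            # maximise u . y_j0
        A_eq = np.concatenate([np.zeros(s), X[j0]]).reshape(1, -1)
        b_eq = [1.0]                                          # normalisation v . x_j0 = 1
        A_ub = np.hstack([Y, -X])                             # u . y_j - v . x_j <= 0 for all j
        b_ub = np.zeros(n)
        bounds = [(eps, None)] * (s + m)                      # epsilon lower bounds on weights
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        return -res.fun if res.success else None
    ```

    With made-up data, `ccr_multiplier_efficiency(np.array([[2., 3.], [4., 2.], [3., 5.]]), np.array([[1.], [2.], [1.5]]), j0=0)` scores the first DMU; raising eps shrinks the admissible weight region, which is what separates weakly efficient from strongly efficient units.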

    Contextualized property market models vs. generalized mass appraisals: An innovative approach

    This research addresses the current and widespread need for rational valuation methodologies capable of correctly interpreting the available market data. An innovative automated valuation model has been applied simultaneously to three Italian study samples, each consisting of two hundred residential units sold in the years 2016-2017. The main contribution of the proposed methodology is its ability to generate a "unique" functional form for the three different territorial contexts considered, in which the relationships between the influencing factors and the selling prices are specified by different multiplicative coefficients that appropriately represent the market phenomena of each case study analyzed. The method can support private operators in assessing the attractiveness of territorial investments and public entities in decision-making on future tax and urban planning policies.
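
    As a rough illustration of a "multiplicative coefficients" functional form (not the authors' model), a log-log hedonic regression recovers exponents that act multiplicatively on price. The sketch below assumes scikit-learn and uses invented variable names and synthetic data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Illustrative multiplicative (log-log) form: price = k * area^b1 * rooms^b2 * dist^b3.
    # All features and coefficients below are invented for the sketch.
    rng = np.random.default_rng(0)
    area  = rng.uniform(40, 200, 200)                  # m^2
    rooms = rng.integers(1, 6, 200)
    dist  = rng.uniform(0.1, 15, 200)                  # km to the city centre
    price = 1500 * area**0.9 * rooms**0.1 * dist**-0.15 * rng.lognormal(0, 0.05, 200)

    X = np.log(np.column_stack([area, rooms, dist]))
    model = LinearRegression().fit(X, np.log(price))
    print("multiplicative exponents:", model.coef_)    # roughly [0.9, 0.1, -0.15]
    print("scale factor:", np.exp(model.intercept_))   # roughly 1500
    ```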

    The Study and Optimization of Production/Fermentation Processes in Biofuel Production

    The production process involved in the creation of biofuels consists of a number of operations and steps that require a meticulous understanding of the relevant parameters and metrics. Production techniques differ depending on the pre-treatment systems, source material, extraction methods, types of nutrients used, cell cultures employed, processing time and temperature. Given the strategic and crucial role that bioethanol holds in supporting future energy demands, it is important to operate such processes in a highly optimized manner. One route to such optimized designs is to study data from the production processes, formulate design experiments from those data and correlate the results with the parameters using analytical tools. While the case examples analyzed relate mostly to bioethanol, an additional analysis has been performed on biodiesel data. Coupled with confirmatory methods such as principal component analysis (PCA), this allows researchers to narrow down the degree to which the parameters affect the final outcome and to identify inputs that may not play a definitive role in the outputs. The project first works through some conventional case studies of biofuel production using a fuzzy inference system (FIS) and provides insights into the ways in which fuel yields can be enhanced in the particular cases considered. For the analysis, tools such as MATLAB, Python and WEKA have been employed: Python and WEKA have been used extensively for the principal component analysis, while MATLAB has been used for building the FIS models.
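
    As a small, hedged example of the PCA step described above (with made-up placeholder data rather than real fermentation records), standardized principal component analysis in Python can indicate how strongly each process parameter loads on the dominant components:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    # Hypothetical fermentation records: temperature, pH, substrate, time, yield proxy.
    rng = np.random.default_rng(1)
    data = rng.normal(size=(50, 5))                    # placeholder for real process data

    pca = PCA(n_components=2).fit(StandardScaler().fit_transform(data))
    print("explained variance ratio:", pca.explained_variance_ratio_)
    print("loadings (parameters x components):")
    print(pca.components_.T)                           # large |loading| -> that parameter drives the component
    ```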

    A review of convex approaches for control, observation and safety of linear parameter varying and Takagi-Sugeno systems

    This paper provides a review of the concept of convex systems based on Takagi-Sugeno, linear parameter varying (LPV) and quasi-LPV modeling. These paradigms are capable of embedding the nonlinearities in an equivalent description that uses a set of linear models interpolated by appropriately defined weighting functions. Convex systems have become very popular since they allow extended linear techniques based on linear matrix inequalities (LMIs) to be applied to complex nonlinear systems. This survey aims to provide the reader with a thorough overview of the existing LMI-based techniques for convex systems in the fields of control, observation and safety. First, a detailed review of stability, feedback, tracking and model predictive control (MPC) convex controllers is presented. Second, the problem of state estimation is addressed through the design of proportional, proportional-integral, unknown-input and descriptor observers. Finally, safety of convex systems is discussed by describing popular techniques for fault diagnosis and fault-tolerant control (FTC).
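
    As a brief illustration of one of the simplest LMI conditions in this area, quadratic stability of a Takagi-Sugeno/polytopic model can be checked by searching for a common Lyapunov matrix P with A_i^T P + P A_i < 0 at every vertex. The sketch below is not from the survey itself; it assumes CVXPY with the bundled SCS solver and uses two invented vertex matrices.

    ```python
    import numpy as np
    import cvxpy as cp

    # Two illustrative vertex matrices of a Takagi-Sugeno / polytopic model (made up).
    A1 = np.array([[-1.0, 0.5], [0.0, -2.0]])
    A2 = np.array([[-1.5, 1.0], [-0.2, -1.0]])

    P = cp.Variable((2, 2), symmetric=True)
    eps = 1e-3
    constraints = [P >> eps * np.eye(2)]               # P positive definite
    for A in (A1, A2):
        # A^T P + P A negative definite at every vertex
        constraints.append(A.T @ P + P @ A << -eps * np.eye(2))

    prob = cp.Problem(cp.Minimize(0), constraints)     # pure feasibility problem
    prob.solve(solver=cp.SCS)
    print(prob.status)                                 # 'optimal' -> common quadratic Lyapunov function exists
    print(P.value)
    ```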

    A fuzzy DEA slacks-based approach

    This paper deals with the problem of efficiency assessment using Data Envelopment Analysis (DEA) when the input and output data are given as fuzzy sets. In particular, a fuzzy extension of the measure of inefficiency proportions, a well-known slacks-based additive inefficiency measure, is considered. The proposed approach also provides fuzzy input and output targets. Computational experiences and comparisons with other fuzzy DEA approaches are reported. The first author was partially supported by the research project MTM2017-89577-P (MINECO, Spain). The second author was partially supported by the Spanish Ministry of Economy and Competitiveness, grant AYA2016-75931-C2-1-P, and by the Consejería de Educación y Ciencia, Spain (Junta de Andalucía, reference TIC-101). The third author acknowledges the financial support of the Spanish Ministry of Science, Innovation and Universities, grant PGC2018-095786-B-I00.
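
    For context, a minimal crisp version of a slacks-based additive model in the spirit of the measure of inefficiency proportions can be written as a linear program; the paper's contribution is the fuzzy extension, which this sketch does not implement. It assumes SciPy, constant returns to scale, and strictly positive data.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def inefficiency_proportions(X, Y, j0):
        """Crisp additive DEA model for DMU j0 maximising slacks scaled by 1/x_ij0 and 1/y_rj0.

        X: (n, m) inputs, Y: (n, s) outputs; constant returns to scale for simplicity.
        """
        n, m = X.shape
        s = Y.shape[1]
        # variables: [lambda_1..lambda_n, s^-_1..s^-_m, s^+_1..s^+_s]
        c = np.concatenate([np.zeros(n), -1.0 / X[j0], -1.0 / Y[j0]])   # maximise normalised slacks
        A_eq = np.block([
            [X.T, np.eye(m), np.zeros((m, s))],     # sum_j lam_j x_ij + s^-_i = x_ij0
            [Y.T, np.zeros((s, m)), -np.eye(s)],    # sum_j lam_j y_rj - s^+_r = y_rj0
        ])
        b_eq = np.concatenate([X[j0], Y[j0]])
        res = linprog(c, A_eq=A_eq, b_eq=b_eq,
                      bounds=[(0, None)] * (n + m + s), method="highs")
        return -res.fun if res.success else None
    ```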

    Linear fuzzy gene network models obtained from microarray data by exhaustive search

    BACKGROUND: Recent technological advances in high-throughput data collection allow for experimental study of increasingly complex systems on the scale of the whole cellular genome and proteome. Gene network models are needed to interpret the resulting large and complex data sets. Rationally designed perturbations (e.g., gene knock-outs) can be used to iteratively refine hypothetical models, suggesting an approach for high-throughput biological system analysis. We introduce an approach to gene network modeling based on a scalable linear variant of fuzzy logic: a framework with greater resolution than Boolean logic models, but which, while still semi-quantitative, does not require the precise parameter measurement needed for chemical kinetics-based modeling. RESULTS: We demonstrated our approach with exhaustive search for fuzzy gene interaction models that best fit transcription measurements by microarray of twelve selected genes regulating the yeast cell cycle. Applying an efficient, universally applicable data normalization and fuzzification scheme, the search converged to a small number of models that individually predict experimental data within an error tolerance. Because only gene transcription levels are used to develop the models, they include both direct and indirect regulation of genes. CONCLUSION: Biological relationships in the best-fitting fuzzy gene network models successfully recover direct and indirect interactions predicted from previous knowledge to result in transcriptional correlation. Fuzzy models fit on one yeast cell cycle data set robustly predict another experimental data set for the same system. Linear fuzzy gene networks and exhaustive rule search are the first steps towards a framework for an integrated modeling and experiment approach to high-throughput "reverse engineering" of complex biological systems.
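
    As a toy illustration of the normalization-and-fuzzification step (the paper's exact scheme may differ), expression values can be min-max normalized and mapped to LOW/MEDIUM/HIGH memberships with triangular membership functions:

    ```python
    import numpy as np

    def fuzzify(expression):
        """Map one gene's expression profile to LOW/MEDIUM/HIGH memberships in [0, 1].

        A simple triangular scheme on min-max normalised values; purely illustrative.
        """
        x = np.asarray(expression, dtype=float)
        x = (x - x.min()) / (x.max() - x.min())        # normalise to [0, 1]
        low = np.clip(1.0 - 2.0 * x, 0.0, 1.0)         # membership peaks at 0
        high = np.clip(2.0 * x - 1.0, 0.0, 1.0)        # membership peaks at 1
        medium = 1.0 - low - high                      # triangular, peaks at 0.5
        return np.column_stack([low, medium, high])

    # Example: one gene measured over six time points (made-up numbers).
    print(fuzzify([2.1, 3.4, 5.0, 4.2, 2.8, 1.9]))
    ```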

    Fukunaga-Koontz feature transformation for statistical structural damage detection and hierarchical neuro-fuzzy damage localisation

    Piotr Omenzetter and Simon Hoell’s work on this paper within the Lloyd’s Register Foundation Centre for Safety and Reliability Engineering at the University of Aberdeen was supported by Lloyd’s Register Foundation. The Foundation helps to protect life and property by supporting engineering-related education, public engagement and the application of research.

    DEA-Based Incentive Regimes in Health-Care Provision

    A major challenge for legislators, insurance providers and municipalities will be how to manage the reimbursement of health care on partially open markets under increasing fiscal pressure and an aging population. Although efficiency can theoretically be obtained by private solutions using fixed-payment schemes, informational rents and production distortions may limit their implementation. The healthcare agency problem is characterized by (i) a complex multi-input multi-output technology, (ii) information uncertainty and asymmetry, and (iii) fuzzy social preferences. First, the technology, inherently nonlinear and with externalities between factors, makes parametric estimation difficult. However, the flexible production structure of Data Envelopment Analysis (DEA) offers a solution that allows for the gradual and successive refinement of potentially nonconvex technologies. Second, the information structure of healthcare suggests a context of considerable asymmetric information and considerable uncertainty about the underlying technology, but limited uncertainty or noise in the registration of the outcome. Again, we argue that DEA dynamic yardsticks (Bogetoft, 1994, 1997; Agrell and Bogetoft, 2001) are suitable for such contexts. A third important characteristic of the health sector is its somewhat fuzzy social priorities and the numerous potential conflicts between the stakeholders in the health system. Social preferences are likely dynamic and contingent on the disclosed information. Similarly, there are several potential hidden-action (moral hazard) and hidden-information (adverse selection) conflicts between the different agents in the health system. The flexible and transparent response to preferential ambiguity is one of the strongest justifications for a DEA approach. DEA yardstick regimes have been successfully implemented in other sectors (electricity distribution), and we present an operationalization of the power parameter p in a pseudo-competitive setting that both limits informational rents and induces truthful revelation of information. Recent work (Agrell and Bogetoft, 2002) on strategic implementation of DEA yardsticks is discussed in the healthcare context, where social priorities change the tradeoff between the motivation and coordination functions of the yardstick. The paper closes with policy recommendations and some areas for further work.
    Keywords: Data Envelopment Analysis, regulation, health care systems, efficiency, Health Economics and Policy
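
    As a stylised sketch only (not the Agrell-Bogetoft scheme itself), a DEA-based yardstick can exclude each provider from its own reference set and blend the resulting cost norm with reported cost through a power parameter p. The code below assumes SciPy, a single cost input, constant returns to scale, and entirely invented data; function names and the blending rule are illustrative assumptions.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def dea_cost_norm(costs, outputs, i):
        """Minimal cost at which the OTHER units could produce unit i's outputs (CRS).

        costs: (n,) observed costs, outputs: (n, s) output quantities.
        Excluding unit i from its own reference set reflects the yardstick idea:
        the norm cannot be manipulated by the unit's own report.
        """
        n, s = outputs.shape
        peers = [j for j in range(n) if j != i]
        c = costs[peers]                               # minimise sum_j lam_j * cost_j
        A_ub = -outputs[peers].T                       # sum_j lam_j * y_rj >= y_ri
        b_ub = -outputs[i]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * len(peers), method="highs")
        return res.fun

    def yardstick_payment(costs, outputs, i, p=0.8):
        """Stylised payment: a convex combination of the DEA cost norm and own cost.

        p plays the role of a power parameter: p = 1 is a pure yardstick (high-powered),
        p = 0 is pure cost reimbursement.  The actual scheme differs in detail.
        """
        return p * dea_cost_norm(costs, outputs, i) + (1 - p) * costs[i]

    # Toy data: five providers, one cost figure, two output volumes (invented).
    costs = np.array([100.0, 120.0, 90.0, 150.0, 110.0])
    outputs = np.array([[50, 20], [55, 25], [48, 18], [60, 30], [52, 22]], dtype=float)
    print([round(yardstick_payment(costs, outputs, i), 1) for i in range(5)])
    ```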