
    Evaluation of group decision making based on group preferences under a multi-criteria environment

    Arrow’s impossibility theorem implies that no single group decision making (GDM) method is perfect; different GDM methods can therefore produce different or even conflicting rankings. Two important and difficult problems in the GDM process follow from this, neither of which has been fully studied: 1) how to evaluate GDM methods and 2) how to reconcile different or even conflicting rankings. This paper develops a group decision-making consensus recognition model, named GDMCRM, to address these two problems in the evaluation of GDM methods under a multi-criteria environment, with the goal of identifying and achieving optimal group consensus. The model implements and studies both ordinal and cardinal GDM methods in the evaluation process. Moreover, it can reconcile the different or even conflicting rankings generated by eight GDM methods, as demonstrated by empirical research on two real-life datasets: financial data of 12 urban commercial banks and annual report data of seven listed oil companies. The results indicate that the proposed model not only largely satisfies the group preferences of multiple stakeholders, but also identifies the best compromise solution in the opinion of all the participants involved in the group decision process. First published online 20 October 202
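    The abstract does not give the internals of GDMCRM, but the core difficulty it addresses can be illustrated with the simplest ordinal aggregation rule. The sketch below uses a Borda count to merge conflicting rankings; the input rankings and alternative names are hypothetical, and this is only an illustration of rank reconciliation, not the paper's actual model.

    ```python
    # Hypothetical illustration of reconciling conflicting rankings from
    # several GDM methods via a Borda count (not the paper's GDMCRM model).
    from collections import defaultdict

    def borda_aggregate(rankings):
        """Each ranking lists alternatives best-first.
        Returns alternatives ordered by total Borda score, ties broken by name."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, alt in enumerate(ranking):
                scores[alt] += n - 1 - position  # best gets n-1 points, worst 0
        return sorted(scores, key=lambda a: (-scores[a], a))

    # Three methods disagree about alternatives A, B, C:
    rankings = [["A", "B", "C"], ["B", "A", "C"], ["A", "C", "B"]]
    print(borda_aggregate(rankings))  # → ['A', 'B', 'C']
    ```

    Even though the three input rankings conflict pairwise, the aggregate produces a single compromise ordering, which is the kind of output a consensus model must deliver.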

    Decision-Making Support for the Evaluation of Clustering Algorithms Based on MCDM

    In many disciplines, the evaluation of algorithms for processing massive data is a challenging research issue. Different algorithms can produce different or even conflicting evaluation performance, and this phenomenon has not been fully investigated. This paper proposes a solution scheme for the evaluation of clustering algorithms that reconciles such conflicting evaluation performance. The goal of this research is to develop a model, called decision-making support for the evaluation of clustering algorithms (DMSECA), which evaluates clustering algorithms by merging expert wisdom in order to reconcile differences in their evaluation performance for information fusion during a complex decision-making process. The proposed model is tested and verified in an experimental study using six clustering algorithms, nine external measures, and four MCDM methods on 20 UCI data sets comprising a total of 18,310 instances and 313 attributes. The model generates a list of algorithm priorities to produce an optimal ranking scheme that can satisfy the decision preferences of all the participants. The results indicate the developed model is an effective tool for selecting the most appropriate clustering algorithm for a given data set. Furthermore, the model can reconcile different or even conflicting evaluation performance to reach a group agreement in a complex decision-making environment.
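    The abstract mentions ranking clustering algorithms with MCDM methods but does not specify which ones. As a minimal sketch, the snippet below ranks algorithms with TOPSIS, one widely used MCDM method; the performance matrix (algorithms as rows, external measures as benefit criteria as columns), the equal weights, and all scores are assumed for illustration and are not taken from the paper.

    ```python
    # Hedged sketch: ranking clustering algorithms by TOPSIS over a
    # hypothetical algorithm-by-measure performance matrix.
    import numpy as np

    def topsis(matrix, weights):
        """Rank alternatives (rows) on benefit criteria (columns).
        Returns row indices ordered best-first by closeness to the ideal."""
        m = np.asarray(matrix, dtype=float)
        norm = m / np.linalg.norm(m, axis=0)           # vector-normalize columns
        v = norm * np.asarray(weights, dtype=float)    # apply criterion weights
        ideal, anti = v.max(axis=0), v.min(axis=0)     # best/worst per criterion
        d_pos = np.linalg.norm(v - ideal, axis=1)      # distance to ideal
        d_neg = np.linalg.norm(v - anti, axis=1)       # distance to anti-ideal
        closeness = d_neg / (d_pos + d_neg)
        return [int(i) for i in np.argsort(-closeness)]

    # Assumed scores of three algorithms on three external measures:
    scores = [[0.82, 0.75, 0.90],
              [0.78, 0.80, 0.70],
              [0.60, 0.65, 0.68]]
    print(topsis(scores, [1/3, 1/3, 1/3]))  # → [0, 1, 2]
    ```

    In the paper's setting, a matrix like this would come from the nine external measures applied to the six algorithms, and several MCDM methods would be run and then reconciled.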