1,322 research outputs found
A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme
Interest in group decision-making (GDM) has increased prominently over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts' weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity or misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To remedy this situation, a general GDM framework is presented which divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts' weighting methods in terms of analysis type by carrying out an in-depth literature review. Results are not only classified but also analysed and discussed with regard to multiple characteristics, such as the MCDM methods in which they are applied, the type of data used, the ideal solutions considered, or when they are applied. Furthermore, general requirements such as initial influence or component division considerations supplement this analysis. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts' weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Furthermore, six significant contributions for future research pathways are provided in the conclusions.
The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors wish to acknowledge their support from the Serra Hunter program.
Finally, this work was supported by the Catalan agency AGAUR through its research group support program (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033.
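The core operation the review classifies, aggregating several experts' opinions under expert weights, can be sketched minimally as follows. All names and numbers here are hypothetical illustration, not taken from the paper; the expert weights are simply assumed to come from one of the surveyed weighting methods.

```python
import numpy as np

# Hypothetical example: three experts score four alternatives, and
# expert weights (assumed to be produced by one of the surveyed
# experts' weighting methods) aggregate the scores into a group score.
scores = np.array([
    [7.0, 5.0, 8.0, 6.0],   # expert 1
    [6.0, 6.0, 7.0, 5.0],   # expert 2
    [8.0, 4.0, 9.0, 6.0],   # expert 3
])
expert_weights = np.array([0.5, 0.3, 0.2])   # assumed weights, sum to 1

group_score = expert_weights @ scores        # weighted arithmetic mean per alternative
best = int(np.argmax(group_score))           # group's preferred alternative
print(group_score, "-> best alternative:", best)
```

The weighted arithmetic mean is only one of the aggregation operators the review covers; geometric or ordered weighted averaging operators would replace the single matrix product above.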
The triangle assessment method: a new procedure for eliciting expert judgement
The Analytic Hierarchy Process (AHP) is one of the most widely used Multi-Criteria Decision-Making methods worldwide. As such, it is subject to criticisms that highlight some potential weaknesses. In this study, we present a new Multi-Criteria Decision-Making method denominated the "Triangular Assessment Method" (referred to by its Spanish abbreviation, MTC©). The MTC© aims to make use of the potential of AHP while avoiding some of its drawbacks. The main characteristics and advantages of the MTC© can be summarised as follows: (i) evaluation of criteria, and of the alternative options for each criterion, in trios instead of pairs; (ii) elimination of discrete scales and values involved in judgements; (iii) a substantial reduction in the number of evaluations (trios) relative to the corresponding number of pairs which would have to be considered when applying the AHP method; (iv) consistent decision-making; (v) introduction of closed cyclical series for comparing criteria and alternatives; and (vi) the introduction of opinion vectors and opinion surfaces. This new method is recommended for supporting decision-making with large numbers of subjective criteria and/or alternatives, and also for group decisions where consensus must be evaluated. The MTC© provides a promising new perspective on decision-making and could lead to new research lines in the field of information systems.
This work was supported by the Galician Regional Government ["Programa de Consolidación e Estructuración de Unidades de Investigación Competitivas, modalidade de Grupos de Referencia Competitiva" for the period 2006–2017] and by the European Union [ERDF program]. Likewise, the authors thank Daniele de Rigo, Dora Henriques and César Pérez-Cruzado, whose comments notably improved this manuscript.
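For context on the baseline the MTC© revisits, this is a minimal sketch of the classical AHP weighting step: deriving criterion priorities from a pairwise comparison matrix (via the common geometric-mean approximation of the principal eigenvector) and checking Saaty's consistency ratio. The matrix is a made-up example on Saaty's 1-9 scale; this is standard AHP, not the MTC© itself, whose trio-based judgements avoid exactly this discrete scale and consistency check.

```python
import numpy as np

# Made-up 3x3 pairwise comparison matrix on Saaty's 1-9 scale.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])
n = A.shape[0]

gm = A.prod(axis=1) ** (1.0 / n)            # row geometric means
weights = gm / gm.sum()                     # normalised priority vector

lam_max = np.max(np.linalg.eigvals(A).real) # principal eigenvalue
ci = (lam_max - n) / (n - 1)                # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index for size n
cr = ci / ri                                # consistency ratio; < 0.1 is acceptable
print(weights.round(3), round(cr, 3))
```

The number of judgements here is n(n-1)/2 pairs; point (iii) of the abstract is that evaluating trios in closed cycles needs substantially fewer elicitations as n grows.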
Latent state estimation in a class of nonlinear systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University.
The problem of estimating latent or unobserved states of a dynamical system from observed data is studied in this thesis. Approximate filtering methods for discrete time series for a class of nonlinear systems are considered, which, in turn, require sampling from a partially specified discrete distribution. A new algorithm is proposed to sample from a partially specified discrete distribution, where the specification is in terms of the first few moments of the distribution. This algorithm generates deterministic sigma points and corresponding probability weights which exactly match a specified mean vector, a specified covariance matrix, the average of the specified marginal skewnesses and the average of the specified marginal kurtoses. Both the deterministic particles and the probability weights are given in closed form and no numerical optimization is required. This algorithm is then used in approximate Bayesian filtering for the generation of particles and associated probability weights which propagate higher-order moment information about the latent states. The method is extended to generate random sigma points (or particles) and corresponding probability weights that match the same moments. The algorithm is also shown to be useful in scenario generation for financial optimization. For a variety of important distributions, the proposed moment-matching algorithm for generating particles is shown to lead to an approximation which is very close to the maximum entropy approximation. In a separate but related contribution to the field of nonlinear state estimation, a closed-form linear minimum-variance filter is derived for systems with stochastic parameter uncertainties. Expressions for the eigenvalues of the perturbed filter are derived for comparison with the eigenvalues of the unperturbed Kalman filter. A moment-matching approximation is proposed for nonlinear systems with multiplicative stochastic noise.
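As a simpler, standard point of comparison for the thesis's moment-matching construction, the classical unscented-transform sigma points below match a specified mean and covariance exactly with deterministic points and closed-form weights (the thesis's algorithm additionally matches average marginal skewness and kurtosis, which this sketch does not).

```python
import numpy as np

def unscented_sigma_points(mean, cov, kappa=1.0):
    """Classical unscented sigma points: 2n+1 points whose weighted
    empirical mean and covariance equal `mean` and `cov` exactly."""
    n = len(mean)
    L = np.linalg.cholesky((n + kappa) * cov)     # scaled matrix square root
    pts = [mean]
    pts += [mean + L[:, i] for i in range(n)]
    pts += [mean - L[:, i] for i in range(n)]
    w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
    w[0] = kappa / (n + kappa)                    # centre-point weight
    return np.array(pts), w

mean = np.array([1.0, -2.0])
cov = np.array([[2.0, 0.3],
                [0.3, 1.0]])
pts, w = unscented_sigma_points(mean, cov)

# The weighted empirical moments of the points recover the inputs exactly.
emp_mean = w @ pts
emp_cov = (pts - emp_mean).T @ np.diag(w) @ (pts - emp_mean)
print(np.allclose(emp_mean, mean), np.allclose(emp_cov, cov))
```

In a filtering loop these points would be propagated through the nonlinear dynamics and re-summarised into moments, which is where propagating higher-order moment information, as in the thesis, pays off.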
An integrated and comprehensive fuzzy multicriteria model for supplier selection in digital supply chains
Digital supply chains (DSCs) are collaborative digital systems designed to quickly and efficiently move information, products, and services through global supply chains. The physical flow of products in traditional supply chains is replaced by the digital flow of information in DSCs. This digitalization has changed the conventional supplier selection processes. We propose an integrated and comprehensive fuzzy multicriteria model for supplier selection in DSCs. The proposed model integrates the fuzzy best-worst method (BWM) with the fuzzy multi-objective optimization based on ratio analysis plus full multiplicative form (MULTIMOORA), fuzzy complex proportional assessment of alternatives (COPRAS), and fuzzy technique for order preference by similarity to ideal solution (TOPSIS). The fuzzy BWM approach is used to measure the importance weights of the digital criteria. The fuzzy MULTIMOORA, fuzzy COPRAS, and fuzzy TOPSIS methods are used as prioritization methods to rank the suppliers. The maximize agreement heuristic (MAH) is used to aggregate the supplier rankings obtained from the prioritization methods into a consensus ranking. We present a real-world case study in a manufacturing company to demonstrate the applicability of the proposed method.
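To make the final ranking step concrete, here is plain (crisp) TOPSIS on a made-up 3-supplier, 3-criterion decision matrix. This is a simplification: the paper uses fuzzy TOPSIS with BWM-derived weights, while the matrix and weights below are invented for illustration and all criteria are treated as benefit criteria.

```python
import numpy as np

# Hypothetical decision matrix: 3 suppliers (rows) x 3 benefit criteria (cols).
X = np.array([
    [7.0, 9.0, 9.0],
    [8.0, 7.0, 8.0],
    [9.0, 6.0, 7.0],
])
w = np.array([0.5, 0.3, 0.2])   # assumed criteria weights (e.g. from BWM)

V = w * X / np.linalg.norm(X, axis=0)        # weighted, vector-normalised matrix
ideal, anti = V.max(axis=0), V.min(axis=0)   # ideal and anti-ideal solutions
d_pos = np.linalg.norm(V - ideal, axis=1)    # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)     # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)          # relative closeness in (0, 1)
ranking = np.argsort(-closeness)             # best supplier first
print(closeness.round(3), ranking)
```

In the paper's pipeline this ranking would be one of three (alongside fuzzy MULTIMOORA and fuzzy COPRAS) that the maximize agreement heuristic fuses into a consensus ranking.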
Structured Semidefinite Programming for Recovering Structured Preconditioners
We develop a general framework for finding approximately-optimal preconditioners for solving linear systems. Leveraging this framework we obtain improved runtimes for fundamental preconditioning and linear system solving problems, including the following. We give an algorithm which, given positive definite $\mathbf{K} \in \mathbb{R}^{d \times d}$ with $\mathrm{nnz}(\mathbf{K})$ nonzero entries, computes an $\epsilon$-optimal diagonal preconditioner in time $\widetilde{O}(\mathrm{nnz}(\mathbf{K}) \cdot \mathrm{poly}(\kappa^\star, \epsilon^{-1}))$, where $\kappa^\star$ is the optimal condition number of the rescaled matrix. We give an algorithm which, given $\mathbf{M} \in \mathbb{R}^{d \times d}$ that is either the pseudoinverse of a graph Laplacian matrix or a constant spectral approximation of one, solves linear systems in $\mathbf{M}$ in $\widetilde{O}(d^2)$ time. Our diagonal preconditioning results improve the state-of-the-art runtimes attained by general-purpose semidefinite programming, and our solvers improve state-of-the-art runtimes of $\Omega(d^{\omega})$, where $\omega$ is the current matrix multiplication constant. We attain our results via new algorithms for a class of semidefinite programs (SDPs) we call matrix-dictionary approximation SDPs, which we leverage to solve an associated problem we call matrix-dictionary recovery.
Comment: Merge of arXiv:1812.06295 and arXiv:2008.0172
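The quantity this paper optimises is the condition number achievable by a diagonal rescaling. The sketch below shows the effect on a synthetic badly-scaled positive definite matrix using Jacobi scaling (dividing by the square roots of the diagonal), a simple heuristic baseline; the paper's algorithms instead approximate the optimal diagonal preconditioner, with condition number $\kappa^\star$.

```python
import numpy as np

# Build a positive definite matrix whose ill-conditioning comes purely
# from bad row/column scaling (a synthetic example, not from the paper).
rng = np.random.default_rng(0)
B = rng.standard_normal((50, 50))
scale = np.diag(10.0 ** rng.uniform(-3, 3, size=50))   # wildly varying scales
K = scale @ (B @ B.T + 50 * np.eye(50)) @ scale        # positive definite

# Jacobi preconditioning: rescale by D^{-1/2} with D = diag(K).
D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(K)))
K_scaled = D_inv_sqrt @ K @ D_inv_sqrt

print(f"cond(K)        = {np.linalg.cond(K):.3e}")
print(f"cond(rescaled) = {np.linalg.cond(K_scaled):.3e}")
```

Since iterative solvers like conjugate gradient converge at a rate governed by the condition number, driving it down toward the optimum $\kappa^\star$ directly speeds up the linear system solve.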
Incomplete pairwise comparative judgments: Recent developments and a proposed method
The current paper deals with incomplete Pairwise Comparisons ("PWs") when a large number of alternatives is evaluated. PWs are used to quantify a decision maker's preferences, both ordinal and cardinal, in multi-criteria decision-making settings for eliciting the priorities of alternative options or the weights of criteria. We use additive PWs with a different scale and show how 2-diagonal samples are used to deduce the implied weights, thus prioritizing the alternatives. As a consequence, the number of PWs in incomplete judgment decision matrices is greatly reduced while preserving the consistency and quality of the results. Computational results are provided and an example from the literature is applied to demonstrate the effectiveness of this method.
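The intuition behind sampling diagonals of an incomplete comparison matrix can be sketched in a few lines. Under additive judgments $a_{ij} = w_i - w_j$, the $n-1$ entries of the first superdiagonal alone determine every weight up to normalisation, instead of eliciting all $n(n-1)/2$ pairs. This toy example uses perfectly consistent made-up data and only one diagonal; the paper's 2-diagonal scheme is more robust and is not reproduced here.

```python
import numpy as np

# Hidden priorities (made up) that the elicited judgments are consistent with.
true_w = np.array([0.30, 0.25, 0.20, 0.15, 0.10])    # sum to 1
superdiag = true_w[:-1] - true_w[1:]                 # only judgments elicited: a[i, i+1]

n = len(true_w)
w_rel = np.concatenate(([0.0], -np.cumsum(superdiag)))  # fix w[0] = 0, chain the rest
w = w_rel - w_rel.mean() + 1.0 / n                      # shift so the weights sum to 1
print(np.allclose(w, true_w))                           # consistent data -> exact recovery
```

With noisy (inconsistent) judgments, a second diagonal provides redundancy, which is why the paper samples two diagonals rather than one.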
Preference elicitation from pairwise comparisons in multi-criteria decision making
Decision making is an essential activity for humans and often becomes complex in the presence of uncertainty or insufficient knowledge. This research aims at estimating preferences using pairwise comparisons. A decision maker uses pairwise comparison when he/she is unable to directly assign criteria weights or scores to the available options. The judgments provided in pairwise comparisons may not always be consistent, for several reasons. Experimentation has been used to obtain statistical evidence related to the widely used consistency measures. The results highlight the need to propose new consistency measures. Two new consistency measures, termed congruence and dissonance, are proposed to aid the decision maker in the process of elicitation. Inconsistencies in pairwise comparisons are of two types: cardinal and ordinal. It is shown that both cardinal and ordinal consistency can be improved with the help of these two measures. A heuristic method is then devised to detect and remove intransitive judgments. The results suggest that the devised method is feasible for improving ordinal consistency and is computationally more efficient than the optimization-based methods. There exist situations where revision of judgments is not allowed and prioritization is required without attempting to remove inconsistency. A new prioritization method has been proposed using the graph-theoretic approach. Although the performance of the proposed prioritization method was found to be comparable to other approaches, it has a practical limitation in terms of computation time. As a consequence, the problem of prioritization is explored as an optimization problem. A new method based on multi-objective optimization is formulated that offers multiple non-dominated solutions and outperforms all other relevant methods for inconsistent sets of judgments. A priority estimation tool (PriEsT) has been developed that implements the proposed consistency measures and prioritization methods.
In order to show the benefits of PriEsT, a case study involving Telecom infrastructure selection is presented.
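The ordinal inconsistencies this thesis targets are three-cycles of preference (A preferred to B, B to C, yet C to A). A minimal brute-force detector, as a sketch only, is shown below; the thesis's heuristic and its congruence/dissonance measures are more sophisticated and are not reproduced here. The matrix is a made-up multiplicative reciprocal matrix containing one deliberate cycle.

```python
from itertools import combinations, permutations
import numpy as np

def intransitive_triads(A):
    """Return one orientation of each triad (a, b, c) with a cycle
    a > b > c > a in the multiplicative comparison matrix A."""
    bad = []
    for i, j, k in combinations(range(A.shape[0]), 3):
        for a, b, c in permutations((i, j, k)):
            if A[a, b] > 1 and A[b, c] > 1 and A[c, a] > 1:
                bad.append((a, b, c))
                break                       # one cycle per triad is enough
    return bad

# Made-up 3x3 reciprocal matrix with the cycle 0 > 1 > 2 > 0.
A = np.array([
    [1.0, 2.0, 0.5],
    [0.5, 1.0, 3.0],
    [2.0, 1/3, 1.0],
])
print(intransitive_triads(A))               # reports the cyclic triad
```

Removing or revising one judgment in each reported triad restores ordinal consistency; the thesis's contribution is doing this efficiently and with minimal distortion of the remaining judgments.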
- …