40 research outputs found

    A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme

    Interest in group decision-making (GDM) has grown markedly over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts' weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity and misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To reverse this situation, a general GDM framework is presented that divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts' weighting methods in terms of analysis type through an in-depth literature review. Results are not only classified but also analysed and discussed with respect to multiple characteristics, such as the MCDM methods in which they are applied, the type of data used, the ideal solutions considered, and when they are applied. Furthermore, general requirements such as initial influence or component division considerations supplement this analysis. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts' weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Furthermore, six significant contributions for future research pathways are provided in the conclusions. The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors wish to acknowledge their support from the Serra Hunter program. Finally, this work was supported by the Catalan agency AGAUR through its research group support program (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033. Peer Reviewed. Postprint (published version).
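    As an illustration of the aggregation step that experts' weighting methods feed into, the following minimal Python sketch (not from the reviewed paper; the matrices, weights, and function name are hypothetical) combines the experts' decision matrices with a weighted average once their weights have been obtained by any weighting method.

```python
# Minimal illustrative sketch: weighted aggregation of expert decision matrices.
import numpy as np

def aggregate_judgments(matrices, weights):
    """matrices: list of (alternatives x criteria) arrays, one per expert;
    weights: expert weights, normalised to sum to 1."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()              # normalise defensively
    stacked = np.stack(matrices)                   # shape: (experts, alts, crits)
    return np.tensordot(weights, stacked, axes=1)  # collective decision matrix

# Example: three experts rating two alternatives on two criteria.
experts = [np.array([[0.6, 0.8], [0.4, 0.7]]),
           np.array([[0.5, 0.9], [0.3, 0.6]]),
           np.array([[0.7, 0.7], [0.5, 0.8]])]
print(aggregate_judgments(experts, [0.5, 0.3, 0.2]))
```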

    Uncertain Multi-Criteria Optimization Problems

    Most real-world search and optimization problems naturally involve multiple criteria as objectives. Generally, symmetry, asymmetry, and anti-symmetry are basic characteristics of the binary relationships used when modeling optimization problems. Moreover, the notion of symmetry has appeared in many articles about the uncertainty theories employed in multi-criteria problems. Different solutions may produce trade-offs (conflicting scenarios) among different objectives: a better solution with respect to one objective may compromise other objectives. Various factors need to be considered to address such problems in multidisciplinary research, which is critical for the overall sustainability of human development and activity. In this regard, decision-making theory has been the subject of intense research activity in recent decades due to its wide applications in different areas, and it has become an important means of providing real-time solutions to uncertainty problems. Theories available in the existing literature, such as probability theory, fuzzy set theory, type-2 fuzzy set theory, rough set theory, and uncertainty theory, deal with such uncertainties. Nevertheless, the uncertain multi-criteria characteristics of such problems have not yet been explored in depth, and there is much left to be achieved in this direction. Hence, different mathematical models of real-life multi-criteria optimization problems can be developed in various uncertain frameworks, with special emphasis on optimization problems.
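    The trade-off behaviour described above is usually formalised through Pareto dominance; the following minimal sketch (illustrative only, assuming all objectives are minimised) shows two solutions that do not dominate each other and therefore represent a genuine trade-off.

```python
# Minimal sketch: Pareto dominance check, assuming all objectives are minimised.
def dominates(a, b):
    """True if solution a is at least as good as b on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# (cost, weight) objectives: neither solution dominates the other,
# so both remain Pareto-optimal candidates (a trade-off).
print(dominates((2.0, 5.0), (3.0, 4.0)))  # False
print(dominates((3.0, 4.0), (2.0, 5.0)))  # False
```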

    A Historical Account of Types of Fuzzy Sets and Their Relationships

    In this paper, we review the definitions and basic properties of the different types of fuzzy sets that have appeared in the literature to date. We also analyze the relationships between them and enumerate some of the applications in which they have been used.

    An Integrated Decision-Making Method Based on Neutrosophic Numbers for Investigating Factors of Coastal Erosion

    The recent boom in integrated decision-making methods has attracted many researchers to the field. The recent integrated Analytic Network Process and Decision Making Trial and Evaluation Laboratory (ANP–DEMATEL) methods were developed based on crisp numbers and fuzzy numbers. However, these numbers are incapable of dealing with the indeterminate and inconsistent information that exists in real-life problems.
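    For illustration, the sketch below shows one standard way of representing such indeterminate information with a single-valued neutrosophic number, i.e. independent truth, indeterminacy, and falsity degrees; the class name and the score function are assumptions for this example, not necessarily those used in the paper's ANP–DEMATEL method.

```python
# Minimal sketch: a single-valued neutrosophic number (T, I, F) with
# independent truth, indeterminacy, and falsity degrees.
from dataclasses import dataclass

@dataclass
class SVNN:
    t: float  # truth-membership in [0, 1]
    i: float  # indeterminacy-membership in [0, 1]
    f: float  # falsity-membership in [0, 1]

    def __post_init__(self):
        assert all(0.0 <= v <= 1.0 for v in (self.t, self.i, self.f))
        # components are independent, so only 0 <= t + i + f <= 3 is required

    def score(self) -> float:
        # one commonly used score function for ranking SVNNs
        # (an assumption for this sketch, not the paper's formula)
        return (2.0 + self.t - self.i - self.f) / 3.0

print(SVNN(0.7, 0.2, 0.1).score())  # 0.8
```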

    Fuzzy Techniques for Decision Making 2018

    Zadeh's fuzzy set theory incorporates the impreciseness of data and evaluations by imputing the degrees to which each object belongs to a set. Its success fostered theories that codify the subjectivity, uncertainty, imprecision, or roughness of evaluations. Their rationale is to produce new, flexible methodologies that model a variety of concrete decision problems more realistically. This Special Issue gathers contributions addressing novel tools, techniques, and methodologies for decision making (both individual and group, single- or multi-criteria) in the context of these theories. It contains 38 research articles that contribute to a variety of setups combining fuzziness, hesitancy, roughness, covering sets, and linguistic approaches, ranging from fundamental or technical to applied approaches.
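    As a minimal illustration of "the degrees to which each object belongs to a set", the sketch below defines a triangular membership function; the function name and the example set are hypothetical.

```python
# Minimal sketch: a triangular membership function assigning each value the
# degree to which it belongs to a fuzzy set, e.g. "comfortable temperature".
def triangular(x, a, b, c):
    """Membership rises linearly from a to b and falls linearly from b to c."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

for temp in (18, 22, 25):
    print(temp, triangular(temp, 18, 22, 26))  # 0.0, 1.0, 0.25
```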

    An IVIF-ELECTRE outranking method for multiple criteria decision-making with interval-valued intuitionistic fuzzy sets

    The method of ELimination Et Choix Traduisant la REalité (ELimination and Choice Expressing Reality, ELECTRE) is a well-known and widely used outranking method for handling decision-making problems. The purpose of this paper is to develop an interval-valued intuitionistic fuzzy ELECTRE (IVIF-ELECTRE) method and apply it to multiple criteria decision analysis (MCDA) involving the multiple criteria evaluation/selection of alternatives. Using interval-valued intuitionistic fuzzy (IVIF) sets with an inclusion comparison approach, concordance and discordance sets are identified for each pair of alternatives. Next, concordance and discordance indices are determined using an aggregate importance weight score function and a generalised distance measurement between weighted evaluative ratings, respectively. Based on the concordance and discordance dominance matrices, two IVIF-ELECTRE ranking procedures are developed for the partial and complete ranking of the alternatives. The feasibility and applicability of the proposed methods are illustrated with a multiple criteria decision-making problem of watershed site selection. A comparative analysis with other MCDA methods is conducted to demonstrate the advantages of the proposed IVIF-ELECTRE methods. Finally, an empirical study of job choices is implemented to validate the effectiveness of the current methods in the real world. First published online: 17 Sep 201
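    The concordance/discordance machinery is easiest to see in the crisp case; the following simplified sketch (not the paper's IVIF formulation; names and data are illustrative) identifies the concordance and discordance sets for an ordered pair of alternatives and computes simple concordance and discordance indices, assuming benefit criteria and weights that sum to 1.

```python
# Simplified crisp ELECTRE-style sketch: concordance/discordance sets and
# indices for the ordered pair (a, b), assuming benefit criteria.
import numpy as np

def concordance_discordance(a, b, weights):
    """a, b: criterion scores of two alternatives; weights: criterion weights."""
    a, b, w = map(np.asarray, (a, b, weights))
    concordance_set = np.where(a >= b)[0]   # criteria where a is no worse than b
    discordance_set = np.where(a < b)[0]    # criteria where a is worse than b
    c_index = w[concordance_set].sum()
    diffs = np.abs(a - b)
    d_vals = diffs[discordance_set]
    d_index = 0.0 if d_vals.size == 0 or diffs.max() == 0 else d_vals.max() / diffs.max()
    return concordance_set, discordance_set, c_index, d_index

# Example: c_index = 0.7, d_index ~ 0.33 for this pair of alternatives.
print(concordance_discordance([0.8, 0.6, 0.4], [0.5, 0.7, 0.4], [0.5, 0.3, 0.2]))
```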

    Algebraic Structures of Neutrosophic Triplets, Neutrosophic Duplets, or Neutrosophic Multisets

    Neutrosophy (1995) is a new branch of philosophy that studies triads of the form (<A>, <neutA>, <antiA>), where <A> is an entity (i.e., element, concept, idea, theory, logical proposition, etc.), <antiA> is the opposite of <A>, while <neutA> is the neutral (or indeterminate) between them, i.e., neither <A> nor <antiA>. Based on neutrosophy, the neutrosophic triplets were founded; they have a similar form, (x, neut(x), anti(x)), and satisfy several axioms for each element x in a given set. This collective book presents original research papers by many neutrosophic researchers from around the world, reporting on the state of the art and recent advances in neutrosophic triplets, neutrosophic duplets, neutrosophic multisets, and their algebraic structures, which were defined in 2016 and have since gained interest from researchers worldwide. Connections between classical algebraic structures and neutrosophic triplet/duplet/multiset structures are also studied, and numerous neutrosophic applications in various fields are presented, such as multi-criteria decision making, image segmentation, medical diagnosis, fault diagnosis, data clustering, neutrosophic probability, human resource management, strategic planning, forecasting models, multi-granulation, supplier selection problems, typhoon disaster evaluation, skin lesion detection, and mining algorithms for big data analysis.
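    As a small illustration of the triplet form quoted above, the sketch below enumerates candidate neutrosophic triplets in (Z_n, multiplication mod n) using only the two defining equations x·neut(x) = x and x·anti(x) = neut(x); the additional conditions of the full definition are deliberately not checked, so this is an illustrative search, not the book's construction.

```python
# Minimal sketch: enumerate candidate neutrosophic triplets (x, neut(x), anti(x))
# in (Z_n, multiplication mod n) from the two defining equations only.
def candidate_triplets(n):
    triplets = []
    for x in range(n):
        for neut in range(n):
            if (x * neut) % n != x:        # requires x * neut(x) = x
                continue
            for anti in range(n):
                if (x * anti) % n == neut:  # requires x * anti(x) = neut(x)
                    triplets.append((x, neut, anti))
    return triplets

# In Z_10 this finds, for example, the triplets (4, 6, 4) and (4, 6, 9).
print([t for t in candidate_triplets(10) if t[0] == 4])
```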

    Collected Papers (on Neutrosophics, Plithogenics, Hypersoft Set, Hypergraphs, and other topics), Volume X

    This tenth volume of Collected Papers includes 86 papers in English and Spanish, comprising 972 pages, written between 2014 and 2022 by the author alone or in collaboration with the following 105 co-authors (alphabetically ordered) from 26 countries: Abu Sufian, Ali Hassan, Ali Safaa Sadiq, Anirudha Ghosh, Assia Bakali, Atiqe Ur Rahman, Laura Bogdan, Willem K.M. Brauers, Erick González Caballero, Fausto Cavallaro, Gavrilă Calefariu, T. Chalapathi, Victor Christianto, Mihaela Colhon, Sergiu Boris Cononovici, Mamoni Dhar, Irfan Deli, Rebeca Escobar-Jara, Alexandru Gal, N. Gandotra, Sudipta Gayen, Vassilis C. Gerogiannis, Noel Batista Hernández, Hongnian Yu, Hongbo Wang, Mihaiela Iliescu, F. Nirmala Irudayam, Sripati Jha, Darjan Karabašević, T. Katican, Bakhtawar Ali Khan, Hina Khan, Volodymyr Krasnoholovets, R. Kiran Kumar, Manoranjan Kumar Singh, Ranjan Kumar, M. Lathamaheswari, Yasar Mahmood, Nivetha Martin, Adrian Mărgean, Octavian Melinte, Mingcong Deng, Marcel Migdalovici, Monika Moga, Sana Moin, Mohamed Abdel-Basset, Mohamed Elhoseny, Rehab Mohamed, Mohamed Talea, Kalyan Mondal, Muhammad Aslam, Muhammad Aslam Malik, Muhammad Ihsan, Muhammad Naveed Jafar, Muhammad Rayees Ahmad, Muhammad Saeed, Muhammad Saqlain, Muhammad Shabir, Mujahid Abbas, Mumtaz Ali, Radu I. Munteanu, Ghulam Murtaza, Munazza Naz, Tahsin Oner, Gabrijela Popović, Surapati Pramanik, R. Priya, S.P. Priyadharshini, Midha Qayyum, Quang-Thinh Bui, Shazia Rana, Akbara Rezaei, Jesús Estupiñán Ricardo, Rıdvan Sahin, Saeeda Mirvakili, Said Broumi, A. A. Salama, Flavius Aurelian Sârbu, Ganeshsree Selvachandran, Javid Shabbir, Shio Gai Quek, Son Hoang Le, Florentin Smarandache, Dragiša Stanujkić, S. Sudha, Taha Yasin Ozturk, Zaigham Tahir, The Houw Iong, Ayse Topal, Alptekin Ulutaș, Maikel Yelandi Leyva Vázquez, Rizha Vitania, Luige Vlădăreanu, Victor Vlădăreanu, Ștefan Vlăduțescu, J. Vimala, Dan Valeriu Voinea, Adem Yolcu, Yongfei Feng, Abd El-Nasser H. Zaied, Edmundas Kazimieras Zavadskas.

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data (e.g., measurements) arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, daily advancements in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, are raising both the demand for and the complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources. The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, it focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches for data fusion have combined inputs under the assumption of independence between them. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy-integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate over-fitting while training with limited data and also enable the user to consciously push the learned methods toward a predefined, or perhaps known, structure. In addition, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space-complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient-descent approach that processes one observation at a time, enabling ChI-based data fusion to be applied to higher-dimensional data sets. In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence or processes were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
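    For concreteness, the following minimal sketch (not the dissertation's implementation; the fuzzy measure used here is a placeholder) computes the discrete Choquet integral of m inputs with respect to a fuzzy measure defined on all 2^m source subsets, which is exactly where the 2^m trainable parameters come from.

```python
# Minimal sketch: discrete Choquet integral aggregation of m inputs under a
# fuzzy measure g defined on all 2^m subsets of the sources.
from itertools import combinations

def choquet_integral(x, g):
    """x: list of m input values in [0, 1];
    g: dict mapping frozenset of source indices -> measure value in [0, 1],
       with g[frozenset()] = 0 and g[all sources] = 1 (assumed monotone)."""
    m = len(x)
    # Sort source indices by ascending input value: x_(1) <= ... <= x_(m).
    order = sorted(range(m), key=lambda i: x[i])
    result, prev = 0.0, 0.0
    for k, i in enumerate(order):
        A_k = frozenset(order[k:])          # sources whose value is >= x_(k)
        result += (x[i] - prev) * g[A_k]
        prev = x[i]
    return result

# Example with m = 3 sources: the measure has 2^3 = 8 values, which is why
# the number of trainable parameters grows as 2^m.
m = 3
g = {frozenset(s): len(s) / m               # placeholder additive measure
     for r in range(m + 1) for s in combinations(range(m), r)}
print(choquet_integral([0.2, 0.7, 0.5], g))  # reduces to the mean here
```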