
    A novel method based on extended uncertain 2-tuple linguistic Muirhead mean operators to MAGDM under uncertain 2-tuple linguistic environment

    The present work focuses on multi-attribute group decision-making (MAGDM) problems with uncertain 2-tuple linguistic information (ULI2–tuple), based on new aggregation operators that can capture interrelationships among attributes through a parameter vector P. To begin with, we present new uncertain 2-tuple linguistic Muirhead mean aggregation (UL2–tuple-MM) operators to handle MAGDM problems with ULI2–tuple, including the uncertain 2-tuple linguistic Muirhead mean (UL2–tuple-MM) operator and the uncertain 2-tuple linguistic weighted Muirhead mean (UL2–tuple-WMM) operator. In addition, we extend the UL2–tuple-WMM operator to a new aggregation operator, the extended uncertain 2-tuple linguistic weighted Muirhead mean (EUL2–tuple-WMM) operator, in order to handle decision-making problems in which both the attribute values and the attribute weights are expressed as 2-tuple linguistic information. Meanwhile, several properties of these new aggregation operators are established and some special cases are discussed. Moreover, we propose a new method to solve MAGDM problems with ULI2–tuple. Finally, a numerical example is given to show the validity of the proposed method, and its advantages are also analysed.
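The Muirhead mean (MM) underlying these operators aggregates n nonnegative values a_1, …, a_n under a parameter vector P = (p_1, …, p_n) as MM^P(a) = ((1/n!) Σ_σ Π_j a_{σ(j)}^{p_j})^{1 / Σ_j p_j}, where σ ranges over all permutations. A minimal numeric sketch of the crisp MM (not the paper's uncertain 2-tuple linguistic extension):

```python
import itertools
import math

def muirhead_mean(values, P):
    """Crisp Muirhead mean:
    MM^P(a) = ( (1/n!) * sum over permutations sigma of
                prod_j a_{sigma(j)}^{p_j} )^(1 / sum(P))."""
    n = len(values)
    total = 0.0
    for perm in itertools.permutations(values):
        prod = 1.0
        for a, p in zip(perm, P):
            prod *= a ** p
        total += prod
    return (total / math.factorial(n)) ** (1.0 / sum(P))
```

With P = (1, 0, …, 0) the MM reduces to the arithmetic mean, and with P = (1/n, …, 1/n) to the geometric mean, which is a quick sanity check on the role of the parameter vector.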

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data---e.g., measurements---arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, the daily advancement of engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, is raising both the demand for and complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources. The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, this work focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches for data fusion have focused on combining inputs under the assumption of independence between them. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate over-fitting while training with limited data and also enable the user to consciously push the learned methods toward a predefined, or perhaps known, structure. Also, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling the applicability of ChI-based data fusion to higher-dimensional data sets. In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence process(es) were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior on different instances of data.
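The discrete Choquet integral the dissertation builds on can be sketched directly: sort the m inputs in ascending order, then weight each successive increment by the fuzzy-measure worth of the set of sources still at or above that level. A minimal Python sketch, assuming a hand-set illustrative measure g rather than a learned one:

```python
def choquet_integral(h, g):
    """Discrete Choquet integral of inputs h (dict: source -> value in [0, 1])
    with respect to a fuzzy measure g (dict: frozenset of sources -> worth),
    where g[frozenset()] = 0 and g[frozenset(all sources)] = 1."""
    items = sorted(h.items(), key=lambda kv: kv[1])  # ascending by input value
    remaining = set(h)  # sources whose value is >= the current level
    total, prev = 0.0, 0.0
    for src, val in items:
        total += (val - prev) * g[frozenset(remaining)]
        prev = val
        remaining.discard(src)
    return total
```

When g is additive (each subset's worth is the sum of its singletons), the ChI reduces to a weighted average; setting g = 1 on every nonempty subset recovers the max, and g = 1 only on the full set recovers the min. The 2^m free values in between are exactly the interaction structure whose learning the dissertation regularizes.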

    Algebraic Structures of Neutrosophic Triplets, Neutrosophic Duplets, or Neutrosophic Multisets

    Neutrosophy (1995) is a new branch of philosophy that studies triads of the form (A, neutA, antiA), where A is an entity (i.e., an element, concept, idea, theory, logical proposition, etc.), antiA is the opposite of A, and neutA is the neutral (or indeterminate) between them, i.e., neither A nor antiA. Based on neutrosophy, the neutrosophic triplets were founded; they have a similar form, (x, neut(x), anti(x)), and satisfy several axioms for each element x in a given set. This collective book presents original research papers by many neutrosophic researchers from around the world that report on the state of the art and recent advancements in neutrosophic triplets, neutrosophic duplets, neutrosophic multisets, and their algebraic structures, which were defined in 2016 and have since gained interest from researchers worldwide. Connections between classical algebraic structures and neutrosophic triplet/duplet/multiset structures are also studied, along with numerous neutrosophic applications in various fields, such as multi-criteria decision making, image segmentation, medical diagnosis, fault diagnosis, data clustering, neutrosophic probability, human resource management, strategic planning, forecasting models, multi-granulation, supplier selection problems, typhoon disaster evaluation, skin lesion detection, and mining algorithms for big-data analysis.

    Evaluation of the coordination between China’s technology and economy using a grey multivariate coupling model

    As extremely complex interactions exist in the process of economic research and development, a novel grey multivariable coupling model called CFGM(1,N) is proposed to evaluate the coordination degree between China’s technology and economy under limited information. The proposed model improves the aggregation in the GM(1,N) model through the Choquet integral with respect to a λ-fuzzy measure, which can reflect interactions among factor indexes. Meanwhile, it estimates the coordinate parameters via the whale optimization algorithm and obtains the coupling coordination degree by combining the model with grey comentropy. To verify the proposed model, a case study using a dataset from China’s technology and economic system is conducted. The CFGM(1,N) model shows better convergence and interpretability than three heuristic algorithms and two classical approaches. Our findings suggest that China’s technology and economic system is still relatively coordinated. Results also reveal a strong negative cooperation between the comprehensive human input and the comprehensive capital investment in this system. First published online 19 November 202
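A λ-fuzzy (Sugeno) measure of the kind paired with the Choquet integral here is fully determined by per-index densities g_i: the parameter λ is the unique nonzero root of 1 + λ = Π_i(1 + λ·g_i), and subset worths follow from g(A ∪ {i}) = g(A) + g_i + λ·g(A)·g_i. A minimal Python sketch with illustrative densities (not the paper's); the bisection below assumes Σ g_i < 1, so λ > 0:

```python
def solve_lambda(densities, tol=1e-12):
    """Find the nonzero root lam of prod(1 + lam*g_i) = 1 + lam.
    Assumes sum(densities) < 1, so the root is positive."""
    def f(lam):
        prod = 1.0
        for g in densities:
            prod *= 1.0 + lam * g
        return prod - (1.0 + lam)
    lo, hi = 1e-9, 1.0
    while f(hi) < 0:          # expand until the bracket contains the root
        hi *= 2.0
    while hi - lo > tol:      # bisection
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def lam_measure(subset_densities, lam):
    """Worth of a subset, built by iterating
    g(A U {i}) = g(A) + g_i + lam * g(A) * g_i."""
    g = 0.0
    for gi in subset_densities:
        g = g + gi + lam * g * gi
    return g
```

A quick check on the construction: with the solved λ, the worth of the full index set comes out to 1, as a normalized fuzzy measure requires.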