
    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means of representing and combining evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally large data into smaller, more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
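The combination step this abstract alludes to is closely related to Dempster's rule over belief functions, where the belief/plausibility pair forms exactly the kind of interval-valued probability described. As a generic illustration only (not the authors' exact IV-probability calculus), the sketch below combines two hypothetical sensor mass functions with Dempster's rule and reads off the interval for one class; the class names and masses are invented:

```python
def combine_dempster(m1, m2):
    """Combine two mass functions (dicts mapping frozenset -> mass)
    with Dempster's rule, normalizing away conflicting mass."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:  # focal elements agree on a non-empty intersection
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:      # contradictory evidence contributes to the conflict mass
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    k = 1.0 - conflict
    return {s: v / k for s, v in combined.items()}

def belief_plausibility(m, hypothesis):
    """Belief/plausibility pair: the interval bracketing the probability
    of a hypothesis (a set of classes)."""
    bel = sum(v for s, v in m.items() if s <= hypothesis)
    pl = sum(v for s, v in m.items() if s & hypothesis)
    return bel, pl

# Two hypothetical sensors over ground-cover classes {water, forest, urban}
m_mss = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.3,
         frozenset({"water", "forest", "urban"}): 0.1}
m_sar = {frozenset({"water"}): 0.5, frozenset({"forest"}): 0.2,
         frozenset({"water", "forest", "urban"}): 0.3}
m = combine_dempster(m_mss, m_sar)
bel, pl = belief_plausibility(m, frozenset({"water"}))  # interval [bel, pl]
```

Treating each source as a separate mass function and combining pairwise is what lets such a scheme mix parametric and nonparametric evidence before a decision rule is applied to the resulting intervals.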

    A systematic review on multi-criteria group decision-making methods based on weights: analysis and classification scheme

    Interest in group decision-making (GDM) has increased prominently over the last decade. Access to global databases, sophisticated sensors that can obtain multiple inputs, and complex problems requiring opinions from several experts have driven interest in data aggregation. Consequently, the field has been widely studied from several viewpoints and multiple approaches have been proposed. Nevertheless, there is a lack of a general framework. Moreover, this problem is exacerbated in the case of experts' weighting methods, one of the most widely used techniques for dealing with multiple-source aggregation. This lack of a general classification scheme, or of a guide to assist expert knowledge, leads to ambiguity or misreading for readers, who may be overwhelmed by the large amount of unclassified information currently available. To remedy this situation, a general GDM framework is presented which divides and classifies all data aggregation techniques, focusing on and expanding the classification of experts' weighting methods in terms of analysis type by carrying out an in-depth literature review. Results are not only classified but also analysed and discussed with regard to multiple characteristics, such as the MCDM methods in which they are applied, the type of data used, the ideal solutions considered, or when they are applied. Furthermore, general requirements such as initial influence or component-division considerations supplement this analysis. As a result, this paper provides not only a general classification scheme and a detailed analysis of experts' weighting methods but also a road map for researchers working on GDM topics and a guide for experts who use these methods. Finally, six significant contributions to future research pathways are provided in the conclusions.
    The first author acknowledges support from the Spanish Ministry of Universities [grant number FPU18/01471]. The second and third authors wish to acknowledge their support from the Serra Hunter programme. Finally, this work was supported by the Catalan agency AGAUR through its research group support programme (2017SGR00227). This research is part of the R&D project IAQ4EDU, reference no. PID2020-117366RB-I00, funded by MCIN/AEI/10.13039/501100011033.
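Similarity-based experts' weighting, one of the families such a review classifies, can be illustrated with a minimal sketch: each expert's weight decays with the distance between their score vector and the group mean, so outlying opinions count less. This is a generic scheme for illustration, not a specific method from the survey, and the scores are invented:

```python
def expert_weights_by_similarity(opinions):
    """Weight each expert inversely to the Euclidean distance between
    their score vector and the group mean (a common consensus-based scheme)."""
    n = len(opinions)
    dim = len(opinions[0])
    mean = [sum(o[j] for o in opinions) / n for j in range(dim)]
    dists = [sum((o[j] - mean[j]) ** 2 for j in range(dim)) ** 0.5
             for o in opinions]
    sims = [1.0 / (1.0 + d) for d in dists]  # similarity in (0, 1]
    total = sum(sims)
    return [s / total for s in sims]         # normalized to sum to 1

def aggregate(opinions, weights):
    """Weighted average of the experts' score vectors."""
    dim = len(opinions[0])
    return [sum(w * o[j] for w, o in zip(weights, opinions))
            for j in range(dim)]

# Three hypothetical experts scoring two alternatives on a 0-10 scale;
# expert 3 is a deliberate outlier and should receive the smallest weight.
opinions = [[8.0, 4.0], [7.0, 5.0], [2.0, 9.0]]
w = expert_weights_by_similarity(opinions)
group = aggregate(opinions, w)
```

The design choice here (distance to the mean opinion) is only one of the analysis types a classification scheme would distinguish; others derive weights from expertise, consistency, or pairwise similarity.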

    Distributed Linguistic Representations in Decision Making: Taxonomy, Key Elements and Applications, and Challenges in Data Science and Explainable Artificial Intelligence

    Distributed linguistic representations are powerful tools for modelling the uncertainty and complexity of preference information in linguistic decision making. To provide a comprehensive perspective on the development of distributed linguistic representations in decision making, we present a taxonomy of existing distributed linguistic representations. We then review the key elements and applications of distributed linguistic information processing in decision making, including distance measurement, aggregation methods, distributed linguistic preference relations, and distributed linguistic multiple attribute decision making models. Finally, we discuss ongoing challenges and future research directions from the perspective of data science and explainable artificial intelligence.
    National Natural Science Foundation of China (NSFC) grants 71971039, 71421001, 71910107002, 71771037, 71874023, and 71871149; Sichuan University grants sksyl201705 and 2018hhs-5.
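A distributed linguistic assessment spreads belief across a linguistic term set rather than committing to a single term. Two of the key elements reviewed above, aggregation and expectation-based ranking, can be sketched minimally as follows; the term set, weights, and distributions are illustrative assumptions, not taken from the paper:

```python
# Hypothetical linguistic term set S = {s0, ..., s4}
TERMS = ["very poor", "poor", "fair", "good", "very good"]

def aggregate_distributions(dists, weights):
    """Weighted average of distribution assessments defined
    over the same linguistic term set."""
    n = len(TERMS)
    return [sum(w * d[i] for w, d in zip(weights, dists)) for i in range(n)]

def expectation(dist):
    """Expected term index of a distribution assessment,
    usable as a simple ranking score."""
    return sum(i * p for i, p in enumerate(dist))

# Two experts' distributed assessments of one alternative (masses sum to 1)
d1 = [0.0, 0.1, 0.2, 0.5, 0.2]
d2 = [0.0, 0.0, 0.4, 0.4, 0.2]
agg = aggregate_distributions([d1, d2], [0.6, 0.4])
score = expectation(agg)  # higher means closer to "very good"
```

Because each assessment is a probability distribution over the term set, the weighted average is again a valid distribution, which is what makes this representation convenient for multi-expert aggregation.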

    Combination of Evidence in Dempster-Shafer Theory


    Method of Classification for Multisource Data in Remote Sensing Based on Interval-Valued Probabilities

    This work was supported by NASA Grant No. NAGW-925, "Earth Observation Research - Using Multistage EOS-like Data" (Principal Investigators: David A. Landgrebe and Chris Johannsen). The Anderson River SAR/MSS data set was acquired, preprocessed, and loaned to us by the Canada Centre for Remote Sensing, Department of Energy, Mines, and Resources, Government of Canada. The importance of utilizing multisource data in ground-cover classification lies in the fact that improvements in classification accuracy can be achieved at the expense of additional independent features provided by separate sensors. However, it should be recognized that information and knowledge from most available data sources in the real world are neither certain nor complete. We refer to such a body of uncertain, incomplete, and sometimes inconsistent information as "evidential information." The objective of this research is to develop a mathematical framework within which various applications can be made with multisource data in remote sensing and geographic information systems. The methodology described in this report has evolved from "evidential reasoning," where each data source is considered as providing a body of evidence with a certain degree of belief. The degrees of belief based on the body of evidence are represented by "interval-valued (IV) probabilities" rather than by conventional point-valued probabilities, so that uncertainty can be embedded in the measures. There are three fundamental problems in multisource data analysis based on IV probabilities: (1) how to represent bodies of evidence by IV probabilities, (2) how to combine IV probabilities to give an overall assessment of the combined body of evidence, and (3) how to make a decision when the statistical evidence is given by IV probabilities.
    This report first introduces an axiomatic approach to IV probabilities, where the IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On the basis of this approach, the report focuses on the representation of statistical evidence by IV probabilities and the combination of multiple bodies of evidence. Although IV probabilities provide an innovative means of representing and combining evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The report therefore also develops decision rules over IV probabilities from the viewpoint of statistical pattern recognition. The proposed method, the so-called "evidential reasoning" method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally large data into smaller, more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.

    A cardinal dissensus measure based on the Mahalanobis distance

    In this paper we address the problem of measuring the degree of consensus/dissensus in a context where experts or agents express their opinions on alternatives or issues by means of cardinal evaluations. To this end we propose a new class of distance-based consensus models: the family of Mahalanobis dissensus measures for profiles of cardinal values. We set forth some meaningful properties of the Mahalanobis dissensus measures. Finally, an application to a real empirical example is presented and discussed.
    Ministerio de Economía, Industria y Competitividad (Projects CGL2008-06003-C03-03/CLI, ECO2012-32178, and ECO2012-31933).
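As a rough sketch of the idea (not the authors' exact definition), a Mahalanobis-type dissensus can be taken as the average pairwise squared Mahalanobis distance between expert profiles. The example below assumes a diagonal covariance, so its inverse reduces to per-criterion inverse variances, and all numbers are invented:

```python
def mahalanobis_sq(x, y, inv_diag):
    """Squared Mahalanobis distance under a diagonal covariance;
    inv_diag holds the inverse variance of each criterion."""
    return sum(w * (a - b) ** 2 for w, a, b in zip(inv_diag, x, y))

def dissensus(profiles, inv_diag):
    """Average pairwise squared Mahalanobis distance over all expert pairs;
    0 means perfect consensus."""
    n = len(profiles)
    if n < 2:
        return 0.0
    total = sum(mahalanobis_sq(profiles[i], profiles[j], inv_diag)
                for i in range(n) for j in range(i + 1, n))
    return total / (n * (n - 1) / 2)

# Three hypothetical experts evaluating two criteria; the second criterion
# has variance 4, so its inverse variance is 0.25 and differences on it
# count less than equal differences on the first criterion.
profiles = [[3.0, 7.0], [4.0, 6.0], [9.0, 1.0]]
d = dissensus(profiles, inv_diag=[1.0, 0.25])
```

Weighting coordinates by the inverse covariance is what distinguishes this family from plain Euclidean dissensus: correlated or high-variance criteria are prevented from dominating the disagreement score.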