
    Method of Classification for Multisource Data in Remote Sensing Based on Interval-Valued Probabilities

    This work was supported by NASA Grant No. NAGW-925, “Earth Observation Research - Using Multistage EOS-like Data” (Principal Investigators: David A. Landgrebe and Chris Johannsen). The Anderson River SAR/MSS data set was acquired, preprocessed, and loaned to us by the Canada Centre for Remote Sensing, Department of Energy, Mines and Resources, Government of Canada.
    The importance of utilizing multisource data in ground-cover classification lies in the fact that improvements in classification accuracy can be achieved through the additional independent features provided by separate sensors. However, it should be recognized that information and knowledge from most available data sources in the real world are neither certain nor complete. We refer to such a body of uncertain, incomplete, and sometimes inconsistent information as “evidential information.” The objective of this research is to develop a mathematical framework within which various applications can be made with multisource data in remote sensing and geographic information systems. The methodology described in this report has evolved from “evidential reasoning,” where each data source is considered as providing a body of evidence with a certain degree of belief. The degrees of belief based on the body of evidence are represented by “interval-valued (IV) probabilities” rather than by conventional point-valued probabilities, so that uncertainty can be embedded in the measures. There are three fundamental problems in multisource data analysis based on IV probabilities: (1) how to represent bodies of evidence by IV probabilities, (2) how to combine IV probabilities to give an overall assessment of the combined body of evidence, and (3) how to make a decision when the statistical evidence is given by IV probabilities. This report first introduces an axiomatic approach to IV probabilities, in which an IV probability is defined by a pair of set-theoretic functions satisfying some pre-specified axioms. On the basis of this approach, the report focuses on the representation of statistical evidence by IV probabilities and the combination of multiple bodies of evidence. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision strategies. This report also focuses on the development of decision rules over IV probabilities from the viewpoint of statistical pattern recognition. The proposed method, the so-called “evidential reasoning” method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional Maximum Likelihood method.
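    To make the belief/plausibility intervals and the combination step concrete, here is a minimal Python sketch using the standard Dempster–Shafer formulation. The frame, class names, sensor labels, and mass values are invented for illustration; the report's own axiomatic IV-probability framework is more general than this.

```python
# A minimal sketch of interval-valued (belief/plausibility) representation and
# Dempster's rule of combination. All frames, class names, and mass values
# below are hypothetical.

FRAME = frozenset({"forest", "water", "urban"})  # hypothetical ground-cover classes

def combine(m1, m2):
    """Dempster's rule: conjunctive combination with conflict renormalization."""
    raw, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                raw[inter] = raw.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {s: w / (1.0 - conflict) for s, w in raw.items()}

def belief(m, s):
    """Lower bound: total mass committed to subsets of s."""
    return sum(w for a, w in m.items() if a <= s)

def plausibility(m, s):
    """Upper bound: total mass not contradicting s."""
    return sum(w for a, w in m.items() if a & s)

# Two hypothetical sensors (say, MSS and SAR) as separate bodies of evidence.
m_mss = {frozenset({"forest"}): 0.6, frozenset({"forest", "water"}): 0.3, FRAME: 0.1}
m_sar = {frozenset({"forest"}): 0.4, frozenset({"urban"}): 0.2, FRAME: 0.4}

m = combine(m_mss, m_sar)
for cls in sorted(FRAME):
    s = frozenset({cls})
    print(f"{cls}: [{belief(m, s):.3f}, {plausibility(m, s):.3f}]")  # IV probability
```

    Each class ends up with an interval [belief, plausibility] rather than a single point probability, which is the representational shift the report argues for.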

    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions satisfying some pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and call for more intelligent decision strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the dimensionally huge data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
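    The abstract notes that deciding over intervals is harder than deciding over point probabilities. Two generic interval decision rules are sketched below; the interval values are invented, and these are standard textbook rules (maximin and interval dominance), not necessarily the exact rules the paper develops.

```python
# A minimal sketch of decision-making over interval-valued class supports.
# The intervals are hypothetical illustrations.

def maximin(intervals):
    """Choose the class with the greatest lower bound (conservative rule)."""
    return max(intervals, key=lambda c: intervals[c][0])

def interval_dominance(intervals):
    """Keep every class c unless some other class has a lower bound
    exceeding c's upper bound (i.e. c is dominated)."""
    survivors = []
    for c, (lo, hi) in intervals.items():
        if all(lo2 <= hi for c2, (lo2, _hi2) in intervals.items() if c2 != c):
            survivors.append(c)
    return survivors

supports = {"forest": (0.45, 0.70), "water": (0.10, 0.30), "urban": (0.20, 0.55)}
print(maximin(supports))             # -> 'forest'
print(interval_dominance(supports))  # -> ['forest', 'urban']; 'water' is dominated
```

    Note that interval dominance may leave several candidate classes, which is exactly why the decision process over IV probabilities requires more deliberate strategies.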

    Preference fusion and Condorcet's Paradox under uncertainty

    Facing an unknown situation, a person may be unable to firmly elicit his or her preferences over different alternatives and therefore tends to express uncertain preferences. Given a community of persons expressing uncertain preferences over a set of alternatives, a preference fusion process is required to obtain a collective opinion representative of the whole community. The aim of this work is to propose a preference fusion method that copes with uncertainty and escapes the Condorcet paradox. To model preferences under uncertainty, we develop a model of preferences based on belief function theory that accurately describes and captures the uncertainty associated with individual or collective preferences. This work improves and extends the contribution presented in a previous work. The benefits of our contribution are twofold. On the one hand, we propose a qualitative and expressive preference modeling strategy based on belief-function theory which scales better with the number of sources. On the other hand, we propose an incremental distance-based algorithm (using the Jousselme distance) for the construction of the collective preference order that avoids the Condorcet paradox.
    Comment: International Conference on Information Fusion, Jul 2017, Xi'an, China
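    The distance the paper builds on is well defined independently of the paper: the Jousselme distance between two mass functions weighs mass differences by the Jaccard similarity of the focal sets. Here is a small sketch; the frame of alternatives and the mass assignments are invented.

```python
# A minimal sketch of the Jousselme distance between two mass functions:
# d(m1, m2) = sqrt(0.5 * (m1 - m2)^T D (m1 - m2)), with D the Jaccard matrix.
from itertools import combinations
import numpy as np

def jousselme(m1, m2, frame):
    subsets = [frozenset(c) for r in range(1, len(frame) + 1)
               for c in combinations(sorted(frame), r)]
    v1 = np.array([m1.get(s, 0.0) for s in subsets])
    v2 = np.array([m2.get(s, 0.0) for s in subsets])
    # Jaccard similarity matrix: D[i, j] = |Ai & Aj| / |Ai | Aj|
    D = np.array([[len(a & b) / len(a | b) for b in subsets] for a in subsets])
    d = v1 - v2
    return float(np.sqrt(0.5 * d @ D @ d))

frame = {"a", "b", "c"}  # hypothetical alternatives
m1 = {frozenset({"a"}): 0.7, frozenset(frame): 0.3}
m2 = {frozenset({"b"}): 0.5, frozenset({"a", "b"}): 0.5}
print(jousselme(m1, m2, frame))
```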

    Fuzzy Interval-Valued Multi Criteria Based Decision Making for Ranking Features in Multi-Modal 3D Face Recognition

    Soodamani Ramalingam, 'Fuzzy interval-valued multi criteria based decision making for ranking features in multi-modal 3D face recognition', Fuzzy Sets and Systems, in-press version available online 13 June 2017. This is an Open Access paper, made available under the Creative Commons license CC BY 4.0 (https://creativecommons.org/licenses/by/4.0/).
    This paper describes an application of multi-criteria decision making (MCDM) for multi-modal fusion of features in a 3D face recognition system. A decision-making process is outlined that is based on the performance of multi-modal features in a face recognition task involving a set of 3D face databases. In particular, the fuzzy interval-valued MCDM technique called TOPSIS is applied for ranking and deciding on the best choice of multi-modal features at the decision stage. It provides a formal mechanism for benchmarking their performance against a set of criteria. The technique demonstrates its ability to scale up the set of multi-modal features.
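    The paper applies a fuzzy interval-valued variant of TOPSIS; the crisp core of TOPSIS, which the variant extends, is easy to state in code. The decision matrix, weights, and criteria below are invented (rows standing in for multi-modal feature sets, columns for criteria such as accuracy and cost).

```python
# A minimal sketch of crisp TOPSIS ranking. The fuzzy interval-valued
# extension used in the paper replaces these crisp scores with intervals.
import numpy as np

def topsis(matrix, weights, benefit):
    """Rank alternatives by closeness to the ideal solution."""
    norm = matrix / np.linalg.norm(matrix, axis=0)  # vector-normalize criteria
    v = norm * weights                              # weighted normalized matrix
    ideal = np.where(benefit, v.max(axis=0), v.min(axis=0))
    anti = np.where(benefit, v.min(axis=0), v.max(axis=0))
    d_pos = np.linalg.norm(v - ideal, axis=1)
    d_neg = np.linalg.norm(v - anti, axis=1)
    return d_neg / (d_pos + d_neg)                  # closeness coefficient in [0, 1]

# 3 hypothetical feature sets scored on accuracy (benefit) and runtime (cost).
scores = np.array([[0.92, 1.8], [0.88, 0.9], [0.95, 3.2]])
closeness = topsis(scores, np.array([0.7, 0.3]), np.array([True, False]))
print(np.argsort(-closeness))  # feature-set indices, best first
```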

    Building Program Vector Representations for Deep Learning

    Deep learning has made significant breakthroughs in various fields of artificial intelligence. Its advantages include the ability to capture highly complicated features and weak involvement of human engineering. However, it has remained virtually impossible to use deep learning to analyze programs, since deep architectures cannot be trained effectively with pure back-propagation. In this pioneering paper, we propose the "coding criterion" to build program vector representations, which are the premise of deep learning for program analysis. Our representation learning approach directly makes deep learning a reality in this new field. We evaluate the learned vector representations both qualitatively and quantitatively. We conclude, based on the experiments, that the coding criterion is successful in building program representations. To evaluate whether deep learning is beneficial for program analysis, we feed the representations to deep neural networks and achieve higher accuracy in the program classification task than "shallow" methods such as logistic regression and support vector machines. This result confirms the feasibility of deep learning for analyzing programs and gives primary evidence of its success in this new field. We believe deep learning will become an outstanding technique for program analysis in the near future.
    Comment: This paper was submitted to ICSE'1
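    The intuition behind a "coding criterion" for program representations can be sketched as follows: a parent AST node's vector should be reconstructable from its children's vectors, trained against a negative sample. This sketch is a simplified guess at that idea (averaged children, a shared transform, a hinge loss); the node types and all parameters are hypothetical, not the paper's exact model.

```python
# A simplified sketch of a coding-criterion-style objective over AST nodes.
import numpy as np

rng = np.random.default_rng(0)
DIM = 16
# Hypothetical AST node types with randomly initialized embeddings.
emb = {t: rng.normal(scale=0.1, size=DIM) for t in ["If", "Cond", "Then", "Else"]}
W = rng.normal(scale=0.1, size=(DIM, DIM))  # shared child-to-parent transform

def reconstruct(children):
    """Predict a parent vector as the transformed average of its children."""
    avg = np.mean([emb[c] for c in children], axis=0)
    return np.tanh(W @ avg)

def coding_loss(parent, children, negative):
    """Hinge loss: the true parent should fit its children better than a
    randomly substituted node (negative sampling)."""
    d_pos = np.sum((emb[parent] - reconstruct(children)) ** 2)
    d_neg = np.sum((emb[negative] - reconstruct(children)) ** 2)
    return max(0.0, 1.0 + d_pos - d_neg)

print(coding_loss("If", ["Cond", "Then", "Else"], negative="Else"))
```

    Minimizing such a loss over a corpus of ASTs drives structurally similar nodes toward nearby vectors, which is what makes the representations usable as deep-network inputs.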

    A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the dimensionally huge data source into smaller and more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the Maximum Likelihood (ML) classification method when the Hughes phenomenon (the drop in classification accuracy as feature dimensionality grows relative to the number of training samples) is apparent.
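    The divide step, splitting a high-dimensional band set into correlated groups that can be modeled separately before evidence combination, can be illustrated with a simple heuristic. The greedy grouping and the simulated data below are stand-ins for the paper's actual correlation-based procedure.

```python
# A minimal sketch of dividing bands into groups by correlation structure.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 12))  # 500 pixels x 12 hypothetical bands
X[:, 6:] += 0.9 * X[:, :6]      # make later bands correlate with earlier ones

def group_bands(X, n_groups):
    """Greedily assign each band to the group it correlates with most."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    groups = [[g] for g in range(n_groups)]  # seed groups with the first bands
    for band in range(n_groups, X.shape[1]):
        best = max(range(n_groups),
                   key=lambda g: np.mean([corr[band, b] for b in groups[g]]))
        groups[best].append(band)
    return groups

print(group_bands(X, n_groups=3))
```

    Each resulting group is small enough to estimate class statistics reliably; the per-group classification results then become the separate bodies of evidence to combine.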

    Specification and implementation of mapping rule visualization and editing : MapVOWL and the RMLEditor

    Visual tools are implemented to help users define how to generate Linked Data from raw data. This is possible thanks to mapping languages, which enable detaching mapping rules from the implementation that executes them. However, no thorough research has been conducted so far on how to visualize such mapping rules, especially when they become large and must consider multiple heterogeneous raw data sources and transformed data values. In the past, we proposed the RMLEditor, a visual graph-based user interface that allows users to easily create mapping rules for generating Linked Data from raw data. In this paper, we build on top of our existing work: we (i) specify a visual notation for graph visualizations used to represent mapping rules, (ii) introduce an approach for manipulating rules when large visualizations emerge, and (iii) propose an approach to uniformly visualize data fractions of raw data sources, combined with an interactive interface for uniform data fraction transformations. We perform two additional comparative user studies. The first compares the use of the visual notation to present mapping rules with the direct use of a mapping language, and reveals that the visual notation is preferred. The second compares the use of the graph-based RMLEditor for creating mapping rules with the form-based RMLx Visual Editor, and reveals that graph-based visualizations are preferred for creating mapping rules through the use of our proposed visual notation and the uniform representation of heterogeneous data sources and data values. (C) 2018 Elsevier B.V. All rights reserved.
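    The premise that mapping rules are data, detached from the engine that executes them, can be shown with a toy example. The rule format below is invented for illustration only; real RML rules are themselves expressed in RDF, not Python dicts.

```python
# A toy illustration of declarative mapping rules applied by a generic engine.
import csv, io

rule = {
    "subject": "http://example.org/person/{id}",  # URI template over raw columns
    "predicates": {"name": "http://xmlns.com/foaf/0.1/name",
                   "age": "http://example.org/age"},
}

raw = io.StringIO("id,name,age\n1,Alice,30\n2,Bob,25\n")

def execute(rule, rows):
    """A generic engine: applies any rule of this shape to any tabular source."""
    for row in rows:
        subject = rule["subject"].format(**row)
        for column, predicate in rule["predicates"].items():
            yield (subject, predicate, row[column])

for triple in execute(rule, csv.DictReader(raw)):
    print(triple)
```

    Because the rule is plain data, a tool like the RMLEditor can render and edit it visually without touching the execution engine.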

    Reliable Uncertain Evidence Modeling in Bayesian Networks by Credal Networks

    A reliable modeling of uncertain evidence in Bayesian networks based on a set-valued quantification is proposed. Both soft and virtual evidence are considered. We show that evidence propagation in this setup can be reduced to standard updating in an augmented credal network, equivalent to a set of consistent Bayesian networks. A characterization of the computational complexity of this task is derived, together with an efficient exact procedure for a subclass of instances. In the case of multiple pieces of uncertain evidence over the same variable, the proposed procedure can provide a set-valued version of the geometric approach to opinion pooling.
    Comment: 19 pages
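    A single-variable special case shows the flavor of set-valued virtual evidence: when the evidence likelihood is only known to lie in an interval, updating at the interval's extreme points yields an interval posterior. All numbers are invented, and this one-variable sketch sidesteps the full credal-network machinery the paper actually develops.

```python
# A minimal sketch of virtual evidence with a set-valued likelihood on one
# binary variable X; the posterior is monotone in the likelihood here, so the
# two extreme points bound it.

def posterior(prior, likelihood):
    """Bayes update of P(X=1) under virtual evidence, where likelihood is
    (P(obs | X=1), P(obs | X=0))."""
    l1, l0 = likelihood
    num = prior * l1
    return num / (num + (1.0 - prior) * l0)

prior = 0.3
# Uncertain virtual evidence: P(obs | X=1) known only to lie in [0.6, 0.9].
extremes = [(0.6, 0.2), (0.9, 0.2)]
bounds = sorted(posterior(prior, l) for l in extremes)
print(f"posterior P(X=1 | obs) in [{bounds[0]:.3f}, {bounds[1]:.3f}]")
```

    In a full credal network the same idea applies, but the bounds must be computed over the extreme points of every local credal set, which is the source of the complexity results the paper characterizes.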