34 research outputs found

    A method of classification for multisource data in remote sensing based on interval-valued probabilities

    An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, representation of statistical evidence and combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. By this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
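    A pair of set-theoretic lower and upper functions of this kind can be illustrated with the belief and plausibility functions of Dempster-Shafer theory. The following is a minimal Python sketch under that assumption; the ground-cover classes and mass values are hypothetical, not taken from the paper.

        def belief(mass, A):
            # Lower bound: total mass committed to subsets of A.
            return sum(m for B, m in mass.items() if B <= A)

        def plausibility(mass, A):
            # Upper bound: total mass whose focal element overlaps A.
            return sum(m for B, m in mass.items() if B & A)

        # Hypothetical masses over three ground-cover classes; the 0.2 on the
        # whole frame models ignorance that a point-valued probability cannot.
        frame = frozenset({"forest", "water", "urban"})
        mass = {
            frozenset({"forest"}): 0.5,
            frozenset({"forest", "water"}): 0.3,  # cannot tell the two apart
            frame: 0.2,
        }

        for label in ("forest", "water"):
            A = frozenset({label})
            print(label, (belief(mass, A), plausibility(mass, A)))
        # forest -> (0.5, 1.0), water -> (0.0, 0.5): interval-valued supports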

    Dynamics of Inductive Inference in a Unified Framework

    We present a model of inductive inference that includes, as special cases, Bayesian reasoning, case-based reasoning, and rule-based reasoning. This unified framework allows us to examine, positively or normatively, how the various modes of inductive inference can be combined and how their relative weights change endogenously. We establish conditions under which an agent who does not know the structure of the data generating process will decrease, over the course of her reasoning, the weight of credence put on Bayesian vs. non-Bayesian reasoning. We show that even random data can make certain theories seem plausible and hence increase the weight of rule-based vs. case-based reasoning, leading the agent in some cases to cycle between being rule-based and case-based. We identify conditions under which minimax regret criteria will not be effective. Keywords: induction, Bayesian updating, case-based reasoning, inference.

    Method of Classification for Multisource Data in Remote Sensing Based on Interval-Valued Probabilities

    This work was supported by NASA Grant No. NAGW-925, “Earth Observation Research - Using Multistage EOS-like Data” (Principal Investigators: David A. Landgrebe and Chris Johannsen). The Anderson River SAR/MSS data set was acquired, preprocessed, and loaned to us by the Canada Centre for Remote Sensing, Department of Energy, Mines, and Resources, of the Government of Canada. The importance of utilizing multisource data in ground-cover classification lies in the fact that improvements in classification accuracy can be achieved at the expense of additional independent features provided by separate sensors. However, it should be recognized that information and knowledge from most available data sources in the real world are neither certain nor complete. We refer to such a body of uncertain, incomplete, and sometimes inconsistent information as “evidential information.” The objective of this research is to develop a mathematical framework within which various applications can be made with multisource data in remote sensing and geographic information systems. The methodology described in this report has evolved from “evidential reasoning,” where each data source is considered as providing a body of evidence with a certain degree of belief. The degrees of belief based on the body of evidence are represented by “interval-valued (IV) probabilities” rather than by conventional point-valued probabilities so that uncertainty can be embedded in the measures. There are three fundamental problems in multisource data analysis based on IV probabilities: (1) how to represent bodies of evidence by IV probabilities, (2) how to combine IV probabilities to give an overall assessment of the combined body of evidence, and (3) how to make a decision when the statistical evidence is given by IV probabilities. This report first introduces an axiomatic approach to IV probabilities, where the IV probability is defined by a pair of set-theoretic functions which satisfy some pre-specified axioms. On the basis of this approach, the report focuses on representation of statistical evidence by IV probabilities and combination of multiple bodies of evidence. Although IV probabilities provide an innovative means for the representation and combination of evidential information, they make the decision process rather complicated and entail more intelligent decision-making strategies. The report therefore also focuses on the development of decision rules over IV probabilities from the viewpoint of statistical pattern recognition. The proposed method, the so-called “evidential reasoning” method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case, a set of multiple sources is obtained by dividing the high-dimensional data into smaller and more manageable pieces based on global statistical correlation information. By a divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
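    Problem (3), deciding when the evidence is interval-valued, admits several rules. As a hypothetical stand-in (not necessarily the report's rule), the sketch below applies interval dominance followed by a maximin tie-break: discard any class whose upper bound falls below some other class's lower bound, then pick the largest lower bound among the survivors.

        def decide(intervals):
            # intervals: {class: (lower, upper)} supports, e.g. [Bel, Pl].
            undominated = {
                c: (lo, up) for c, (lo, up) in intervals.items()
                # Keep c unless its upper bound is beaten by a rival's lower bound.
                if all(up >= lo2 for c2, (lo2, _) in intervals.items() if c2 != c)
            }
            # Maximin: choose the best guaranteed (lower-bound) support.
            return max(undominated, key=lambda c: undominated[c][0])

        # Hypothetical [Bel, Pl] intervals from a combined body of evidence.
        print(decide({"forest": (0.5, 0.9), "water": (0.1, 0.4), "urban": (0.2, 0.6)}))
        # "water" is dominated (0.4 < 0.5), and "forest" wins on the lower bound.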

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected Works), Vol. 4

    The fourth volume on Advances and Applications of Dezert-Smarandache Theory (DSmT) for information fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics. The contributions (see the list of articles published in this book, at the end of the volume) have been published or presented after the dissemination of the third volume (2009, http://fs.unm.edu/DSmT-book3.pdf) in international conferences, seminars, workshops, and journals. The first part of this book presents the theoretical advancement of DSmT, dealing with belief functions, conditioning and deconditioning, the Analytic Hierarchy Process, decision making, multi-criteria analysis, evidence theory, combination rules, evidence distance, conflicting belief, sources of evidence with different importance and reliability, importance of sources, pignistic probability transformation, qualitative reasoning under uncertainty, imprecise belief structures, 2-tuple linguistic labels, the Electre Tri method, hierarchical proportional redistribution, basic belief assignments, subjective probability measures, Smarandache codification, neutrosophic logic, outranking methods, Dempster-Shafer Theory, the Bayes fusion rule, frequentist probability, mean square error, controlling factors, optimal assignment solutions, data association, the Transferable Belief Model, and others. More applications of DSmT have emerged in the years since the appearance of the third volume in 2009. Accordingly, the second part of this volume is about applications of DSmT in connection with Electronic Support Measures, belief functions, sensor networks, ground moving target and multiple target tracking, Vehicle-Borne Improvised Explosive Devices, the Belief Interacting Multiple Model filter, seismic and acoustic sensors, Support Vector Machines, alarm classification, the ability of the human visual system, the Uncertainty Representation and Reasoning Evaluation Framework, threat assessment, handwritten signature verification, automatic aircraft recognition, Dynamic Data-Driven Application Systems, adjustment of secure communication trust analysis, and so on. Finally, the third part presents a list of references related to DSmT, published or presented over the years since its inception in 2004, chronologically ordered.
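    Among the combination rules mentioned above, DSmT's Proportional Conflict Redistribution rule no. 5 (PCR5) is representative: the partial conflict between two focal elements is redistributed back to those two elements in proportion to the masses that caused it. A minimal two-source Python sketch follows, with illustrative masses (positive, summing to 1).

        def pcr5(m1, m2):
            # m1, m2: basic belief assignments, {frozenset focal element: mass}.
            combined = {}
            for X, mx in m1.items():
                for Y, my in m2.items():
                    inter = X & Y
                    if inter:
                        # Conjunctive consensus lands on the intersection.
                        combined[inter] = combined.get(inter, 0.0) + mx * my
                    else:
                        # Redistribute the conflict mx*my back to X and Y,
                        # proportionally to the masses that produced it.
                        combined[X] = combined.get(X, 0.0) + mx * mx * my / (mx + my)
                        combined[Y] = combined.get(Y, 0.0) + my * my * mx / (mx + my)
            return combined

        A, B = frozenset({"A"}), frozenset({"B"})
        print(pcr5({A: 0.6, B: 0.4}, {A: 0.2, B: 0.8}))
        # Masses still sum to 1: unlike Dempster's rule, no global normalization.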

    A generic framework for context-dependent fusion with application to landmine detection.

    For complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, combination of multiple classifiers is playing an increasing role in solving these complex pattern recognition problems and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these are global, in that they assign each classifier a degree of worthiness that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are either computationally expensive and/or perform these two tasks independently of each other. However, feature space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF was designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function. The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high-dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the output of the different algorithms within each context. For some applications this can be too restrictive, and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first is based on Neural Networks (CELF-NN) and the second on Fuzzy Integrals (CELF-FI).
    The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interaction between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. Then, the features are used to identify the best context, and the fusion parameters of this context are used to fuse the individual confidence values. For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electro-Magnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g. mines of the same type, mines buried at the same site, etc.) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms all these methods.
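    The dissertation optimizes context identification and fusion weights jointly; as a rough illustration only, the sketch below performs the two steps sequentially (k-means contexts, then per-context least-squares aggregation weights). All names and data here are hypothetical, not CELF's actual objective.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(0)

        # Toy setup: two expert algorithms each emit a confidence per sample.
        n = 400
        features = rng.normal(size=(n, 2))            # feature space to partition
        truth = (features[:, 0] + features[:, 1] > 0).astype(float)
        conf = np.column_stack([
            truth + rng.normal(0.2, 0.6, n),          # expert 1: noisy everywhere
            np.where(features[:, 0] > 0,              # expert 2: good only where x0 > 0
                     truth + rng.normal(0.0, 0.2, n),
                     rng.uniform(0.0, 1.0, n)),
        ])

        # Step 1: partition the feature space into contexts (a sequential
        # stand-in for the context identification component).
        contexts = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)

        # Step 2: per-context linear aggregation weights by least squares.
        weights = {}
        for c in range(4):
            idx = contexts == c
            w, *_ = np.linalg.lstsq(conf[idx], truth[idx], rcond=None)
            weights[c] = w                            # expert worthiness, local to c

        fused = np.array([conf[i] @ weights[contexts[i]] for i in range(n)])
        print({c: np.round(w, 2) for c, w in weights.items()})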

    Brain Tumor Diagnosis Support System: A Decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques are heavily dependent on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Moreover, widespread use of MR spectroscopy requires specialized processing and assessment of the data, along with a clear and fast display of the results as images or maps for routine medical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine these variations in order to obtain consistently high accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software for discerning various brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that involves the use of a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from MRI. The fusion procedure is grounded in the Dempster-Shafer theory of evidence fusion. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the outcomes show that the methodology developed in this thesis achieves higher accuracy and higher computational efficiency.
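    A Dempster-Shafer fusion step of this kind can be sketched as follows; the tumor classes, reliabilities, and confidences are hypothetical, not from the thesis. Each classifier's class scores are discounted by a reliability factor, the leftover mass goes to the whole frame as ignorance, and the two assignments are combined with Dempster's rule.

        def dempster(m1, m2):
            # Dempster's rule: conjunctive combination, normalized by 1 - conflict.
            combined, conflict = {}, 0.0
            for X, mx in m1.items():
                for Y, my in m2.items():
                    inter = X & Y
                    if inter:
                        combined[inter] = combined.get(inter, 0.0) + mx * my
                    else:
                        conflict += mx * my
            return {X: m / (1.0 - conflict) for X, m in combined.items()}

        FRAME = frozenset({"glioma", "meningioma", "pituitary"})

        def to_bba(scores, reliability):
            # Discount class scores; leftover mass is ignorance on the frame.
            m = {frozenset({c}): reliability * s for c, s in scores.items()}
            m[FRAME] = 1.0 - reliability
            return m

        # Hypothetical per-class confidences from two MRI-based classifiers.
        m_a = to_bba({"glioma": 0.7, "meningioma": 0.2, "pituitary": 0.1}, 0.9)
        m_b = to_bba({"glioma": 0.6, "meningioma": 0.3, "pituitary": 0.1}, 0.8)
        print(dempster(m_a, m_b))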

    Multispace & Multistructure. Neutrosophic Transdisciplinarity (100 Collected Papers of Sciences), Vol. IV

    The fourth volume in my book series of “Collected Papers” includes 100 published and unpublished articles, notes, (preliminary) drafts containing ideas to be further investigated, scientific souvenirs, scientific blogs, project proposals, small experiments, solved and unsolved problems and conjectures, updated or alternative versions of previous papers, and short or long humanistic essays and letters to the editors, all collected over the previous three decades (1980-2010), though most are from the last decade (2000-2010); some were lost and found, and others are extended, diversified, or improved versions. This is an eclectic tome of 800 pages with papers in various fields of science, alphabetically listed, such as: astronomy, biology, calculus, chemistry, computer programming codification, economics and business and politics, education and administration, game theory, geometry, graph theory, information fusion, neutrosophic logic and set, non-Euclidean geometry, number theory, paradoxes, philosophy of science, psychology, quantum physics, scientific research methods, and statistics. It reflects my long-standing preoccupation and collaboration, as author, co-author, translator, co-translator, and editor, with many scientists from around the world. Many topics in this book are incipient and need to be expanded in future explorations.

    Auditing Symposium IX: Proceedings of the 1988 Touche Ross/University of Kansas Symposium on Auditing Problems

    Auditor evidential planning judgments / Arnold Wright, Theodore J. Mock
    Discussant's response to Auditor evidential planning judgments / Robert H. Temkin
    Relative importance of auditing to the accounting profession: Is auditing a profit center? / Norman R. Walker, Michael D. Doll
    Using and evaluating audit decision aids / Robert H. Ashton, John J. Willingham
    Discussant's response to The relative importance of auditing to the accounting profession: Is auditing a profit center? / Zoe-Vonna Palmrose
    Accounting standards and professional ethics / Arthur R. Wyatt
    Discussant's response to Using and evaluating audit decision aids / Stephen J. Aldersley
    Audit theory paradigms / Jack C. Robertson
    Discussant's response to Audit theory paradigms / Donald L. Neebes
    Why the auditing standards on evaluating internal control needed to be replaced / Jerry D. Sullivan
    Discussant's response to Why the auditing standards on evaluating internal control needed to be replaced / William R. Kinney
    AUDITOR'S ASSISTANT: A knowledge engineering tool for audit decisions / Glenn Shafer, Prakash P. Shenoy, Rajendra P. Srivastava
    Discussant's response to AUDITOR'S ASSISTANT: A knowledge engineering tool for audit decisions / John B. Sullivan
    Reports on the application of accounting principles -- A review of SAS 50 / James A. Johnson
    Discussant's response to Reports on the application of accounting principles -- A review of SAS 50 / Gary L. Holstrum

    A general cognitive framework for context-aware systems: extensions and applications for high level information fusion approaches

    Context-aware systems aim at the development of computational systems that process data acquired from different data sources and adapt their behaviour in order to provide the 'right' information, at the 'right' time, in the 'right' place, in the 'right' way, to the 'right' person (Fischer, 2012). Traditionally, computational research has tried to answer these needs by means of low-level algorithms. In recent years, the combination of numeric and symbolic approaches has offered the opportunity to create systems that deal with these issues. However, although the performance of algorithms and the quality of the data directly provided by computers and devices have quickly improved, the symbolic models used to represent the resulting knowledge have not yet been adapted to smart environments. This lack of representation makes it impossible to take advantage of the semantic quality of the information provided by new sensors. This dissertation proposes a set of extensions and applications focused on a cognitive framework for the implementation of context-aware systems, based on a general model inspired by the Information Fusion paradigm. This model is organized in several abstraction levels, from low-level raw data to high-level scene interpretation, whose structure is determined by a set of ontologies. Each ontology level provides a skeleton that includes general concepts and relations to describe entities and their connections. This structure has been designed to promote extensibility and modularity, and can be refined to apply the model in specific domains. The framework combines a priori context knowledge, represented with ontologies, with real data coming from sensors to support logic-based high-level interpretation of the current situation and to automatically generate feedback recommendations that adjust data acquisition procedures. This work advocates the introduction of general-purpose cognitive layers in order to obtain a representation closer to human cognition, generate additional knowledge, and improve the high-level interpretation. The extensibility and adaptability of the basic ontology levels are demonstrated with the introduction of transversal semantic layers, which can represent information at several granularity levels of knowledge using a common formalism. Context-based systems must also be able to reason under uncertainty, yet the reasoning associated with ontologies has been limited to classical description logic mechanisms. This research therefore also tackles the problem of reasoning under uncertainty through a logic-based paradigm for abductive reasoning: the Belief-Argumentation System. The main contribution of this dissertation is the adaptation of the general architecture and the theoretical proposals to several context-aware application areas such as Ambient Intelligence, Social Signal Processing, and surveillance systems. The implementation of prototypes and examples for these areas is described throughout this dissertation to progressively illustrate the improvements and extensions of the framework. To initially depict the general model, its components, and the basic reasoning mechanisms, a video-based Ambient Intelligence application is presented. The advantages and features of the framework extensions through transversal cognitive layers are demonstrated in a Social Signal Processing case for the elaboration of automatic market research. Finally, the functioning of the system under uncertainty is illustrated with several examples supporting decision makers in the detection of potential threats in common harbor scenarios.
    Programa Oficial de Doctorado en Ciencia y Tecnología Informática (International Mention in the doctoral degree). Defense committee - President: José Manuel Molina López; Secretary: Ángel Arroyo; Member: Nayat Sánchez P
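    The stepped abstraction levels of the framework can be caricatured with plain Python types and rules, lifting a raw sensor track to a high-level situation label. Everything below, including the harbor-surveillance entities and thresholds, is hypothetical and only mimics the level structure (the framework itself is ontology-based).

        from dataclasses import dataclass

        @dataclass
        class Track:                  # low level: raw data from a sensor
            vessel_id: str
            speed_kn: float
            in_restricted_zone: bool
            transponder_on: bool

        @dataclass
        class Situation:              # high level: interpreted scene
            vessel_id: str
            label: str
            rationale: str

        def interpret(track: Track) -> Situation:
            # Toy rule base standing in for ontology-based interpretation.
            if track.in_restricted_zone and not track.transponder_on:
                return Situation(track.vessel_id, "potential threat",
                                 "dark vessel inside a restricted zone")
            if track.speed_kn > 30:
                return Situation(track.vessel_id, "anomalous",
                                 "speed above the harbor limit")
            return Situation(track.vessel_id, "normal", "no rule fired")

        print(interpret(Track("V-17", speed_kn=12.0,
                              in_restricted_zone=True, transponder_on=False)))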