29 research outputs found

    Brain Tumor Diagnosis Support System: A Decision Fusion Framework

    An important factor in providing effective and efficient therapy for brain tumors is early and accurate detection, which can increase survival rates. Current image-based tumor detection and diagnosis techniques are heavily dependent on interpretation by neuro-specialists and/or radiologists, making the evaluation process time-consuming and prone to human error and subjectivity. Moreover, widespread use of MR spectroscopy requires specialized processing and assessment of the data, as well as clear and fast presentation of the results as images or maps for routine clinical interpretation of an exam. Automatic brain tumor detection and classification have the potential to offer greater efficiency and more accurate predictions. However, the performance accuracy of automatic detection and classification techniques tends to depend on the specific image modality and is well known to vary from technique to technique. For this reason, it is prudent to examine the variation in performance of these methods in order to obtain consistently high accuracy. The goal of the proposed framework is to design, implement, and evaluate classification software that discriminates among brain tumor types on magnetic resonance imaging (MRI) using textural features. This thesis introduces a brain tumor detection support system that involves the use of a variety of tumor classifiers. The system is designed as a decision fusion framework that enables these multiple classifiers to analyze medical images, such as those obtained from magnetic resonance imaging (MRI). The fusion procedure is grounded in the Dempster-Shafer theory of evidence. Numerous experimental scenarios have been implemented to validate the efficiency of the proposed framework. Compared with alternative approaches, the results show that the methodology developed in this thesis achieves higher accuracy and higher computational efficiency.
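    The fusion step itself is not detailed in the abstract, but a minimal sketch of Dempster's rule of combination, the rule the framework is said to be grounded in, illustrates how mass assignments from two classifiers over candidate tumor classes could be fused; the class names and mass values below are hypothetical.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset hypotheses to mass)
    with Dempster's rule: intersect focal elements and renormalize by 1 - conflict."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {h: w / (1.0 - conflict) for h, w in combined.items()}

# Hypothetical masses from two tumor classifiers over {glioma, meningioma}
theta = frozenset({"glioma", "meningioma"})
m_texture = {frozenset({"glioma"}): 0.6, theta: 0.4}
m_shape = {frozenset({"glioma"}): 0.5, frozenset({"meningioma"}): 0.2, theta: 0.3}
print(dempster_combine(m_texture, m_shape))
```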

    A Theory of Factfinding: The Logic for Processing Evidence

    Academics have never agreed on a theory of proof. The darkest corner of anyone’s theory has concerned how legal decisionmakers logically should find facts. This Article pries open that cognitive black box. It does so by employing multivalent logic, which enables it to overcome the traditional probability problems that impeded all prior attempts. The result is the first-ever exposure of the proper logic for finding a fact or a case’s facts. The focus will be on the evidential processing phase, rather than the application of the standard of proof as tracked in my prior work. Processing evidence involves (1) reasoning inferentially from a piece of evidence to a degree of belief and of disbelief in the element to be proved, (2) aggregating pieces of evidence that all bear to some degree on one element in order to form a composite degree of belief and of disbelief in the element, and (3) considering the series of elemental beliefs and disbeliefs to reach a decision. Zeroing in, the factfinder in step #1 should connect each item of evidence to an element to be proved by constructing a chain of inferences, employing multivalent logic’s usual rules for conjunction and disjunction to form a belief function that reflects the belief and the disbelief in the element and also the uncommitted belief reflecting uncertainty. The factfinder in step #2 should aggregate, by weighted arithmetic averaging, the belief functions resulting from all the items of evidence that bear on any one element, creating a composite belief function for the element. The factfinder in step #3 does not need to combine elements, but instead should directly move to testing whether the degree of belief from each element’s composite belief function sufficiently exceeds the corresponding degree of disbelief. In sum, the factfinder should construct a chain of inferences to produce a belief function for each item of evidence bearing on an element, and then average them to produce for each element a composite belief function ready for the element-by-element standard of proof. This Article performs the task of mapping normatively how to reason from legal evidence to a decision on facts. More significantly, it constitutes a further demonstration of how embedded the multivalent-belief model is in our law
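    As a worked illustration of steps #2 and #3 described above, the following sketch (with made-up numbers) averages the per-item (belief, disbelief, uncommitted) triples bearing on one element by evidential weight, then checks whether the composite belief sufficiently exceeds the disbelief.

```python
def aggregate_element(belief_functions, weights):
    """Step #2: weighted arithmetic average of (belief, disbelief, uncommitted)
    triples produced by the items of evidence bearing on a single element."""
    total = sum(weights)
    bel = sum(w * b for w, (b, d, u) in zip(weights, belief_functions)) / total
    dis = sum(w * d for w, (b, d, u) in zip(weights, belief_functions)) / total
    unc = sum(w * u for w, (b, d, u) in zip(weights, belief_functions)) / total
    return bel, dis, unc

# Hypothetical items of evidence on one element: (belief, disbelief, uncommitted)
items = [(0.6, 0.1, 0.3), (0.4, 0.3, 0.3), (0.7, 0.0, 0.3)]
weights = [2.0, 1.0, 1.0]   # relative evidential weight of each item

bel, dis, unc = aggregate_element(items, weights)
# Step #3: element-by-element test that belief sufficiently exceeds disbelief
print(bel, dis, unc, bel > dis)
```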

    A generic framework for context-dependent fusion with application to landmine detection.

    For complex detection and classification problems involving data with large intra-class variations and noisy inputs, no single source of information can provide a satisfactory solution. As a result, the combination of multiple classifiers plays an increasing role in solving these complex pattern recognition problems and has proven to be a viable alternative to using a single classifier. Over the past few years, a variety of schemes have been proposed for combining multiple classifiers. Most of these are global, in that they assign each classifier a degree of worthiness that is averaged over the entire training data. This may not be the optimal way to combine the different experts, since the behavior of each one may not be uniform over the different regions of the feature space. To overcome this issue, a few local methods have been proposed in recent years. Local fusion methods aim to adapt the classifiers' worthiness to different regions of the feature space. First, they partition the input samples. Then, they identify the best classifier for each partition and designate it as the expert for that partition. Unfortunately, current local methods are computationally expensive and/or perform these two tasks independently of each other. However, feature space partitioning and algorithm selection are not independent, and their optimization should be simultaneous. In this dissertation, we introduce a new local fusion approach, called Context Extraction for Local Fusion (CELF). CELF is designed to adapt the fusion to different regions of the feature space. It takes advantage of the strengths of the different experts and overcomes their limitations. First, we describe the baseline CELF algorithm. We formulate a novel objective function that combines context identification and multi-algorithm fusion criteria into a joint objective function. The context identification component strives to partition the input feature space into different clusters (called contexts), while the fusion component strives to learn the optimal fusion parameters within each cluster. Second, we propose several variations of CELF to deal with different application scenarios. In particular, we propose an extension that includes a feature discrimination component (CELF-FD). This version is advantageous when dealing with high-dimensional feature spaces and/or when the number of features extracted by the individual algorithms varies significantly. CELF-CA is another extension of CELF that adds a regularization term to the objective function to introduce competition among the clusters and to find the optimal number of clusters in an unsupervised way. CELF-CA starts by partitioning the data into a large number of small clusters. As the algorithm progresses, adjacent clusters compete for data points, and clusters that lose the competition gradually become depleted and vanish. Third, we propose CELF-M, which generalizes CELF to support multi-class data sets. The baseline CELF and its extensions were formulated to use linear aggregation to combine the outputs of the different algorithms within each context. For some applications, this can be too restrictive and non-linear fusion may be needed. To address this potential drawback, we propose two other variations of CELF that use non-linear aggregation. The first is based on Neural Networks (CELF-NN) and the second on Fuzzy Integrals (CELF-FI). The latter has the desirable property of assigning weights to subsets of classifiers to take into account the interaction between them. To test a new signature using CELF (or its variants), each algorithm extracts its set of features and assigns a confidence value. Then, the features are used to identify the best context, and the fusion parameters of this context are used to fuse the individual confidence values. For each variation of CELF, we formulate an objective function, derive the necessary conditions to optimize it, and construct an iterative algorithm. We then use examples to illustrate the behavior of the algorithm, compare it to global fusion, and highlight its advantages. We apply our proposed fusion methods to the problem of landmine detection, using data collected with Ground Penetrating Radar (GPR) and Wideband Electromagnetic Induction (WEMI) sensors. We show that CELF (and its variants) can identify meaningful and coherent contexts (e.g. mines of the same type, mines buried at the same site, etc.) and that different expert algorithms can be identified for the different contexts. In addition to the landmine detection application, we apply our approaches to semantic video indexing, image database categorization, and phoneme recognition. In all applications, we compare the performance of CELF with standard fusion methods and show that our approach outperforms all of these methods.
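    The actual CELF formulation optimizes context identification and fusion weights jointly through the objective functions mentioned above; the sketch below is only a simplified two-stage stand-in (cluster first, then fit per-context linear fusion weights) that conveys the idea of context-dependent fusion of algorithm confidences. The data, cluster count, and models here are illustrative assumptions, not the thesis's algorithm.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

# Toy data: feature vectors describing each signature, plus the confidence
# values produced by three individual detector algorithms.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))             # features for each signature
conf = rng.uniform(size=(500, 3))         # confidences from 3 detectors
y = (conf @ np.array([0.7, 0.2, 0.1]) > 0.5).astype(float)   # toy ground truth

# Stage 1: partition the feature space into "contexts".
contexts = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

# Stage 2: learn linear fusion weights over the detectors within each context.
fusers = {}
for k in range(4):
    idx = contexts.labels_ == k
    fusers[k] = Ridge(alpha=1.0).fit(conf[idx], y[idx])

def fuse(x_new, conf_new):
    """Assign a new signature to its closest context, then apply that
    context's fusion weights to the individual confidence values."""
    k = contexts.predict(x_new.reshape(1, -1))[0]
    return fusers[k].predict(conf_new.reshape(1, -1))[0]

print(fuse(X[0], conf[0]))
```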

    Abduction, Explanation and Relevance Feedback

    Selecting good query terms to represent an information need is difficult. The complexity of verbalising an information need can increase when the need is vague, when the document collection is unfamiliar or when the searcher is inexperienced with information retrieval (IR) systems. It is much easier, however, for a user to assess which documents contain relevant information. Relevance feedback (RF) techniques make use of this fact to automatically modify a query representation based on the documents a user considers relevant. RF has proved to be relatively successful at increasing the effectiveness of retrieval systems in certain types of search, and RF techniques have gradually appeared in operational systems and even some Web engines. However, the traditional approaches to RF do not consider the behavioural aspects of information seeking. The standard RF algorithms consider only what documents the user has marked as relevant; they do not consider how the user has assessed relevance. For RF to become an effective support to information seeking it is imperative to develop new models of RF that are capable of incorporating how users make relevance assessments. In this thesis I view RF as a process of explanation. A RF theory should provide an explanation of why a document is relevant to an information need. Such an explanation can be based on how information is used within documents. I use abductive inference to provide a framework for an explanation-based account of RF. Abductive inference is specifically designed as a technique for generating explanations of complex events, and has been widely used in a range of diagnostic systems. Such a framework is capable of producing a set of possible explanations for why a user marked a number of documents relevant at the current search iteration. The choice of which explanation to use is guided by information on how the user has interacted with the system---how many documents they have marked relevant, where in the document ranking the relevant documents occur and the relevance score given to a document by the user. This behavioural information is used to create explanations and to choose which type of explanation is required in the search. The explanation is then used as the basis of a modified query to be submitted to the system. I also investigate how the notion of explanation can be used at the interface to encourage more use of RF by searchers
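    The thesis develops an abductive, explanation-based account of RF; for contrast, a minimal sketch of the classic Rocchio-style update that standard RF algorithms use (considering only which documents were marked relevant, not how relevance was assessed) might look as follows. All vectors and parameters are hypothetical.

```python
import numpy as np

def rocchio(query, relevant, nonrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    """Classic Rocchio update: move the query vector toward the centroid of
    documents marked relevant and away from the centroid of non-relevant ones."""
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(nonrelevant):
        q -= gamma * np.mean(nonrelevant, axis=0)
    return np.clip(q, 0.0, None)   # keep term weights non-negative

# Hypothetical tf-idf vectors over a five-term vocabulary
query = np.array([1.0, 0.0, 0.5, 0.0, 0.0])
relevant = np.array([[0.9, 0.1, 0.4, 0.0, 0.3], [0.8, 0.0, 0.6, 0.1, 0.2]])
nonrelevant = np.array([[0.0, 0.7, 0.0, 0.9, 0.0]])
print(rocchio(query, relevant, nonrelevant))
```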

    Data mining for decision support with uncertainty on the airplane

    This study describes the formalization of the medical decision-making process under uncertainty, underpinned by conditional preferences, the theory of evidence, and the exploitation of high-utility patterns in data mining. To assist a decision maker, the medical process (clinical pathway) was implemented using a Conditional Preferences Base (CPB). Then, for knowledge engineering, a Dempster-Shafer ontology integrating uncertainty underpinned by evidence theory was built. Beliefs from different sources are established with the use of data mining. The result is recorded in an In-flight Electronic Health Record (IEHR). The IEHR contains evidential items corresponding to the variables determining the management of medical incidents. Finally, to manage tolerance to uncertainty, a belief fusion algorithm was developed. There is an inherent risk in the practice of medicine that can affect the conditions of medical activities (for diagnostic or therapeutic purposes). The management of uncertainty is also an integral part of decision-making processes in the medical field. Different models of medical decisions under uncertainty have been proposed. Much of the current literature on these models pays particular attention to health economics, drawing inspiration from how uncertainty is managed in economic decisions. However, these models fail to consider the purely medical aspect of the decision, which remains poorly characterized. Moreover, the models that achieve interesting decision outcomes are those that consider the patient's health variable together with other variables such as the costs associated with care services. These models aim to define health policy (health economics) without deep consideration of the uncertainty surrounding medical practices and the associated technologies. Our approach is to integrate the management of uncertainty into clinical reasoning models such as the clinical pathway, and to exploit the relationships between the determinants of incident management using data mining tools. To this end, how healthcare professionals perceive and conceive of uncertainty was investigated. This allowed the characteristics of decision makers under uncertainty to be identified and the different forms and representations of uncertainty to be understood. Furthermore, what an in-flight medical incident is, and how its management constitutes a decision under uncertainty, was defined. This is the first phase of common data mining, which provides an evidential transaction base. Subsequently, evidential and ontological reasoning to manage this uncertainty was established in order to support decision-making processes on the airplane.
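    As a small illustration of the evidential representation referred to above, belief and plausibility for a hypothesis can be read directly off a mass assignment; the incident categories and mass values below are hypothetical and not taken from the study.

```python
def belief(mass, hypothesis):
    """Belief: total mass of focal elements entirely contained in the hypothesis."""
    return sum(w for a, w in mass.items() if a <= hypothesis)

def plausibility(mass, hypothesis):
    """Plausibility: total mass of focal elements compatible with the hypothesis."""
    return sum(w for a, w in mass.items() if a & hypothesis)

# Hypothetical mass assignment over in-flight incident categories
theta = frozenset({"cardiac", "respiratory", "neurological"})
mass = {
    frozenset({"cardiac"}): 0.5,
    frozenset({"cardiac", "respiratory"}): 0.3,
    theta: 0.2,
}
h = frozenset({"cardiac"})
print(belief(mass, h), plausibility(mass, h))   # belief <= plausibility
```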

    Evidence Fusion using D-S Theory: utilizing a progressively evolving reliability factor in wireless networks

    The Dempster-Shafer (D-S) theory provides a method to combine evidence from multiple nodes to estimate the likelihood of an intrusion. The theory's rule of combination gives a numerical method for fusing multiple pieces of information to derive a conclusion. However, D-S theory has shortcomings when used in situations where the evidence is in significant conflict. Though the observers may have different values of uncertainty in the observed data, D-S theory considers the observers to be equally trustworthy. This thesis introduces a new method of combination, based on D-S theory and the Consensus method, that takes into consideration the reliability of the evidence used in data fusion. The new method's results have been compared against three other methods of evidence fusion to objectively analyze how they perform under Denial of Service attacks and Xmas tree scan attacks.
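    The abstract does not spell out the new combination rule; one standard way to account for unequal trustworthiness before combining evidence is Shafer's discounting, in which each source's masses are scaled by a reliability factor and the remainder is transferred to the full frame. A sketch with hypothetical intrusion hypotheses:

```python
def discount(mass, reliability, frame):
    """Shafer discounting: scale every focal element by the source's reliability
    and transfer the remaining mass to total ignorance (the full frame)."""
    discounted = {a: reliability * w for a, w in mass.items() if a != frame}
    discounted[frame] = reliability * mass.get(frame, 0.0) + (1.0 - reliability)
    return discounted

# Hypothetical evidence from a sensor node about an ongoing intrusion
frame = frozenset({"attack", "normal"})
m_node = {frozenset({"attack"}): 0.8, frame: 0.2}
print(discount(m_node, reliability=0.6, frame=frame))
# -> mass on "attack" shrinks to 0.48, mass on ignorance grows to 0.52
```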

    A Cooperative Approach for Composite Ontology Matching

    Ontologies have proven to be an essential element in a range of applications in which knowledge plays a key role. Resolving the semantic heterogeneity problem is crucial to allow interoperability between ontology-based systems. This makes automatic ontology matching, as an anticipated solution to semantic heterogeneity, an important research issue. Many different approaches to the matching problem have emerged from the literature. An important issue in ontology matching is to find effective ways of choosing among many techniques and their variations, and then combining their results. An innovative and promising option is to formalize the combination of matching techniques using agent-based approaches, such as cooperative negotiation and argumentation. In this thesis, a formalization of the ontology matching problem following an agent-based approach is proposed. This proposal is evaluated using state-of-the-art data sets. The results show that the consensus obtained by negotiation and argumentation represents intermediate values that are close to those of the best matcher. As the best matcher may vary depending on specific differences among data sets, cooperative approaches are an advantage. *** RESUMO - Ontologies are essential elements in knowledge-based systems. Resolving the semantic heterogeneity problem is fundamental to enabling interoperability between ontology-based systems. Automatic ontology matching can be seen as a solution to this problem. Different and complementary approaches to the problem have been proposed in the literature. An important aspect of matching consists in selecting the appropriate set of approaches and their variations, and then combining their results. A promising option involves formalizing the combination of matching techniques using approaches based on cooperative agents, such as negotiation and argumentation. In this thesis, the formalization of the problem of combining matching techniques using such approaches is proposed and evaluated. The evaluation, which uses test sets suggested by the scientific community, shows that the consensus obtained by negotiation and argumentation is not exactly an improvement over all individual results, but rather represents intermediate values that are close to those of the best technique. Since the best technique may vary depending on specific differences among multiple data sets, cooperative approaches are an advantage.
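    The thesis combines matchers through agent negotiation and argumentation; as a much simpler point of comparison, a baseline combination often just aggregates each matcher's similarity scores per candidate correspondence and thresholds the result. The matcher weights, entity pairs, and scores below are hypothetical.

```python
import numpy as np

# Hypothetical similarity scores from three matchers (e.g. string-based,
# structural, lexical) for candidate correspondences between two ontologies.
pairs = [("Person", "Human"), ("Author", "Writer"), ("Paper", "Vehicle")]
scores = np.array([
    [0.9, 0.8, 0.7],   # Person ~ Human
    [0.6, 0.7, 0.9],   # Author ~ Writer
    [0.2, 0.1, 0.3],   # Paper  ~ Vehicle
])
weights = np.array([0.4, 0.3, 0.3])     # trust placed in each matcher

combined = scores @ weights             # weighted-average baseline combination
alignment = [p for p, s in zip(pairs, combined) if s >= 0.5]
print(alignment)                        # [('Person', 'Human'), ('Author', 'Writer')]
```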

    Semantic and stylistic differences between Yahweh and Elohim in the Hebrew Bible

    This thesis attempts to understand the authorial and editorial choice between the two most common designations for God in the Hebrew Bible: Yahweh and Elohim. The main body of the thesis divides into four sections, the first two parts containing the background and methodological material against which the second two are to be read. Part one deals with the major methodological issues relevant to the thesis. It examines previous academic debate relating to the divine names (=DNs), especially the works of Cassuto and Segal, the documentary hypothesis, the Rabbinic tradition, and Dahse's preference for the Septuagint. It outlines the approach taken here (synchronic, based on the MT), and justifies this as being the most appropriate for this particular task. Part two is also preliminary in character, giving a brief but comprehensive account of the meanings and uses of three designations (Elohim, Adonai Yahweh, Yahweh Elohim) throughout the Hebrew Bible, so that their significance (or lack of significance) will be recognized when they appear in parts three and four. Part three gives a quantitative account of DN usage in two corpora - Psalms and Wisdom Literature. This reveals a number of facets of DN choice: suitability to genre, arrangement of sections, poetic sequence, and in the case of the Elohistic Psalter, editorial change. A possible reason for this editorial change is offered in an appendix. Part four consists of a series of qualitative analyses of texts which display a high degree of DN variability (including Exodus 1-6, Jonah). It is argued in each case that DN variation is a literary device intended to highlight certain aspects of the text. Examination of a prophetic text (Amos) reveals possible structural reasons for the placement of Yahweh and other designations. As the criteria for DN use are different in each text examined, it is suggested that the significance of each DN is dependent on, and limited to, the text in which it is found. This thesis does not conclude with a single (or even several) satisfying answer(s) to the question of the interchange between Yahweh and Elohim, as Cassuto and Segal attempted to do. Instead, it points to the kind of answers which are relevant: from use in stock phrases and quotations, to bespoke commentaries on the text. It also demonstrates the wide variety of DN patterns and predilections which we must recognize as 'normal'.

    Acta Cybernetica: Volume 19, Number 1.
