35 research outputs found

    Knowledge representation and diagnostic inference using Bayesian networks in the medical discourse

    Bayesian networks are investigated for diagnostic inference under uncertainty. The method rests on an adequate, uniform representation of the necessary knowledge, comprising both generic knowledge and experience-based specific knowledge stored in a knowledge base. For knowledge processing, a combination of the problem-solving methods of concept-based and case-based reasoning is used: concept-based reasoning handles diagnosis, therapy and medication recommendation and evaluation over generic knowledge, while exceptions in the form of specific patient cases are handled by case-based reasoning. In addition, the use of Bayesian networks makes it possible to deal with uncertainty, vagueness and incompleteness, so the valid general concepts can be reported according to their probability. To this end, various inference mechanisms are introduced and subsequently evaluated within the context of a developed prototype; tests are employed to assess the classification of diagnoses by the network.
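    As a toy illustration of the kind of diagnostic inference described above, the posterior probability of a diagnosis given observed symptoms can be computed by enumeration in a two-symptom network (the variables and probabilities below are invented for illustration and are not taken from the thesis):

```python
# Minimal diagnostic inference by enumeration in a tiny Bayesian
# network (hypothetical probabilities): Disease -> Fever, Disease -> Cough.

P_D = {True: 0.01, False: 0.99}   # prior P(Disease)
P_F = {True: 0.9, False: 0.2}     # P(Fever = yes | Disease)
P_C = {True: 0.8, False: 0.1}     # P(Cough = yes | Disease)

def posterior_disease(fever: bool, cough: bool) -> float:
    """P(Disease = yes | evidence), computed by summing out Disease."""
    def joint(d: bool) -> float:
        pf = P_F[d] if fever else 1 - P_F[d]
        pc = P_C[d] if cough else 1 - P_C[d]
        return P_D[d] * pf * pc
    return joint(True) / (joint(True) + joint(False))

# Observing both symptoms raises the probability well above the prior.
print(round(posterior_disease(True, True), 3))   # → 0.267
```

    In the same way the network can rank all candidate diagnoses by their posterior probability, which is the "output according to probability" described above.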

    Method for the semantic indexing of concept hierarchies, uniform representation, use of relational database systems and generic and case-based reasoning

    This paper presents a method for semantic indexing and describes its application in the field of knowledge representation. The starting point of the semantic indexing is knowledge represented by concept hierarchies. The goal is to assign keys to nodes (concepts) that are hierarchically ordered and syntactically and semantically correct. The indexing algorithm computes keys such that concepts are partially unifiable with all more specific concepts, and only semantically correct concepts are allowed to be added. The keys represent terminological relationships. Correctness and completeness of the underlying indexing algorithm are proven. The use of classical relational databases for the storage of instances is described. Because of the uniform representation, inference can be done using case-based reasoning and generic problem-solving methods.
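    The key property described above, that the key of every more specific concept extends the keys of its ancestors, can be sketched with a simple positional scheme (the paper's actual key syntax and indexing algorithm may differ; the taxonomy below is invented):

```python
# Illustrative hierarchical key assignment for a concept hierarchy:
# each concept's key extends its ancestors' keys, so subsumption
# becomes a prefix test on keys.

def index(hierarchy, root, key=(1,), keys=None):
    """Assign position keys depth-first; children get key + (i,)."""
    if keys is None:
        keys = {}
    keys[root] = key
    for i, child in enumerate(hierarchy.get(root, []), start=1):
        index(hierarchy, child, key + (i,), keys)
    return keys

def subsumes(general_key, specific_key):
    """A concept subsumes another iff its key is a prefix of the
    other's key (the partial unifiability of the keys)."""
    return specific_key[:len(general_key)] == general_key

taxonomy = {"disease": ["infection", "tumor"], "infection": ["viral"]}
keys = index(taxonomy, "disease")
print(subsumes(keys["disease"], keys["viral"]))   # True: viral is-a disease
print(subsumes(keys["tumor"], keys["viral"]))     # False
```

    Because such keys are plain tuples (or strings), they can be stored and prefix-queried in an ordinary relational database, which matches the storage approach described above.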

    Architecture of an Intelligent Test Error Detection Agent

    In this paper we present the architecture of an intelligent test error detection agent that is able to supervise the test process independently. By means of rationally applied bin- and cause-specific retests it should detect and correct the majority of test errors with minimal additional test effort. To achieve this, the agent uses test error models learned from historical example data to rate single wafer runs. The resulting run-specific test error hypotheses are sequentially combined with information gained from regular and ordered retests in order to infer and update a global test error hypothesis. Based on this global hypothesis the agent decides whether a test error exists, what its most probable cause is, and which bins are affected. Consequently, it is able to initiate appropriate retests to check the inferred hypothesis and, if necessary, correct the affected test runs. The paper includes a description of the general architecture and discussions of possible test error models, the inference approach used to generate the test error hypotheses from the given information, and a possible set of rules to act upon the inferred hypothesis.
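    The sequential combination of run-specific evidence into a global test-error hypothesis can be sketched as a Bayesian odds update (the likelihood ratios below are invented placeholders for what the agent's learned test error models would supply):

```python
# Sequentially fold per-run evidence into a global test-error
# probability via Bayes' rule in odds form (hypothetical numbers).

def update(prior: float, lr: float) -> float:
    """Posterior from prior and likelihood ratio
    lr = P(evidence | error) / P(evidence | no error)."""
    odds = prior / (1 - prior) * lr
    return odds / (1 + odds)

p = 0.05                      # prior probability of a test error
for lr in (4.0, 3.0, 5.0):    # three suspicious wafer runs in a row
    p = update(p, lr)
print(p > 0.5)                # evidence now favors initiating a retest
```

    A rule set as described above could then act on thresholds of this global probability, e.g. trigger a bin-specific retest once it exceeds 0.5.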

    Approaching Concept Drift by Context Feature Partitioning

    In this paper we present a new approach to handling concept drift using domain-specific knowledge. More precisely, we exploit known context features to partition a domain into subdomains with static class distributions. Subsequently, we learn separate classifiers for each subdomain and classify new instances accordingly. To determine the optimal partitioning for a domain we apply a search algorithm that aims to maximize the resulting accuracy. In practical domains such as fault detection, concept drift often occurs in combination with imbalanced data. As this issue becomes more pronounced when learning models on smaller subdomains, we additionally use sampling methods to handle it. Comparative experiments with artificial data sets showed that our approach outperforms a plain SVM on different performance measures. In summary, the partitioning concept drift approach (PCD) is a possible way to handle concept drift in domains where the causing context features are at least partly known.
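    The partitioning idea can be sketched as follows: split the training data on a known context feature, fit one model per subdomain, and route new instances to the matching model (a toy nearest-mean classifier stands in for the SVMs used in the paper; all data below is invented):

```python
# One classifier per context-defined subdomain; new instances are
# routed by their context feature value.

from collections import defaultdict

def fit_partitioned(samples):
    """samples: list of (context, x, label) -> per-context class means."""
    sums = defaultdict(lambda: defaultdict(lambda: [0.0, 0]))
    for ctx, x, y in samples:
        s = sums[ctx][y]
        s[0] += x
        s[1] += 1
    return {ctx: {y: t / n for y, (t, n) in by_cls.items()}
            for ctx, by_cls in sums.items()}

def predict(models, ctx, x):
    means = models[ctx]                      # route to the subdomain model
    return min(means, key=lambda y: abs(x - means[y]))

# The class distribution differs per context: the same measurement x
# means different things before and after the (drift-causing) change.
train = [("old", 1.0, "ok"), ("old", 5.0, "fault"),
         ("new", 5.0, "ok"), ("new", 9.0, "fault")]
models = fit_partitioned(train)
print(predict(models, "old", 4.5), predict(models, "new", 4.5))
```

    A single global model would have to average the two contexts and misclassify one of them; the per-subdomain models keep each static distribution intact.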

    Quantification and Classification of Cortical Perfusion during Ischemic Strokes by Intraoperative Thermal Imaging

    Thermal imaging is a non-invasive and marker-free approach for intraoperative measurement of small temperature variations. In this work, we demonstrate the capabilities of active dynamic thermal imaging for analyzing the tissue perfusion state in the case of cerebral ischemia. For this purpose, NaCl irrigation is applied to the exposed cortex during hemicraniectomy. The cortical temperature changes are measured by a thermal imaging system and the thermal signal is recognized by a novel machine learning framework. Subsequent tissue heating is then approximated by a double exponential function to estimate tissue temperature decay constants. These constants allow us to characterize tissue with respect to its dynamic thermal properties. Using a Gaussian mixture model, we show the correlation of these estimated parameters with infarct demarcations in post-operative CT. This novel scheme yields a standardized representation of cortical thermodynamic properties and might guide further research regarding specific intraoperative diagnostics.
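    The decay-constant estimation can be illustrated on synthetic data: once the fast component of a double exponential has died out, a log-linear least-squares fit of the tail recovers the slow time constant (a deliberate simplification of the full double-exponential fit used in the paper; all numbers below are invented):

```python
# Recover the slow decay constant of a double-exponential signal
# from its tail, where the fast component is negligible.

import math

def tail_decay_constant(ts, temps):
    """Fit log(temp) = a - t/tau by least squares and return tau."""
    ys = [math.log(v) for v in temps]
    n = len(ts)
    mt, my = sum(ts) / n, sum(ys) / n
    slope = sum((t - mt) * (y - my) for t, y in zip(ts, ys)) / \
            sum((t - mt) ** 2 for t in ts)
    return -1.0 / slope

# Synthetic cooling curve: tau_fast = 0.5 s, tau_slow = 4.0 s.
ts = [0.1 * i for i in range(200)]
sig = [2.0 * math.exp(-t / 0.5) + 1.0 * math.exp(-t / 4.0) for t in ts]
tail = 100                      # from t = 10 s on, the fast term is gone
tau = tail_decay_constant(ts[tail:], sig[tail:])
print(round(tau, 2))            # → 4.0, the slow constant
```

    In the paper's setting both constants are estimated from a full double-exponential fit; the tail trick above only shows why such constants are identifiable from the reheating curve at all.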

    Knowledge representation and diagnostic inference using Bayesian networks in the medical domain

    Bayesian networks are investigated for diagnostic inference under uncertainty. The basis for this is an adequate, uniform representation of the necessary knowledge, comprising both generic knowledge and experience-based specific knowledge, which is stored in a knowledge base. For knowledge processing, a combination of the problem-solving methods of concept-based and case-based reasoning is used. Concept-based reasoning is used for diagnosis, therapy and medication recommendation and evaluation over generic knowledge. Special cases in the form of specific patient cases are handled by case-based reasoning. In addition, the use of Bayesian networks makes it possible to deal with uncertainty, vagueness and incompleteness. The valid general concepts can thus be output according to their probability. To this end, various inference mechanisms are presented and subsequently evaluated during the development of a prototype. Tests are used to assess the classification of diagnoses by the network. Contents: 1 Introduction, 2 Representation and Inference, 3 Inference Mechanisms, 4 Prototype Software Architecture, 5 Evaluation, 6 Summary.

    Putting ABox Updates into Action

    When trying to apply recently developed approaches for updating Description Logic ABoxes in the context of an action programming language, one encounters two problems. First, updates generate so-called Boolean ABoxes, which cannot be handled by traditional Description Logic reasoners. Second, iterated update operations result in very large Boolean ABoxes which, however, contain a huge amount of redundant information. In this paper, we address both issues from a practical point of view.
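    One way to picture a Boolean ABox is as a disjunction of alternative plain ABoxes, each a set of assertions; an update then has to be applied to every disjunct, which is why iterated updates blow up (an illustrative encoding only, not the paper's formalism; the door example is invented):

```python
# A "Boolean ABox" sketched in DNF: a set of alternative plain ABoxes,
# each a frozenset of (concept, individual) assertions.

def update(boolean_abox, added, removed):
    """Apply an update uniformly to every disjunct; duplicates merge."""
    return {frozenset((abox - removed) | added) for abox in boolean_abox}

# The agent does not know which of two doors is open: two disjuncts.
state = {frozenset({("Open", "door1")}),
         frozenset({("Open", "door2")})}

# The action "close door1" updates every alternative.
state = update(state,
               added={("Closed", "door1")},
               removed={("Open", "door1")})
print(len(state))   # still two alternatives: the uncertainty persists
```

    A plain-ABox reasoner handles one disjunct at a time; reasoning over the whole disjunction, and pruning the redundancy that iterated updates accumulate, is what the paper addresses.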

    A divide and conquer strategy for the maximum likelihood localization of low intensity objects

    In cell biology and other fields, the automatic, accurate localization of sub-resolution objects in images is an important tool. The signal is often corrupted by multiple forms of noise, including excess noise resulting from amplification by an electron-multiplying charge-coupled device (EMCCD). Here we present our novel Nested Maximum Likelihood Algorithm (NMLA), which solves the problem of localizing multiple overlapping emitters in a setting affected by excess noise by repeatedly solving the task of independent localization of single emitters in an excess-noise-free system. NMLA dramatically improves scalability and robustness compared to a general-purpose optimization technique. Our method was successfully applied for in vivo localization of fluorescent proteins.
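    The nesting idea, reducing multi-emitter localization to repeated single-emitter localization, can be sketched in one dimension: alternately fit each emitter against the residual left by the current estimate of the other (a toy least-squares grid search stands in for the paper's maximum-likelihood model; the Gaussian PSF, positions, and noiseless data are invented):

```python
# Divide and conquer for two overlapping emitters: alternate
# single-emitter fits on the residual signal until both settle.

import math

def psf(x, mu):                  # unit-width Gaussian point-spread function
    return math.exp(-0.5 * (x - mu) ** 2)

def fit_single(xs, residual):
    """Best single-emitter position on a coarse grid (least squares)."""
    grid = [i * 0.05 for i in range(200)]
    return min(grid, key=lambda mu: sum((r - psf(x, mu)) ** 2
                                        for x, r in zip(xs, residual)))

xs = [i * 0.1 for i in range(100)]
signal = [psf(x, 3.0) + psf(x, 6.0) for x in xs]   # two overlapping emitters

mu1, mu2 = 2.0, 7.0                                # rough initial guesses
for _ in range(5):                                 # alternate single fits
    mu1 = fit_single(xs, [s - psf(x, mu2) for x, s in zip(xs, signal)])
    mu2 = fit_single(xs, [s - psf(x, mu1) for x, s in zip(xs, signal)])
print(round(mu1, 2), round(mu2, 2))                # → 3.0 6.0
```

    Each inner fit is an easy single-emitter problem, which is what makes the scheme scale to many emitters compared with a joint optimization over all positions at once.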