Qualitative Effects of Knowledge Rules in Probabilistic Data Integration
One of the problems in data integration is data overlap: the fact that different data sources hold data on the same real-world entities. Much development time in data integration projects is devoted to entity resolution. Advanced similarity measurement techniques are often used to remove semantic duplicates from the integration result or to resolve other semantic conflicts, but it proves impossible to get rid of all semantic problems in data integration. An often-used rule of thumb states that about 90% of the development effort is devoted to solving the remaining 10% of hard cases. In an attempt to significantly decrease human effort at data integration time, we have proposed an approach that stores any remaining semantic uncertainty and conflicts in a probabilistic database, enabling the result to be meaningfully used right away. The main development effort in our approach is devoted to defining and tuning knowledge rules and thresholds. Rules and thresholds directly impact the size and quality of the integration result. We measure integration quality indirectly by measuring the quality of answers to queries on the integrated data set, in an information-retrieval-like way. The main contribution of this report is an experimental investigation of the effects and sensitivity of rule definition and threshold tuning on integration quality. It shows that our approach indeed reduces development effort, rather than merely shifting it to rule definition and threshold tuning, by demonstrating that setting rough, safe thresholds and defining only a few rules suffices to produce a "good enough" integration that can be meaningfully used.
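The threshold idea described in this abstract can be sketched in a few lines. The similarity measure (token Jaccard) and the threshold values below are illustrative assumptions, not taken from the paper; the point is only the three-way split into sure matches, sure non-matches, and an uncertain band that is kept as probabilistic alternatives rather than resolved by hand.

```python
def jaccard(a, b):
    """Token-set Jaccard similarity between two record strings."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def integrate(pairs, t_match=0.8, t_nonmatch=0.3):
    """Split candidate record pairs by two rough, 'safe' thresholds.

    Pairs above t_match are merged outright, pairs below t_nonmatch are
    kept distinct, and the uncertain middle band is stored with its
    similarity score as a match probability (the probabilistic-database
    part of the approach).
    """
    merged, distinct, uncertain = [], [], []
    for r1, r2 in pairs:
        s = jaccard(r1, r2)
        if s >= t_match:
            merged.append((r1, r2))
        elif s <= t_nonmatch:
            distinct.append((r1, r2))
        else:
            uncertain.append((r1, r2, s))  # deferred: kept as alternatives
    return merged, distinct, uncertain
```

Tuning `t_match` and `t_nonmatch` trades the size of the uncertain band against the risk of wrong hard decisions, which is exactly the sensitivity the report investigates.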
CBR and MBR techniques: review for an application in the emergencies domain
The purpose of this document is to provide an in-depth analysis of current reasoning engine practice and the integration strategies of Case Based Reasoning and Model Based Reasoning that will be used in the design and development of the RIMSAT system.
RIMSAT (Remote Intelligent Management Support and Training) is a European Commission funded project designed to:
a. Provide an innovative, 'intelligent', knowledge-based solution aimed at improving the quality of critical decisions.
b. Enhance the competencies and responsiveness of individuals and organisations involved in highly complex, safety-critical incidents, irrespective of their location.
In other words, RIMSAT aims to design and implement a decision support system that applies Case-Based Reasoning and Model-Based Reasoning technology to the management of emergency situations.
This document is part of a deliverable for the RIMSAT project, and although it was written in close contact with the requirements of the project, it provides an overview wide enough to serve as a state of the art in integration strategies between CBR and MBR technologies.
Interactive Problem Structuring with ICZM Stakeholders
Integrated Coastal Zone Management (ICZM) is struggling with a lack of science-management integration. Many computer systems, usually known as "decision support systems", have been developed with the intention of making scientific knowledge about complex systems more accessible to coastal managers. These tools, allowing a multi-disciplinary approach with multi-criteria analyses, are designed for well-defined, structured problems. In practice, however, stakeholder consensus on the problem structure is usually lacking. The aim of this paper is to explore the practical opportunities for the new so-called Quasta approach to structuring complex problems in a group setting. The approach is based on a combination of Cognitive Mapping and Qualitative Probabilistic Networks, and comprises a new type of computer system that is both simple and flexible. The tool was tested in two workshops in which various coastal management issues were discussed. Evaluations of these workshops show that (1) the system makes stakeholders aware of causal relationships, (2) it is useful for a qualitative exploration of scenarios, (3) it identifies the quantitative knowledge gaps of the problem being discussed, and (4) the threshold for non-technicians to use the tool is quite low.
Keywords: Integrated Coastal Zone Management, Problem Structuring, Stakeholder Participation, Cognitive Mapping, Interactive Policy Making
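The qualitative flavour of the Cognitive Mapping plus Qualitative Probabilistic Networks combination can be illustrated with a toy sign-propagation sketch. The variable names, the edges, and the propagation rule below are invented for illustration and are not taken from the Quasta tool itself: edges carry only a sign, and the qualitative effect of increasing one factor is traced by multiplying signs along causal chains.

```python
# Hypothetical causal map: cause -> [(effect, sign)], signs are +1 or -1.
edges = {
    "tourism": [("beach_litter", +1)],
    "beach_litter": [("water_quality", -1)],
    "water_quality": [("tourism", +1)],
}

def propagate(start, steps=2):
    """Qualitative effect of increasing `start`, pushed `steps` hops along
    the map. Each reached node gets the product of edge signs on the path;
    already-visited nodes are left alone, which also cuts cycles."""
    effects = {start: +1}
    frontier = [(start, +1)]
    for _ in range(steps):
        nxt = []
        for node, sign in frontier:
            for eff, s in edges.get(node, []):
                if eff not in effects:
                    effects[eff] = sign * s
                    nxt.append((eff, sign * s))
        frontier = nxt
    return effects
```

In a workshop setting, a map like this lets stakeholders see immediately that more tourism is expected, qualitatively, to depress water quality via litter, without anyone having to agree on quantitative strengths first.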
Handling uncertainty in information extraction
This position paper proposes an interactive approach for developing information extractors based on the ontology definition process, with knowledge about the possible (in)correctness of annotations. We discuss the problem of managing and manipulating probabilistic dependencies.
Potentials and Limits of Bayesian Networks to Deal with Uncertainty in the Assessment of Climate Change Adaptation Policies
Bayesian networks (BNs) have been increasingly applied to support management and decision-making processes under conditions of environmental variability and uncertainty. They provide logical and holistic reasoning in complex systems, since they succinctly and effectively translate causal assertions between variables into patterns of probabilistic dependence. Through a theoretical assessment of the features and statistical rationale of BNs, and a review of specific applications to ecological modelling, natural resource management, and climate change policy issues, the present paper analyses the effectiveness of the BN model as a synthesis framework that allows the user to manage the uncertainty characterising the definition and implementation of climate change adaptation policies. The review highlights the potential of the model to characterise, incorporate, and communicate uncertainty, with the aim of providing efficient support to an informed and transparent decision-making process. Possible drawbacks arising from the implementation of BNs are also analysed, and potential solutions to overcome them are provided.
Keywords: Adaptation to Climate Change, Bayesian Network, Uncertainty
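The core mechanism the abstract relies on, turning causal assertions into patterns of probabilistic dependence, can be shown with a deliberately minimal two-node network. The variables and all probabilities below are made-up illustrative numbers, not from any of the reviewed applications.

```python
# Hypothetical two-node BN: Drought -> CropLoss.
p_drought = 0.2                          # prior P(Drought = yes)
p_loss_given = {True: 0.7, False: 0.1}   # P(CropLoss = yes | Drought)

# Joint distribution over Drought for the evidence CropLoss = yes,
# computed by enumeration.
p_joint = {d: (p_drought if d else 1 - p_drought) * p_loss_given[d]
           for d in (True, False)}

# Marginal P(CropLoss = yes) and posterior P(Drought = yes | CropLoss = yes).
p_loss = sum(p_joint.values())                  # 0.2*0.7 + 0.8*0.1 = 0.22
p_drought_given_loss = p_joint[True] / p_loss   # 0.14 / 0.22
```

The same enumeration scales (exponentially, hence the practical limits the paper discusses) to larger adaptation-policy networks, and the posterior is exactly the kind of uncertainty statement a BN communicates to decision makers.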
Explanatory debugging: Supporting end-user debugging of machine-learned programs
Many machine-learning algorithms learn rules of behavior from individual end users, such as task-oriented desktop organizers and handwriting recognizers. These rules form a "program" that tells the computer what to do when future inputs arrive. Little research has explored how an end user can debug these programs when they make mistakes. We present our progress toward enabling end users to debug such learned programs via a Natural Programming methodology. We began with a formative study exploring how users reason about and correct a text-classification program. From the results, we derived and prototyped a concept based on "explanatory debugging", then empirically evaluated it. Our results contribute methods for exposing a learned program's logic to end users and for eliciting user corrections to improve the program's predictions.
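The two contributions named at the end, exposing a learned program's logic and eliciting corrections, can be sketched with a toy linear text classifier. This is not the authors' prototype; the words, weights, and labels are invented, and the "explanation" is simply the per-word contributions to the score, which a user can then correct.

```python
# Illustrative learned weights: log-odds toward the "work" label.
weights = {"meeting": 1.2, "deadline": 0.8, "lunch": -1.0}

def classify(text):
    """Return (label, explanation), where the explanation lists each
    word's contribution to the decision, largest magnitude first."""
    contribs = {w: weights.get(w, 0.0) for w in text.lower().split()}
    score = sum(contribs.values())
    label = "work" if score > 0 else "personal"
    explanation = sorted(contribs.items(), key=lambda kv: -abs(kv[1]))
    return label, explanation

def user_correction(word, delta):
    """Explanatory-debugging step: the end user says 'this word should
    count more/less', and the learned weight is adjusted directly."""
    weights[word] = weights.get(word, 0.0) + delta
```

Because the explanation and the editable parameters are the same objects, a user who disagrees with a prediction can see exactly which word caused it and fix that word's weight, rather than only supplying more labeled examples.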
Expert Elicitation for Reliable System Design
This paper reviews the role of expert judgement to support reliability
assessments within the systems engineering design process. Generic design
processes are described to give the context and a discussion is given about the
nature of the reliability assessments required in the different systems
engineering phases. It is argued that, as far as meeting reliability
requirements is concerned, the whole design process is more akin to a
statistical control process than to a straightforward statistical problem of
assessing an unknown distribution. This leads to features of the expert
judgement problem in the design context which are substantially different from
those seen, for example, in risk assessment. In particular, the role of experts
in problem structuring and in developing failure mitigation options is much
more prominent, and there is a need to take into account the reliability
potential for future mitigation measures downstream in the system life cycle.
An overview is given of the stakeholders typically involved in large scale
systems engineering design projects, and this is used to argue the need for
methods that expose potential judgemental biases in order to generate analyses
that can be said to provide rational consensus about uncertainties. Finally, a
number of key points are developed with the aim of moving toward a framework
that provides a holistic method for tracking reliability assessment through the
design process.
Comment: This paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).
Use of COTS functional analysis software as an IVHM design tool for detection and isolation of UAV fuel system faults
This paper presents a new approach to the development of health management solutions which can be applied to both new and legacy platforms during the conceptual design phase. The approach involves qualitative functional modelling of a system in order to perform an Integrated Vehicle Health Management (IVHM) design: the placement of sensors and the diagnostic rules to be used in interrogating their output. Qualitative functional analysis was chosen as a route for early assessment of failures in complex systems. Functional models of system components are required for capturing the available system knowledge used during various stages of system and IVHM design. MADe™ (Maintenance Aware Design environment), a COTS software tool developed by PHM Technology, was used for the health management design. A model was built incorporating the failure diagrams of five failure modes for five different components of a UAV fuel system. Thus an inherent health management solution for the system and an optimised sensor set solution have been defined. The automatically generated sensor set solution also contains a diagnostic rule set, which was validated on the fuel rig for different operation modes, taking into account the predicted fault detection/isolation and ambiguity group coefficients. It was concluded that, when using functional modelling, the IVHM design and the actual system design cannot be done in isolation. The functional approach requires permanent input from the system designers and reliability engineers in order to construct a functional model that qualitatively represents the real system. In other words, physical insight should not be isolated from the failure phenomena, and the diagnostic analysis tools should be able to adequately capture the experience base. The approach has been verified on a laboratory bench-top test rig which can simulate a range of possible fuel system faults.
The rig is fully instrumented to allow benchmarking of the various sensing solutions for fault detection/isolation that were identified using functional analysis.
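The diagnostic rule set mentioned in the abstract amounts to matching observed sensor anomalies against fault signatures; when several faults match the same observation, they form the "ambiguity group". The fault names and sensor signatures below are hypothetical, invented for a generic fuel system rather than taken from the MADe model.

```python
# Hypothetical fault signatures over two boolean sensor anomalies.
signatures = {
    "pump_degraded":  {"flow_low": True,  "pressure_low": True},
    "filter_clogged": {"flow_low": True,  "pressure_low": False},
    "leak":           {"flow_low": False, "pressure_low": True},
}

def isolate(observed):
    """Return the ambiguity group: all faults whose signature is
    consistent with the observed sensor anomalies."""
    return sorted(fault for fault, sig in signatures.items()
                  if all(observed.get(s) == v for s, v in sig.items()))
```

A sensor set is well chosen when most observations isolate a single fault, i.e. the ambiguity groups are small, which is what the ambiguity group coefficients in the validation quantify.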