
    Semantic inconsistency measures using 3-valued logics

    AI systems often need to deal with inconsistencies. One way of getting information about inconsistencies is by measuring the amount of inconsistency in the knowledgebase. In the past 20 years, numerous inconsistency measures have been proposed. Many of these measures are syntactic measures, that is, they are based in some way on the minimal inconsistent subsets of the knowledgebase. Very little attention has been given to semantic inconsistency measures, that is, ones that are based on the models of the knowledgebase, where the notion of a model is generalized to allow an atom to be assigned a truth value that denotes contradiction. In fact, only one nontrivial semantic inconsistency measure, the contension measure, has been in wide use. The purpose of this paper is to define a class of semantic inconsistency measures based on 3-valued logics. First, we show which 3-valued logics are useful for this purpose. Then we show that the class of semantic inconsistency measures can be developed using a graphical framework, similar to the way that syntactic inconsistency measures have been studied. We give several examples of semantic inconsistency measures and show how they apply to three useful 3-valued logics. We also investigate the properties of these inconsistency measures and show their computation for several knowledgebases.
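    As an illustration of the kind of semantic measure the abstract describes, the sketch below brute-forces 3-valued models of a small propositional knowledgebase under Priest's Logic of Paradox (LP) and reports the minimum, over all models, of the number of atoms assigned the contradictory value, which is how the contension measure is usually defined. The formula encoding and helper names are illustrative assumptions, not taken from the paper.

```python
from itertools import product

# LP truth values: True (t), Both (b), False (f); t and b are designated.
T, B, F = "t", "b", "f"
ORDER = {F: 0, B: 1, T: 2}  # truth ordering f < b < t

def neg(v):
    return {T: F, B: B, F: T}[v]

def conj(v, w):
    return min(v, w, key=ORDER.get)

def disj(v, w):
    return max(v, w, key=ORDER.get)

def evaluate(formula, assignment):
    """Evaluate a formula given as nested tuples, e.g. ('and', 'p', ('not', 'p'))."""
    if isinstance(formula, str):
        return assignment[formula]
    op, *args = formula
    if op == "not":
        return neg(evaluate(args[0], assignment))
    if op == "and":
        return conj(evaluate(args[0], assignment), evaluate(args[1], assignment))
    if op == "or":
        return disj(evaluate(args[0], assignment), evaluate(args[1], assignment))
    raise ValueError(op)

def contension(kb, atoms):
    """Minimum number of atoms assigned B over all LP models of the knowledgebase."""
    best = None
    for values in product([T, B, F], repeat=len(atoms)):
        assignment = dict(zip(atoms, values))
        if all(evaluate(f, assignment) in (T, B) for f in kb):  # all formulas designated
            conflicts = sum(1 for v in values if v == B)
            best = conflicts if best is None else min(best, conflicts)
    return best

# {p, ¬p, q}: only p must be contradictory, so the measure is 1.
kb = ["p", ("not", "p"), "q"]
print(contension(kb, ["p", "q"]))  # -> 1
```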

    Requirements Problem and Solution Concepts for Adaptive Systems Engineering, and their Relationship to Mathematical Optimisation, Decision Analysis, and Expected Utility Theory

    Requirements Engineering (RE) focuses on eliciting, modelling, and analysing the requirements and environment of a system-to-be in order to design its specification. The design of the specification, usually called the Requirements Problem (RP), is a complex problem-solving task, as it involves, for each new system-to-be, the discovery and exploration of, and decision making in, new and ill-defined problem and solution spaces. The default RP in RE is to design a specification of the system-to-be which (i) is consistent with the given requirements and the conditions of its environment, and (ii) together with the environment conditions satisfies the requirements. This paper (i) shows that the Requirements Problem for Adaptive Systems (RPAS) is different from, and is not a subclass of, the default RP, (ii) gives a formal definition of RPAS, and (iii) discusses implications for future research.
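    For readers unfamiliar with the "default RP" the abstract contrasts against, it is commonly stated in the Zave-Jackson style; the formulation below is that standard reading, offered as background rather than as a quotation from the paper.

```latex
% Default requirements problem: given domain assumptions K (environment
% conditions) and requirements R, find a specification S such that
%   (i)  K and S are jointly consistent, and
%   (ii) K together with S entails the requirements.
\[
  \text{find } S \text{ such that}\quad
  K, S \nvdash \bot
  \qquad\text{and}\qquad
  K, S \vdash R .
\]
```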

    Interval Neutrosophic Sets and Logic: Theory and Applications in Computing

    A neutrosophic set is a part of neutrosophy, which studies the origin, nature, and scope of neutralities, as well as their interactions with different ideational spectra. The neutrosophic set is a powerful general formal framework that has been proposed recently. However, the neutrosophic set needs to be specified from a technical point of view. Here, we define the set-theoretic operators on an instance of a neutrosophic set, which we call an Interval Neutrosophic Set (INS). We prove various properties of INS, which are connected to operations and relations over INS. We also introduce a new logic system based on interval neutrosophic sets. We study the interval neutrosophic propositional calculus and the interval neutrosophic predicate calculus, and we create a neutrosophic logic inference system based on interval neutrosophic logic. Within the framework of the interval neutrosophic set, we propose a data model based on a special case of interval neutrosophic sets, called the Neutrosophic Data Model. This data model extends the fuzzy data model and the paraconsistent data model. We generalize the set-theoretic and relation-theoretic operators of fuzzy relations and paraconsistent relations to neutrosophic relations. We propose generalized SQL query constructs and a tuple-relational calculus for the Neutrosophic Data Model. Finally, we design an architecture for a Semantic Web Services agent based on interval neutrosophic logic and conduct a simulation study.
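    To make the set-theoretic operators concrete, here is a small sketch of one common way interval neutrosophic sets are presented, with each element carrying interval-valued truth, indeterminacy, and falsity memberships. The union and intersection follow the usual pointwise max/min conventions; the class and function names are illustrative assumptions, not the paper's notation.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

Interval = Tuple[float, float]  # (lower, upper), both in [0, 1]

@dataclass
class INSElement:
    """Membership of one element in an interval neutrosophic set."""
    truth: Interval          # interval-valued truth membership T
    indeterminacy: Interval  # interval-valued indeterminacy membership I
    falsity: Interval        # interval-valued falsity membership F

def _pointwise(op: Callable[[float, float], float], a: Interval, b: Interval) -> Interval:
    return (op(a[0], b[0]), op(a[1], b[1]))

def union(a: INSElement, b: INSElement) -> INSElement:
    # Union: pointwise max on T, pointwise min on I and F.
    return INSElement(_pointwise(max, a.truth, b.truth),
                      _pointwise(min, a.indeterminacy, b.indeterminacy),
                      _pointwise(min, a.falsity, b.falsity))

def intersection(a: INSElement, b: INSElement) -> INSElement:
    # Intersection: pointwise min on T, pointwise max on I and F.
    return INSElement(_pointwise(min, a.truth, b.truth),
                      _pointwise(max, a.indeterminacy, b.indeterminacy),
                      _pointwise(max, a.falsity, b.falsity))

x = INSElement((0.2, 0.4), (0.3, 0.5), (0.3, 0.6))
y = INSElement((0.5, 0.7), (0.1, 0.2), (0.2, 0.3))
print(union(x, y))         # truth widened upward, indeterminacy and falsity reduced
print(intersection(x, y))
```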

    Early aspects: aspect-oriented requirements engineering and architecture design

    This paper reports on the third Early Aspects: Aspect-Oriented Requirements Engineering and Architecture Design Workshop, which was held in Lancaster, UK, on March 21, 2004. The workshop included a presentation session and working sessions in which particular topics on early aspects were discussed. The primary goal of the workshop was to focus on the challenges of defining methodical software development processes for aspects from early on in the software life cycle, and to explore the potential of proposed methods and techniques to scale up to industrial applications.

    Strict finitism as a foundation for mathematics

    The principal focus of this research is a comprehensive defence of the theory of strict finitism as a foundation for mathematics. I have three broad aims in the thesis: firstly, to offer as complete and developed an account of the theory of strict finitism as it has been described and discussed in the literature. I detail the commitments and claims of the theory, and discuss the best ways in which to present it. Secondly, I consider the main objections to strict finitism, in particular a number of claims that have been made to the effect that strict finitism is, as it stands, incoherent. Many of these claims I reject, but one, which focuses on the problematic notion of vagueness to which the strict finitist seems committed, calls, I suggest, for some revision or further development of the strict finitist’s position. The third part of this thesis is therefore concerned with such development, and I discuss various options for strict finitism, ranging from the development of a trivalent semantics to a rejection of the commitment to vagueness in the first instance.

    Fusing Automatically Extracted Annotations for the Semantic Web

    This research focuses on the problem of semantic data fusion. Although various solutions have been developed in the research communities focusing on databases and formal logic, the choice of an appropriate algorithm is non-trivial because the performance of each algorithm and its optimal configuration parameters depend on the type of data to which the algorithm is applied. In order to be reusable, a fusion system must be able to select appropriate techniques and use them in combination. Moreover, because of the varying reliability of data sources and of the algorithms performing fusion subtasks, uncertainty is an inherent feature of semantically annotated data and has to be taken into account by the fusion system. Finally, the issue of schema heterogeneity can have a negative impact on fusion performance. To address these issues, we propose KnoFuss: an architecture for Semantic Web data integration based on the principles of problem-solving methods. Algorithms dealing with different fusion subtasks are represented as components of a modular architecture, and their capabilities are described formally. This allows the architecture to select appropriate methods and configure them depending on the data being processed. In order to handle uncertainty, we propose a novel algorithm based on Dempster-Shafer belief propagation. KnoFuss employs this algorithm to reason about uncertain data and method results in order to refine the fused knowledge base. Tests show that these solutions lead to improved fusion performance. Finally, we address the problem of data fusion in the presence of schema heterogeneity. We extend the KnoFuss framework to exploit the results of automatic schema alignment tools and propose our own schema matching algorithm aimed at facilitating data fusion in the Linked Data environment. Experiments with this approach show a substantial improvement in performance in comparison with public data repositories.
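    Since the abstract leans on Dempster-Shafer theory for handling uncertainty, the following sketch shows the basic building block of that formalism, Dempster's rule for combining two mass functions over subsets of a frame of discernment. It illustrates the standard rule only; it is not the KnoFuss propagation algorithm, and the example sources are hypothetical.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: combine two mass functions given as {frozenset: mass} dicts."""
    combined = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb  # mass that would fall on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    # Normalize the remaining mass by 1 - K, where K is the total conflict.
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sources judging whether two instances denote the same entity.
same, diff = frozenset({"same"}), frozenset({"different"})
either = same | diff  # the whole frame (ignorance)
source1 = {same: 0.7, either: 0.3}
source2 = {same: 0.5, diff: 0.3, either: 0.2}
print(combine(source1, source2))  # belief in "same" reinforced after normalization
```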

    Meta-level argumentation framework for representing and reasoning about disagreement

    The contribution of this thesis is to the field of Artificial Intelligence (AI), specifically to the sub-field called knowledge engineering. Knowledge engineering involves the computer representation and use of the knowledge and opinions of human experts. In real-world controversies, disagreements can be treated as opportunities for exploring the beliefs and reasoning of experts via a process called argumentation. The central claim of this thesis is that a formal computer-based framework for argumentation is a useful solution to the problem of representing and reasoning with multiple conflicting viewpoints. The problem which this thesis addresses is how to represent arguments in domains in which there is controversy and disagreement between many relevant points of view. The reason that this is a problem is that most knowledge-based systems are founded in logics, such as first-order predicate logic, in which inconsistencies must be eliminated from a theory in order for meaningful inference to be possible from it. I argue that it is possible to devise an argumentation framework by describing one (FORA: Framework for Opposition and Reasoning about Arguments). FORA contains a language for representing the views of multiple experts who disagree or have differing opinions. FORA also contains a suite of software tools which can facilitate debate, exploration of multiple viewpoints, and construction and revision of knowledge bases which are challenged by opposing opinions or evidence. A fundamental part of this thesis is the claim that arguments are meta-level structures which describe the relationships between statements contained in knowledge bases. It is important to make a clear distinction between representations in knowledge bases (the object-level) and representations of the arguments implicit in knowledge bases (the meta-level). FORA has been developed to make this distinction clear, and its main benefit is that the argument representations are independent of the object-level representation language. This is useful because it facilitates the integration of arguments from multiple sources using different representation languages, and because it enables knowledge engineering decisions to be made about how to structure arguments and chains of reasoning, independently of object-level representation decisions. I argue that abstract argument representations are useful because they can facilitate a variety of knowledge engineering tasks. These include knowledge acquisition; automatic abstraction from existing formal knowledge bases; and construction, re-representation, evaluation, and criticism of object-level knowledge bases. Examples of software tools contained within FORA are used to illustrate these uses of argumentation structures. The utility of a meta-level framework for argumentation, and of FORA in particular, is demonstrated in terms of an important real-world controversy concerning the health risks of a group of toxic compounds called aflatoxins.
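    To illustrate the object-level/meta-level distinction the abstract emphasizes, the sketch below keeps object-level statements as opaque, language-specific strings and records arguments and their opposition purely at the meta-level, by reference to statement identifiers. This is a generic illustration of the idea under my own naming, not FORA's actual representation language or tool suite.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Statement:
    """Object-level statement; its body may be written in any representation language."""
    ident: str
    language: str  # e.g. "first-order", "prolog", "plain text"
    body: str

@dataclass
class Argument:
    """Meta-level structure: relates statements without interpreting their bodies."""
    ident: str
    premises: List[str]  # identifiers of supporting statements
    claim: str           # identifier of the statement argued for

@dataclass
class DebateGraph:
    statements: Dict[str, Statement] = field(default_factory=dict)
    arguments: Dict[str, Argument] = field(default_factory=dict)
    attacks: List[Tuple[str, str]] = field(default_factory=list)  # (attacker, target)

    def opponents(self, argument_id: str) -> List[str]:
        """Arguments that attack the given argument, found without touching object-level syntax."""
        return [a for a, t in self.attacks if t == argument_id]

# Two experts disagree about a risk claim; the disagreement lives at the meta-level.
g = DebateGraph()
g.statements["s1"] = Statement("s1", "plain text", "Aflatoxin exposure is a major liver-cancer risk.")
g.statements["s2"] = Statement("s2", "plain text", "Observed dietary doses are too low to matter.")
g.arguments["a1"] = Argument("a1", premises=["s1"], claim="s1")
g.arguments["a2"] = Argument("a2", premises=["s2"], claim="s2")
g.attacks.append(("a2", "a1"))
print(g.opponents("a1"))  # -> ['a2']
```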

    Strategies for Handling Spatial Uncertainty due to Discretization

    Geographic information systems (GISs) allow users to analyze geographic phenomena within areas of interest, leading to an understanding of their relationships and thus providing a helpful tool in decision-making. Neglecting the inherent uncertainties in spatial representations may result in undesired misinterpretations. There are several sources of uncertainty contributing to the quality of spatial data within a GIS: imperfections (e.g., inaccuracy and imprecision) and the effects of discretization. An example of discretization in the thematic domain is the chosen number of classes used to represent a spatial phenomenon (e.g., air temperature). In order to improve the utility of a GIS, the inclusion of a formal data quality model is essential. A data quality model stores, specifies, and handles the data required to provide uncertainty information for GIS applications. This dissertation develops a data quality model that associates sources of uncertainty with units of information (e.g., measurement and coverage) in a GIS. The data quality model provides a basis for constructing metrics that deal with different sources of uncertainty and for supporting tools for propagation and cross-propagation. Two specific metrics are developed that focus on two sources of uncertainty: inaccuracy and discretization. The first metric, called detectability, identifies the minimal resolvable object size within a sampled field of a continuous variable and is calculated as a spatially varying variable. The second metric, called reliability, captures the effects of discretization: it estimates the variation of an underlying random variable, determines the reliability of a representation, and is likewise calculated as a spatially varying variable. This metric is then used to assess the relationship between the influence of the number of sample points and the influence of the degree of variation on the reliability of a representation. The results of this investigation show that the variation influences the reliability of a representation more than the number of sample points does.
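    The abstract describes both metrics as spatially varying variables derived from a sampled field. Purely as a generic illustration of that idea, and not as the dissertation's detectability or reliability metrics, the sketch below computes a local-variance surface for a gridded air-temperature field with a moving window; the grid, window size, and noise model are all assumptions made for the example.

```python
import numpy as np

def local_variance(field: np.ndarray, window: int = 3) -> np.ndarray:
    """Spatially varying variance of a gridded field, computed in a moving window."""
    half = window // 2
    padded = np.pad(field, half, mode="edge")   # replicate edge values at the border
    out = np.empty_like(field, dtype=float)
    rows, cols = field.shape
    for i in range(rows):
        for j in range(cols):
            block = padded[i:i + window, j:j + window]
            out[i, j] = block.var()             # one variance value per grid cell
    return out

# A synthetic air-temperature grid: smooth gradient plus local noise.
rng = np.random.default_rng(0)
temperature = np.linspace(10, 20, 100).reshape(10, 10) + rng.normal(0, 0.5, (10, 10))
variation = local_variance(temperature, window=3)
print(variation.round(2))  # higher values flag cells where the field varies strongly
```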