
    The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More

    This article was commissioned by the 22nd IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed “important (significant, seminal, etc.)” by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in “Fuzzy Sets 101”. This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them - almost. The exception is that according to Google Scholar on April 9, 2015, Lotfi's 1965 paper had been cited 55,479 times.

    Meta-level argumentation framework for representing and reasoning about disagreement

    The contribution of this thesis is to the field of Artificial Intelligence (AI), specifically to the sub-field called knowledge engineering. Knowledge engineering involves the computer representation and use of the knowledge and opinions of human experts. In real-world controversies, disagreements can be treated as opportunities for exploring the beliefs and reasoning of experts via a process called argumentation. The central claim of this thesis is that a formal computer-based framework for argumentation is a useful solution to the problem of representing and reasoning with multiple conflicting viewpoints. The problem which this thesis addresses is how to represent arguments in domains in which there is controversy and disagreement between many relevant points of view. This is a problem because most knowledge-based systems are founded in logics, such as first-order predicate logic, in which inconsistencies must be eliminated from a theory in order for meaningful inference to be possible from it. I argue that it is possible to devise an argumentation framework by describing one (FORA: Framework for Opposition and Reasoning about Arguments). FORA contains a language for representing the views of multiple experts who disagree or have differing opinions. FORA also contains a suite of software tools which can facilitate debate, exploration of multiple viewpoints, and construction and revision of knowledge bases which are challenged by opposing opinions or evidence. A fundamental part of this thesis is the claim that arguments are meta-level structures which describe the relationships between statements contained in knowledge bases. It is important to make a clear distinction between representations in knowledge bases (the object-level) and representations of the arguments implicit in knowledge bases (the meta-level).
FORA has been developed to make this distinction clear, and its main benefit is that the argument representations are independent of the object-level representation language. This is useful because it facilitates integration of arguments from multiple sources using different representation languages, and because it enables knowledge engineering decisions to be made about how to structure arguments and chains of reasoning, independently of object-level representation decisions. I argue that abstract argument representations are useful because they can facilitate a variety of knowledge engineering tasks. These include knowledge acquisition; automatic abstraction from existing formal knowledge bases; and construction, re-representation, evaluation and criticism of object-level knowledge bases. Examples of software tools contained within FORA are used to illustrate these uses of argumentation structures. The utility of a meta-level framework for argumentation, and FORA in particular, is demonstrated in terms of an important real-world controversy concerning the health risks of a group of toxic compounds called aflatoxins.
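The separation between opaque object-level statements and meta-level argument structure that the abstract describes can be sketched as a small data model. Everything below (class names, fields, identifiers) is invented for illustration and is not FORA's actual language; the aflatoxin example echoes the controversy mentioned in the abstract.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Statement:
    """An object-level statement, opaque to the meta-level: the
    framework records only an identifier and the name of the
    representation language the statement is written in."""
    ident: str
    language: str  # e.g. "first-order-logic", "prolog", "natural-language"

@dataclass
class Argument:
    """A meta-level structure relating object-level statements,
    independent of how those statements are themselves encoded."""
    claim: Statement
    premises: list
    opposed_by: list = field(default_factory=list)  # counter-arguments

# Two experts, two representation languages, one disagreement.
s1 = Statement("aflatoxin-causes-cancer", "first-order-logic")
s2 = Statement("rodent-study-evidence", "prolog")
s3 = Statement("rodent-model-unreliable", "natural-language")

pro = Argument(claim=s1, premises=[s2])
con = Argument(claim=Statement("not-" + s1.ident, "natural-language"),
               premises=[s3])
pro.opposed_by.append(con)  # the meta-level records the opposition
print(len(pro.opposed_by))  # 1
```

The point of the sketch is that `Argument` never inspects the contents of a `Statement`, so arguments drawn from knowledge bases in different representation languages can be related in one structure.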

    Proof beyond a context-relevant doubt. A structural analysis of the standard of proof in criminal adjudication

    The present article proceeds from the mainstream view that the conceptual framework underpinning adversarial systems of criminal adjudication, i.e. a mixture of common-sense philosophy and probabilistic analysis, is unsustainable. In order to provide fact-finders with an operable structure of justification, we need to turn to epistemology once again. The article proceeds in three parts. First, I examine the structural features of justification and how various theories have attempted to overcome Agrippa’s trilemma. Second, I put Inferential Contextualism to the test and show that a defeasible structure of justification allocating epistemic rights and duties to all participants of an inquiry manages to dissolve the problem of scepticism. Third, I show that our epistemic practice already embodies a contextualist mechanism. Our problem was not that our Standard of Proof is inoperable but that it had not been adequately conceptualized. Contextualism provides the framework to articulate the abovementioned practice and to treat ‘reasonable doubts’ as a mechanism which we can now describe in detail. The seemingly insurmountable problem with our efforts to define the concept “reasonable doubts” has been that we were conflating the surface features of this mechanism with its internal structure, i.e. the rules for its use.

    Certification Considerations for Adaptive Systems

    Advanced capabilities planned for the next generation of aircraft, including those that will operate within the Next Generation Air Transportation System (NextGen), will necessarily include complex new algorithms and non-traditional software elements. These aircraft will likely incorporate adaptive control algorithms that will provide enhanced safety, autonomy, and robustness during adverse conditions. Unmanned aircraft will operate alongside manned aircraft in the National Airspace (NAS), with intelligent software performing the high-level decision-making functions normally performed by human pilots. Even human-piloted aircraft will necessarily include more autonomy. However, there are serious barriers to the deployment of new capabilities, especially for those based upon software including adaptive control (AC) and artificial intelligence (AI) algorithms. Current civil aviation certification processes are based on the idea that the correct behavior of a system must be completely specified and verified prior to operation. This report by Rockwell Collins and SIFT documents our comprehensive study of the state of the art in intelligent and adaptive algorithms for the civil aviation domain, categorizing the approaches used and identifying gaps and challenges associated with certification of each approach.

    Pseudo-contractions as Gentle Repairs

    Updating a knowledge base to remove an unwanted consequence is a challenging task. Some of the original sentences must be either deleted or weakened in such a way that the sentence to be removed is no longer entailed by the resulting set. On the other hand, it is desirable that the existing knowledge be preserved as much as possible, minimising the loss of information. Several approaches to this problem can be found in the literature. In particular, when the knowledge is represented by an ontology, two different families of frameworks have been developed in the literature in the past decades with numerous ideas in common but with little interaction between the communities: applications of AGM-like Belief Change and justification-based Ontology Repair. In this paper, we investigate the relationship between pseudo-contraction operations and gentle repairs. Both aim to avoid the complete deletion of sentences when replacing them with weaker versions is enough to prevent the entailment of the unwanted formula. We show the correspondence between concepts on both sides and investigate under which conditions they are equivalent. Furthermore, we propose a unified notation for the two approaches, which might contribute to the integration of the two areas.
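The contrast the abstract draws between deleting a sentence and weakening it can be made concrete with a brute-force propositional entailment check. The formulas below are invented for illustration and are not the paper's formalism: to stop {p, p → q} entailing q, weakening the conditional to (p ∧ r) → q retains more information than deleting it outright.

```python
from itertools import product

ATOMS = ["p", "q", "r"]

def entails(kb, goal):
    """Classical entailment, checked by truth-table enumeration."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if all(f(v) for f in kb) and not goal(v):
            return False
    return True

p = lambda v: v["p"]
q = lambda v: v["q"]
p_implies_q = lambda v: (not v["p"]) or v["q"]
weakened = lambda v: (not (v["p"] and v["r"])) or v["q"]  # (p ^ r) -> q
r_implies_q = lambda v: (not v["r"]) or v["q"]            # r -> q

original = [p, p_implies_q]  # entails the unwanted q
deleted = [p]                # contraction by outright deletion
gentle = [p, weakened]       # pseudo-contraction: weaken instead

print(entails(original, q))       # True: q is the consequence to remove
print(entails(gentle, q))         # False: the entailment of q is gone...
print(entails(gentle, r_implies_q))   # True: ...but r -> q survives
print(entails(deleted, r_implies_q))  # False: lost under deletion
```

Both repairs remove the unwanted consequence, but the weakened base still entails strictly more of the original content, which is the sense in which such repairs are "gentle".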

    What If? The Exploration of an Idea

    A crucial question here is what, exactly, the conditional in the naive truth/set comprehension principles is. In 'Logic of Paradox', I outlined two options. One is to take it to be the material conditional of the extensional paraconsistent logic LP. Call this "Strategy 1". LP is a relatively weak logic, however. In particular, the material conditional does not detach. The other strategy is to take it to be some detachable conditional. Call this "Strategy 2". The aim of the present essay is to investigate Strategy 1. It is not to advocate it. The work is simply an extended exploration of the strategy, its strengths, its weaknesses, and the various different ways in which it may be implemented. In the first part of the paper I will set up the appropriate background details. In the second, I will look at the strategy as it applies to the semantic paradoxes. In the third I will look at how it applies to the set-theoretic paradoxes.
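The failure of detachment mentioned in the abstract can be checked mechanically. Below is a minimal sketch of LP's three-valued semantics (values t, b, f, with t and b designated), used to search for a counterexample to modus ponens for the material conditional; the encoding is illustrative and not drawn from the paper.

```python
# The three truth values of LP: t (true), b (both true and false), f (false).
# Designated values (those preserved by valid inference): t and b.
ORDER = {"f": 0, "b": 1, "t": 2}
DESIGNATED = {"t", "b"}

def neg(x):
    return {"t": "f", "b": "b", "f": "t"}[x]

def disj(x, y):
    # Disjunction is the maximum under the order f < b < t.
    return x if ORDER[x] >= ORDER[y] else y

def material_cond(x, y):
    # The material conditional A -> B, defined as ~A v B.
    return disj(neg(x), y)

# Search for a counterexample to detachment (modus ponens):
# A designated, A -> B designated, yet B not designated.
counterexamples = [
    (a, b)
    for a in "tbf"
    for b in "tbf"
    if a in DESIGNATED
    and material_cond(a, b) in DESIGNATED
    and b not in DESIGNATED
]
print(counterexamples)  # [('b', 'f')]
```

The single counterexample is the glutty valuation A = b, B = f: both A and ~A v B take the designated value b, yet B is plain false, so modus ponens fails for LP's material conditional.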

    Knowledge Acquisition from Data Bases

    Centre for Intelligent Systems and their Applications. Grant No. 6897502. Knowledge acquisition from databases is a research frontier for both database technology and machine learning (ML) techniques, and has seen sustained research over recent years. It also acts as a link between the two fields, thus offering a dual benefit. Firstly, since database technology has already found wide application in many fields, ML research obviously stands to gain from this greater exposure and established technological foundation. Secondly, ML techniques can augment the ability of existing database systems to represent, acquire, and process a collection of expertise such as those which form part of the semantics of many advanced applications (e.g. CAD/CAM). The major contribution of this thesis is the introduction of an efficient induction algorithm to facilitate the acquisition of such knowledge from databases. There are three typical families of inductive algorithms: the generalisation-specialisation based AQ11-like family, the decision tree based ID3-like family, and the extension matrix based family. A heuristic induction algorithm, HCV, based on the newly-developed extension matrix approach, is described in this thesis. By dividing the positive examples (PE) of a specific class in a given example set into intersecting groups and adopting a set of strategies to find a heuristic conjunctive rule in each group which covers all the group's positive examples and none of the negative examples (NE), HCV can find rules in the form of variable-valued logic for PE against NE in low-order polynomial time. The rules generated by HCV are shown empirically to be more compact than the rules produced by AQ11-like algorithms and the decision trees produced by ID3-like algorithms. KEshell2, an intelligent learning database system, which makes use of the HCV algorithm and couples ML techniques with database and knowledge-base technology, is also described.
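The abstract's notion of a conjunctive rule that covers all positive examples of a group while excluding every negative example can be illustrated with a toy covering procedure. This is a deliberately simplified sketch, not the HCV extension-matrix algorithm itself, and the attribute names and data are invented.

```python
def conjunctive_rule(positives, negatives):
    """Build the most specific conjunction of attribute-value tests
    covering every positive example (PE), then verify that it
    excludes every negative example (NE). Returns a dict mapping
    each attribute to its allowed values, or None if the
    conjunction fails to exclude some negative example."""
    rule = {attr: {ex[attr] for ex in positives} for attr in positives[0]}
    for ex in negatives:
        if all(ex[attr] in allowed for attr, allowed in rule.items()):
            return None  # a negative example satisfies every test
    return rule

positives = [{"shape": "round", "size": "big"},
             {"shape": "round", "size": "small"}]
negatives = [{"shape": "square", "size": "big"}]
rule = conjunctive_rule(positives, negatives)
print(rule)  # shape must be 'round'; either seen size is allowed
```

The negative example fails the `shape` test, so the conjunction covers all of PE and none of NE; HCV's contribution lies in the grouping and heuristic strategies that keep such rules compact, which this sketch does not attempt.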

    Automated Deduction – CADE 28

    This open access book constitutes the proceedings of the 28th International Conference on Automated Deduction, CADE 28, held virtually in July 2021. The 29 full papers and 7 system descriptions presented together with 2 invited papers were carefully reviewed and selected from 76 submissions. CADE is the major forum for the presentation of research in all aspects of automated deduction, including foundations, applications, implementations, and practical experience. The papers are organized in the following topics: logical foundations; theory and principles; implementation and application; ATP and AI; and system descriptions.

    Casual Reasoning : A Social Ecological Look at Human Cognition and Common Sense

    This thesis promotes a pragmatist and ecological approach to human cognition and concepts: namely, that our conceptual system primarily tracks affordances and other causal properties that have pragmatic relevance to us as embodied and active agents. The bulk of the work aims to show how various research programs in cognitive psychology naturally intersect and complement each other under this theoretical standpoint, which is influenced by enactivist and embodied approaches to cognitive science as well as linguistic pragmatism. The term “ecological” in the title refers to an approach that emphasizes the interaction of agents and their environment, and “social ecology” means that this includes social interaction between agents and that our material environment is extensively a cultural product, constantly reproduced and altered through cultural behavior. The extent of the argument is not supposed to be confined to theoretical psychology and the philosophy of science but has somewhat wider motives pertaining to the philosophy of language and knowledge. I do not attempt to reform extant theories of cognitive processing and representation (unless arguments against logical computationalism are still considered reformist these days) but to explain the nature of conceptual understanding. I take it that having a concept is principally not having a particular information structure in one's brain but rather a set of interlocking capacities that support intentional action. In effect, I claim that conceptual understanding should be understood as a cognitive skill, and psychological research on concepts should not identify concepts as static information structures but as capacities which are integral parts of procedural knowledge that support skillful know-how in situated action. Conceptual mental representations deal with information, but such information structures are active constructs that cannot be understood without a pragmatic and ecological perspective on human cognition. 
I argue for the claim, earlier proposed for instance by Eleanor Rosch, that contexts or situations are the proper unit that categorization research needs to concentrate on. In accordance with Edouard Machery's well-known claim, I conceive classical category theories of cognitive science, namely prototype, exemplar, and knowledge accounts, to tap real cognitive phenomena; however, pace Machery, I aim to show that they do not form distinct conceptual representations but rather participate in human conceptual capacities as interlocking component processes. The main problem with theories that emphasize situated direct interaction with the environment is to explain abstract and symbolic reasoning. One theoretically promising way to resolve the issue is to invoke some version of the dual-process theories of cognition; that is, to explain rule-based, theoretical, and symbolic reasoning by resorting to a distinct cognitive system, which is more or less dedicated to those kinds of tasks. While dual-process theories seem to license such a move, they can work only as a partial solution because expert scientific reasoning, for example, necessitates implicit skills just like any area of expertise. Second, commonsense reasoning is partly schematic and utilizes theoretical concepts. As an alternative explanation, I offer a hypothesis influenced by philosophical linguistic pragmatism which posits that discursive reasoning is incrementally learned tacit know-how in cultural praxis, which determines how we understand linguistic concepts. This interactive know-how exploits mostly the same cognitive mechanisms as situated and pragmatic procedural knowledge. The explanation has immediate implications for the analytic philosophy of language. 
When we interpret a text or engage in conceptual analysis, our conscious conceptual interpretation of the associated contents is a product of implicit processes intimately tied with procedural knowledge; in short, explicit know-that is rooted in implicit know-how.