
    Evidentialist Foundationalist Argumentation in Multi-Agent Systems

    This dissertation focuses on the explicit grounding of reasoning in evidence directly sensed from the physical world. Evidence from successful human problem solving suggests that this is a straightforward basis for reasoning: to solve problems in the physical world, the information required to solve them must also come from the physical world. What is less straightforward is how to structure the path from evidence to conclusions. Many approaches have been applied to evidence-based reasoning, including probabilistic graphical models and Dempster-Shafer theory. However, with some exceptions, these traditional approaches are typically employed to establish confidence in a single binary conclusion, such as whether or not there is a blizzard, rather than to develop complex groups of scalar conclusions, such as where a blizzard's center is, what area it covers, how strong it is, and what components it has. To form conclusions of the latter kind, we employ and further develop the approach of Computational Argumentation. Specifically, this dissertation develops a novel approach to evidence-based argumentation called Evidentialist Foundationalist Argumentation (EFA). The method is a formal instantiation of the well-established Argumentation Service Platform with Integrated Components (ASPIC) framework. There are two primary approaches to Computational Argumentation. One is structured argumentation, in which arguments are built from premises, inference rules, and conclusions, with arguments based on the conclusions of other arguments, creating a tree-like structure. The other is abstract argumentation, in which arguments interact at a higher level through an attack relation. ASPIC unifies the two approaches. EFA instantiates ASPIC specifically for reasoning about physical evidence in the form of sensor data. Restricting ASPIC to sensor data yields particular philosophical and computational advantages: all premises in the system (evidence) can be treated as firmly grounded axioms, and every argument's conclusion can be numerically calculated directly from its premises. EFA could serve as the basis for well-justified, transparent reasoning in many domains, including engineering, law, business, medicine, politics, and education. To test its utility as a basis for Computational Argumentation, we apply EFA to a Multi-Agent System working in the problem domain of Sensor Webs, on the specific problem of Decentralized Sensor Fusion. In this problem, groups of individual agents are assigned to sensor stations distributed across a geographical area, forming a Sensor Web. The goal of the system is to strategically share sensor readings between agents so as to increase the accuracy of each individual agent's model of the geophysical sensing situation. For example, if there is a severe storm, a goal may be for each agent to have an accurate model of the storm's heading, severity, and focal points of activity. Also, since the agents control a Sensor Web, another goal is to communicate judiciously so as to use power efficiently. To meet these goals, we design a Multi-Agent System called Investigative Argumentation-based Negotiating Agents (IANA). Agents in IANA use EFA as the basis for establishing arguments that model geophysical situations. Upon gathering evidence in the form of sensor readings, the agents form evidence-based arguments using EFA.
    The agents systematically compare the conclusions of their arguments with those of other agents. If the agents sufficiently agree on the geophysical situation, they end communication. If they disagree, they share the evidence behind their conclusions, consuming communication resources with the goal of increasing accuracy. They execute this interaction using a Share on Disagreement (SoD) protocol. IANA is evaluated against two other Multi-Agent System approaches on the basis of accuracy and communication costs, using historical real-world weather data. The first approach is all-to-all communication, called the Complete Data Sharing (CDS) approach. In this system, agents share all observations, maximizing accuracy but at a high communication cost. The second approach is based on Kalman Filtering of conclusions and is called the Conclusion Negotiation Only (CNO) approach. In this system, agents do not share any observations, and instead try to infer the geophysical state based only on each other's conclusions. This approach saves communication costs but sacrifices accuracy. The results of these experiments were statistically analyzed using omega-squared effect sizes produced by ANOVA with p-values < 0.05. The IANA system was found to outperform the CDS system on message cost with high effect sizes. The CDS system outperformed the IANA system on accuracy with only small effect sizes. The IANA system was found to outperform the CNO system on accuracy with mostly high and medium effect sizes. The CNO system outperformed the IANA system on message costs with only small effect sizes. Given these results, the IANA system is preferable for most of the testing scenarios for the problem solved in this dissertation.
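    To make the evidence-to-conclusion pipeline and the SoD exchange concrete, here is a minimal Python sketch. It is our own illustration rather than the dissertation's implementation: the names Reading, Argument, argue_mean, and sod_exchange are hypothetical, and the simple mean used as the conclusion function stands in for whatever numeric calculation a real EFA argument would perform.

```python
# Hypothetical sketch of an EFA-style argument and the Share on Disagreement
# (SoD) exchange described above; names are illustrative, not the
# dissertation's ASPIC formalization.
from dataclasses import dataclass
from statistics import mean

@dataclass(frozen=True)
class Reading:
    station: str
    value: float            # e.g. barometric pressure at one sensor station

@dataclass
class Argument:
    premises: tuple         # sensor readings: treated as firmly grounded axioms
    conclusion: float       # numerically calculated directly from the premises

def argue_mean(readings):
    """Form an evidence-based argument whose conclusion is computed
    directly from its premises (here, a simple mean of the readings)."""
    readings = tuple(readings)
    return Argument(premises=readings,
                    conclusion=mean(r.value for r in readings))

def sod_exchange(a, b, tolerance=0.5):
    """Share on Disagreement: compare conclusions first; only exchange the
    underlying evidence (the costly step) when the agents disagree."""
    if abs(a.conclusion - b.conclusion) <= tolerance:
        return a, b, 0                    # agreement: no evidence messages
    pooled = a.premises + b.premises      # disagreement: share the evidence
    merged = argue_mean(pooled)
    return merged, merged, len(pooled)    # message cost ~ readings transmitted

a = argue_mean([Reading("north", 1012.0), Reading("east", 1013.0)])
b = argue_mean([Reading("south", 1019.5)])
print(sod_exchange(a, b))  # conclusions differ by > 0.5, so evidence is shared
```

    The point the sketch captures is that conclusions are cheap to compare, while premises are transmitted only when comparison fails; this is where the accuracy-versus-communication trade-off among CDS, CNO, and IANA comes from.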

    Heuristic Satisficing Inferential Decision Making in Human and Robot Active Perception

    Inferential decision-making algorithms typically assume that an underlying probabilistic model of decision alternatives and outcomes may be learned a priori or online. Furthermore, when applied to robots in real-world settings they often perform unsatisfactorily or fail to accomplish the necessary tasks because this assumption is violated and/or they experience unanticipated external pressures and constraints. Cognitive studies presented in this and other papers show that humans cope with complex and unknown settings by modulating between near-optimal and satisficing solutions, including heuristics, by leveraging the information value of available environmental cues that are possibly redundant. Using the benchmark inferential decision problem known as the “treasure hunt”, this paper develops a general approach for investigating and modeling active perception solutions under pressure. By simulating treasure hunt problems in virtual worlds, our approach learns generalizable strategies from high performers that, when applied to robots, allow them to modulate between optimal and heuristic solutions on the basis of external pressures and probabilistic models, if and when available. The result is a suite of active perception algorithms for camera-equipped robots that outperform treasure-hunt solutions obtained via cell decomposition, information roadmap, and information potential algorithms, in both high-fidelity numerical simulations and physical experiments. The effectiveness of the new active perception strategies is demonstrated under a broad range of unanticipated conditions that cause existing algorithms to fail to complete the search for treasures, such as unmodelled time constraints, resource constraints, and adverse weather (fog).
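    As a rough illustration of this modulation idea, the sketch below switches between a deliberative information-gain planner and a cheap cue-following heuristic depending on time pressure. It is a hedged sketch under our own assumptions: the function names and the entropy-based value function are ours, not the paper's algorithms.

```python
# Illustrative sketch (not the paper's method) of modulating between a
# near-optimal planner and a satisficing heuristic under external pressure.
import math

def expected_information_gain(cell, belief):
    """Toy value of sensing a cell: binary entropy of the belief there."""
    p = belief.get(cell, 0.5)
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p)) if 0 < p < 1 else 0.0

def plan_optimal(cells, belief):
    # Deliberative: rank every candidate cell by expected information gain.
    return max(cells, key=lambda c: expected_information_gain(c, belief))

def plan_heuristic(cells, cue):
    # Satisficing: simply follow a cheap, possibly redundant environmental cue.
    return min(cells, key=lambda c: abs(c - cue))

def next_view(cells, belief, time_left, cue, pressure_threshold=5.0):
    """Modulate: deliberate when there is slack and a probabilistic model is
    available; fall back to the heuristic under pressure or without a model."""
    if belief is None or time_left < pressure_threshold:
        return plan_heuristic(cells, cue)
    return plan_optimal(cells, belief)

cells = [0, 1, 2, 3]
belief = {0: 0.5, 1: 0.9, 2: 0.98, 3: 0.02}
print(next_view(cells, belief, time_left=20.0, cue=3))  # slack: deliberates -> 0
print(next_view(cells, belief, time_left=2.0, cue=3))   # pressure: heuristic -> 3
```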

    Logical models for bounded reasoners

    This dissertation aims at the logical modelling of aspects of human reasoning, informed by facts on the bounds of human cognition. We break down this challenge into three parts. In Part I, we discuss the place of logical systems for knowledge and belief in the Rationality Debate and we argue for systems that formalize an alternative picture of rationality: one wherein empirical facts have a key role (Chapter 2). In Part II, we design logical models that encode explicitly the deductive reasoning of a single bounded agent and the variety of processes underlying it. This is achieved through the introduction of a dynamic, resource-sensitive, impossible-worlds semantics (Chapter 3). We then show that this type of semantics can be combined with plausibility models (Chapter 4) and that it can be instrumental in modelling the logical aspects of System 1 (“fast”) and System 2 (“slow”) cognitive processes (Chapter 5). In Part III, we move from single- to multi-agent frameworks. This unfolds in three directions: (a) the formation of beliefs about others (e.g. due to observation, memory, and communication), (b) the manipulation of beliefs (e.g. via acts of reasoning about oneself and others), and (c) the effect of the above on group reasoning. These questions are addressed, respectively, in Chapters 6, 7, and 8. We finally discuss directions for future work and we reflect on the contribution of the thesis as a whole (Chapter 9).
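    As a toy rendering of the resource-sensitive idea from Part II, the sketch below bounds an agent's deductive closure by a step budget, so that explicitly held beliefs need not be closed under logical consequence, which is the effect an impossible-worlds semantics is designed to capture. The encoding and names are our own simplification, not the thesis's formal models.

```python
# A minimal sketch, under our own simplifying assumptions, of a
# resource-bounded reasoner: each rule application costs one unit,
# so the agent's deductive closure is partial rather than ideal.
def bounded_closure(known, rules, budget):
    """known:  set of formulas (strings) the agent explicitly holds
    rules:  iterable of (premises, conclusion) pairs, premises a frozenset
    budget: how many inference steps the agent can afford"""
    known = set(known)
    while budget > 0:
        fired = next(((ps, c) for ps, c in rules
                      if ps <= known and c not in known), None)
        if fired is None:
            break                # nothing new is derivable
        known.add(fired[1])
        budget -= 1              # bounded: closure stops when resources run out
    return known

# With a budget of 1 the agent derives r from p and q, but never reaches s:
rules = [(frozenset({"p", "q"}), "r"), (frozenset({"r"}), "s")]
print(bounded_closure({"p", "q"}, rules, budget=1))  # {'p', 'q', 'r'}
```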

    Type-2 Fuzzy Logic for Edge Detection of Gray Scale Images


    A cognitive exploration of the “non-visual” nature of geometric proofs

    Why are geometric proofs (usually) “non-visual”? We asked this question as a way to explore the similarities and differences between diagrams and text (visual thinking versus language thinking). Traditional text-based proofs are considered by many to be more rigorous than diagrams alone. In this paper we focus on human perceptual-cognitive characteristics that may encourage textual modes for proofs because of the ergonomic affordances of text relative to diagrams. We suggest that visual-spatial perception of physical objects, where an object is perceived with greater acuity through foveal vision than through peripheral vision, is similar to attention navigating a conceptual visual-spatial structure. We suggest that attention has foveal-like and peripheral-like characteristics and that textual modes appeal to what we refer to here as foveal-focal attention, an extension of prior work in focused attention.

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Inference Rules in some temporal multi-epistemic propositional logics

    Multi-modal logics are among the best tools developed so far to analyse human reasoning and agents' interactions. Recently, multi-modal logics have found several applications in Artificial Intelligence (AI) and Computer Science (CS) in the attempt to formalise reasoning about the behavior of programs. Modal logics deal with sentences that are qualified by modalities, where a modality is any word that could be added to a statement p to modify its mode of truth. Temporal logics are obtained by joining tense operators to the classical propositional calculus, giving rise to a language that is very effective for describing the flow of time. Epistemic logics are suitable for formalizing reasoning about agents possessing certain knowledge. Combinations of temporal and epistemic logics are particularly effective in describing the interaction of agents through the flow of time. Although not yet fully investigated, this approach has found many fruitful applications, concerned with the development of systems modelling reasoning about knowledge and space, reasoning under uncertainty, multi-agent reasoning, etc. Despite their power, multi-modal languages cannot by themselves handle a changing environment, yet this is exactly what is required in the case of human reasoning, computation, and multi-agent environments. For this purpose, inference rules are a core instrument. So far, research in this field has investigated many modal and superintuitionistic logics; however, for multi-modal logics, not much is known concerning admissible inference rules. In our research we extend the investigation to some multi-modal propositional logics which combine tense and knowledge modalities. To the best of our knowledge, these systems have never been investigated before. In particular, we start by defining our systems semantically; we then prove that these systems enjoy the effective finite model property and are decidable with respect to their admissible inference rules. We then turn our attention to the syntactical side and provide sound and complete axiomatic systems. We conclude the dissertation by introducing the reader to the research we are currently working on. Our original results can be found in [9, 4, 11] (see Appendix A). They have also been presented by the author at several international conferences and schools (see [8, 10, 5, 7, 6] and refer to Appendix B for more details). Our project concerns philosophy, mathematics, AI, and CS. Modern applications of logic in CS and AI often require languages able to represent knowledge about dynamic systems. Multi-modal logics serve these applications very efficiently, and we absorb and develop some of these techniques to represent logical consequences in artificial intelligence and computation.
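    To illustrate the kind of structure such logics are interpreted over, here is a toy model checker for one tense operator (G, "always in the future") and one knowledge operator (K_a), evaluated on an explicit finite model in the spirit of the effective finite model property mentioned above. The encoding is our own simplification, not one of the thesis's systems.

```python
# Toy temporal-epistemic model checker; our own illustration, not the
# axiomatic systems developed in the dissertation.
def holds(model, world, formula):
    """model: dict with 'val' (world -> set of atoms), 'next' (temporal
    successors per world), and 'epi' (agent -> accessibility per world)."""
    kind = formula[0]
    if kind == "atom":
        return formula[1] in model["val"][world]
    if kind == "not":
        return not holds(model, world, formula[1])
    if kind == "and":
        return holds(model, world, formula[1]) and holds(model, world, formula[2])
    if kind == "G":   # true in every temporal successor of this world
        return all(holds(model, v, formula[1]) for v in model["next"][world])
    if kind == "K":   # ("K", agent, phi): true in every world the agent
        return all(holds(model, v, formula[2])  # cannot distinguish from here
                   for v in model["epi"][formula[1]][world])
    raise ValueError(f"unknown connective: {kind}")

model = {"val":  {0: {"p"}, 1: {"p"}},
         "next": {0: [1], 1: []},
         "epi":  {"a": {0: [0, 1], 1: [1]}}}
# Does agent a know, at world 0, that p will always hold?
print(holds(model, 0, ("K", "a", ("G", ("atom", "p")))))  # True
```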

    Federal Rule of Evidence 403: Observations on the Nature of Unfairly Prejudicial Evidence

    The object of this article is to identify what makes evidence unfairly prejudicial. The first part analyzes the language of and the policies behind Rule 403, and demonstrates that the courts' current ad hoc approach has frustrated those policies and prevented the rule from operating as written. Part II analyzes the nature of unfairly prejudicial evidence in light of the policies intended to be advanced by Rule 403; that part concludes that evidence may be considered unfairly prejudicial when it has a tendency to cause the trier of fact to commit an inferential error. The third part describes recent empirical research in cognitive psychology that could help courts identify evidence that tends to induce inferential error. Part IV demonstrates how this research might be applied to the type of evidence most frequently analyzed for unfair prejudice: evidence of other crimes or bad acts. The conclusion makes the modest proposal that the law of evidence pay attention to how people think.

    Artificial Cognition for Social Human-Robot Interaction: An Implementation

    Human–Robot Interaction challenges Artificial Intelligence in many regards: dynamic, partially unknown environments that were not originally designed for robots; a broad variety of situations with rich semantics to understand and interpret; physical interactions with humans that require fine, low-latency yet socially acceptable control strategies; natural and multi-modal communication which mandates common-sense knowledge and the representation of possibly divergent mental models. This article is an attempt to characterise these challenges and to exhibit a set of key decisional issues that need to be addressed for a cognitive robot to successfully share space and tasks with a human. We first identify the needed individual and collaborative cognitive skills: geometric reasoning and situation assessment based on perspective-taking and affordance analysis; acquisition and representation of knowledge models for multiple agents (humans and robots, with their specificities); situated, natural and multi-modal dialogue; human-aware task planning; and human–robot joint task achievement. The article discusses each of these abilities, presents working implementations, and shows how they combine in a coherent and original deliberative architecture for human–robot interaction. Supported by experimental results, we ultimately show how explicit knowledge management, both symbolic and geometric, proves instrumental to richer and more natural human–robot interactions by pushing for pervasive, human-level semantics within the robot's deliberative system.
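    As a loose, hypothetical skeleton of the kind of deliberative pipeline described above, the sketch below wires perspective-taking into per-agent belief stores and a human-aware planner. The component names (KnowledgeBase, SituationAssessment, TaskPlanner) are our own labels for illustration, not the authors' architecture or APIs.

```python
# Hypothetical skeleton, loosely inspired by the abstract above; not the
# authors' implementation.
class KnowledgeBase:
    """Symbolic facts kept per agent, so the robot can represent the
    human's possibly divergent mental model alongside its own."""
    def __init__(self):
        self.beliefs = {"robot": set(), "human": set()}

    def assert_fact(self, agent, fact):
        self.beliefs[agent].add(fact)

class SituationAssessment:
    """Geometric-reasoning stub: perspective-taking decides which facts
    each agent can plausibly perceive, then grounds them symbolically."""
    def update(self, kb, visible_to):
        for agent, facts in visible_to.items():
            for fact in facts:
                kb.assert_fact(agent, fact)

class TaskPlanner:
    """Human-aware planning stub: only delegate steps whose relevant
    facts the human is believed to know about."""
    def plan(self, kb, goal_facts):
        return [("human" if f in kb.beliefs["human"] else "robot", f)
                for f in goal_facts]

kb = KnowledgeBase()
SituationAssessment().update(kb, {"robot": {"mug on table", "book in drawer"},
                                  "human": {"mug on table"}})
print(TaskPlanner().plan(kb, ["mug on table", "book in drawer"]))
# [('human', 'mug on table'), ('robot', 'book in drawer')]
```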