
    Causal Reasoning: Charting a Revolutionary Course for Next-Generation AI-Native Wireless Networks

    Despite the basic premise that next-generation wireless networks (e.g., 6G) will be artificial intelligence (AI)-native, to date, most existing efforts remain either qualitative or incremental extensions of existing "AI for wireless" paradigms. Indeed, creating AI-native wireless networks faces significant technical challenges due to the limitations of data-driven, training-intensive AI. These limitations include the black-box nature of AI models, their curve-fitting nature, which can limit their ability to reason and adapt, their reliance on large amounts of training data, and the energy inefficiency of large neural networks. In response, this article presents a comprehensive, forward-looking vision for building AI-native wireless networks grounded in the emerging field of causal reasoning. Causal reasoning, founded on causal discovery, causal representation learning, and causal inference, can help build explainable, reasoning-aware, and sustainable wireless networks. Towards fulfilling this vision, we first highlight several wireless networking challenges that can be addressed by causal discovery and representation, including ultra-reliable beamforming for terahertz (THz) systems, near-accurate physical twin modeling for digital twins, training data augmentation, and semantic communication. We showcase how incorporating causal discovery can help achieve dynamic adaptability, resilience, and cognition in addressing these challenges. Furthermore, we outline potential frameworks that leverage causal inference to achieve the overarching objectives of future-generation networks, including intent management, dynamic adaptability, human-level cognition, reasoning, and the critical element of time sensitivity.
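The abstract above builds on causal discovery as a foundation. As a rough, hypothetical illustration of the constraint-based idea behind such methods (not the paper's own technique), the sketch below generates synthetic data from a chain X → Y → Z and checks that X and Z, though strongly correlated, become (near-)independent once Y is controlled for via partial correlation; this is the kind of conditional-independence signature a discovery algorithm uses to rule out a direct X → Z edge.

```python
import math
import random

def pearson(a, b):
    # Sample Pearson correlation between two equal-length sequences.
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def partial_corr(a, b, c):
    # Correlation of a and b after controlling for c (first-order partial).
    r_ab, r_ac, r_bc = pearson(a, b), pearson(a, c), pearson(b, c)
    return (r_ab - r_ac * r_bc) / math.sqrt((1 - r_ac**2) * (1 - r_bc**2))

random.seed(0)
n = 5000
x = [random.gauss(0, 1) for _ in range(n)]          # X: exogenous
y = [2 * xi + random.gauss(0, 0.5) for xi in x]     # Y := 2X + noise
z = [-1.5 * yi + random.gauss(0, 0.5) for yi in y]  # Z := -1.5Y + noise

marginal = pearson(x, z)          # strong: X influences Z through Y
conditional = partial_corr(x, z, y)  # near zero: X ⫫ Z given Y
print(abs(marginal), abs(conditional))
```

On this synthetic chain, the marginal correlation is strong while the partial correlation collapses toward zero, which is exactly the test a constraint-based discovery procedure would apply before deleting the X–Z edge.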

    Using resource graphs to represent conceptual change

    We introduce resource graphs, a representation of linked ideas used when reasoning about specific contexts in physics. Our model is consistent with previous descriptions of resources and coordination classes. It can represent mesoscopic scales that are neither knowledge-in-pieces nor large-scale concepts. We use resource graphs to describe several forms of conceptual change: incremental, cascade, wholesale, and dual construction. For each, we give evidence from the physics education research literature to show examples of each form of conceptual change. Where possible, we compare our representation to models used by other researchers. Building on our representation, we introduce a new form of conceptual change, differentiation, and suggest several experimental studies that would help clarify the differences between reform-based curricula. Comment: 27 pages, 14 figures, no tables. Submitted for publication to Physical Review Special Topics - Physics Education Research on March 8, 200

    Towards ending the animal cognition war: a three-dimensional model of causal cognition

    Debates in animal cognition are frequently polarized between the romantic view that some species have human-like causal understanding and the killjoy view that human causal reasoning is unique. These apparently endless debates are often characterized by conceptual confusions and accusations of straw-man positions. What is needed is an account of causal understanding that enables researchers to investigate both similarities and differences in cognitive abilities in an incremental evolutionary framework. Here we outline the ways in which a three-dimensional model of causal understanding fulfills these criteria. We describe how this approach clarifies what is at stake, illuminates recent experiments on both physical and social cognition, and plots a path for productive future research that avoids the romantic/killjoy dichotomy.
    Contents:
    - Introduction
    - Dissecting disagreement
      - Principles of interpretation
      - A big misunderstanding and the conceptual question
    - The conceptual space of causal cognition
      - Causal information
        - Difference-making accounts of causality
        - Geometrical-mechanical accounts
      - Difference-making and geometrical-mechanical aspects of the human concept of causation
      - Understanding causality
      - Parameters of causal cognition
        - a) Sources of causal information
        - b) Integration
        - c) Explicitness
    - From causal cognition to causal understanding
      - A three-dimensional model of causal cognition
      - The evolution of causal cognition and the nature of causal understanding
      - The metrics of the model and future research
    - Conclusion

    Counterfactual Causality from First Principles?

    In this position paper, we discuss three main shortcomings of existing approaches to counterfactual causality from the computer science perspective, and we sketch lines of work to overcome these issues: (1) causality definitions should be driven by a set of precisely specified requirements rather than by specific examples; (2) causality frameworks should support system dynamics; (3) causality analysis should have a well-understood behavior in the presence of abstraction. Comment: In Proceedings CREST 2017, arXiv:1710.0277
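Counterfactual causality of the kind this position paper critiques is usually phrased over structural models: fix the actual situation, intervene on one variable, and ask whether the outcome flips. As a minimal, hypothetical sketch (a standard textbook-style toy, not an example from the paper), the fragment below encodes a fire that occurs if lightning strikes or a match is dropped, then evaluates the counterfactual "had the match not been dropped":

```python
# Toy structural model: fire occurs iff lightning OR a dropped match.
def fire(lightning, match):
    return lightning or match

# Actual world: no lightning, but a match was dropped.
actual = {"lightning": False, "match": True}

factual = fire(**actual)                            # fire did occur
counterfactual = fire(actual["lightning"], False)   # intervene: no match

# The match is a but-for (counterfactual) cause here:
# holding everything else fixed, flipping it flips the outcome.
print(factual, counterfactual)
```

Even this toy shows why the paper's three requirements bite: the definition of "cause" is sensitive to which variables the model exposes (abstraction) and to whether the equations can evolve over time (dynamics).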

    The Drink You Have When You’re Not Having a Drink

    The Architecture of the Mind is itself built on foundations that deserve probing. In this brief commentary I focus on these foundations: Carruthers' conception of modularity, his arguments for thinking that the mind is massively modular in structure, and his view of human cognitive architecture.