
    A structural model interpretation of Wright's NESS test

    Although understanding causation is an essential part of nearly every problem domain, it has resisted formal treatment in the languages of logic, probability, and even statistics. Autonomous artificially intelligent agents need to be able to reason about cause and effect. One approach is to provide the agent with formal, computational notions of causality that enable it to deduce cause-and-effect relationships from observations. During the 1990s, formal notions of causality were pursued within the AI community by many researchers, notably by Judea Pearl. Pearl developed the formal language of structural models for reasoning about causation. Among the problems he addressed in this formalism was one common to both AI and law: the attribution of causal responsibility, or actual causation. Pearl, and then Halpern and Pearl, developed formal definitions of actual causation in the language of structural models. Within the law, the traditional test for attributing causal responsibility is the counterfactual "but-for" test, which asks whether, but for the defendant's wrongful act, the injury complained of would have occurred. This definition conforms to common intuitions regarding causation in most cases, but gives non-intuitive results in more complex situations where two or more potential causes are present. To handle such situations, Richard Wright defined the NESS test. Pearl claims that the structural language is an appropriate language to capture the intuitions that motivate the NESS test. While Pearl's structural language is adequate to formalize the NESS test, a recent result of Hopkins and Pearl shows that the Halpern and Pearl definition fails to do so, and this thesis develops an alternative structural definition to formalize the NESS test.
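    The contrast the abstract draws can be sketched in code. The following is a minimal, illustrative Python model, not Wright's or Pearl's formal machinery; the function names and the classic two-fires overdetermination scenario are assumptions chosen for illustration. Where two fires each suffice to destroy a house, the but-for test fails for each fire, while the NESS test (is the candidate a Necessary Element of some actually-present Sufficient Set?) succeeds.

    ```python
    from itertools import chain, combinations

    def house_burns(causes):
        # Either fire alone is sufficient to destroy the house (overdetermination).
        return "fire_A" in causes or "fire_B" in causes

    def but_for(candidate, actual_causes, effect):
        # But-for test: would the effect have occurred without the candidate?
        return effect(actual_causes) and not effect(actual_causes - {candidate})

    def ness(candidate, actual_causes, effect):
        # NESS test: is the candidate a necessary element of some subset of the
        # actually-present conditions that is sufficient for the effect?
        others = actual_causes - {candidate}
        subsets = chain.from_iterable(
            combinations(others, r) for r in range(len(others) + 1))
        for rest in subsets:
            s = set(rest) | {candidate}
            if effect(s) and not effect(s - {candidate}):
                return True
        return False

    actual = {"fire_A", "fire_B"}
    print(but_for("fire_A", actual, house_burns))  # False: but-for fails here
    print(ness("fire_A", actual, house_burns))     # True: {fire_A} is sufficient alone
    ```

    The non-intuitive result the abstract mentions is exactly the first line of output: removing either fire leaves the house burning, so the but-for test exonerates both, whereas NESS attributes causal responsibility to each.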

    Navigating liability in the age of AI: burden of proof, standard of proof, and causation challenges

    This contribution explores the complex legal issues surrounding AI-related damages from a practical perspective. It will:
    - Analyze how the burden of proof is allocated in cases involving AI-caused damages, and discuss whether traditional legal standards need to be adapted to account for the unique challenges posed by AI technologies.
    - Examine the appropriate standard of proof required to establish liability in AI-related cases, and consider whether a preponderance of the evidence, clear and convincing evidence, or a higher standard should be applied in different scenarios.
    - Explore the difficulties of establishing causation in AI-related damages, and discuss how causation can be attributed to AI systems, especially in cases where multiple parties are involved or where AI operates autonomously.
    The method used in this contribution consists of a comparative analysis of how different jurisdictions in the EU handle liability and proof standards in AI-related cases, highlighting any emerging trends or best practices. These questions open a discussion of potential future developments in AI liability law, considering the rapid advancement of AI technology and how legal standards might need to adapt as AI systems become more sophisticated. By focusing on these practical aspects of AI liability, this research can offer valuable insights into how the legal system can effectively address the challenges posed by AI technology while ensuring fair and just outcomes for all parties involved.

    Actual Causation in CP-logic

    Given a causal model of some domain and a particular story that has taken place in this domain, the problem of actual causation is deciding which of the possible causes for some effect actually caused it. One of the most influential approaches to this problem has been developed by Halpern and Pearl in the context of structural models. In this paper, I argue that this is actually not the best setting for studying this problem. As an alternative, I offer the probabilistic logic programming language of CP-logic. Unlike structural models, CP-logic incorporates the deviant/default distinction that is generally considered an important aspect of actual causation, and it has an explicitly dynamic semantics, which helps to formalize the stories that serve as input to an actual causation problem.
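    The setting the abstract starts from, a causal model plus a story of what actually happened, can be illustrated with a minimal structural-model sketch. The `solve` helper, the variable names, and the bottle-shattering story are illustrative assumptions, not CP-logic and not the full Halpern-Pearl definition; the sketch only shows the intervention machinery on which such definitions are built.

    ```python
    # Endogenous variables are defined by equations over previously computed
    # variables; an intervention overrides an equation with a fixed value.
    def solve(equations, intervention=None):
        intervention = intervention or {}
        values = {}
        for var, f in equations.items():  # evaluated in declaration order
            values[var] = intervention.get(var, f(values))
        return values

    # Story: Suzy and Billy both throw rocks; either hit shatters the bottle.
    equations = {
        "suzy_throws":     lambda v: 1,
        "billy_throws":    lambda v: 1,
        "bottle_shatters": lambda v: int(v["suzy_throws"] or v["billy_throws"]),
    }

    actual = solve(equations)
    counterfactual = solve(equations, {"suzy_throws": 0})
    print(actual["bottle_shatters"])          # 1
    print(counterfactual["bottle_shatters"])  # 1: the effect persists
    ```

    Because the effect persists under the naive single-variable counterfactual, a simple but-for check cannot identify Suzy's throw as an actual cause; that gap is what refined definitions such as Halpern and Pearl's, or the CP-logic account argued for here, are designed to close.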

    A Study of Legal Causation Concerning AI: Taking Hart and Honoré's Theory of Legal Causation as a Starting Point

    In this article, the author examines legal causation concerning artificial intelligence (AI) by applying the theory of legal causation developed by the legal philosopher H. L. A. Hart and the civil-law scholar T. Honoré in Causation in the Law, Second Edition. Legal causation concerning AI is closely connected with legal responsibility for AI, a topic that has attracted much attention in recent years. At issue is the relationship between human acts, such as developing or using AI, and the judgments and directions produced by AI. The author examines these relationships by applying Hart and Honoré's theory of the relation between one human act and another. As a result, three problems emerge at the root of legal causation concerning AI: the ambiguity of the criteria for judgment used by Hart and Honoré's theory of legal causation, the difficulty of deciding whose judgment, a human's or an AI's, is more reliable, and the difficulty of setting standards that AI development must satisfy. Behind these problems lies the destabilization of the normative concept of the "normal", which serves as the criterion for judging whether legal causation exists, when AI is the object of inquiry. This destabilization arises from the difference between the worldview underlying existing theories of legal causation and the worldview on which AI rests; a theory of legal causation that shares AI's worldview therefore needs to be considered.

    Causality re-established

    Causality never gained the status of a "law" or "principle" in physics. Some recent literature has even popularized the false idea that causality is a notion that should be banned from theory. This misconception relies on an alleged universality of the reversibility of the laws of physics, based either on the determinism of classical theory or on the multiverse interpretation of quantum theory, in both cases motivated by mere interpretational requirements for realism of the theory. Here, I show that a properly defined, unambiguous notion of causality is a theorem of quantum theory, which is also a falsifiable proposition of the theory. Such a causality notion appeared in the literature within the framework of operational probabilistic theories. It is a genuinely theoretical notion, corresponding to establishing a definite partial order among events, in the same way as we do by using the future causal cone in Minkowski space. The causality notion is logically completely independent of the misidentified concept of "determinism" and, being a consequence of quantum theory, is ubiquitous in physics. In addition, as classical theory can be regarded as a restriction of quantum theory, causality also holds in the classical case, although the determinism of the theory trivializes it. I conclude by arguing that causality naturally establishes an arrow of time. This implies that the scenario of the "Block Universe" and the connected "Past Hypothesis" are incompatible with causality, and thus with quantum theory: both are doomed to remain mere interpretations and, as such, not falsifiable, similar to the hypothesis of "super-determinism". This article is part of a discussion meeting issue "Foundations of quantum mechanics and their impact on contemporary society".
    Comment: Presented at the Royal Society of London, on 11/12/2017, at the conference "Foundations of quantum mechanics and their impact on contemporary society". To appear in Philosophical Transactions of the Royal Society.

    How much of commonsense and legal reasoning is formalizable? A review of conceptual obstacles

    Fifty years of effort in artificial intelligence (AI) and the formalization of legal reasoning have produced both successes and failures. Considerable success in organizing and displaying evidence and its interrelationships has been accompanied by failure to achieve the original ambition of AI as applied to law: fully automated legal decision-making. The obstacles to formalizing legal reasoning have proved to be the same ones that make the formalization of commonsense reasoning so difficult, and are most evident where legal reasoning has to meld with the vast web of ordinary human knowledge of the world. Underlying many of the problems is the mismatch between the discreteness of symbol manipulation and the continuous nature of imprecise natural language, of degrees of similarity and analogy, and of probabilities.

    Gaps for God?

    Scientific publication.

    The concept of free will as an infinite metatheoretic recursion

    It is argued that the concept of free will, like the concept of truth in formal languages, requires a separation between an object level and a meta-level in order to be consistently defined. The Jamesian two-stage model, which deconstructs free will into the causally open "free" stage with its closure in the "will" stage, is implicitly a move in this direction. However, to avoid the dilemma of determinism, free will additionally requires an infinite regress of causal meta-stages, making free choice a hypertask. We use this model to define free will of the rationalist-compatibilist type. This is shown to provide a natural three-way distinction between quantum indeterminism, freedom, and free will, applicable respectively to artificial intelligence (AI), animal agents, and human agents. We propose that the causal hierarchy in our model corresponds to a hierarchy of Turing uncomputability. Possible neurobiological and behavioral tests to demonstrate free will experimentally are suggested. Ramifications of the model for physics, evolutionary biology, neuroscience, neuropathological medicine, and moral philosophy are briefly outlined.
    Comment: Accepted in INDECS (close to the accepted version).