
    Counterfactual Explanation for Fairness in Recommendation

    Fairness-aware recommendation eliminates discrimination to build trustworthy recommender systems. Explaining the causes of unfair recommendations is critical, as it supports fairness diagnostics and thus secures users' trust in recommendation models. Existing fairness explanation methods suffer high computational burdens due to the large-scale search space and the greedy nature of the explanation search process. Moreover, they perform score-based optimization over continuous values, which is not applicable to discrete attributes such as gender and race. In this work, we adopt the paradigm of counterfactual explanation from causal inference to explore how minimal alterations in explanations change model fairness, abandoning the greedy search for explanations. We use real-world attributes from Heterogeneous Information Networks (HINs) to empower counterfactual reasoning on discrete attributes. We propose Counterfactual Explanation for Fairness (CFairER), which generates attribute-level counterfactual explanations from HINs for recommendation fairness. CFairER conducts off-policy reinforcement learning to seek high-quality counterfactual explanations, with attentive action pruning reducing the search space of candidate counterfactuals. The resulting counterfactual explanations are rational and proximate accounts of model fairness. Extensive experiments demonstrate that our model generates faithful explanations while maintaining favorable recommendation performance.
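
    The abstract's core idea, a minimal set of discrete attribute changes that flips a fairness outcome, can be illustrated without the paper's RL machinery. The following is a hedged sketch, not the authors' CFairER method: a brute-force search for the smallest attribute subset whose removal (zeroing) drives a simple group-fairness gap below a tolerance. The `predict` function, attribute names, and zeroing-as-removal convention are all illustrative assumptions.

    ```python
    from itertools import combinations

    def fairness_gap(scores, groups):
        """Absolute difference in mean predicted score between two groups (0 and 1)."""
        g0 = [s for s, g in zip(scores, groups) if g == 0]
        g1 = [s for s, g in zip(scores, groups) if g == 1]
        return abs(sum(g0) / len(g0) - sum(g1) / len(g1))

    def counterfactual_explanation(predict, users, groups, attrs, tol=0.05):
        """Return the smallest subset of attributes whose removal (zeroing)
        brings the fairness gap below tol -- an attribute-level counterfactual
        explanation of the unfairness -- or None if no subset suffices.

        Subsets are tried in order of increasing size, so the first hit
        is minimal, mirroring the 'minimal alteration' idea in the abstract.
        """
        for size in range(1, len(attrs) + 1):
            for subset in combinations(attrs, size):
                masked = [{k: (0 if k in subset else v) for k, v in u.items()}
                          for u in users]
                scores = [predict(u) for u in masked]
                if fairness_gap(scores, groups) < tol:
                    return subset
        return None
    ```

    The exhaustive enumeration here is exactly the exponential search space the paper's attentive action pruning and off-policy RL are designed to avoid; this sketch only makes the objective concrete.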

    On the Substitution of Identicals in Counterfactual Reasoning

    It is widely held that counterfactuals, unlike attitude ascriptions, preserve the referential transparency of their constituents, i.e., that counterfactuals validate the substitution of identicals when their constituents do. The only putative counterexamples in the literature come from counterpossibles, i.e., counterfactuals with impossible antecedents. Advocates of counterpossibilism, i.e., the view that counterpossibles are not all vacuous, argue that counterpossibles can generate referential opacity. But in order to explain why most substitution inferences into counterfactuals seem valid, counterpossibilists also often maintain that counterfactuals with possible antecedents are transparency‐preserving. I argue that if counterpossibles can generate opacity, then so can ordinary counterfactuals with possible antecedents. Utilizing an analogy between counterfactuals and attitude ascriptions, I provide a counterpossibilist‐friendly explanation for the apparent validity of substitution inferences into counterfactuals. I conclude by suggesting that the debate over counterpossibles is closely tied to questions concerning the extent to which counterfactuals are more like attitude ascriptions and epistemic operators than previously recognized.

    (WP 2018-02) Extending Behavioral Economics’ Methodological Critique of Rational Choice Theory to Include Counterfactual Reasoning

    This paper extends behavioral economics’ realist methodological critique of rational choice theory to include the type of logical reasoning underlying its axiomatic foundations. A purely realist critique ignores Kahneman’s emphasis on how the theory’s axiomatic foundations make it normative. I extend his critique to the theory’s reliance on classical logic, which excludes the concept of possibility employed in counterfactual reasoning. Nudge theory reflects this in employing counterfactual conditionals. This answers the complaint that the Homo sapiens agent conception ultimately reduces to a Homo economicus conception, and also provides grounds for treating Homo sapiens as an adaptive, non-optimizing, reflexive agent.

    Is there a reliability challenge for logic?

    There are many domains about which we think we are reliable. When there is prima facie reason to believe that there is no satisfying explanation of our reliability about a domain given our background views about the world, this generates a challenge to our reliability about the domain or to our background views. This is what is often called the reliability challenge for the domain. In previous work, I discussed the reliability challenges for logic and for deductive inference. I argued for four main claims: First, there are reliability challenges for logic and for deduction. Second, these reliability challenges cannot be answered merely by providing an explanation of how it is that we have the logical beliefs and employ the deductive rules that we do. Third, we can explain our reliability about logic by appealing to our reliability about deduction. Fourth, there is a good prospect for providing an evolutionary explanation of the reliability of our deductive reasoning. In recent years, a number of arguments have appeared in the literature that can be applied against one or more of these four theses. In this paper, I respond to some of these arguments. In particular, I discuss arguments by Paul Horwich, Jack Woods, Dan Baras, Justin Clarke-Doane, and Hartry Field.

    When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks

    Discovering and exploiting causality in deep neural networks (DNNs) is a crucial challenge for understanding and reasoning about causal effects (CE) in an explainable visual model. "Intervention" has been widely used for recognizing a causal relation ontologically. In this paper, we propose a causal inference framework for visual reasoning via do-calculus. To study intervention effects on pixel-level features for causal reasoning, we introduce pixel-wise masking and adversarial perturbation. In our framework, CE is calculated using features in a latent space and the perturbed prediction from a DNN-based model. We further provide a first look into the characteristics of the discovered CE of adversarially perturbed images generated by gradient-based methods (code: https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvImg). Experimental results show that CE is a competitive and robust index for understanding DNNs when compared with conventional methods such as class-activation mappings (CAMs) on the Chest X-Ray-14 dataset for human-interpretable feature (e.g., symptom) reasoning. Moreover, CE holds promise for detecting adversarial examples, as it possesses distinct characteristics in the presence of adversarial perturbations. Comment: The camera-ready version changed the title; "When Causal Intervention Meets Adversarial Examples and Image Masking for Deep Neural Networks" is the v3 official paper title in the IEEE proceedings; please use it in formal references. Accepted at IEEE ICIP 2019. PyTorch code released at https://github.com/jjaacckkyy63/Causal-Intervention-AE-wAdvIm
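
    The intervention idea in the abstract, compare a model's prediction before and after a do-style manipulation of the input, can be sketched minimally. This is an illustrative toy, not the paper's framework: a plain scoring function stands in for the DNN, zero-masking of selected pixels stands in for the intervention, and the absolute prediction change stands in for CE.

    ```python
    def causal_effect(model, image, mask):
        """Toy causal effect of a pixel intervention: pixels selected by the
        boolean mask are set to zero (a do-style intervention), and CE is
        the absolute change in the model's prediction."""
        intervened = [0.0 if m else px for px, m in zip(image, mask)]
        return abs(model(image) - model(intervened))
    ```

    With `sum` as a stand-in model, masking the first pixel of `[1, 2, 3, 4]` yields a causal effect of 1: the larger the prediction shift under the intervention, the more causally relevant the masked pixels are to the model's output.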

    Ceteris Paribus Laws

    Laws of nature take center stage in philosophy of science. Laws are usually believed to stand in a tight conceptual relation to many important key concepts such as causation, explanation, confirmation, determinism, counterfactuals, etc. Traditionally, philosophers of science have focused on physical laws, which were taken to be at least true, universal statements that support counterfactual claims. But, although this claim about laws might be true with respect to physics, laws in the special sciences (such as biology, psychology, economics, etc.) appear to have—maybe not surprisingly—different features than the laws of physics. Special science laws—for instance, the economic law “Under the condition of perfect competition, an increase of demand of a commodity leads to an increase of price, given that the quantity of the supplied commodity remains constant” and, in biology, Mendel's Laws—are usually taken to “have exceptions”, to be “non-universal” or “to be ceteris paribus laws”. How and whether the laws of physics and the laws of the special sciences differ is one of the crucial questions motivating the debate on ceteris paribus laws. Another major, controversial question concerns the determination of the precise meaning of “ceteris paribus”. Philosophers have attempted to explicate the meaning of ceteris paribus clauses in different ways. The question of meaning is connected to the problem of empirical content, i.e., the question whether ceteris paribus laws have non-trivial and empirically testable content. Since many philosophers have argued that ceteris paribus laws lack empirically testable content, this problem constitutes a major challenge to a theory of ceteris paribus laws.

    Subjective Causality and Counterfactuals in the Social Sciences

    The article explores the role that subjective evidence of causality and associated counterfactuals and counterpotentials might play in the social sciences where comparative cases are scarce. This scarcity rules out statistical inference based upon frequencies and usually invites in-depth ethnographic studies. Thus, if causality is to be preserved in such situations, a conception of ethnographic causal inference is required. Ethnographic causality inverts the standard statistical concept of causal explanation in observational studies, whereby comparison and generalization, across a sample of cases, are both necessary prerequisites for any causal inference. Ethnographic causality allows, in contrast, for causal explanation prior to any subsequent comparison or generalization.