    (WP 2016-05) Hodgson, Cumulative Causation, and Reflexive Economic Agents

    This paper examines Geoff Hodgson’s interpretation of Veblen in agency-structure terms and argues that it produces a conception of reflexive economic agents. It then sets out an account of cumulative causation processes using this reflexive agent conception, modeling them as a two-part causal process, one part involving a linear causal relation and the other a circular causal relation. The paper compares the reflexive agent conception to the standard expected utility conception of economic agents, and argues that on a cumulative causation view of the world the completeness assumption essential to the standard view of rationality cannot be applied. The final discussion addresses the nature of the choice behavior of reflexive economic agents, uses the thinking of Amartya Sen and Herbert Simon to frame how agents might approach choice with regard to each of the two parts of cumulative causal processes, and closes with brief comments on behavioral economics’ understanding of reference dependence and position adjustment.
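
    For context, the completeness assumption referred to above is the standard axiom on the preference relation underlying expected utility theory; the formulation below is the generic textbook statement, not the paper’s own notation.

```latex
% Completeness axiom on a preference relation \succsim over a choice set X
% (standard textbook statement, given here only for context):
\forall x, y \in X:\quad x \succsim y \;\lor\; y \succsim x
% The paper's claim is that, in a world of cumulative causation, an agent need
% not be able to form such a ranking over every pair of alternatives.
```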

    Inference, Explanation, and Asymmetry

    Explanation is asymmetric: if A explains B, then B does not explain A. Traditionally, the asymmetry of explanation was thought to favor causal accounts of explanation over their rivals, such as those that take explanations to be inferences. In this paper, we develop a new inferential approach to explanation that outperforms causal approaches in accounting for the asymmetry of explanation.
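
    Schematically, the asymmetry claim at issue can be written as follows (the notation is illustrative, not taken from the paper).

```latex
% Asymmetry of explanation, stated schematically:
% if A explains B, then B does not explain A.
\forall A, B:\quad \mathrm{Explains}(A, B) \;\rightarrow\; \neg\,\mathrm{Explains}(B, A)
```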

    Quantum causal models, faithfulness and retrocausality

    Wood and Spekkens (2015) argue that any causal model explaining the EPRB correlations and satisfying no-signalling must also violate the assumption that the model faithfully reproduces the statistical dependences and independences, a so-called "fine-tuning" of the causal parameters; this includes, in particular, retrocausal explanations of the EPRB correlations. I consider this analysis with a view to enumerating the possible responses an advocate of retrocausal explanations might propose. I focus on the response of Näger (2015), who argues that the central ideas of causal explanations can be saved if one accepts the possibility of a stable fine-tuning of the causal parameters. I argue that, in light of this view, a violation of faithfulness does not necessarily rule out retrocausal explanations of the EPRB correlations, although it certainly constrains such explanations. I conclude by considering some possible consequences of this type of response for retrocausal explanations.
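
    For reference, the faithfulness assumption at stake is usually stated in the following generic causal-modelling form; this is the standard definition, not Wood and Spekkens’ exact formulation.

```latex
% Faithfulness (generic statement): every conditional independence that holds
% in the observed distribution P is entailed by the causal structure G.
(X \perp\!\!\!\perp Y \mid Z) \text{ in } P
\;\;\Longrightarrow\;\;
X \text{ is d-separated from } Y \text{ by } Z \text{ in } G
% A "fine-tuned" model violates this: its parameters conspire to produce
% independences (e.g. no-signalling) that the causal structure alone does not entail.
```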

    Imprecise Probability and Chance

    Understanding probabilities as something other than point values (e.g., as intervals) has often been motivated by the need to find more realistic models for degree of belief, and in particular the idea that degree of belief should have an objective basis in “statistical knowledge of the world.” I offer here another motivation growing out of efforts to understand how chance evolves as a function of time. If the world is “chancy” in that there are non-trivial, objective, physical probabilities at the macro-level, then the chance of an event e that happens at a given time can be understood as evolving over time, reaching one when e occurs; whether it goes to one continuously or not is left open. Discontinuities in such chance trajectories can have surprising and troubling consequences for probabilistic analyses of causation and accounts of how events occur in time. This, coupled with the compelling evidence for quantum discontinuities in chance’s evolution, gives rise to a “(dis)continuity bind” with respect to chance probability trajectories. I argue that a viable option for circumventing the (dis)continuity bind is to understand the probabilities “imprecisely,” that is, as intervals rather than point values. I then develop and motivate an alternative kind of continuity appropriate for interval-valued chance probability trajectories.
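
    A minimal way to write down the interval-valued alternative is given below; the notation is illustrative, and the paper’s own continuity condition for such trajectories is developed there rather than reproduced here.

```latex
% Interval-valued chance trajectory for an event e: at each time t the chance
% is an interval rather than a point value.
Ch_t(e) = \bigl[\,\underline{c}_t(e),\; \overline{c}_t(e)\,\bigr] \subseteq [0,1],
\qquad \underline{c}_t(e) \le \overline{c}_t(e)
% The familiar point-valued picture is the special case \underline{c}_t = \overline{c}_t.
```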

    Patterns, Information, and Causation

    This paper articulates an account of causation as a collection of information-theoretic relationships between patterns instantiated in the causal nexus. I draw on Dennett’s account of real patterns to characterize potential causal relata as patterns with specific identification criteria and noise tolerance levels, and actual causal relata as those patterns instantiated at some spatiotemporal location in the rich causal nexus as originally developed by Salmon. I develop a representation framework using phase space to precisely characterize causal relata, including their degree of counterfactual robustness, causal profiles, causal connectivity, and privileged grain size. By doing so, I show how the philosophical notion of causation can be rendered in a format that is amenable to direct application of mathematical techniques from information theory, such that the resulting informational measures are causal informational measures. This account provides a metaphysics of causation that supports interventionist semantics and causal modeling and discovery techniques.
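
    The informational measures in question are standard information-theoretic quantities applied to causal relata. As a toy illustration of the kind of measure involved (a minimal sketch of my own, not the paper’s framework or its phase-space representation), the snippet below computes the mutual information between a coarse-grained “cause pattern” variable and an “effect pattern” variable from their joint distribution.

```python
import numpy as np

def mutual_information(joint):
    """Mutual information I(C; E) in bits from a joint probability table.

    joint[i, j] = p(cause-pattern state i, effect-pattern state j).
    A generic information-theoretic quantity, used here only to illustrate the
    kind of measure the paper applies to causal relata.
    """
    joint = np.asarray(joint, dtype=float)
    joint = joint / joint.sum()              # normalise to a probability table
    p_c = joint.sum(axis=1, keepdims=True)   # marginal over cause patterns
    p_e = joint.sum(axis=0, keepdims=True)   # marginal over effect patterns
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, joint / (p_c * p_e), 1.0)
    return float(np.sum(joint * np.log2(ratio)))

# Toy example: a cause pattern that strongly (but not perfectly) fixes an effect pattern.
joint = [[0.45, 0.05],
         [0.05, 0.45]]
print(mutual_information(joint))   # ~0.53 bits
```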

    Biological Information, Causality and Specificity - an Intimate Relationship

    In this chapter we examine the relationship between biological information, the key biological concept of specificity, and recent philosophical work on causation. We begin by showing how talk of information in the molecular biosciences grew out of efforts to understand the sources of biological specificity. We then introduce the idea of ‘causal specificity’ from recent work on causation in philosophy, and our own, information theoretic measure of causal specificity. Biological specificity, we argue, is simply the causal specificity of certain biological processes. This, we suggest, means that causal relationships in biology are ‘informational’ relationships simply when they are highly specific relationships. Biological information can be identified with the storage, transmission and exercise of biological specificity. It has been argued that causal relationships should not be regarded as informational relationships unless they are ‘arbitrary’. We argue that, whilst arbitrariness is an important feature of many causal relationships in living systems, it should not be used in this way to delimit biological information. Finally, we argue that biological specificity, and hence biological information, is not confined to nucleic acids but distributed among a wide range of entities and processes.
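
    In related published work, causal specificity is measured, roughly, as the mutual information between interventions on a cause variable and the resulting effect variable. The sketch below illustrates that general idea under simplifying assumptions (discrete cause and effect states and an assumed distribution over interventions); it is not the chapter’s own formulation.

```python
import numpy as np

def causal_specificity(p_effect_given_do, p_do):
    """Mutual information I(do(C); E) between interventions on a cause and its effect.

    p_effect_given_do[i, j] = p(effect = j | do(cause = i))
    p_do[i]                 = probability of performing intervention do(cause = i)
    A sketch of the general idea of an information-theoretic specificity measure,
    not the chapter's own measure.
    """
    p_effect_given_do = np.asarray(p_effect_given_do, dtype=float)
    p_do = np.asarray(p_do, dtype=float)
    joint = p_do[:, None] * p_effect_given_do        # p(do(c) = i, e = j)
    p_e = joint.sum(axis=0, keepdims=True)           # marginal distribution of the effect
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(joint > 0, p_effect_given_do / p_e, 1.0)
    return float(np.sum(joint * np.log2(ratio)))

# A highly specific cause: each intervention (nearly) picks out a distinct effect state.
specific = np.eye(4) * 0.96 + 0.01      # diagonal 0.97, off-diagonal 0.01; rows sum to 1
print(causal_specificity(specific, np.full(4, 0.25)))   # ~1.76 bits (maximum 2 bits here)
```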

    Algorithms of causal inference for the analysis of effective connectivity among brain regions

    In recent years, powerful general algorithms of causal inference have been developed. In particular, in the framework of Pearl’s causality, algorithms of inductive causation (IC and IC*) provide a procedure to determine which causal connections among nodes in a network can be inferred from empirical observations even in the presence of latent variables, indicating the limits of what can be learned without active manipulation of the system. These algorithms can in principle become important complements to established techniques such as Granger causality and Dynamic Causal Modeling (DCM) for analyzing causal influences (effective connectivity) among brain regions. However, their application to dynamic processes has not yet been examined. Here we study how to apply these algorithms to time-varying signals such as electrophysiological or neuroimaging signals. We propose a new algorithm which combines the basic principles of the previous algorithms with Granger causality to obtain a representation of the causal relations suited to dynamic processes. Furthermore, we use graphical criteria to predict dynamic statistical dependencies between the signals from the causal structure. We show how some problems for causal inference from neural signals (e.g., measurement noise, hemodynamic responses, and time aggregation) can be understood in a general graphical approach. Focusing on the effect of spatial aggregation, we show that when causal inference is performed at a coarser scale than the one at which the neural sources interact, results strongly depend on the degree of integration of the neural sources aggregated in the signals, and thus characterize the intra-areal properties more than the interactions among regions. We finally discuss how the explicit consideration of latent processes contributes to understanding Granger causality and DCM as well as to distinguishing functional and effective connectivity.
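
    As a concrete illustration of the Granger-causality ingredient discussed above, the sketch below runs the standard pairwise test from statsmodels on simulated signals in which x drives y with a lag. This shows only the time-series component; the graphical, latent-variable part of the proposed algorithm is not reproduced here.

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

# Simulated signals: y is driven by x two samples earlier, so x should
# Granger-cause y but not the other way around.
rng = np.random.default_rng(0)
n = 500
x = rng.standard_normal(n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.8 * x[t - 2] + 0.1 * rng.standard_normal()

# Convention: the test asks whether the second column Granger-causes the first.
res_x_to_y = grangercausalitytests(np.column_stack([y, x]), maxlag=3, verbose=False)
res_y_to_x = grangercausalitytests(np.column_stack([x, y]), maxlag=3, verbose=False)

for lag in (1, 2, 3):
    p_xy = res_x_to_y[lag][0]["ssr_ftest"][1]   # p-value for "x Granger-causes y"
    p_yx = res_y_to_x[lag][0]["ssr_ftest"][1]   # p-value for "y Granger-causes x"
    print(f"lag {lag}: p(x -> y) = {p_xy:.3g}, p(y -> x) = {p_yx:.3g}")
```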

    Non-causal explanations in physics
