16 research outputs found

    Examining the Continuity between Life and Mind: Is There a Continuity between Autopoietic Intentionality and Representationality?

    A weak version of the life-mind continuity thesis entails that every living system also has a basic mind (with a non-representational form of intentionality). The strong version entails that the same concepts that are sufficient to explain basic minds (with non-representational states) are also central to understanding non-basic minds (with representational states). We argue that recent work on the free energy principle supports the following claims with respect to the life-mind continuity thesis: (i) there is a strong continuity between life and mind; (ii) all living systems can be described as if they had representational states; (iii) the 'as-if representationality' entailed by the free energy principle is central to understanding both basic forms of intentionality and intentionality in non-basic minds. In addition to this, we argue that the free energy principle also renders realism about computation and representation compatible with a strong life-mind continuity thesis (although the free energy principle does not entail computational and representational realism). In particular, we show how representationality proper can be grounded in 'as-if representationality'.

    How (and Why) to Think that the Brain is Literally a Computer

    The relationship between brains and computers is often taken to be merely metaphorical. However, genuine computational systems can be implemented in virtually any media; thus, one can take seriously the view that brains literally compute. But without empirical criteria for what makes a physical system genuinely a computational one, computation remains a matter of perspective, especially for natural systems (e.g., brains) that were not explicitly designed and engineered to be computers. Considerations from real examples of physical computers—both analog and digital, contemporary and historical—make clear what those empirical criteria must be. Finally, applying those criteria to the brain shows how we can view the brain as a computer (probably an analog one at that), which, in turn, illuminates how that claim is both informative and falsifiable.

    Contents, vehicles, and complex data analysis in neuroscience

    The notion of representation in neuroscience has largely been predicated on localizing the components of computational processes that explain cognitive function. On this view, which I call “algorithmic homuncularism,” individual, spatially and temporally distinct parts of the brain serve as vehicles for distinct contents, and the causal relationships between them implement the transformations specified by an algorithm. This view has a widespread influence in philosophy and cognitive neuroscience, and has recently been ably articulated and defended by Shea. Still, I am skeptical about algorithmic homuncularism, and I argue against it by focusing on recent methods for complex data analysis in systems neuroscience. I claim that analyses such as principal components analysis and linear discriminant analysis prevent individuating vehicles as algorithmic homuncularism recommends. Rather, each individual part contributes to a global state space, trajectories of which vary with important task parameters. I argue that, while homuncularism is false, this view still supports a kind of “vehicle realism,” and I apply this view to debates about the explanatory role of representation.
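
    The abstract's central observation can be made concrete with a toy dimensionality-reduction example. The Python sketch below is purely illustrative (it is not code from the paper; the simulated data, variable names, and parameters are all invented): when population activity is driven by shared latent signals, the principal components recovered by PCA are weighted combinations of every unit, so the natural "vehicles" are trajectories through a global state space rather than spatially distinct parts.

```python
# Illustrative sketch (not from the paper): PCA on simulated population
# activity yields global state-space trajectories, not local vehicles.
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_timesteps = 50, 200
t = np.linspace(0, 2 * np.pi, n_timesteps)

# Simulate a population driven by two shared latent signals plus
# per-neuron noise: no single neuron "carries" either signal alone.
latents = np.stack([np.sin(t), np.cos(2 * t)])           # (2, T)
mixing = rng.normal(size=(n_neurons, 2))                 # every neuron mixes both
activity = mixing @ latents + 0.1 * rng.normal(size=(n_neurons, n_timesteps))

# PCA via SVD on mean-centered data: each principal component is a
# weighted combination of *all* neurons, i.e., a global direction.
centered = activity - activity.mean(axis=1, keepdims=True)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
trajectory = u[:, :2].T @ centered                       # (2, T) state-space path

explained = (s[:2] ** 2).sum() / (s ** 2).sum()
print(f"first two PCs explain {explained:.0%} of variance")
print("state-space trajectory shape:", trajectory.shape)
```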

    Two New Doubts about Simulation Arguments

    Various theorists contend that we may live in a computer simulation. David Chalmers in turn argues that the simulation hypothesis is a metaphysical hypothesis about the nature of our reality, rather than a sceptical scenario. We use recent work on consciousness to motivate new doubts about both sets of arguments. First, we argue that if either panpsychism or panqualityism is true, then the only way to live in a simulation may be as brains-in-vats, in which case it is unlikely that we live in a simulation. We then argue that if panpsychism or panqualityism is true, then viable simulation hypotheses are substantially sceptical scenarios. We conclude that the nature of consciousness has wide-ranging implications for simulation arguments.

    Is Intelligence Non-Computational Dynamical Coupling?

    Is the brain really a computer? In particular, is our intelligence a computational achievement: is it because our brains are computers that we get on in the world as well as we do? In this paper I will evaluate an ambitious new argument to the contrary, developed in Landgrebe and Smith (2021a, 2022). Landgrebe and Smith begin with the fact that many dynamical systems in the world are difficult or impossible to model accurately (inter alia, because it is intractable to find exact solutions to the differential equations that describe them—meaning we have to approximate—but at the same time they are such that small differences in starting conditions lead to big differences in final conditions, thwarting accurate approximation). Yet we manage to survive and thrive in a world full of such systems. Landgrebe and Smith argue from these premises that it is not because our brains are computers that we get on as well as we do: instead it is because of the various ways that we dynamically couple with such systems, these couplings themselves impossible to model well enough to emulate in silico. Landgrebe and Smith thus defend a dynamical systems model in the lineage of Gibson (1979), Van Gelder (1995), and Thompson (2007), though their focus is on decisively refuting the computationalist alternatives rather than developing the positive account. Here I will defend the claim that human intelligence is genuinely computational (and that whole brain emulation and other forms of AGI may be possible) against this argument.
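
    The premise about approximation can be illustrated with a standard toy system. The sketch below is not drawn from Landgrebe and Smith's work; the logistic map simply stands in for the kind of dynamical system they discuss. Two trajectories that start a trillionth apart quickly disagree completely, which is why small modelling or measurement errors eventually swamp any long-run prediction.

```python
# Illustrative sketch (not from Landgrebe and Smith): sensitive dependence
# on initial conditions in the chaotic logistic map x_{n+1} = r*x*(1 - x).
r = 4.0                    # parameter value in the fully chaotic regime
x, y = 0.2, 0.2 + 1e-12    # two nearly identical starting conditions

for step in range(1, 61):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if step % 10 == 0:
        # the gap roughly doubles each step until it saturates at order 1
        print(f"step {step:2d}: |x - y| = {abs(x - y):.3e}")
```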

    Panpsychism and AI consciousness

    Significance of neural noise

    The Physicality of Representation

    Representation is typically taken to be importantly separate from its physical implementation. This is exemplified in Marr's three-level framework, widely cited and often adopted in neuroscience. However, the separation between representation and physical implementation is not a necessary feature of information-processing systems. In particular, when it comes to analog computational systems, Marr's representational/algorithmic level and implementational level collapse into a single level. Insofar as analog computation is a better way of understanding neural computation than other notions, Marr's three-level framework must then be amended into a two-level framework. However, far from being a problem or limitation, this sheds light on how to understand physical media as being representational, but without a separate, medium-independent representational level.

    Rethinking Turing’s Test and the Philosophical Implications

    In the 70 years since Alan Turing’s ‘Computing Machinery and Intelligence’ appeared in Mind, there have been two widely-accepted interpretations of the Turing test: the canonical behaviourist interpretation and the rival inductive or epistemic interpretation. These readings are based on Turing’s Mind paper; few seem aware that Turing described two other versions of the imitation game. I have argued that both readings are inconsistent with Turing’s 1948 and 1952 statements about intelligence, and fail to explain the design of his game. I argue instead for a response-dependence interpretation (Proudfoot 2013). This interpretation has implications for Turing’s view of free will: I argue that Turing’s writings suggest a new form of free will compatibilism, which I call response-dependence compatibilism (Proudfoot 2017a). The philosophical implications of rethinking Turing’s test go yet further. It is assumed by numerous theorists that Turing anticipated the computational theory of mind. On the contrary, I argue, his remarks on intelligence and free will lead to a new objection to computationalism.