14,701 research outputs found

    Why Philosophers Should Care About Computational Complexity

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction, Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.
    Comment: 58 pages, to appear in "Computability: Gödel, Turing, Church, and Beyond," MIT Press, 2012. Some minor clarifications and corrections; new references added.

    Computational complexity in the philosophy of mind: unconventional methods to solve the problem of logical omniscience

    The philosophy of mind is traditionally concerned with the study of mental processes, language, the representation of knowledge, and the relation the mind shares with the body; computational complexity theory is concerned with the classification of computationally solvable problems (be it by execution time, storage requirements, etc.). While there are well-established links between computer science in general and the philosophy of mind, many possible solutions to traditional problems in the philosophy of mind have not yet been analyzed through the more specific lens of computational complexity theory. In his paper "Why Philosophers Should Care about Computational Complexity", Scott Aaronson argues that many conventional theories of epistemology and mind implicitly presuppose omniscience (by supposing that knowing base facts means a knower necessarily understands derivative facts); he proposes that computational complexity theory could explain why this is not the case. In this paper, I argue for a theory of mental representation and epistemology compatible with Aaronson's observations on complexity theory, overcoming that presupposition of omniscience.
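The complexity-theoretic point behind the logical-omniscience worry can be made concrete with a toy sketch (not from the paper; the fact names and variable set below are illustrative assumptions). Even when every base fact is "known", checking whether a derivative fact follows by brute-force truth-table search costs time exponential in the number of propositional variables:

```python
from itertools import product

def entails(base_facts, candidate, variables):
    """Check whether the known base facts logically entail a candidate
    derivative fact, by exhaustive truth-table search.

    Each fact is a function from an assignment dict to bool.  The loop
    visits all 2**n assignments, so the *cost* of coming to know the
    consequence grows exponentially in n -- the gap between knowing
    base facts and knowing what they imply.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        # A countermodel: base facts all true, candidate false.
        if all(f(assignment) for f in base_facts) and not candidate(assignment):
            return False
    return True

# Knowing p and (p -> q) entails q -- here verified over 2**2 models;
# with hundreds of variables the same check becomes infeasible,
# even though each base fact is individually "known".
base = [lambda a: a["p"], lambda a: (not a["p"]) or a["q"]]
print(entails(base, lambda a: a["q"], ["p", "q"]))  # True
```

Propositional entailment is coNP-complete in general, so (assuming P ≠ NP) no algorithm avoids this blow-up in the worst case; that is the sense in which bounded agents cannot be logically omniscient.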

    On the computational complexity of ethics: moral tractability for minds and machines

    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative ethics through the lens of computational complexity. First, we introduce computational complexity for the uninitiated reader and discuss how the complexity of ethical problems can be framed within Marr's three levels of analysis. We then study a range of ethical problems based on consequentialism, deontology, and virtue ethics, with the aim of elucidating the complexity associated with the problems themselves (e.g., due to combinatorics, uncertainty, strategic dynamics), the computational methods employed (e.g., probability, logic, learning), and the available resources (e.g., time, knowledge, learning). The results indicate that most problems the normative frameworks pose lead to tractability issues in every category analyzed. Our investigation also provides several insights about the computational nature of normative ethics, including the differences between rule- and outcome-based moral strategies, and the implementation-variance with regard to moral resources. We then discuss the consequences that complexity results have for the prospect of moral machines, in virtue of the trade-off between optimality and efficiency. Finally, we elucidate how computational complexity can be used to inform both philosophical and cognitive-psychological research on human morality by advancing the moral tractability thesis.
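The combinatorial source of the tractability issues the abstract mentions can be illustrated with a minimal sketch (not drawn from the paper; the action names and utility function are invented for illustration). A naive consequentialist evaluator that scores every possible course of action faces a search space exponential in the planning horizon:

```python
from itertools import product

def best_plan(actions, horizon, utility):
    """Exhaustively score every action sequence of the given length and
    return the best one.  The search space has len(actions)**horizon
    plans, so runtime grows exponentially in the horizon -- the
    combinatorics behind outcome-based intractability."""
    best, best_u = None, float("-inf")
    for plan in product(actions, repeat=horizon):
        u = utility(plan)
        if u > best_u:
            best, best_u = plan, u
    return best, best_u

# Toy utility: reward "help" actions, penalize "harm" actions.
utility = lambda plan: sum(1 if a == "help" else -1 for a in plan)
plan, score = best_plan(["help", "harm"], 3, utility)
# 2 actions over 3 steps is 8 plans; 10 actions over 20 steps is 10**20,
# which no physically realizable agent could enumerate.
```

Rule-based strategies, by contrast, can often be checked in time polynomial in the rule set, which is one concrete way to read the paper's contrast between rule- and outcome-based moral strategies.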

    Can biological quantum networks solve NP-hard problems?

    There is a widespread view that the human brain is so complex that it cannot be efficiently simulated by universal Turing machines. During the last decades the question has therefore been raised whether we need to consider quantum effects to explain the imagined cognitive power of a conscious mind. This paper presents a personal view of several fields of philosophy and computational neurobiology in an attempt to suggest a realistic picture of how the brain might work as a basis for perception, consciousness, and cognition. The purpose is to identify and evaluate instances where quantum effects might play a significant role in cognitive processes. Not surprisingly, the conclusion is that quantum-enhanced cognition and intelligence are very unlikely to be found in biological brains. Quantum effects may certainly influence the functionality of various components and signalling pathways at the molecular level in the brain network, like ion ports, synapses, sensors, and enzymes. This might influence the functionality of some nodes and perhaps even the overall intelligence of the brain network, but hardly give it any dramatically enhanced functionality. So, the conclusion is that biological quantum networks can only approximately solve small instances of NP-hard problems. On the other hand, artificial intelligence and machine learning implemented in complex dynamical systems based on genuine quantum networks can certainly be expected to show enhanced performance and quantum advantage compared with classical networks. Nevertheless, even quantum networks can only be expected to efficiently solve NP-hard problems approximately. In the end it is a question of precision: Nature is approximate.
    Comment: 38 pages.
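"Efficiently solving NP-hard problems approximately" has a precise meaning that a short sketch can make concrete (the example is mine, not the paper's): polynomial-time algorithms with a provable bound on how far the answer can be from optimal. The classic instance is the 2-approximation for minimum vertex cover:

```python
def greedy_vertex_cover(edges):
    """2-approximation for minimum vertex cover: scan the edges and,
    whenever an edge is uncovered, add both of its endpoints.

    Every added pair must contain at least one vertex of any optimal
    cover, so the result is at most twice the optimum -- yet the scan
    runs in linear time, while finding the exact minimum is NP-hard.
    """
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# A triangle: any cover needs 2 vertices; the greedy pass also finds 2.
triangle = [(0, 1), (1, 2), (0, 2)]
print(sorted(greedy_vertex_cover(triangle)))
```

The paper's claim about both biological and artificial quantum networks can be read in these terms: the networks may act as fast heuristic approximators of this kind, without ever delivering exact solutions to large NP-hard instances.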

    Against simplicity and cognitive individualism: Nathaniel T. Wilcox

    Neuroeconomics illustrates our deepening descent into the details of individual cognition. This descent is guided by the implicit assumption that the "individual human" is the important "agent" of neoclassical economics. I argue here that this assumption is neither obviously correct, nor of primary importance to human economies. In particular, I suggest that the main genius of the human species lies with its ability to distribute cognition across individuals, and to incrementally accumulate physical and social cognitive artifacts that largely obviate the innate biological limitations of individuals. If this is largely why our economies grow, then we should be much more interested in distributed cognition in human groups, and correspondingly less interested in individual cognition. We should also be much more interested in the cultural accumulation of cognitive artifacts: computational devices and media, social structures, and economic institutions.

    Analytic and Continental Philosophy, Science, and Global Philosophy

    Although there is no consensus on what distinguishes analytic from Continental philosophy, I focus in this paper on one source of disagreement that seems to run fairly deep in dividing these traditions in recent times, namely, disagreement about the relation of natural science to philosophy. I consider some of the exchanges about science that have taken place between analytic and Continental philosophers, especially in connection with the philosophy of mind. In discussing the relation of natural science to philosophy, I employ an analysis of the origins of natural science that has been developed by a number of Continental philosophers. Awareness and investigation of interactions between analytic and Continental philosophers on science, it is argued, might help to foster further constructive engagement between the traditions. In the last section of the paper I briefly discuss the place of natural science in relation to global philosophy, on the basis of what we can learn from analytic/Continental exchanges.

    Street smarts

    A pluralistic approach to folk psychology must countenance the evaluative, regulatory, predictive, and explanatory roles played by attributions of intelligence in social practices across cultures. Building on the work of the psychologist Robert Sternberg and the philosophers Gilbert Ryle and Daniel Dennett, I argue that a relativistic interpretivism best accounts for the many varieties of intelligence that emerge from folk discourse. To be intelligent is to be comparatively good at solving intellectual problems that an interpreter deems worth solving.

    Origin Gaps and the Eternal Sunshine of the Second-Order Pendulum

    The rich experiences of an intentional, goal-oriented life emerge, in an unpredictable fashion, from the basic laws of physics. Here I argue that this unpredictability is no mirage: there are true gaps between life and non-life, mind and mindlessness, and even between functional societies and groups of Hobbesian individuals. These gaps, I suggest, emerge from the mathematics of self-reference, and the logical barriers to prediction that self-referring systems present. Still, a mathematical truth does not imply a physical one: the universe need not have made self-reference possible. It did, and the question then is how. In the second half of this essay, I show how a basic move in physics, known as renormalization, transforms the "forgetful" second-order equations of fundamental physics into a rich, self-referential world that makes possible the major transitions we care so much about. While the universe runs in assembly code, the coarse-grained version runs in LISP, and it is from that coarse-grained world that aim and intention grow.
    Comment: FQXI Prize Essay 2017. 18 pages, including an afterword on Ostrogradsky's Theorem and an exchange with John Bova, Dresden Craig, and Paul Livingston.

    The imperfect observer: Mind, machines, and materialism in the 21st century

    The dualist/materialist debates about the nature of consciousness are based on the assumption that an entirely physical universe must ultimately be observable by humans (with infinitely advanced tools). Thus the dualists claim that anything unobservable must be non-physical, while the materialists argue that in theory nothing is unobservable. However, there may be fundamental limitations in the power of human observation, no matter how well aided, that greatly curtail our ability to know and observe even a fully physical universe. This paper presents arguments to support the model of an inherently limited observer and explores the consequences of this view.