
    Doing, Feeling, Meaning And Explaining

    It is “easy” to explain doing and “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will be not only hard but impossible. Explaining meaning will prove almost as hard, because meaning is a hybrid of know-how and what it feels like to know how.

    Minds, Brains and Turing

    Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you really cannot ask for anything more. Or can you? Neither Turing modelling nor any other kind of computational or dynamical modelling will explain how or why cognizers feel.

    Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling

    The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble)

    Zen and the Art of Explaining the Mind [Review of Shanahan M. (2010) Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press.]

    The “global workspace” model would explain our performance capacity if it could actually be shown to generate our performance capacity. (So far it is still just a promissory note.) That would solve the “easy” problem. But that still would not explain how and why it generates consciousness (if it does). That’s a rather harder problem.

    What It Feels Like To Hear Voices: Fond Memories of Julian Jaynes

    Julian Jaynes's profound humanitarian convictions not only prevented him from going to war, but would have prevented him from ever kicking a dog. Yet according to his theory, not only are language-less dogs unconscious, but so too were the speaking/hearing Greeks in the Bicameral Era, when they heard gods' voices telling them what to do rather than thinking for themselves. I argue that to be conscious is to be able to feel, and that all mammals (and probably lower vertebrates and invertebrates too) feel, hence are conscious. Julian Jaynes's brilliant analysis of our concepts of consciousness nevertheless keeps inspiring ever more inquiry and insight into the age-old mind/body problem and its relation to cognition and language.

    On the Matter of Robot Minds

    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed, it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally investigate whether the folk think that certain (hypothetical) robots made of silicon and steel would have the same conscious states as certain familiar biological beings with the same patterns of dispositions to peripheral behavior as the robots. Our findings provide evidence that the folk largely reject the view that silicon-based robots would have the sensations that they, the folk, attribute to the biological beings in question.

    Autonomous Weapons and the Nature of Law and Morality: How Rule-of-Law-Values Require Automation of the Rule of Law

    While Autonomous Weapons Systems have obvious military advantages, there are prima facie moral objections to using them. By way of general reply to these objections, I point out similarities between the structure of law and morality on the one hand and of automata on the other. I argue that these similarities, plus the fact that automata can be designed to lack the biases and other failings of humans, require us to automate the formulation, administration, and enforcement of law as much as possible, including the elements of law and morality that are operated by combatants in war. I suggest that, ethically speaking, deploying a legally competent robot in some legally regulated realm is not much different from deploying a more or less well-armed, vulnerable, obedient, or morally discerning soldier or general into battle, a police officer onto patrol, or a lawyer or judge into a trial. All feature automaticity in the sense of deputation to an agent we do not then directly control. Such relations are well understood and well regulated in morality and law, so there is not much that is philosophically challenging in having robots be some of these agents, excepting the implications of the limits of robot technology at a given time for responsible deputation. I then consider this proposal in light of the differences between two conceptions of law, distinguished by whether each sees law as unambiguous rules that are inherently uncontroversial in each application, and I consider the prospects for robotizing law on each conception, and likewise the prospects for robotizing moral theorizing and moral decision-making. Finally, I identify certain elements of law and morality, noted by the philosopher Immanuel Kant, in which robots can participate only once they are able to set ends and emotionally invest in their attainment. One conclusion is that while affectless autonomous devices might be fit to rule us, they would not be fit to vote with us. For voting is a process for summing felt preferences, and affectless devices would have none to weigh into the sum. Since they don't care which outcomes obtain, they don't get to vote on which ones to bring about.

    Artificial Intelligence Through the Eyes of the Public

    Artificial Intelligence is becoming a popular field in computer science. In this report we explored its history, its major accomplishments, and the visions of its creators. We looked at how Artificial Intelligence experts influence reporting and designed a survey to gauge public opinion. We also examined expert predictions concerning the future of the field, as well as media coverage of its recent accomplishments. These results were then used to explore the links between expert opinion, public opinion, and media coverage.