16 research outputs found

    Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling

    Get PDF
    The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground some, at least, of the model's words, in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble)

    Zen and the Art of Explaining the Mind [Review of Shanahan M. (2010) Embodiment and the Inner Life: Cognition and Consciousness in the Space of Possible Minds. Oxford University Press.]

    No full text
    The “global workspace” model would explain our performance capacity if it could actually be shown to generate our performance capacity. (So far it is still just a promissory note.) That would solve the “easy” problem. But that still would not explain how and why it generates consciousness (if it does). That’s a rather harder problem.

    Minds, Brains and Turing

    Get PDF
    Turing set the agenda for (what would eventually be called) the cognitive sciences. He said, essentially, that cognition is as cognition does (or, more accurately, as cognition is capable of doing): Explain the causal basis of cognitive capacity and you’ve explained cognition. Test your explanation by designing a machine that can do everything a normal human cognizer can do – and do it so veridically that human cognizers cannot tell its performance apart from a real human cognizer’s – and you really cannot ask for anything more. Or can you? Neither Turing modelling nor any other kind of computational or dynamical modelling will explain how or why cognizers feel.

    Doing, Feeling, Meaning And Explaining

    Get PDF
    It is “easy” to explain doing, “hard” to explain feeling. Turing has set the agenda for the easy explanation (though it will be a long time coming). I will try to explain why and how explaining feeling will not only be hard, but impossible. Explaining meaning will prove almost as hard, because meaning is a hybrid of know-how and what it feels like to know how.

    Which symbol grounding problem should we try to solve?

    Get PDF
    Floridi and Taddeo propose a condition of “zero semantic commitment” for solutions to the grounding problem, and a solution to it. I argue briefly that their condition cannot be fulfilled, not even by their own solution. After a look at Luc Steels' very different competing suggestion, I suggest that we need to re-think what the problem is and what role the ‘goals’ in a system play in formulating the problem. On the basis of a proper understanding of computing, I come to the conclusion that the only sensible grounding problem is how we can explain and reproduce the behavioral ability and function of meaning in artificial computational agents.

    Metrics and benchmarks in human-robot interaction: Recent advances in cognitive robotics

    Get PDF
    Robots are playing an increasingly important role in human social life, which requires them to be able to behave appropriately to the context of interaction so as to create a successful long-term human-robot relationship. A major challenge in developing intelligent systems that could enhance the interactive abilities of robots is defining clear metrics and benchmarks for the different aspects of human-robot interaction, such as human and robot skills and performance, which could facilitate comparison between systems and avoid application-biased evaluations based on particular measures. The rationale for evaluating robotic systems through metrics and benchmarks, together with some recent frameworks and technologies that could endow robots with advanced cognitive and communicative abilities, is discussed in this technical report, which covers the outcome of our recent workshop, Towards Intelligent Social Robots: Current Advances in Cognitive Robotics, held in conjunction with the 15th IEEE-RAS Humanoids Conference, Seoul, South Korea, 2015 (https://intelligent-robots-ws.ensta-paristech.fr/). Additionally, a summary of an interactive discussion session between the workshop participants and the invited speakers about different issues related to cognitive robotics research is reported.

    From knowing how to knowing that: Acquiring categories by word of mouth

    Get PDF
    ABSTRACT: Nature is only interested in know-how, not "know-that": Foraging, feeding, fleeing, fledging, etc. So if know-how were all we had, then naturalizing epistemology would be easy (but neither epistemology, nor even language would have fledged). So is it enough just to add that knowing facts and formulas is part of the cognitive competence subserving our know-how? The answer may be a bit subtler than that, because the evolution of sociality and language has itself "commodified" knowledge, so that acquiring a fact can be as much of an adaptive imperative as acquiring a fruit. But there is a bootstrapping problem, getting here from there: Acquiring facts cannot become like acquiring fruit until we have language. So it's down to the origins and adaptive value of language. Here is a hypothesis: Categorization is, at bottom, know-how: It's knowing what's the right thing to do with the right kind of thing (what to feed, flee or fledge, and what not) in order to survive, reproduce, and beat the competition. But if categories are based on our practical know-how, then the ones we already have can also be named (another case of know-how). And if categories can be named, then still other categories (that you have but I haven't, yet) can be described, even defined (for me, by you), by stringing those names into propositions with truth values. This is the capacity that sets our own species apart from all others: Every species that can learn can acquire categories by trial and error from direct sensorimotor experience, detecting the invariant sensorimotor features and rules that reliably distinguish the category members from the nonmembers. But only our species can also acquire categories from hearsay. And that not only opens up a vast wealth of potential categories, all the way from the practical to the platonic: more importantly, making all those invariant features and rules explicit and communicable saves us a lot of time, effort and risk in acquiring our adaptive know-how -- enough to have radically altered the brains of our ancestors at least 100,000 years ago, and turned them into us. It also made possible that form of distributed, collaborative, collective cognition we call culture. Philosophers have long worried about the origin of knowledge: What do we know, and how do we know it? Knowing-How vs. Knowing-That. What we know consists of two kinds of things: (1) Knowing-How, which is the things we know how to do, and (2) Knowing-That, which is the things that we believe to be true (when they are indeed true). Strictly speaking, knowing-that is itself just a special case of knowing-how, in the sense that we can state verbally the propositions that we take to be true and we can also state that they are true. Being able to do that is itself a form of know-how. If you reply that underlying that special verbal know-how is something further -- say, that it also includes some information that we must possess -- rather than merely consisting of our ability to state something -- then it has to be pointed out, symmetrically, that information has to be possessed in order to have ordinary know-how too: It's just that that information is (usually) not as explicit when it underlies our ability to do something as it is when it is formulated as a proposition, and when the something we need to do is to state (and perhaps justify) that proposition.

    The Qualitative Limit of Quantitative Models [O LIMITE QUALITATIVO DE MODELOS QUANTITATIVOS]

    Get PDF
    This article discusses the qualitative limitation of mathematical and computational models. By “qualitative” is meant not a fuzzy linguistic description or a description without mathematics, but rather the lived experience of subjective qualities and the extension of this concept to the materiality of things, as opposed to their structural relations. The “functionalist spectrum” is explored didactically, in which different philosophical positions on the nature of consciousness are presented. This analysis opposes, on one side, behaviorist, functionalist, and mechanistic views, and on the other, mentalist, substantialist, and qualitativist conceptions. Different answers are thus mapped to the question of whether a machine can be conscious.