
    Artificial intelligence and phenomenal consciousness: how close are we to conscious machines?

    Consciousness is a poorly understood phenomenon of biological systems of a certain complexity, and some maintain that it may come to be a property of certain artificial organisms. Some authors believe that it can emerge from the physical substrate, or that it is an illusion or a mere epiphenomenon, so that, in theory, nothing prevents it from being instantiated once the mechanisms that give rise to it are understood. For others, myself included, the instantiation of consciousness in artificial organisms is not possible, either because they consider it irreducible to the physical or because they situate it on a transcendental plane. In this work, in addition to general considerations intended to situate the subject, I focus on the concept of phenomenal consciousness, the so-called "hard problem": the problem of how certain neural activities appear internally as subjective experience, as qualia, in the light of different theories from various fields of science. Beyond references to the main metaphysical theories, I discuss in some detail the most relevant specific theories of consciousness, analyse models and implementations proposed by Artificial Intelligence (AI) and Artificial Consciousness (AC), and discuss to what extent progress has, or has not, been made in the simulation and instantiation of consciousness in artificial organisms, and what the main objections to its instantiation and characterisation are. 
Finally, some conclusions are drawn that attempt to answer the question raised in the title, starting from the idea that phenomenal consciousness is not information processing, from an a priori intuition that it is not an emergent property of the physical substrate, and from the view that it is perhaps only possible in certain biological organisms.

    Concepts enacted: confronting the obstacles and paradoxes inherent in pursuing a scientific understanding of the building blocks of human thought

    This thesis confronts a fundamental shortcoming in cognitive science research: a failure to be explicit about the theory of concepts underlying cognitive science research, and a resulting failure to justify that theory philosophically or otherwise. It demonstrates how most contemporary debates over theories of concepts divide over whether concepts are best understood as (mental) representations or as non-representational abilities. It concludes that there can be no single correct ontology, and that both perspectives are logically necessary. It details three critical distinctions that are frequently neglected: between concepts as we possess and employ them non-reflectively, and concepts as we reflect upon them; between the private (subjective) and public (inter-subjective) aspects of concepts; and between concepts as approached from a realist versus an anti-realist perspective. Metaphysical starting points fundamentally shape conclusions. The main contribution of this thesis is a pragmatic, meticulously detailed, and distinctive account of concepts in terms of their essential nature, core properties, and context of application. This is done within the framework of Peter GĂ€rdenfors’ conceptual spaces theory of concepts, which is offered as a bridging account, best able to tie existing theories together into one framework. A set of extensions to conceptual spaces theory, called the unified conceptual space theory, is offered as a means of pushing GĂ€rdenfors’ theory in a more algorithmically amenable and empirically testable direction. The unified conceptual space theory describes how all of an agent’s many different conceptual spaces, as described by GĂ€rdenfors, are mapped together into one unified space of spaces, and how an analogous process happens at the societal level. The unified conceptual space theory is put to work offering a distinctive account of the co-emergence of concepts and experience out of a circularly causal process. 
Finally, an experimental application of the theory is presented, in the form of a simple computer program.
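The core algorithmic idea in GĂ€rdenfors' conceptual spaces theory — concepts as convex regions of a similarity space, with categorization by distance to a prototype — can be sketched as follows. This is a minimal illustration, not the program described in the thesis; the quality dimensions, prototype coordinates, and metric are invented for the example.

```python
import math

# Each concept is represented by a prototype point in a quality space.
# Here the space has two illustrative dimensions, hue and size, both
# scaled to [0, 1]. These coordinates are assumptions for the sketch.
PROTOTYPES = {
    "cherry": (0.95, 0.10),   # red-ish hue, small
    "banana": (0.17, 0.35),   # yellow-ish hue, medium
    "melon":  (0.30, 0.80),   # green-ish hue, large
}

def distance(p, q):
    """Euclidean distance between two points in the quality space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def categorize(point):
    """Assign a point to the concept whose prototype is nearest.

    Nearest-prototype categorization partitions the space into a
    Voronoi tessellation, so each concept's region is convex --
    the geometric property GĂ€rdenfors takes to be criterial of
    natural concepts.
    """
    return min(PROTOTYPES, key=lambda name: distance(point, PROTOTYPES[name]))

print(categorize((0.90, 0.15)))  # falls in the "cherry" region
```

The convexity guarantee is what makes the representational and ability-based readings meet: the same region can be described as a stored structure (the prototype) or as a discrimination ability (the induced decision boundary).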

    Robotic Specification of the Non-Conceptual Content of Visual Experience

    Standard, linguistic means of specifying the content of mental states do so by expressing the content in question. Such means fail when it comes to capturing non-conceptual aspects of visual experience, since no linguistic expression can adequately express such content. One alternative is to use depictions: images that either evoke (reproduce in the recipient) or refer to the content of the experience. Practical considerations concerning the generation and integration of such depictions argue in favour of a synthetic approach: the generation of depictions through the use of an embodied, perceiving and acting agent, either virtual or real. This paper takes the first steps in an investigation of how one might use a robot to specify the non-conceptual content of the visual experience of a (hypothetical) organism that the robot models.