9 research outputs found

    On the Claim that a Table-Lookup Program Could Pass the Turing Test

    The claim has often been made that passing the Turing Test would not be sufficient to prove that a computer program was intelligent, because a trivial program could do it, namely, the “Humongous-Table (HT) Program”, which simply looks up in a table what to say next. This claim is examined in detail. Three ground rules are argued for: (1) that the HT program must be exhaustive, and not based on some vaguely imagined set of tricks; (2) that the HT program must not be created by some set of sentient beings enacting responses to all possible inputs; (3) that in the current state of cognitive science it must be an open possibility that a computational model of the human mind will be developed that accounts for at least its nonphenomenological properties. Given ground rule 3, the HT program could simply be an “optimized” version of some computational model of a mind, created via the automatic application of program-transformation rules [thus satisfying ground rule 2]. Therefore, whatever mental states one would be willing to impute to an ordinary computational model of the human psyche one should be willing to grant to the optimized version as well. Hence no one could dismiss out of hand the possibility that the HT program was intelligent. This conclusion is important because the Humongous-Table Program Argument is the only argument ever marshalled against the sufficiency of the Turing Test, if we exclude arguments that cognitive science is simply not possible.
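    The core of the HT idea can be illustrated with a toy sketch (all table entries and function names here are invented for illustration): the program keys a table on the exact conversation so far and emits a canned reply. The paper's point is that an exhaustive version of such a table, covering every bounded-length conversation, is astronomically large but still a well-defined program.

```python
# Toy sketch of a Humongous-Table (HT) program. A real HT program would
# enumerate every possible conversation history up to some bounded length;
# this table holds only a couple of entries to show the mechanism.

HT_TABLE = {
    ("Hello.",): "Hi there. How are you?",
    ("Hello.", "Fine, thanks. And you?"): "Quite well, thank you.",
}

def ht_reply(user_inputs):
    """Return the canned response for the exact sequence of inputs so far."""
    # An exhaustive table would have an entry for every history; this toy
    # version falls back on a default when a history is missing.
    return HT_TABLE.get(tuple(user_inputs), "I don't follow.")
```

    Note that nothing in the lookup depends on how the table was produced, which is what lets the paper substitute an automatically "optimized" computational model for a hand-enacted table.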

    Decision-Making From the Animal Perspective: Bridging Ecology and Subjective Cognition

    Organisms have evolved to trade off priorities among various needs, such as growth, survival, and reproduction. In naturally complex environments this incurs high computational costs. Models exist for several types of decisions, e.g., optimal foraging or life history theory. However, most models ignore proximate complexities and infer simple rules specific to each context. They try to deduce what the organism must do, but do not provide a mechanistic explanation of how it implements decisions. We posit that the underlying cognitive machinery cannot be ignored. From the point of view of the animal, the fundamental problems are which contexts to choose and which stimuli require a response to achieve a specific goal (e.g., homeostasis, survival, reproduction). This requires a cognitive machinery enabling the organism to make predictions about the future and behave autonomously. Our simulation framework includes three essential aspects: (a) the focus on the autonomous individual, (b) the need to limit and integrate information from the environment, and (c) the importance of goal-directed rather than purely stimulus-driven cognitive and behavioral control. The resulting models integrate cognition, decision-making, and behavior in the whole phenotype, which may include the genome, physiology, hormonal system, perception, emotions, motivation, and cognition. We conclude that the fundamental state is the global organismic state, which includes both physiology and the animal's subjective “mind”. The approach provides an avenue for evolutionary understanding of subjective phenomena and self-awareness as evolved mechanisms for adaptive decision-making in natural environments.
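    The three aspects (a)–(c) can be sketched as a minimal agent loop. This is not the authors' model; it is a hypothetical illustration in which the state variables, cue names, and thresholds are all invented: the agent carries one global internal state, filters its perception to relevant cues, and chooses actions by their predicted effect on internal needs rather than by the strongest stimulus alone.

```python
from dataclasses import dataclass

@dataclass
class OrganismicState:
    # One "global organismic state" combining physiological variables;
    # values in [0, 1], where low values indicate an unmet need.
    energy: float = 1.0
    safety: float = 1.0

class Agent:
    """(a) An autonomous individual with its own internal state."""
    def __init__(self):
        self.state = OrganismicState()

    def perceive(self, environment):
        # (b) Limit and integrate information: attend only to cues the
        # agent's goals make relevant, ignoring everything else.
        return {k: v for k, v in environment.items() if k in ("food", "predator")}

    def act(self, cues):
        # (c) Goal-directed control: the same cue leads to different
        # behavior depending on the current internal state.
        if self.state.safety < 0.5 and cues.get("predator"):
            return "flee"
        if self.state.energy < 0.5 and cues.get("food"):
            return "forage"
        return "explore"
```

    The design choice worth noting is that action selection reads the internal state first, so behavior is driven by predicted need satisfaction rather than by the stimulus alone.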

    Rethinking Turing’s Test and the Philosophical Implications

    © 2020, Springer Nature B.V. In the 70 years since Alan Turing’s ‘Computing Machinery and Intelligence’ appeared in Mind, there have been two widely accepted interpretations of the Turing test: the canonical behaviourist interpretation and the rival inductive or epistemic interpretation. These readings are based on Turing’s Mind paper; few seem aware that Turing described two other versions of the imitation game. I have argued that both readings are inconsistent with Turing’s 1948 and 1952 statements about intelligence, and fail to explain the design of his game. I argue instead for a response-dependence interpretation (Proudfoot 2013). This interpretation has implications for Turing’s view of free will: I argue that Turing’s writings suggest a new form of free will compatibilism, which I call response-dependence compatibilism (Proudfoot 2017a). The philosophical implications of rethinking Turing’s test go yet further. Numerous theorists assume that Turing anticipated the computational theory of mind. On the contrary, I argue, his remarks on intelligence and free will lead to a new objection to computationalism.

    On the granting of moral standing to artificial intelligence: a pragmatic, empirically-informed, desire-based approach

    Ever-increasingly complex AI technology is being introduced into society, with ever-more impressive capabilities. As AI tech advances, it will become harder to tell whether machines are relevantly different from human beings in terms of the moral consideration they are owed. This is a significant practical concern. As more advanced AIs become part of our daily lives, we could face moral dilemmas where we are forced to choose between harming a human, or harming one or several of these machines. Given these possibilities, we cannot withhold judgement about AI moral standing until we achieve logical certainty, but need guidance to make decisions. I will present a pragmatic framework that will enable us to have sufficient evidence for decision-making, even if it does not definitively prove which entities have moral standing. First, I defend adopting a welfarist moral theory, where having the capacity for well-being determines that a being has moral standing. I then argue that a desire-based theory of welfare is acceptable to a wide range of positions and should be adopted. It is therefore necessary to articulate a theory of desire, and I demonstrate by reference to discourse in ethics that a phenomenological conception of desire is most compatible with the way ethical theory has been discussed. From there, we need to establish a test for possessing the capacity for phenomenological desire. This can be accomplished by finding observed cases where a lack of specific morally-relevant phenomenal states inhibits the performance of a certain task in humans. If a machine can consistently exhibit the behaviour in question, we have evidence that it has the phenomenal states necessary for moral standing. 
With reference to recent experimental results, I present clear and testable criteria such that if an AI were to succeed at certain tasks, we would have a reason to treat it as though it did have moral standing, and demonstrate that modern-day AI has as yet given no evidence that it has the phenomenal experiences that would confer moral standing. The tasks in question are tests of moral and social aptitude. Success at these tests would not be certain proof of moral standing, but it would be sufficient to base our decisions on, which is the best we can hope for at the moment. Finally, I examine the practical consequences of these conclusions for our future actions. The use of this particular criterion has significant and interesting implications for whether applications of this research are worth their costs and risks.

    Questioning Turing Test

    The Turing Test (TT) is an experimental paradigm to test for intelligence, where an entity’s intelligence is inferred from its ability, during a text-based conversation, to be recognized as a human by the human judge. The advantage of this paradigm is that it encourages alternative versions of the test to be designed, and it can include any field of human endeavour. However, it has two major problems: (i) it can be passed by an entity that produces uncooperative but human-like responses (Artificial Stupidity); and (ii) it is not sensitive to how the entity produces the conversation (Blockhead). In light of these two problems, I propose a new version of the TT, the Questioning Turing Test (QTT). In the QTT, the task of the entity is not to hold a conversation, but to accomplish an enquiry with as few human-like questions as possible. The job of the human judge is to provide the answers and, as in the TT, to decide whether the entity is human or machine. The QTT has the advantage of parametrising the entity along two further dimensions in addition to ‘human-likeness’: ‘correctness’, evaluating whether the entity accomplishes the enquiry; and ‘strategicness’, evaluating how well the entity carries out the enquiry, in terms of the number of questions asked – the fewer, the better. Moreover, in the experimental design of the QTT, the test is not the enquiry per se, but rather the comparison between the performances of humans and machines. The results gained from the QTT show that its experimental design minimises false positives and negatives, and avoids both Artificial Stupidity and Blockhead.
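    The three QTT dimensions can be summarised in a small scoring sketch. The abstract names the dimensions but not their formulas, so the scoring below (ratio of minimal to actual question count for strategicness, binary scores for the other two) is an assumption invented here for illustration.

```python
def qtt_scores(questions_asked, goal_reached, judged_human, min_questions):
    """Score one QTT run on the three dimensions named in the abstract.

    All formulas are illustrative assumptions, not the paper's definitions.
    """
    # 'correctness': did the entity accomplish the enquiry?
    correctness = 1.0 if goal_reached else 0.0
    # 'strategicness': fewer questions is better; 1.0 when the entity
    # matched the minimal enquiry length.
    strategicness = min_questions / questions_asked if questions_asked else 0.0
    # 'human-likeness': the judge's human/machine verdict, as in the TT.
    human_likeness = 1.0 if judged_human else 0.0
    return {"correctness": correctness,
            "strategicness": strategicness,
            "human-likeness": human_likeness}
```

    As the abstract notes, the test proper is not one entity's scores but the comparison of these scores between human and machine participants.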