412 research outputs found

    The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence

    Get PDF
    This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.

    Validation of Expert Systems: Personal Choice Expert -- A Flexible Employee Benefit System

    Get PDF
    A method for validating expert systems, based on the psychological validation literature and Turing's imitation game, is applied to a flexible benefits expert system. Expert system validation entails determining whether a difference exists between expert and novice decisions (construct validity), whether the system uses the same inputs and processes to make its decisions as experts (content validity), and whether the system produces the same results as experts (criterion-related validity). If these criteria are satisfied, then the system is indistinguishable from experts in its domain and satisfies Turing's imitation game. The methods developed in this paper are applied to a human resource expert system, Personal Choice Expert (PCE), designed to help employees choose a benefits package in a flexible benefits system. Expert and novice recommendations are compared to those generated by PCE. PCE's recommendations do not differ significantly from those given by experts. High inter-expert agreement exists for some benefit recommendations (e.g. Dental Care and Long-Term Disability) but not for others (e.g. Short-Term Disability and Life Insurance). Insights offered by this method are illustrated and examined.
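    The criterion-related validity check the abstract describes amounts to comparing system output against expert choices case by case. A minimal sketch of that comparison, with hypothetical benefit recommendations (the function name and data are illustrative, not from the paper):

    ```python
    # Hedged sketch: expert-vs-system agreement, in the spirit of the
    # criterion-related validity comparison described above.
    # All names and data are hypothetical illustrations.

    def agreement_rate(expert, system):
        """Fraction of cases where the system matches the expert choice."""
        matches = sum(e == s for e, s in zip(expert, system))
        return matches / len(expert)

    expert_recs = ["PPO", "HMO", "PPO", "HDHP", "PPO", "HMO"]
    system_recs = ["PPO", "HMO", "PPO", "PPO",  "PPO", "HMO"]

    rate = agreement_rate(expert_recs, system_recs)
    print(f"agreement: {rate:.2f}")  # 5 of 6 recommendations match
    ```

    The same function applied to pairs of experts would give the inter-expert agreement the abstract mentions; the paper itself reports statistical tests rather than raw agreement rates.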

    The Turing Deception

    Full text link
    This research revisits the classic Turing test and compares recent large language models such as ChatGPT for their abilities to reproduce human-level comprehension and compelling text generation. Two task challenges, summarization and question answering, prompt ChatGPT to produce original content (98-99%) from a single text entry and also sequential questions originally posed by Turing in 1950. We score the original and generated content against the OpenAI GPT-2 Output Detector from 2019 and establish multiple cases where the generated content proves original and undetectable (98%). The question of a machine fooling a human judge recedes in this work relative to the question of "how would one prove it?" The original contribution of the work presents a metric and simple grammatical set for understanding the writing mechanics of chatbots in evaluating their readability and statistical clarity, engagement, delivery, and overall quality. While Turing's original prose scores at least 14% below the machine-generated output, the question of whether an algorithm displays hints of Turing's truly original thoughts (the "Lovelace 2.0" test) remains unanswered and potentially unanswerable for now.
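    The "simple grammatical set" the abstract alludes to rests on surface statistics of prose. A hedged sketch of the kind of statistics such a metric might start from (this is an illustration of the general idea, not the paper's actual metric):

    ```python
    # Hedged sketch: surface statistics a readability/clarity metric for
    # chatbot prose might build on. Not the metric used in the paper.
    import re

    def prose_stats(text):
        """Sentence count, word count, average sentence length, lexical variety."""
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text.lower())
        return {
            "sentences": len(sentences),
            "words": len(words),
            "avg_sentence_len": len(words) / len(sentences),
            "type_token_ratio": len(set(words)) / len(words),
        }

    stats = prose_stats("I propose to consider the question. Can machines think?")
    print(stats)
    ```

    Comparing such statistics between Turing's 1950 prose and model output is one way a human/machine scoring gap like the 14% figure could be operationalized.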

    The Levels of Human Intelligence

    Get PDF

    Could There be a Turing Test for Qualia?

    Get PDF

    Computers and the Nature of Man: A Historian's Perspective on Controversies about Artificial Intelligence

    Get PDF
    The purpose of the present paper is to provide a historical perspective on recent controversies, from Turing's time on, about artificial intelligence, and to make clear that these are in fact controversies about the nature of man. First, I shall briefly review three recent controversies about artificial intelligence: controversies over whether computers can think and over whether people are no more than information-processing machines. These three controversies were each initiated by philosophers who, irrespective of what the programs of their time actually did, viewed with alarm the argument that if a machine can think, a thinking being is just a machine. I will then turn to the major business of this paper: to contrast two developments from within the field of AI which have been interpreted by some as successful steps toward simulating human thought, and also to contrast some reactions to that claimed success. Finally, we will look at some recent developments in the field of AI that suggest that the whole discussion about machine intelligence is at best premature and at worst irrelevant.

    You Might be a Robot

    Get PDF
    As robots and artificial intelligence (AI) increase their influence over society, policymakers are increasingly regulating them. But to regulate these technologies, we first need to know what they are. And here we come to a problem. No one has been able to offer a decent definition of robots and AI, not even experts. What's more, technological advances make it harder and harder each day to tell people from robots and robots from dumb machines. We have already seen disastrous legal definitions written with one target in mind inadvertently affecting others. In fact, if you are reading this you are (probably) not a robot, but certain laws might already treat you as one. Definitional challenges like these aren't exclusive to robots and AI. But today, all signs indicate we are approaching an inflection point. Whether it is citywide bans of robot sex brothels or nationwide efforts to crack down on ticket-scalping bots, we are witnessing an explosion of interest in regulating robots, human enhancement technologies, and all things in between. And that, in turn, means that typological quandaries once confined to philosophy seminars can no longer be dismissed as academic. Want, for example, to crack down on foreign influence campaigns by regulating social media bots? Be careful not to define "bot" too broadly (as the California legislature recently did), or the supercomputer nestled in your pocket might just make you one. Want, instead, to promote traffic safety by regulating drivers? Be careful not to presume that only humans can drive (as our Federal Motor Vehicle Safety Standards do), or you may soon exclude the best drivers on the road. In this Article, we suggest that the problem isn't simply that we haven't hit upon the right definition. Instead, there may not be a right definition for the multifaceted, rapidly evolving technologies we call robots or AI. As we will demonstrate, even the most thoughtful of definitions risk being overbroad, underinclusive, or simply irrelevant in short order. Rather than trying in vain to find the perfect definition, we instead argue that policymakers should do as the great computer scientist Alan Turing did when confronted with the challenge of defining robots: embrace their ineffable nature. We offer several strategies to do so. First, whenever possible, laws should regulate behavior, not things (or as we put it, regulate verbs, not nouns). Second, where we must distinguish robots from other entities, the law should apply what we call Turing's Razor, identifying robots on a case-by-case basis. Third, we offer six functional criteria for making these "I know it when I see it" determinations and argue that courts are generally better positioned than legislators to apply such standards. Finally, we argue that if we must have definitions rather than apply standards, they should be as short-term and contingent as possible. That, in turn, suggests that regulators, not legislators, should play the defining role.