Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test
The Turing Test (TT) checks for human intelligence, rather than any putative
general intelligence. It involves repeated interaction requiring learning in
the form of adaption to the human conversation partner. It is a macro-level
post-hoc test in contrast to the definition of a Turing Machine (TM), which is
a prior micro-level definition. This raises the question of whether learning is
just another computational process, i.e. can be implemented as a TM. Here we
argue that learning or adaption is fundamentally different from computation,
though it does involve processes that can be seen as computations. To
illustrate this difference we compare (a) designing a TM and (b) learning a TM,
defining them for the purpose of the argument. We show that there is a
well-defined sequence of problems which are not effectively designable but are
learnable, in the form of the bounded halting problem. Some characteristics of
human intelligence are reviewed, including its interactive nature, learning
abilities, imitative tendencies, linguistic ability and context-dependency. A
story that explains some of these is the Social Intelligence Hypothesis. If
this is broadly correct, this points to the necessity of a considerable period
of acculturation (social learning in context) if an artificial intelligence is
to pass the TT. Whilst it is always possible to 'compile' the results of
learning into a TM, this would not be a designed TM and would not be able to
continually adapt (pass future TTs). We conclude three things, namely that: a
purely "designed" TM will never pass the TT; that there is no such thing as a
general intelligence since it necessarily involves learning; and that
learning/adaption and computation should be clearly distinguished.
Comment: 10 pages, invited talk at Turing Centenary Conference CiE 2012,
special session on "The Turing Test and Thinking Machines"
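The bounded halting problem invoked in this abstract (does a machine halt within a fixed number of steps?) is, unlike the unrestricted halting problem, decidable: one can simply simulate the machine for the given bound. A minimal sketch of that decision procedure, with hypothetical names and a toy step function standing in for a real Turing Machine:

```python
def bounded_halts(step_fn, state, n):
    """Decide the bounded halting problem: return True if the machine,
    advanced from `state` by `step_fn`, halts (reaches None) within n steps."""
    for _ in range(n):
        state = step_fn(state)
        if state is None:          # machine has entered its halting state
            return True
    return False                   # did not halt within the step bound

# Toy "machine": counts down and halts when it reaches zero.
def countdown_step(k):
    return None if k == 0 else k - 1

# Counting down from 5 halts in 6 steps, so a bound of 10 suffices
# while a bound of 3 does not.
```

The point of the example is only that the bound makes the question computable; nothing here bears on whether such a decider could be *learned* rather than designed, which is the paper's actual argument.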
Not all the bots are created equal: the Ordering Turing Test for the labelling of bots in MMORPGs
This article contributes to the research on bots in Social Media. It takes as its starting point an emerging perspective which proposes that we should abandon the investigation of the Turing Test and the functional aspects of bots in favor of studying the authentic and cooperative relationship between humans and bots. Contrary to this view, this article argues that Turing Tests are one of the ways in which authentic relationships between humans and bots take place. To capture this, the article introduces the concept of Ordering Turing Tests: a sort of Turing Test proposed by social actors for the purpose of achieving social order when bots produce deviant behavior. An Ordering Turing Test is a method for labelling deviance, whereby social actors can use the test to tell apart rule-abiding humans and rule-breaking bots. Using examples from Massively Multiplayer Online Role-Playing Games, this article illustrates how Ordering Turing Tests are proposed and justified by players and service providers. Data for the research comes from the scientific literature on Machine Learning techniques proposed for the identification of bots, and from game forums and other player-produced paratexts from the case study of the game Runescape.
The Turing Test and the Zombie Argument
In this paper I shall try to place some implications of the Turing test and the so-called
Zombie arguments in the context of philosophy of mind. My intention is not to compose a review
of relevant concepts, but to confront the central problems that originate from the Turing test, as a
paradigm of the computational theory of mind, with the arguments that challenge the sustainability of this
thesis.
In the first section (Section I), I expose the basic computationalist presuppositions; by
examining the premises of the Turing Test (TT) I argue that the TT, as a functionalist paradigm
concept, underlies the computational theory of mind. I treat computationalism as a thesis that
defines the human cognitive system as a physical, symbolic and semantic system, in such a
manner that the description of its physical states is isomorphic with the description of its symbolic
conditions, so that this isomorphism is semantically interpretable. In the second section (Section
II), I discuss the Zombie arguments, and the epistemological-modal problems connected with them,
which challenge the sustainability of computationalism. The proponents of the Zombie arguments build their
attack on computationalism on the basis of thought experiments with creatures behaviorally,
functionally and physically indistinguishable from human beings, though these creatures do not
have phenomenal experiences. The consequence of these thought experiments is that if
zombies are possible, then computationalism does not offer a satisfying explanation of
consciousness. I compare my thesis from Section I with recent versions of the Zombie arguments,
which claim that computationalism fails to explain qualitative phenomenal experience. I conclude
that despite the weaknesses of computationalism, which the Zombie arguments make obvious,
these arguments are not the last word when it comes to the explanatory force of computationalism.
Turing test, easy to pass; human mind, hard to understand
Under general assumptions, the Turing test can be easily passed by an appropriate algorithm. I show that for any test satisfying several general conditions, we can construct an algorithm that can pass that test; hence, any operational definition is easy to fulfill. I suggest a test complementary to Turing's test, which will measure our understanding of the human mind. The Turing test is required to fix the operational specifications of the algorithm under test; under this constraint, the additional test simply consists in measuring the length of the algorithm.
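The construction behind such "easy to pass" claims is commonly illustrated by the lookup-table argument: if test conversations are bounded in length, a finite (if astronomically large) table mapping every possible dialogue history to a reply satisfies any purely behavioral criterion, and the price is paid in the length of the program, which is exactly the complementary measure the abstract proposes. A toy sketch of such a table-driven responder (all names hypothetical, not taken from the paper):

```python
def make_table_bot(table, default="I see."):
    """Build a chatbot that answers from a precomputed history -> reply table.

    `table` maps a tuple of all utterances so far (both sides) to the next
    reply; unanticipated histories fall back to `default`. For a bounded
    test, a complete table exists in principle, but its size (and hence the
    program's length) grows exponentially with conversation length.
    """
    history = []

    def reply(message):
        history.append(message)
        answer = table.get(tuple(history), default)
        history.append(answer)
        return answer

    return reply

# Toy table covering a two-turn dialogue.
table = {
    ("Hello",): "Hi there!",
    ("Hello", "Hi there!", "Are you human?"): "Of course.",
}
bot = make_table_bot(table)
```

Behaviorally the bot is adequate for the dialogues the table anticipates, which is the sense in which an operational test is "easy"; the program's enormous length is what the complementary measurement would expose.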
Post-Turing Methodology: Breaking the Wall on the Way to Artificial General Intelligence
This article offers comprehensive criticism of the Turing test and develops quality criteria for new artificial general intelligence (AGI) assessment tests. It is shown that the prerequisites A. Turing drew upon when reducing personality and human consciousness to "suitable branches of thought" reflected the engineering level of his time. In fact, the Turing "imitation game" employed only symbolic communication and ignored the physical world. This paper suggests that by restricting thinking ability to symbolic systems alone Turing unknowingly constructed "the wall" that excludes any possibility of transition from a complex observable phenomenon to an abstract image or concept. It is, therefore, sensible to factor in new requirements for AI (artificial intelligence) maturity assessment when approaching the Turing test. Such AI must support all forms of communication with a human being, and it should be able to comprehend abstract images and specify concepts as well as participate in social practices.
The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence
This quote/commented critique of Turing's classical paper suggests that Turing meant -- or should have meant -- the robotic version of the Turing Test (and not just the email version). Moreover, any dynamic system (that we design and understand) can be a candidate, not just a computational one. Turing also dismisses the other-minds problem and the mind/body problem too quickly. They are at the heart of both the problem he is addressing and the solution he is proposing.