644 research outputs found
Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test
The Turing Test (TT) checks for human intelligence, rather than any putative
general intelligence. It involves repeated interaction requiring learning in
the form of adaptation to the human conversation partner. It is a macro-level,
post-hoc test, in contrast to the definition of a Turing Machine (TM), which is
an a priori, micro-level definition. This raises the question of whether learning is
just another computational process, i.e. can be implemented as a TM. Here we
argue that learning or adaptation is fundamentally different from computation,
though it does involve processes that can be seen as computations. To
illustrate this difference we compare (a) designing a TM and (b) learning a TM,
defining them for the purpose of the argument. We show that there is a
well-defined sequence of problems which are not effectively designable but are
learnable, in the form of the bounded halting problem. Some characteristics of
human intelligence are reviewed, including its interactive nature, learning
abilities, imitative tendencies, linguistic ability, and context-dependency. A
story that explains some of these is the Social Intelligence Hypothesis. If
this is broadly correct, this points to the necessity of a considerable period
of acculturation (social learning in context) if an artificial intelligence is
to pass the TT. Whilst it is always possible to 'compile' the results of
learning into a TM, this would not be a designed TM and would not be able to
continually adapt (pass future TTs). We conclude three things, namely that: a
purely "designed" TM will never pass the TT; that there is no such thing as a
general intelligence, since it necessarily involves learning; and that
learning/adaptation and computation should be clearly distinguished.
Comment: 10 pages, invited talk at Turing Centenary Conference CiE 2012,
special session on "The Turing Test and Thinking Machines"
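The bounded halting problem the abstract invokes is, unlike the general halting problem, trivially decidable: simulate the machine for at most n steps and report whether it halted. A minimal sketch of that decision procedure, using a hypothetical transition-table encoding rather than the paper's own formalism:

```python
def bounded_halts(tm, tape, steps):
    """Decide the bounded halting problem: does Turing machine `tm`,
    started in state "q0" on `tape`, halt within `steps` steps?
    `tm` maps (state, symbol) -> (new_state, new_symbol, move);
    a missing entry means the machine halts.  (This encoding is an
    illustrative assumption, not taken from the paper.)"""
    cells = dict(enumerate(tape))   # sparse tape; '_' is the blank symbol
    state, head = "q0", 0
    for _ in range(steps):
        key = (state, cells.get(head, "_"))
        if key not in tm:
            return True             # halted within the bound
        state, cells[head], move = tm[key]
        head += 1 if move == "R" else -1
    return False                    # still running after `steps` steps

# A machine that walks right over 1s and halts at the first blank:
walker = {("q0", "1"): ("q0", "1", "R")}
```

Because the step bound is fixed in advance, the simulation always terminates, which is what makes each member of the sequence of bounded problems decidable even though the unbounded problem is not.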
The Nexus between Artificial Intelligence and Economics
This book is organized as follows. Section 2 introduces the notion of the Singularity, a stage in development in which technological progress and economic growth increase at a near-infinite rate. Section 3 describes what artificial intelligence is and how it has been applied. Section 4 considers artificial happiness and the likelihood that artificial intelligence might increase human happiness. Section 5 discusses some prominent related concepts and issues. Section 6 describes the use of artificial agents in economic modeling, and Section 7 considers some ways in which economic analysis can offer some hints about what the advent of artificial intelligence might bring. Section 8 presents some thoughts about the current state of AI and its future prospects.
Information and Computation
In this chapter, concepts related to information and computation are reviewed
in the context of human computation. A brief introduction to information theory
and different types of computation is given. Two examples of human computation
systems, online social networks and Wikipedia, are used to illustrate how these
can be described and compared in terms of information and computation.
Comment: 9 pages, 3 figures. Draft of a chapter to be published in Michelucci,
P. (Ed.) Handbook of Human Computation, Springer
Questioning Turing Test
The Turing Test (TT) is an experimental paradigm to test for
intelligence, where an entity’s intelligence is inferred from its ability,
during a text-based conversation, to be recognized as a human by the
human judge. The advantage of this paradigm is that it encourages
alternative versions of the test to be designed; and it can include any
field of human endeavour. However, it has two major problems: (i) it
can be passed by an entity that produces uncooperative but human-like
responses (Artificial Stupidity); and (ii) it is not sensitive to how the
entity produces the conversation (Blockhead).
In light of these two problems, I propose a new version of the TT, the
Questioning Turing Test (QTT). In the QTT, the task of the entity is not
to hold a conversation, but to accomplish an enquiry with as few
human-like questions as possible. The job of the human judge is to
provide the answers and, as in the TT, to decide whether the entity is
human or machine.
The QTT has the advantage of parametrising the entity along two
further dimensions in addition to ‘human-likeness’: ‘correctness’,
evaluating if the entity accomplishes the enquiry; and ‘strategicness’,
evaluating how well the entity carries out the enquiry, in terms of the
number of questions asked – the fewer, the better. Moreover, in the
experimental design of the QTT, the test is not the enquiry per se, but
rather the comparison between the performances of humans and
machines. The results gained from the QTT show that its experimental
design minimises false positives and negatives; and avoids both
Artificial Stupidity and Blockhead.
Lola v Skadden and the Automation of the Legal Profession
Technological innovation has accelerated at an exponential pace in the last few decades, ushering in an era of unprecedented advancements in algorithms and artificial intelligence technologies. Traditionally, the legal field has protected itself from technological disruptions by maintaining a professional monopoly over legal work and limiting the practice of law to only those who are licensed.
Artificial Intelligence Through the Eyes of the Public
Artificial Intelligence is becoming a popular field in computer science. In this report we explored its history, major accomplishments and the visions of its creators. We looked at how Artificial Intelligence experts influence reporting and engineered a survey to gauge public opinion. We also examined expert predictions concerning the future of the field as well as media coverage of its recent accomplishments. These results were then used to explore the links between expert opinion, public opinion and media coverage.
Complexity and Information: Measuring Emergence, Self-organization, and Homeostasis at Multiple Scales
Concepts used in the scientific study of complex systems have become so
widespread that their use and abuse has led to ambiguity and confusion in their
meaning. In this paper we use information theory to provide abstract and
concise measures of complexity, emergence, self-organization, and homeostasis.
The purpose is to clarify the meaning of these concepts with the aid of the
proposed formal measures. In a simplified version of the measures (focusing on
the information produced by a system), emergence becomes the opposite of
self-organization, while complexity represents their balance. Homeostasis can
be seen as a measure of the stability of the system. We use computational
experiments on random Boolean networks and elementary cellular automata to
illustrate our measures at multiple scales.
Comment: 42 pages, 11 figures, 2 tables
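The simplified, information-based version of the measures the abstract describes (emergence as the opposite of self-organization, complexity as their balance) can be sketched over the distribution of symbols a system produces. The concrete formulas below, emergence as normalized Shannon entropy E, S = 1 - E, and C = 4·E·S, are an assumption inferred from that description, not a quotation of the paper's definitions:

```python
from collections import Counter
from math import log2

def emergence(seq, base=2):
    """Normalized Shannon entropy of the symbols a system produces,
    read here as emergence E in [0, 1] (assumed formulation)."""
    counts = Counter(seq)
    n = len(seq)
    h = -sum((c / n) * log2(c / n) for c in counts.values())
    return h / log2(base)  # divide by maximum entropy for `base` symbols

def self_organization(seq, base=2):
    # Self-organization as the opposite of emergence: S = 1 - E.
    return 1.0 - emergence(seq, base)

def complexity(seq, base=2):
    # Complexity as the balance of the two: C = 4 * E * S,
    # maximal when E = S = 0.5 and zero at either extreme.
    e = emergence(seq, base)
    return 4.0 * e * (1.0 - e)

frozen = "00000000"    # fully ordered output: E = 0, S = 1, C = 0
balanced = "0011"      # maximum entropy output: E = 1, S = 0, C = 0
```

Under these definitions a completely ordered system and a completely random one both have zero complexity; only outputs mixing regularity and variety score high on C, which matches the "balance" reading in the abstract.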
"Narratives of Technological Globalization and Outsourced Call Centers in India: Droids, Mimic Machines, Automatons, and Bad 'Borgs"
Taking as a premise that contemporary communication and information technology-aided globalization is a world-making practice that relies on narratives to construct and transmit global imaginaries and relational identities, this project evaluates narrative discourses of Indian call centers serving clients and customers in the United States in order to analyze evolving hegemonic narratives of the "global" of late capitalism. In this project, I argue that constellations of local, national, and transnational hegemonies map outsourced Indian call centers into a position in capitalist geography at which vectors of the production of national, corporate, and racial identities converge in the technological production of power. Specifically, I look at assemblages of narratives corresponding to four tropes that metaphorize Indian tech workers as machines--the oriental droid, the mimic machine, the productive automaton, and the bad 'borg--how each metaphor is reinforced by labor processes in the call center, and how these are assimilated into new forms of subjugation to further extend and entrench participation in zero-sum economics across the globe.