Variations of the Turing Test in the Age of Internet and Virtual Reality
Inspired by Hofstadter's Coffee-House Conversation (1982) and by the science
fiction short story SAM by Schattschneider (1988), we propose and discuss
criteria for non-mechanical intelligence. Firstly, we emphasize the practical
need for such tests in view of massively multiplayer online role-playing games
(MMORPGs) and virtual-reality systems such as Second Life. Secondly, we
demonstrate that Second Life provides a useful framework for implementing (some
iterations of) such a test.
On the universality of cognitive tests
The analysis of the adaptive behaviour of many different kinds of systems,
such as humans, animals and machines, requires more general ways of assessing
their cognitive abilities. This need is strengthened as more and more tasks
are analysed for, and completed by, a wider diversity of systems, including
swarms and hybrids. The notion of a universal test has recently emerged in the
context of machine intelligence evaluation as a way to define and use the same
cognitive test for a variety of systems, using some principled tasks and
adapting the interface to each particular subject. However, how far can
universal tests be taken? This paper analyses this question in terms of
subjects, environments, space-time resolution, rewards and interfaces. This
leads to a number of findings, insights and caveats, according to several
levels where universal tests may be progressively more difficult to conceive,
implement and administer. One of the most significant contributions is given by
the realisation that more universal tests are defined as maximisations of less
universal tests for a variety of configurations. This means that universal
tests must necessarily be adaptive.
Logical Limitations to Machine Ethics with Consequences to Lethal Autonomous Weapons
Lethal Autonomous Weapons promise to revolutionize warfare -- and raise a
multitude of ethical and legal questions. It has thus been suggested to program
values and principles of conduct (such as the Geneva Conventions) into the
machines' control, thereby rendering them both physically and morally superior
to human combatants.
We employ mathematical logic and theoretical computer science to explore
fundamental limitations on the moral behaviour of intelligent machines in a
series of "Gedankenexperiments": refining and sharpening variants of the
Trolley Problem leads us to construct an (admittedly artificial but) fully
deterministic situation in which a robot faces two choices, one clearly
morally preferable to the other -- yet, owing to the undecidability of the
Halting Problem, the robot provably cannot decide algorithmically which one
that is.
Our considerations have surprising implications for the question of
responsibility and liability for an autonomous system's actions, and they
lead to specific technical recommendations.
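The flavour of the halting-based obstruction can be illustrated with a small sketch. This is not the paper's construction; it shows only the generic point that any step-bounded simulator (the practical stand-in for a halting oracle inside a robot's controller) can be fooled by a program that halts just beyond its budget. All names (`bounded_halts`, `adversary`) are our own.

```python
# Illustrative sketch (not from the paper): a bounded-simulation "oracle"
# for halting-style questions is necessarily incomplete.

def bounded_halts(program, steps):
    """Approximate halting check: simulate `program` for at most `steps`
    ticks. `program` is a generator function; each yield counts as one tick."""
    state = program()
    for _ in range(steps):
        try:
            next(state)
        except StopIteration:
            return True        # the program halted within the budget
    return False               # budget exhausted -> guess "does not halt"

def adversary(budget):
    """A program that does halt, but only after `budget` + 1 ticks."""
    def prog():
        for _ in range(budget + 1):
            yield
    return prog

BUDGET = 1000
prog = adversary(BUDGET)
print(bounded_halts(prog, BUDGET))      # False: wrongly reports "never halts"
print(bounded_halts(prog, BUDGET + 2))  # True: the program actually halts
```

Raising the budget does not help: for every finite budget there is an adversary it misclassifies, which is the bounded shadow of the full undecidability result the abstract invokes.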