The errors, insights and lessons of famous AI predictions – and what they mean for the future
Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. It then constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated through careful analysis of five famous AI predictions.
A Way to Describe and Evaluate Thought Experiments, or Trying to Get a Grip on Virtual Reality
The use of thought experiments seems to provoke much controversy, often in the form of charges of appeals to intuition. The notion of intuition, however, is vaguely defined both in the context of thought experiments and in philosophy in general. This vagueness suggests that the description of thought experiments is incomplete, and thus the prospect for their evaluation remains unfulfilled. Previous analyses of thought experiments have come largely from philosophy, where the focus has been on truth value and validity. But these approaches seem to view argument monologically; no accommodation of an audience response like intuition is possible. I try to show that van Eemeren and Grootendorst's pragma-dialectical model provides a framework for analyzing and evaluating thought experiments because it treats them as part of a dialogue and as the result of a perspective.
Alan Turing and the “hard” and “easy” problem of cognition: doing and feeling
The "easy" problem of cognitive science is explaining how and why we can do what we can do. The "hard" problem is explaining how and why we feel. Turing's methodology for cognitive science (the Turing Test) is based on doing: Design a model that can do anything a human can do, indistinguishably from a human, to a human, and you have explained cognition. Searle has shown that the successful model cannot be solely computational. Sensory-motor robotic capacities are necessary to ground at least some of the model's words in what the robot can do with the things in the world that the words are about. But even grounding is not enough to guarantee that -- nor to explain how and why -- the model feels (if it does). That problem is much harder to solve (and perhaps insoluble).
Seeing things as people: anthropomorphism and common-sense psychology
This thesis is about common-sense psychology and its role in cognitive science. Put simply, the argument is that common-sense psychology is important because it offers clues to some complex problems in cognitive science, and because common-sense psychology has significant effects on our intuitions, both in science and on an everyday level.
The thesis develops a theory of anthropomorphism in common-sense psychology. Anthropomorphism, the natural human tendency to ascribe human characteristics (and especially human mental characteristics) to things that aren't human, is an important theme in the thesis. Anthropomorphism reveals an endemic anthropocentricity that deeply influences our thinking about other minds. The thesis then constructs a descriptive model of anthropomorphism in common-sense psychology, and uses it to analyse two studies of the ascription of mental states. The first, Baron-Cohen et al.'s (1985) false belief test, shows how cognitive modelling can be used to compare different theories of common-sense psychology. The second study, Searle's (1980) 'Chinese Room', shows that this same model can reproduce the patterns of scientific intuitions about systems which pass the Turing test (Turing, 1950), suggesting that it is best seen as a common-sense test for a mind, not a scientific one. Finally, the thesis argues that scientific theories involving the ascription of mentality through a model or a metaphor are partly dependent on each individual scientist's common-sense psychology.
To conclude, this thesis develops an interdisciplinary study of common-sense psychology and shows that its effects are more wide-ranging than is commonly thought. This means that it affects science more than might be expected, but careful study can help us to become mindful of these effects. Within this new framework, a proper understanding of common-sense psychology could lay important new foundations for the future of cognitive science.
Is thinking computable?
Strong artificial intelligence claims that conscious thought can arise in computers containing the right algorithms even though none of the programs or components of those computers understand what is going on. As proof, it asserts that brains are finite webs of neurons, each with a definite function governed by the laws of physics; this web has a set of equations that can be solved (or simulated) by a sufficiently powerful computer. Strong AI claims the Turing test as a criterion of success. A recent debate in Scientific American concludes that the Turing test is not sufficient, but leaves intact the underlying premise that thought is a computable process. The recent book by Roger Penrose, however, offers a sharp challenge, arguing that the laws of quantum physics may govern mental processes and that these laws may not be computable. In every area of mathematics and physics, Penrose finds evidence of nonalgorithmic human activity and concludes that mental processes are inherently more powerful than computational processes.
Should Robots Be Like Humans? A Pragmatic Approach to Social Robotics
This paper describes the instrumentalizing aspects of social robots, which generate the term pragmatic social robot. In contrast to humanoid robots, pragmatic social robots (PSRs) are defined by their instrumentalizing aspects, which consist of language, skill, and artificial intelligence. These technical aspects of social robots have led to the tendency to attribute a selfhood characteristic, or anthropomorphism. Anthropomorphism can raise problems of responsibility and the ontological problems of human-technology relations. As a result, there is an antinomy in the research and development of pragmatic social robotics, considering that they are expected to achieve similarity with humans in terms of completing tasks. How can we avoid anthropomorphism in the research and development of PSRs while ensuring their flexibility? In response to this issue, I suggest that intuition should be instrumentalized to advance PSRs' social skills. Intuition, as theorized by Henri Bergson and Efraim Fischbein, goes beyond the capacity of logical analysis to solve problems. Robots should be like humans in the sense that their instrumentalizing aspects meet the criteria for the value of human social skills.
Philosophy of Computer Science: An Introductory Course
There are many branches of philosophy called "the philosophy of X," where X ranges over disciplines from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.
Semantic Internalism Is a Mistake
The concept of narrow content is still under discussion in the debate over mental representation. In the paper, one-factor accounts of representation are analyzed, particularly the case of Fodor's methodological solipsism, in which semantic properties of content are arguably eliminated in favor of syntactic ones. If "narrow content" means content properties independent of factors external to a system (as in Segal's view), the concept of content becomes elusive. Moreover, important conceptual problems with the one-factor account are pointed out as a result of analyzing arguments presented by J. Searle, S. Harnad and T. Burge, and these problems are illustrated with psychological and ethological examples. Although understanding content as partially independent of contextual factors allows theorists to preserve content properties, understanding content in total abstraction from external factors seems implausible. As a result, internalism is rejected in favor of externalism.
The publication was financed by the Ministry of Science and Higher Education under the National Programme for the Development of the Humanities, on the basis of decision 0014/NPRH4/H3b/83/2016 – project "Preparation and publication of two English-language monographic issues of the Internet Philosophical Magazine HYBRIS" (3bH 15 0014 83).
Cohabitation: Computation at 70, Cognition at 20
Zenon Pylyshyn cast cognition's lot with computation, stretching the Church/Turing Thesis to its limit: We had no idea how the mind did anything, whereas we knew computation could do just about everything. Doing it with images would be like doing it with mirrors, and little men in mirrors. So why not do it all with symbols and rules instead? Everything worthy of the name "cognition," anyway; not what was too thick for cognition to penetrate. It might even solve the mind/body problem if the soul, like software, were independent of its physical incarnation. It looked like we had the architecture of cognition virtually licked. Even neural nets could be either simulated or subsumed. But then came Searle, with his sino-spoiler thought experiment, showing that cognition cannot be all computation (though not, as Searle thought, that it cannot be computation at all). So if cognition has to be hybrid sensorimotor/symbolic, it turns out we've all just been haggling over the price, instead of delivering the goods, as Turing had originally proposed five decades earlier.