    Symbol grounding and its implications for artificial intelligence

    In response to Searle's well-known Chinese room argument against Strong AI (and, more generally, computationalism), Harnad proposed that if the symbols manipulated by a robot were sufficiently grounded in the real world, then the robot could be said to literally understand. In this article, I expand on the notion of symbol groundedness in three ways. Firstly, I show how a robot might select the best set of categories describing the world, given that fundamentally continuous sensory data can be categorised in an almost infinite number of ways. Secondly, I discuss the notion of grounded abstract (as opposed to concrete) concepts. Thirdly, I give an objective criterion for deciding when a robot's symbols become sufficiently grounded for "understanding" to be attributed to it. This deeper analysis of what symbol groundedness actually is weakens Searle's position in significant ways; in particular, whilst Searle may be able to refute Strong AI in the specific context of present-day digital computers, he cannot refute computationalism in general.

    The errors, insights and lessons of famous AI predictions – and what they mean for the future

    Predicting the development of artificial intelligence (AI) is a difficult project – but a vital one, according to some analysts. AI predictions already abound: but are they reliable? This paper starts by proposing a decomposition schema for classifying them. It then constructs a variety of theoretical tools for analysing, judging and improving them. These tools are demonstrated by careful analysis of five famous AI predictions.

    Philosophy of Computer Science: An Introductory Course

    There are many branches of philosophy called "the philosophy of X," where X = disciplines ranging from history to physics. The philosophy of artificial intelligence has a long history, and there are many courses and texts with that title. Surprisingly, the philosophy of computer science is not nearly as well-developed. This article proposes topics that might constitute the philosophy of computer science and describes a course covering those topics, along with suggested readings and assignments.

    Concluding remarks


    Is thinking computable?

    Strong artificial intelligence claims that conscious thought can arise in computers containing the right algorithms, even though none of the programs or components of those computers understands what is going on. As proof, it asserts that brains are finite webs of neurons, each with a definite function governed by the laws of physics; this web has a set of equations that can be solved (or simulated) by a sufficiently powerful computer. Strong AI claims the Turing test as a criterion of success. A recent debate in Scientific American concludes that the Turing test is not sufficient, but leaves intact the underlying premise that thought is a computable process. The recent book by Roger Penrose, however, offers a sharp challenge, arguing that the laws of quantum physics may govern mental processes and that these laws may not be computable. In every area of mathematics and physics, Penrose finds evidence of nonalgorithmic human activity and concludes that mental processes are inherently more powerful than computational processes.

    A Way to Describe and Evaluate Thought Experiments, or Trying to Get a Grip on Virtual Reality

    The use of thought experiments seems to provoke much controversy, often in the form of charges of appeals to intuition. The notion of intuition, however, is vaguely defined both in the context of thought experiments and in philosophy in general. This vagueness suggests that the description of thought experiments is incomplete, and thus the prospect for their evaluation remains unfulfilled. Previous analyses of thought experiments have come largely from philosophy, where the focus has been on truth value and validity. But these approaches seem to view argument monologically; no accommodation of an audience response like intuition is possible. I try to show that van Eemeren and Grootendorst's pragma-dialectical model provides a framework for analyzing and evaluating thought experiments, because it treats them as part of a dialogue and as the result of a perspective.

    Should Robots Be Like Humans? A Pragmatic Approach to Social Robotics

    This paper describes the instrumentalizing aspects of social robots, which give rise to the term pragmatic social robot. In contrast to humanoid robots, pragmatic social robots (PSRs) are defined by their instrumentalizing aspects, which consist of language, skill, and artificial intelligence. These technical aspects of social robots have led to the tendency to attribute selfhood to them, i.e. anthropomorphism. Anthropomorphism can raise problems of responsibility and ontological problems of human-technology relations. As a result, there is an antinomy in the research and development of pragmatic social robotics, considering that PSRs are expected to achieve similarity with humans in completing tasks. How can we avoid anthropomorphism in the research and development of PSRs while ensuring their flexibility? In response to this issue, I suggest that intuition should be instrumentalized to advance PSRs' social skills. Intuition, as theorized by Henri Bergson and Efraim Fischbein, exceeds the capacity of logical analysis to solve problems. Robots should be like humans in the sense that their instrumentalizing aspects meet the criteria for the value of human social skills.