When is it Justifiable to Ascribe Mental States to Non-Human Systems?

Abstract

In this thesis I shall attempt to show when it is, and when it is not, justifiable to ascribe mental states, of the kind we associate with the complex cognitive behaviour of human beings, to non-human systems. To do this I will first attempt to give a fundamental explication of some of the problems that underlie our ascription of mental states to other human beings, to non-human animals and to machines, after which I will tackle the problem of whether any ascription of mentality can ever be completely vindicated. I will then examine the issue of complexity and the distinctions that hold between the capabilities of various systems, both natural and artificial; the result will be a more comprehensive understanding of which characteristics are necessary for the possession of such capabilities. I will go on to argue that a positive relation exists between a system's architecture and its capacity to behave or act in ways that can be classed as manifesting mental states such as 'knowing', 'understanding' or 'believing'. I shall look at the ways in which machine states and mental states have been examined using hierarchical stratifications, for these can offer some indication of the correlation that exists between simple systems and the low-level actions of which they are capable, and between progressively more complex systems and the more sophisticated actions of which only they are capable. However, I shall put forward arguments to demonstrate that this is a feasible strategy when dealing with the innards of a machine, but not when dealing with the innards of the mind. Throughout the thesis I shall try to clarify the inexplicit or clouded notions of subjectivity and intentionality, for one of my aims is to demonstrate that the notions of subjectivity and awareness are more important than intentionality in drawing the distinction between human and non-human systems.