Software Agents with Concerns of their Own
We claim that it is possible to build artificial software agents whose
actions, and the world they inhabit, have first-person or intrinsic
meanings. The first-person or intrinsic meaning of an entity to a system is
defined as its relation with the system's goals and capabilities, given the
properties of the environment in which it operates. Therefore, for a system to
develop first-person meanings, it must see itself as a goal-directed actor,
facing limitations and opportunities dictated by its own capabilities, and by
the properties of the environment. The first part of the paper discusses this
claim in the context of arguments against and proposals addressing the
development of computer programs with first-person meanings. A set of
definitions is also presented, most importantly the concepts of cold and
phenomenal first-person meanings. The second part of the paper presents
preliminary proposals and achievements, resulting from actual software
implementations, within a research approach that aims to develop software
agents that intrinsically understand their actions and what happens to them. As
a result, an agent with no a priori notion of its goals and capabilities, or
of the properties of its environment, acquires all these notions by observing
itself in action. The cold first-person meanings of the agent's actions and of
what happens to it are defined using these acquired notions. Although not
solving the full problem of first-person meanings, the proposed approach and
preliminary results give us some confidence to address the problems yet to be
considered, in particular the phenomenal aspect of first-person meanings.