
    Scientific requirements for an engineered model of consciousness

    The building of a non-natural conscious system requires more than the design of physical or virtual machines with intuitively conceived abilities, philosophically elucidated architecture, or hardware homologous to an animal's brain. Human society might one day treat a type of robot or computing system as an artificial person. Yet that would not answer scientific questions about the machine's consciousness or otherwise. Indeed, empirical tests for consciousness are impossible because no such entity is denoted within the theoretical structure of the science of mind, i.e. psychology. However, contemporary experimental psychology can identify whether a specific mental process is conscious in particular circumstances, by theory-based interpretation of the overt performance of human beings. Thus, if we are to build a conscious machine, the artificial systems must be used as a test-bed for theory developed from the existing science that distinguishes conscious from non-conscious causation in natural systems. Only such a rich and realistic account of hypothetical processes underlying observed input/output relationships can establish whether or not an engineered system is a model of consciousness. It follows that any research project on machine consciousness needs a programme of psychological experiments on the demonstration systems, and that the programme should be designed to deliver a fully detailed scientific theory of the type of artificial mind being developed: a Psychology of that Machine.

    Emulating Human Developmental Stages with Bayesian Neural Networks

    We compare the acquisition of knowledge in humans and machines. Research from the field of developmental psychology indicates that the hypotheses humans employ are initially guided by simple rules before evolving into more complex theories. This observation is shared across many tasks and domains. We investigate whether stages of development in artificial learning systems exhibit the same characteristics. We operationalize developmental stage as the size of the dataset on which the artificial system is trained. For our analysis we look at the developmental progress of Bayesian Neural Networks on three different datasets, covering occlusion, support, and quantity-comparison tasks. We compare the results with prior research from developmental psychology and find agreement between the family of optimized models and the patterns of development observed in infants and children on all three tasks, indicating common principles for the acquisition of knowledge.
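
    A minimal, hypothetical sketch of the kind of protocol the abstract describes: the same network is re-trained on nested subsets of a synthetic quantity-comparison task, with training-set size standing in for developmental stage, and predictive uncertainty is tracked per stage. MC dropout is used here only as a cheap stand-in for a Bayesian Neural Network; the architecture, data, and hyperparameters are illustrative assumptions, not the authors' model.

```python
# Hypothetical sketch: "developmental stage" = training-set size.
# MC dropout serves as a cheap Bayesian approximation (not the authors' exact model).
import numpy as np
import torch
import torch.nn as nn

torch.manual_seed(0)
rng = np.random.default_rng(0)

# Synthetic quantity-comparison task: which of two numerosities is larger?
N = 4000
a = rng.integers(1, 20, size=N)
b = rng.integers(1, 20, size=N)
keep = a != b                                  # drop ties
X = np.stack([a[keep], b[keep]], axis=1).astype(np.float32)
y = (X[:, 0] > X[:, 1]).astype(np.int64)

class MCDropoutNet(nn.Module):
    def __init__(self, p=0.2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, 32), nn.ReLU(), nn.Dropout(p),
            nn.Linear(32, 32), nn.ReLU(), nn.Dropout(p),
            nn.Linear(32, 2),
        )

    def forward(self, x):
        return self.net(x)

def train(model, X, y, epochs=200):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    Xt, yt = torch.from_numpy(X), torch.from_numpy(y)
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(Xt), yt).backward()
        opt.step()

def mc_predict(model, X, samples=50):
    model.train()                               # keep dropout active at test time
    Xt = torch.from_numpy(X)
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(Xt), dim=1) for _ in range(samples)])
    return probs.mean(0)                        # predictive mean over stochastic passes

# "Developmental stages": fresh models trained on ever larger data subsets.
X_test, y_test = X[-500:], y[-500:]
for stage_size in (50, 200, 1000, 3000):
    model = MCDropoutNet()
    train(model, X[:stage_size], y[:stage_size])
    p = mc_predict(model, X_test)
    acc = (p.argmax(1).numpy() == y_test).mean()
    entropy = -(p * p.clamp_min(1e-9).log()).sum(1).mean().item()
    print(f"stage n={stage_size:4d}  accuracy={acc:.2f}  mean entropy={entropy:.2f}")
```

    Under this toy setup, accuracy typically rises and predictive entropy falls as the "stage" (training-set size) grows, which is the sort of stage-wise trajectory one would then compare against infant data.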

    ID + MD = OD: Towards a Fundamental Algorithm for Consciousness

    The algorithm described in this short paper is a simplified formal representation of consciousness that may be applied in the fields of Psychology and Artificial Intelligence.

    Integration of psychological models in the design of artificial creatures

    Artificial creatures form an increasingly important component of interactive computer games. Examples of such creatures exist which can interact with each other and with the game player, and learn from their experiences. However, we argue, the design of the underlying architectures and algorithms has to a large extent overlooked knowledge from psychology and the cognitive sciences. We explore the integration of observations from studies of motivational systems and emotional behaviour into the design of artificial creatures. An initial implementation of our ideas using the “sim agent” toolkit illustrates that physiological models can be used as the basis for creatures with animal-like behavioural attributes. The current aim of this research is to increase the “realism” of artificial creatures in interactive game-play, but it may have wider implications for the development of AI.
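
    A minimal sketch of the underlying idea, assuming a simple homeostatic drive model: each "physiological" variable decays toward depletion, its deviation from a set-point defines a drive, and the creature acts on whichever drive is currently strongest. The class, variable names, and numbers below are hypothetical illustrations, not the sim agent toolkit itself.

```python
# Hypothetical drive-based creature: physiological variables decay over time,
# and the action serving the strongest drive wins (winner-take-all selection).
import random

class Creature:
    SET_POINT = 1.0  # ideal level for every physiological variable

    def __init__(self):
        # physiological state decays each tick and must be restored by acting
        self.state = {"energy": 1.0, "hydration": 1.0, "safety": 1.0}
        # what each action restores, and by how much
        self.actions = {"eat": ("energy", 0.4),
                        "drink": ("hydration", 0.4),
                        "hide": ("safety", 0.6)}

    def drives(self):
        # drive strength = how far each variable has fallen below its set-point
        return {k: max(0.0, self.SET_POINT - v) for k, v in self.state.items()}

    def step(self):
        # passive decay, plus an occasional threat that lowers perceived safety
        for k in self.state:
            self.state[k] = max(0.0, self.state[k] - random.uniform(0.02, 0.08))
        if random.random() < 0.1:
            self.state["safety"] = max(0.0, self.state["safety"] - 0.3)

        # act on the strongest drive and restore the corresponding variable
        drives = self.drives()
        strongest = max(drives, key=drives.get)
        action = next(a for a, (var, _) in self.actions.items() if var == strongest)
        var, gain = self.actions[action]
        self.state[var] = min(self.SET_POINT, self.state[var] + gain)
        return action

creature = Creature()
print(" -> ".join(creature.step() for _ in range(20)))
```

    Even this toy loop produces the kind of behaviour switching the abstract alludes to: the creature alternates between feeding, drinking, and hiding as the respective drives compete for control.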

    Quantitative abstraction theory

    A quantitative theory of abstraction is presented. Its central feature is a growth formula defining the number of abstractions that may be formed by an individual agent in a given context. Implications of the theory for artificial intelligence and cognitive psychology are explored. Its possible applications to the issue of implicit vs. explicit learning are also discussed.

    The perception of emotion in artificial agents

    Given recent technological developments in robotics, artificial intelligence, and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent through cues such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Beyond accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.