Scientific requirements for an engineered model of consciousness
The building of a non-natural conscious system requires more than the design of physical or virtual machines with intuitively conceived abilities, philosophically elucidated architecture, or hardware homologous to an animal's brain. Human society might one day treat a type of robot or computing system as an artificial person, yet that would not answer scientific questions about whether the machine is conscious. Indeed, empirical tests for consciousness are impossible because no such entity is denoted within the theoretical structure of the science of mind, i.e. psychology. However, contemporary experimental psychology can identify whether a specific mental process is conscious in particular circumstances, by theory-based interpretation of the overt performance of human beings. Thus, if we are to build a conscious machine, the artificial systems must be used as a test-bed for theory developed from the existing science that distinguishes conscious from non-conscious causation in natural systems. Only such a rich and realistic account of hypothetical processes accounting for observed input/output relationships can establish whether or not an engineered system is a model of consciousness. It follows that any research project on machine consciousness needs a programme of psychological experiments on the demonstration systems, and that the programme should be designed to deliver a fully detailed scientific theory of the type of artificial mind being developed: a Psychology of that Machine.
Herbert Simon (1916-2001). The scientist of the artificial
With the disappearance of Herbert A. Simon, we have lost one of the most original thinkers of the 20th century. Highly influential in a number of scientific fields, some of which he actually helped create, such as artificial intelligence and information-processing psychology, Simon was a true polymath. His research started in management science and political science, later encompassed operations research, statistics and economics, and finally included computer science, artificial intelligence, psychology, education, philosophy of science, biology, and the sciences of design. His often controversial ideas earned him wide scientific recognition and essentially all the top awards of the fields in which he researched, including the Turing Award from the Association for Computing Machinery, with Allen Newell, in 1975, the Nobel prize in economics, in 1978, and the Gold Medal Award for Psychological Science from the American Psychological Foundation, in 1988.
Emulating Human Developmental Stages with Bayesian Neural Networks
We compare the acquisition of knowledge in humans and machines. Research from the field of developmental psychology indicates that human hypotheses are initially guided by simple rules before evolving into more complex theories. This observation is shared across many tasks and domains. We investigate whether stages of development in artificial learning systems show the same characteristics. We operationalize developmental stages as the size of the dataset on which the artificial system is trained. For our analysis we examine the developmental progress of Bayesian Neural Networks on three different datasets, covering occlusion, support, and quantity-comparison tasks. We compare the results with prior research from developmental psychology and find agreement between the family of optimized models and the patterns of development observed in infants and children on all three tasks, indicating common principles for the acquisition of knowledge.
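The core idea of the abstract, treating dataset size as a proxy for developmental stage, can be illustrated with a minimal sketch. This is not the paper's actual Bayesian Neural Network setup; as a stand-in it uses a toy conjugate Bayesian linear model (with an assumed true slope, noise level, and prior), where the posterior over the model's single parameter sharpens as the training set grows:

```python
import numpy as np

# Toy sketch: a conjugate Bayesian linear model in place of the paper's
# Bayesian Neural Networks. "Developmental stage" is operationalized as
# the size of the training set, as in the abstract; the model, data,
# and noise levels here are illustrative assumptions.

rng = np.random.default_rng(0)
true_w = 2.0       # hypothetical ground-truth slope
noise_var = 1.0    # assumed observation-noise variance
prior_var = 10.0   # broad Gaussian prior over the slope

def posterior(x, y):
    """Closed-form Gaussian posterior over w for y = w*x + noise."""
    precision = 1.0 / prior_var + (x @ x) / noise_var
    mean = (x @ y) / noise_var / precision
    return mean, 1.0 / precision  # posterior mean and variance

for n in [5, 50, 500]:  # increasing "developmental stages"
    x = rng.normal(size=n)
    y = true_w * x + rng.normal(scale=np.sqrt(noise_var), size=n)
    mean, var = posterior(x, y)
    print(f"n={n:4d}  posterior mean={mean:.3f}  variance={var:.5f}")
```

At small n the posterior is broad and dominated by the prior (a "simple rule"); with more data it concentrates near the true parameter, loosely mirroring the simple-to-complex trajectory the abstract describes.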
Verbal analogy problem sets: An inventory of testing materials.
Analogical reasoning is an active topic of investigation across education, artificial intelligence (AI), cognitive psychology, and related fields. In all fields of inquiry, explicit analogy problems provide useful tools for investigating the mechanisms underlying analogical reasoning. Such problem sets have been developed by researchers working in the fields of educational testing, AI, and cognitive psychology. However, these analogy tests have not been systematically made accessible across all the relevant fields. The present paper aims to remedy this situation by presenting a working inventory of verbal analogy problem sets, intended to capture and organize sets from diverse sources.
ID + MD = OD: Towards a Fundamental Algorithm for Consciousness
The Algorithm described in this short paper is a simplified formal representation of consciousness that may be applied in the fields of Psychology and Artificial Intelligence.
Integration of psychological models in the design of artificial creatures
Artificial creatures form an increasingly important component of interactive computer games. Examples of such creatures exist which can interact with each other and the game player and learn from their experiences. However, we argue, the design of the underlying architecture and algorithms has to a large extent overlooked knowledge from psychology and the cognitive sciences. We explore the integration of observations from studies of motivational systems and emotional behaviour into the design of artificial creatures. An initial implementation of our ideas using the "sim agent" toolkit illustrates that physiological models can be used as the basis for creatures with animal-like behaviour attributes. The current aim of this research is to increase the "realism" of artificial creatures in interactive game-play, but it may have wider implications for the development of AI.
Quantitative abstraction theory
A quantitative theory of abstraction is presented. The central feature of this is a growth formula defining the number of abstractions which may be formed by an individual agent in a given context. Implications of the theory for artificial intelligence and cognitive psychology are explored. Its possible applications to the issue of implicit v. explicit learning are also discussed.
The perception of emotion in artificial agents
Given recent technological developments in robotics, artificial intelligence and virtual reality, it is perhaps unsurprising that the arrival of emotionally expressive and reactive artificial agents is imminent. However, if such agents are to become integrated into our social milieu, it is imperative to establish an understanding of whether and how humans perceive emotion in artificial agents. In this review, we incorporate recent findings from social robotics, virtual reality, psychology, and neuroscience to examine how people recognize and respond to emotions displayed by artificial agents. First, we review how people perceive emotions expressed by an artificial agent, such as facial and bodily expressions and vocal tone. Second, we evaluate the similarities and differences in the consequences of perceived emotions in artificial compared to human agents. Besides accurately recognizing the emotional state of an artificial agent, it is critical to understand how humans respond to those emotions. Does interacting with an angry robot induce the same responses in people as interacting with an angry person? Similarly, does watching a robot rejoice when it wins a game elicit similar feelings of elation in the human observer? Here we provide an overview of the current state of emotion expression and perception in social robotics, as well as a clear articulation of the challenges and guiding principles to be addressed as we move ever closer to truly emotional artificial agents.