Machines Learning - Towards a New Synthetic Autobiographical Memory
Autobiographical memory is the organisation of episodes and contextual information from an individual’s experiences into a coherent narrative, which is key to our sense of self. Formation and recall of autobiographical memories is essential for effective, adaptive behaviour in the world, providing contextual information necessary for planning actions and memory functions such as event reconstruction. A synthetic autobiographical memory system would endow intelligent robotic agents with many essential components of cognition through active compression and storage of historical sensorimotor data in an easily addressable manner. Current approaches neither fulfil these functional requirements nor build upon recent understanding of predictive coding, deep learning, and the neurobiology of memory. This position paper highlights desiderata for a modern implementation of synthetic autobiographical memory based on human episodic memory, and proposes that a recently developed model of hippocampal memory could be extended as a generalised model of autobiographical memory. Initial implementation will be targeted at social interaction, where current synthetic autobiographical memory systems have had success.
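The two functional requirements named above — active compression of sensorimotor history and easily addressable (content-based) recall — can be illustrated with a minimal sketch. Everything here is a hypothetical stand-in: the class `EpisodicStore` is invented for this example, and a fixed random projection substitutes for the learned compressive encoder a real system would use.

```python
import numpy as np

class EpisodicStore:
    """Minimal sketch of compress-and-recall episodic memory.

    A fixed random projection stands in for a learned encoder;
    a real system would train a compressive model on sensorimotor
    history. All names here are illustrative, not from the paper.
    """

    def __init__(self, sensor_dim: int, code_dim: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        # "Compression": project raw sensorimotor frames to a small code.
        self.projection = rng.standard_normal((code_dim, sensor_dim))
        self.codes = []     # compressed episode codes
        self.episodes = []  # contextual metadata per episode

    def store(self, frame: np.ndarray, context: dict) -> None:
        code = self.projection @ frame
        self.codes.append(code / np.linalg.norm(code))
        self.episodes.append(context)

    def recall(self, cue: np.ndarray) -> dict:
        """Content-addressable recall: return the stored episode whose
        compressed code best matches the compressed cue."""
        query = self.projection @ cue
        query /= np.linalg.norm(query)
        return self.episodes[int(np.argmax(np.stack(self.codes) @ query))]

store = EpisodicStore(sensor_dim=128)
frame = np.random.default_rng(1).standard_normal(128)
store.store(frame, {"event": "greeting", "partner": "visitor"})
print(store.recall(frame + 0.05))  # a noisy cue still retrieves the episode
```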
Motivations, Values and Emotions: 3 sides of the same coin
This position paper speaks to the interrelationships between the three concepts of motivations, values, and emotions. Motivations prime actions, values serve to choose between motivations, emotions provide a common currency for values, and emotions implement motivations. While conceptually distinct, the three are so pragmatically intertwined that the distinctions between them depend primarily on one’s point of view. To make these points more transparent, we briefly describe the three in the context of a cognitive architecture, the LIDA model, for software agents and robots that models human cognition, including a developmental period. We also compare the LIDA model with other models of cognition, some involving learning and emotions. Finally, we conclude that artificial emotions will prove most valuable as implementers of motivations in situations requiring learning and development.
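The division of labour claimed here — motivations prime actions, values arbitrate between motivations, and emotions supply the common currency for that arbitration — can be made concrete with a toy sketch. The names and numbers below are invented for illustration and are not part of the LIDA model:

```python
# Toy illustration (not LIDA itself): emotions as a common currency
# that lets values choose between competing motivations.
motivations = {
    "seek_food":    {"action": "forage", "valence": +0.6},  # emotional worth
    "avoid_threat": {"action": "hide",   "valence": +0.9},
    "explore":      {"action": "wander", "valence": +0.3},
}

def choose_action(motivations: dict) -> str:
    # Values arbitrate by comparing motivations on the single
    # emotionally assigned scale (the "common currency").
    return max(motivations.values(), key=lambda m: m["valence"])["action"]

print(choose_action(motivations))  # -> "hide"
```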
A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents
Recently there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of Artificial General Intelligence, or AGI.

Moral decision making is arguably one of the most challenging tasks for computational approaches to higher order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics or Friendly AI. In this paper we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making.

Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global Workspace Theory (GWT), proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin et al. 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent’s selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation. We will describe how the LIDA model helps integrate emotions into the human decision making process, and elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
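As a rough picture of how bottom-up sensory data and top-down situational understanding can meet in action selection, the following sketch compresses a Global-Workspace-style cycle into three steps. It is a caricature for intuition only, not LIDA code; every name in it is invented for this example.

```python
# Caricature of a Global-Workspace-style cycle (not actual LIDA code).
from dataclasses import dataclass

@dataclass
class Coalition:
    content: str       # what this coalition of processes is about
    activation: float  # bottom-up salience plus top-down relevance

def cognitive_cycle(coalitions: list, behaviours: dict) -> str:
    # 1. Competition: the most activated coalition wins "consciousness".
    winner = max(coalitions, key=lambda c: c.activation)
    # 2. Broadcast: its content becomes globally available...
    broadcast = winner.content
    # 3. Action selection: ...and recruits the behaviour that answers it.
    return behaviours.get(broadcast, "do_nothing")

coalitions = [
    Coalition("obstacle_ahead", activation=0.8),  # bottom-up, salient
    Coalition("goal_location", activation=0.5),   # top-down expectation
]
behaviours = {"obstacle_ahead": "turn_left", "goal_location": "advance"}
print(cognitive_cycle(coalitions, behaviours))  # -> "turn_left"
```

An ethical reading of the same loop would let morally salient coalitions (a perceived harm, a violated norm) compete for the broadcast on equal terms with other content, which is the sense in which moral decisions can reuse the general decision-making mechanism.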
Procedural-Reasoning Architecture for Applied Behavior Analysis-based Instructions
Autism Spectrum Disorder (ASD) is a complex developmental disability affecting as many as 1 in every 88 children. While there is no known cure for ASD, there are known behavioral and developmental interventions, based on demonstrated efficacy, that have become the predominant treatments for improving social, adaptive, and behavioral functions in children.
Applied Behavioral Analysis (ABA)-based early childhood interventions are evidence-based, efficacious therapies for autism that are widely recognized as effective approaches to remediation of the symptoms of ASD. They are, however, labor intensive and consequently often inaccessible at the recommended levels.
Recent advancements in socially assistive robotics and applications of virtual intelligent agents have shown that children with ASD accept intelligent agents as effective and often preferred substitutes for human therapists. This research is nascent and highly experimental, with no unifying, interdisciplinary, and integral approach to the development of intelligent-agent-based therapies, particularly in the area of behavioral interventions.
Motivated by the absence of such a unifying framework, we developed a conceptual procedural-reasoning agent architecture (PRA-ABA) that, we propose, could serve as a foundation for ABA-based assistive technologies involving virtual, mixed or embodied agents, including robots. This architecture and related research presented in this dissertation encompass two main areas: (a) knowledge representation and a computational model of the behavioral aspects of ABA as applicable to autism intervention practices, and (b) an abstract architecture for multi-modal, agent-mediated implementation of these practices.
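One way to picture the behavioural core such an architecture must represent is the discrete trial used in ABA practice: present a stimulus, escalate prompting as needed, and reinforce a correct response. The sketch below is a hypothetical rendering for intuition; the class and method names are invented, and it is not the PRA-ABA architecture itself.

```python
# Hypothetical sketch of an ABA-style discrete trial loop
# (illustrative only; not the PRA-ABA architecture).
class TrialAgent:
    """Stub standing in for a virtual or robotic therapist."""
    def present(self, stimulus: str, prompt_level: int) -> str:
        # A real agent would render speech/animation and read the
        # child's response; this stub succeeds once prompting escalates.
        return "correct" if prompt_level >= 2 else "no_response"
    def reinforce(self):
        print("reinforce: praise + preferred item")
    def record(self, stimulus: str, outcome: str):
        print(f"record: {stimulus} -> {outcome}")  # data for the therapist

def discrete_trial(agent: TrialAgent, stimulus: str, max_prompts: int = 3) -> bool:
    for prompt_level in range(max_prompts + 1):  # escalate: gesture -> model -> guide
        if agent.present(stimulus, prompt_level) == "correct":
            agent.reinforce()
            agent.record(stimulus, f"correct@prompt{prompt_level}")
            return True
    agent.record(stimulus, "failed")
    return False

discrete_trial(TrialAgent(), "touch the red card")
```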
Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future
Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level, enabling the testing and constraining of conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).
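The mechanism being tested can be stated very compactly: each level of the hierarchy sends predictions downward, receives prediction errors upward, and updates its internal estimate to reduce those errors. A minimal one-level sketch follows; the notation and learning rate are illustrative choices, not taken from the commentary or from Clark.

```python
import numpy as np

# Minimal one-level predictive-coding loop (illustrative only):
# the estimate mu is the internal model's top-down prediction of the
# input x, and eta scales how fast prediction errors revise mu.
def predictive_coding(x_stream, eta: float = 0.2):
    mu = 0.0                       # internal estimate
    for x in x_stream:
        prediction = mu            # top-down (feedback) prediction
        error = x - prediction     # bottom-up prediction error
        mu += eta * error          # update the model to reduce error
        yield prediction, error

inputs = np.concatenate([np.ones(20), 3 * np.ones(20)])  # a step change
for t, (pred, err) in enumerate(predictive_coding(inputs)):
    if t % 10 == 0:
        print(f"t={t:2d}  prediction={pred:+.2f}  error={err:+.2f}")
```

Errors spike at the step change and then decay as the prediction catches up — feedback carrying the expected future back down the hierarchy.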
A Study of Agent Influence in Nested Agent Interactions
This work develops a theory of agent influence and applies it to a coached system of simple reactive agents. Our notion of influence is intended to describe agent ability that is contingent on the actions of other agents, and we view such behaviours as being “nested”: an agent may have the ability to make A hold only if another agent has carried out a particular action. Our analysis is based on combining observation of the effects of an agent’s actions in a bounded environment with observation of what may be changed in that environment, and is intended to allow for a logical representation of nested behaviours. We build on this notion to develop a theory of influence, which we offer as an extension of existing systems for representing agency and its effects.
The notion of an agent being able to “see to it” that something is brought about has been a useful device for reasoning about agent ability. These so-called STIT semantics have been developed by a number of researchers. Standard STIT semantics allow statements of the form [α stit: A], which says that agent α has the ability to see to it that A holds. Although based on the concept of agent action, STIT semantics also allow for the representation of concepts involving what may be thought of as inaction. An agent deciding, for example, not to execute a particular action may be characterised as seeing to it that it does not see to it that A, [α stit: ¬[α stit: A]]. STIT encourages nesting, and although this nesting extends across actions within a single agent, it does not extend easily across agents. So-called other-agent statements of the form [β stit: [α stit: A]] do not make sense in standard STIT semantics, because β seeing to it that α sees to it that A holds implies that β has some dominion over α, which, in turn, compromises α’s agency. Although the statement makes no sense under standard STIT, it does make sense in an intuitive way, and Brian Chellas [31] notes that it would be:
“...bizarre to deny that an agent should be able to see to it that another agent sees to something”
This is also mentioned in Belnap et al. [8, page 275]. Chellas is correct, and there are numerous settings in which other-agent STIT does make sense. These settings, which are captured in various readings of STIT, may bring a great deal of system-level overhead. In a normative system, for example, β may have the option of imposing a sanction on α if α fails to bring about A, and in this sense may be thought of as seeing to it that α sees to it that A holds. Similarly, a deontic reading may place β in a position where it is able to place an obligation on α to bring about A. These readings allow for sensible interpretation of other-agent STIT, but the examples above require either that agents have sufficient awareness of personal utility to be able to manage sanctions, or that they are able to reason about obligations. These readings offer nothing for simple agents with limited resources and abilities.
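For orientation, the standard Chellas-style truth condition behind the [α stit: A] notation can be written as below; this is the textbook formulation, not necessarily the exact variant adopted here.

```latex
% Chellas-style (cstit) truth condition: A holds on every history
% in \alpha's current choice cell at moment m.
m/h \models [\alpha\ \mathit{stit}{:}\ A]
  \iff
\forall h' \in \mathit{Choice}^{m}_{\alpha}(h):\ m/h' \models A
```

Under this reading α’s current choice guarantees A on every history it leaves open; the deliberative variant adds the negative condition that A must not already hold on all histories through m.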
We offer another reading for the STIT element, one based on the concept of agent influence and one which carries minimal system-level overhead. Because influence may be contingent on simultaneous or sequential behaviour by a number of agents, it is extendible across agents and offers a means of addressing other-agent statements. We extend the standard STIT semantics of Horty, Belnap and others with the introduction of “leads to” and “may lead to” operators, which allow us to move our analysis into a setting where observation provides evidence of influence. We then explore the manifestation of influence in a number of scenarios, after which we offer a partial logical characterisation of the influence operators and discuss its relationship with standard STIT.
Building on these semantics and the partial logical characterisation, we then explore the practical use of our theory of influence in an agent learning system. We describe experiments with a system specified by safety and liveness properties and containing two broad classes of agents: actors and coaches. Actor agents manipulate their environment, while coaching agents observe the actors’ behaviour and its effects, using aggregated observations to generate new behaviours that are then seeded in the environment to modify actor behaviour.
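A skeletal version of that actor/coach arrangement might look like the following; the behaviour names, the stubbed environment, and the frequency-based aggregation are invented for illustration and are not the experimental system itself.

```python
import random
from collections import Counter

# Skeletal actor/coach loop (illustrative; not the thesis's system).
def run(episodes: int = 200) -> str:
    behaviours = ["a", "b", "c"]                         # actor's repertoire
    effects = {"a": "safe", "b": "unsafe", "c": "safe"}  # stubbed environment
    observed = Counter()

    for _ in range(episodes):
        action = random.choice(behaviours)           # actor manipulates the env
        observed[(action, effects[action])] += 1     # coach observes the effect

    # The coach aggregates its observations and "seeds" back the
    # behaviour that best preserved the safety property.
    safe_counts = {a: n for (a, eff), n in observed.items() if eff == "safe"}
    return max(safe_counts, key=safe_counts.get)

print("seeded behaviour:", run())
```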
We then offer a discussion and evaluation of our theory and its applications, indicating where it may be further developed and applied.