Control of Mobile Robots Using the Soar Cognitive Architecture
Peer Reviewed. http://deepblue.lib.umich.edu/bitstream/2027.42/77099/1/AIAA-37056-144.pd
The Roles of Symbols in Neural-based AI: They are Not What You Think!
We propose that symbols are first and foremost external communication tools
used between intelligent agents that allow knowledge to be transferred in a
more efficient and effective manner than having to experience the world
directly. But they are also used internally within an agent through a form of
self-communication to help formulate, describe and justify subsymbolic patterns
of neural activity that truly implement thinking. Symbols, and our languages
that make use of them, not only allow us to explain our thinking to others and
ourselves, but also provide beneficial constraints (inductive bias) on learning
about the world. In this paper we present relevant insights from neuroscience
and cognitive science about how the human brain represents symbols and the
concepts they refer to, and how today's artificial neural networks can do the
same. We then present a novel neuro-symbolic hypothesis and a plausible
architecture for intelligent agents that combines subsymbolic representations
for symbols and concepts for learning and reasoning. Our hypothesis and
associated architecture imply that symbols will remain critical to the future
of intelligent systems NOT because they are the fundamental building blocks of
thought, but because they are characterizations of subsymbolic processes that
constitute thought.
Artificial intelligence approaches for the generation and assessment of believable human-like behaviour in virtual characters
Having artificial agents autonomously produce human-like behaviour is one of the most ambitious original goals of Artificial Intelligence (AI) and remains an open problem today. The imitation game originally proposed by Turing constitutes a very effective method to prove the indistinguishability of an artificial agent. The behaviour of an agent is said to be indistinguishable from that of a human when observers (the so-called judges in the Turing test) cannot tell humans and non-human agents apart. Different environments, testing protocols, scopes and problem domains can be established to develop limited versions or variants of the original Turing test. In this paper we use a specific version of the Turing test, based on the international BotPrize competition, built in a First-Person Shooter video game, where both human players and non-player characters interact in complex virtual environments. Based on our past experience both in the BotPrize competition and in other robotics and computer game AI applications, we have developed three new, more advanced controllers for believable agents: two based on a combination of the CERA-CRANIUM and SOAR cognitive architectures, and a third based on ADANN, a system for the automatic evolution and adaptation of artificial neural networks. These new agents have been put to the test jointly with CCBot3, the winner of the BotPrize 2010 competition [1], and have shown a significant improvement in the humanness ratio. Additionally, we have subjected all these bots to both first-person believability assessment (the original BotPrize judging protocol) and third-person believability assessment, demonstrating that the active involvement of the judge has a great impact on the recognition of human-like behaviour. MICINN - Ministerio de Ciencia e Innovación (FCT-13-7848
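The humanness ratio reported above is, at its core, a proportion of judge verdicts. A minimal sketch is below; the aggregation shown is an illustrative assumption, not the competition's official scoring code:

```python
def humanness_ratio(judgements):
    """Fraction of judging episodes in which an agent was rated 'human'.

    In BotPrize, an agent whose ratio exceeds a threshold (50% in the
    original rules) counts as believable.  The input format (a list of
    'human'/'bot' verdict strings) is a simplifying assumption.
    """
    votes = [1 if verdict == "human" else 0 for verdict in judgements]
    return sum(votes) / len(votes) if votes else 0.0
```

For example, an agent judged human in three of four episodes scores 0.75 and would pass the original 50% believability bar.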
TALK COMMONSENSE TO ME! ENRICHING LANGUAGE MODELS WITH COMMONSENSE KNOWLEDGE
Human cognition is fascinating: it is a mesh of several neural phenomena that
together drive our ability to constantly reason and infer about the surrounding
world. In cognitive computer science, Commonsense Reasoning is the term given
to our ability to infer uncertain events and reason about Cognitive Knowledge.
The introduction of Commonsense into intelligent systems has long been desired,
but the mechanism for this introduction remains a scientific jigsaw. Some
implicitly believe that language understanding is enough to achieve some level
of Commonsense [90]. On less common ground, there are others who think that
enriching language with Knowledge Graphs might be enough for human-like
reasoning [63], while still others believe human-like reasoning can only be
truly captured with symbolic rules and logical deduction powered by Knowledge
Bases, such as taxonomies and ontologies [50]. We focus on the integration of
Commonsense Knowledge into Language Models, because we believe this integration
is a step towards a beneficial embedding of Commonsense Reasoning in
interactive Intelligent Systems, such as conversational assistants.
Conversational assistants, such as Amazon's Alexa, are user-driven systems.
A more human-like interaction is therefore strongly desired to really capture
the user's attention and empathy. We believe that such humanistic
characteristics can be achieved through the introduction of stronger
Commonsense Knowledge and Reasoning to fruitfully engage with users.
To this end, we introduce a new family of models, Relation-Aware BART
(RA-BART), leveraging the language generation abilities of BART [51] together
with explicit Commonsense Knowledge extracted from Commonsense Knowledge Graphs
to further extend the human-like capabilities of these models.
We evaluate our model on three different tasks: Abstractive Question Answering,
Text Generation conditioned on certain concepts, and a Multi-Choice Question
Answering task. We find that, on generation tasks, RA-BART outperforms
non-knowledge-enriched models; however, it underperforms on the multi-choice
question answering task.
Our project can be consulted in our open-source, public GitHub repository
(Explicit Commonsense).
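The core enrichment idea (serializing Commonsense Knowledge Graph relations alongside the input text before generation) can be sketched as follows; the toy KG format and the `[KG] ... [TEXT]` serialization are illustrative assumptions, not RA-BART's actual architecture:

```python
def enrich_with_commonsense(text, kg, max_triples=3):
    """Prepend serialized Commonsense Knowledge Graph relations for concepts
    mentioned in the input, so a sequence-to-sequence model such as BART sees
    both the text and the explicit knowledge.

    `kg` maps a concept to a list of (relation, tail) pairs, in the style of
    ConceptNet triples; this format and the marker tokens are assumptions.
    """
    triples = []
    for token in text.lower().split():
        for relation, tail in kg.get(token, [])[:max_triples]:
            triples.append(f"{token} {relation} {tail}")
    knowledge = " ; ".join(triples)
    return f"[KG] {knowledge} [TEXT] {text}" if triples else text

# Toy commonsense graph in the style of ConceptNet triples.
kg = {"knife": [("UsedFor", "cutting")], "bread": [("IsA", "food")]}
```

The enriched string would then be tokenized and fed to the generator in place of the raw question or concept list.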
Symbol Emergence in Robotics: A Survey
Humans can learn the use of language through physical interaction with their
environment and semiotic communication with other people. It is very important
to obtain a computational understanding of how humans can form a symbol system
and obtain semiotic skills through their autonomous mental development.
Recently, many studies have been conducted on the construction of robotic
systems and machine-learning methods that can learn the use of language through
embodied multimodal interaction with their environment and other systems.
Understanding human social interactions, and developing a robot that can
smoothly communicate with human users over the long term, crucially requires
an understanding of the dynamics of symbol systems. The
embodied cognition and social interaction of participants gradually change a
symbol system in a constructive manner. In this paper, we introduce a field of
research called symbol emergence in robotics (SER). SER is a constructive
approach towards an emergent symbol system. The emergent symbol system is
socially self-organized through both semiotic communications and physical
interactions with autonomous cognitive developmental agents, i.e., humans and
developmental robots. Specifically, we describe some state-of-the-art research
topics concerning SER, e.g., multimodal categorization, word discovery, and
double articulation analysis, which enable a robot to obtain words and their
embodied meanings from raw sensory-motor information, including visual
information, haptic information, auditory information, and acoustic speech
signals, in a totally unsupervised manner. Finally, we suggest future
directions of research in SER.
Implications of Computational Cognitive Models for Information Retrieval
This dissertation explores the implications of computational cognitive modeling for information retrieval. The parallel between information retrieval and human memory is that the goal of an information retrieval system is to find the set of documents most relevant to the query, whereas the goal of the human memory system is to assess the relevance of items stored in memory given a memory probe (Steyvers & Griffiths, 2010).
The two major topics of this dissertation are desirability and information scent. Desirability is the context independent probability of an item receiving attention (Recker & Pitkow, 1996). Desirability has been widely utilized in numerous experiments to model the probability that a given memory item would be retrieved (Anderson, 2007). Information scent is a context dependent measure defined as the utility of an information item (Pirolli & Card, 1996b). Information scent has been widely utilized to predict the memory item that would be retrieved given a probe (Anderson, 2007) and to predict the browsing behavior of humans (Pirolli & Card, 1996b).
In this dissertation, I proposed the theory that desirability observed in human memory is caused by preferential attachment in networks. Additionally, I showed that documents accessed in large repositories mirror the observed statistical properties in human memory and that these properties can be used to improve document ranking. Finally, I showed that the combination of information scent and desirability improves document ranking over existing well-established approaches.
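One way to picture combining a context-independent desirability prior with a context-dependent information scent term is a single additive ranking score. The log-prior over access counts, the add-one smoothing, and the term-overlap scent measure below are illustrative assumptions, not the dissertation's exact formulation:

```python
import math
from collections import Counter

def rank_documents(query_terms, docs, access_log):
    """Rank documents by desirability + information scent.

    Desirability: a context-independent prior estimated from how often each
    document was accessed in the past (preferential attachment makes these
    counts heavy-tailed).  Information scent: a context-dependent match
    between the query and the document's terms.
    """
    counts = Counter(access_log)            # past accesses per document
    total = sum(counts.values())
    scores = {}
    for doc_id, terms in docs.items():
        # Log-probability of prior access, with add-one smoothing.
        desirability = math.log((counts[doc_id] + 1) / (total + len(docs)))
        # Simple scent: number of query terms the document contains.
        scent = sum(1.0 for t in query_terms if t in terms)
        scores[doc_id] = desirability + scent
    return sorted(scores, key=scores.get, reverse=True)
```

A frequently accessed document thus gets a head start, but a strong query match can still outrank it.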
A Cognitive Modeling Approach to Strategy Formation in Dynamic Decision Making
Decision-making is a high-level cognitive process based on cognitive processes like perception, attention, and memory. Real-life situations require series of decisions to be made, with each decision depending on previous feedback from a potentially changing environment. To gain a better understanding of the underlying processes of dynamic decision-making, we applied the method of cognitive modeling on a complex rule-based category learning task. Here, participants first needed to identify the conjunction of two rules that defined a target category and later adapt to a reversal of feedback contingencies. We developed an ACT-R model for the core aspects of this dynamic decision-making task. An important aim of our model was that it provides a general account of how such tasks are solved and, with minor changes, is applicable to other stimulus materials. The model was implemented as a mixture of an exemplar-based and a rule-based approach which incorporates perceptual-motor and metacognitive aspects as well. The model solves the categorization task by first trying out one-feature strategies and then, as a result of repeated negative feedback, switching to two-feature strategies. Overall, this model solves the task in a similar way as participants do, including generally successful initial learning as well as reversal learning after the change of feedback contingencies. Moreover, the fact that not all participants were successful in the two learning phases is also reflected in the modeling data. However, we found a larger variance and a lower overall performance of the modeling data as compared to the human data which may relate to perceptual preferences or additional knowledge and rules applied by the participants. In a next step, these aspects could be implemented in the model for a better overall fit. 
In view of the large interindividual differences in decision performance between participants, additional information about the underlying cognitive processes from behavioral, psychobiological and neurophysiological data may help to optimize future applications of this model such that it can be transferred to other domains of comparable dynamic decision tasks.
DFG, 54371073, SFB/TRR 62: A Companion Technology for Cognitive Technical Systems
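The described switch from one-feature to two-feature (conjunctive) strategies under repeated negative feedback can be illustrated with a toy rule-search learner; this is a deliberate simplification for exposition, not the paper's ACT-R implementation:

```python
from itertools import combinations

class StrategyLearner:
    """Toy learner that tries one-feature rules first and, after repeated
    negative feedback, moves on through two-feature conjunctions.

    A stimulus is a set of features; the learner predicts 'target' when its
    current rule's features are all present.  `patience` is the number of
    consecutive errors tolerated before switching rules (an assumption).
    """
    def __init__(self, features, patience=3):
        self.candidates = [frozenset([f]) for f in features]           # one-feature rules first
        self.candidates += [frozenset(c) for c in combinations(features, 2)]
        self.current = 0
        self.errors = 0
        self.patience = patience

    def classify(self, stimulus):
        # Predict membership in the target category if the rule is satisfied.
        return self.candidates[self.current] <= stimulus

    def feedback(self, correct):
        # Consecutive errors beyond `patience` trigger a rule switch;
        # wrapping around also allows re-learning after a reversal.
        if correct:
            self.errors = 0
        else:
            self.errors += 1
            if self.errors >= self.patience:
                self.current = (self.current + 1) % len(self.candidates)
                self.errors = 0
```

Trained on stimuli whose category is defined by a two-feature conjunction, the learner discards the single-feature rules and settles on the correct conjunction, mirroring the qualitative behaviour described above.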
A Network Model for Adaptive Information Retrieval
This thesis presents a network model which can be used to represent Associative Information Retrieval applications at a conceptual level. The model presents interesting characteristics of adaptability and it has been used to model both traditional and knowledge based Information Retrieval applications. Moreover, three different processing frameworks which can be used to implement the conceptual model are presented. They provide three different ways of using domain knowledge to adapt the user formulated query to the characteristics of a specific application domain using the domain knowledge stored in a sub-network. The advantages and drawbacks of these three adaptive retrieval strategies are pointed out and discussed. The thesis also reports the results of an experimental investigation into the effectiveness of the adaptive retrieval given by a processing framework based on Neural Networks. This processing framework makes use of the learning and generalisation capabilities of the Backpropagation learning procedure for Neural Networks to build up and use application domain knowledge in the form of a sub-symbolic knowledge representation. The knowledge is acquired from examples of queries and relevant documents of the collection in use. In the tests reported in this thesis the Cranfield document collection has been used. Three different learning strategies are introduced and analysed. Their results in terms of learning and generalisation of the application domain knowledge are studied from an Information Retrieval point of view. Their retrieval results are studied and compared with those obtained by a traditional retrieval approach. The thesis concludes with a critical analysis of the results obtained in the experimental investigation and with a critical view of the operational effectiveness of such an approach
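The sub-symbolic part of the thesis can be pictured as a small backpropagation network trained on example (query, relevant documents) pairs, which afterwards maps new query vectors to document relevance scores. The layer sizes, learning rate, and sigmoid/squared-error setup below are illustrative choices, not the thesis's configuration:

```python
import numpy as np

def train_query_adapter(queries, relevances, hidden=8, epochs=2000, lr=0.5, seed=0):
    """Train a two-layer sigmoid network with plain batch backpropagation.

    `queries` are binary query-term vectors; `relevances` are the target
    relevance vectors over documents.  Returns a function mapping a query
    vector to predicted relevance scores.
    """
    rng = np.random.default_rng(seed)
    X, Y = np.asarray(queries, float), np.asarray(relevances, float)
    W1 = rng.normal(0, 0.5, (X.shape[1], hidden))
    W2 = rng.normal(0, 0.5, (hidden, Y.shape[1]))
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1)                      # hidden activations
        out = sig(H @ W2)                    # predicted relevance
        delta_out = (out - Y) * out * (1 - out)      # squared-error gradient
        dW2 = H.T @ delta_out
        delta_h = (delta_out @ W2.T) * H * (1 - H)
        dW1 = X.T @ delta_h
        W1 -= lr * dW1
        W2 -= lr * dW2
    return lambda q: sig(sig(np.asarray(q, float) @ W1) @ W2)
```

The generalisation behaviour studied in the thesis corresponds to querying the trained network with term combinations it never saw during training.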
Computational Modeling of Emotion: Towards Improving the Inter- and Intradisciplinary Exchange
The past years have seen increasing cooperation between psychology and computer science in the field of computational modeling of emotion. However, to realize its potential, the exchange between the two disciplines, as well as the intradisciplinary coordination, should be further improved. We make three proposals for how this could be achieved. The proposals refer to: 1) systematizing and classifying the assumptions of psychological emotion theories; 2) formalizing emotion theories in implementation-independent formal languages (set theory, agent logics); and 3) modeling emotions using general cognitive architectures (such as Soar and ACT-R), general agent architectures (such as the BDI architecture), or general-purpose affective agent architectures. These proposals share two overarching themes. The first is a proposal for modularization: deconstruct emotion theories into basic assumptions; modularize architectures. The second is a proposal for unification and standardization: translate different emotion theories into a common informal conceptual system or a formal language, or implement them in a common architecture.