Learning how to learn: an adaptive dialogue agent for incrementally learning visually grounded word meanings
We present an optimised multi-modal dialogue agent for interactive learning
of visually grounded word meanings from a human tutor, trained on real
human-human tutoring data. Within a life-long interactive learning period, the
agent, trained using Reinforcement Learning (RL), must be able to handle
natural conversations with human users and achieve good learning performance
(accuracy) while minimising human effort in the learning process. We train and
evaluate this system in interaction with a simulated human tutor, which is
built on the BURCHAK corpus -- a Human-Human Dialogue dataset for the visual
learning task. The results show that: 1) The learned policy can coherently
interact with the simulated user to achieve the goal of the task (i.e. learning
visual attributes of objects, e.g. colour and shape); and 2) it finds a better
trade-off between classifier accuracy and tutoring costs than hand-crafted
rule-based policies, including ones with dynamic policies.
Comment: 10 pages, RoboNLP Workshop at the ACL Conference
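The trade-off the abstract describes can be illustrated with a toy scalar reward that credits classifier improvement and charges for tutor effort. This is a minimal sketch, not the paper's actual reward: `tutoring_reward` and the `effort_penalty` weight are hypothetical names, and the paper learns a policy that balances these terms rather than fixing a weight by hand.

```python
def tutoring_reward(accuracy_gain: float, tutor_turns: int,
                    effort_penalty: float = 0.1) -> float:
    """Toy reward: classifier improvement minus a per-turn cost to the tutor.

    `effort_penalty` is an illustrative constant; an RL agent would learn a
    policy trading these two terms off rather than using a hand-set weight.
    """
    return accuracy_gain - effort_penalty * tutor_turns

# A policy that achieves the same accuracy gain with fewer tutor turns
# (less human effort) receives a higher reward.
lazy = tutoring_reward(accuracy_gain=0.2, tutor_turns=1)
chatty = tutoring_reward(accuracy_gain=0.2, tutor_turns=5)
```

Under this sketch, minimising human effort while preserving accuracy is exactly what maximising the reward encourages.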
The Role of Imagination in Social Scientific Discovery: Why Machine Discoverers Will Need Imagination Algorithms
When philosophers discuss the possibility of machines making scientific discoveries, they typically focus on discoveries in physics, biology, chemistry, and mathematics. Observing the rapid increase of computer use in science, however, it becomes natural to ask whether there are any scientific domains out of reach for machine discovery. For example, could machines also make discoveries in qualitative social science? Is there something about humans that makes us uniquely suited to studying humans? Is there something about machines that would bar them from such activity? A close look at the methodology of interpretive social science reveals several abilities necessary to make a social scientific discovery, and one capacity necessary to possess any of them is imagination. For machines to make discoveries in social science, therefore, they must possess imagination algorithms.
Training an adaptive dialogue policy for interactive learning of visually grounded word meanings
We present a multi-modal dialogue system for interactive learning of
perceptually grounded word meanings from a human tutor. The system integrates
an incremental, semantic parsing/generation framework - Dynamic Syntax and Type
Theory with Records (DS-TTR) - with a set of visual classifiers that are
learned throughout the interaction and which ground the meaning representations
that it produces. We use this system in interaction with a simulated human
tutor to study the effects of different dialogue policies and capabilities on
the accuracy of learned meanings, learning rates, and efforts/costs to the
tutor. We show that the overall performance of the learning agent is affected
by (1) who takes initiative in the dialogues; (2) the ability to express/use
their confidence level about visual attributes; and (3) the ability to process
elliptical and incrementally constructed dialogue turns. Ultimately, we train
an adaptive dialogue policy which optimises the trade-off between classifier
accuracy and tutoring costs.
Comment: 11 pages, SIGDIAL 2016 Conference
Language acquisition and machine learning
In this paper, we review recent progress in the field of machine learning and examine its implications for computational models of language acquisition. As a framework for understanding this research, we propose four component tasks involved in learning from experience - aggregation, clustering, characterization, and storage. We then consider four common problems studied by machine learning researchers - learning from examples, heuristics learning, conceptual clustering, and learning macro-operators - describing each in terms of our framework. After this, we turn to the problem of grammar acquisition, relating this problem to other learning tasks and reviewing four AI systems that have addressed the problem. Finally, we note some limitations of the earlier work and propose an alternative approach to modeling the mechanisms underlying language acquisition.
Machine learning : techniques and foundations
The field of machine learning studies computational methods for acquiring new knowledge, new skills, and new ways to organize existing knowledge. In this paper we present some of the basic techniques and principles that underlie AI research on learning, including methods for learning from examples, learning in problem solving, learning by analogy, grammar acquisition, and machine discovery. In each case, we illustrate the techniques with paradigmatic examples.
Cognitive Computation sans Representation
The Computational Theory of Mind (CTM) holds that cognitive processes are essentially computational, and hence computation provides the scientific key to explaining mentality. The Representational Theory of Mind (RTM) holds that representational content is the key feature distinguishing mental from non-mental systems. I argue that there is a deep incompatibility between these two theoretical frameworks, and that the acceptance of CTM provides strong grounds for rejecting RTM. The focal point of the incompatibility is the fact that representational content is extrinsic to formal procedures as such, and the intended interpretation of syntax makes no difference to the execution of an algorithm. So the unique 'content' postulated by RTM is superfluous to the formal procedures of CTM. And once these procedures are implemented in a physical mechanism, it is exclusively the causal properties of the physical mechanism that are responsible for all aspects of the system's behaviour. So once again, postulated content is rendered superfluous. To the extent that semantic content may appear to play a role in behaviour, it must be syntactically encoded within the system; and just as in a standard computational artefact, so too with the human mind/brain - it's pure syntax all the way down to the level of physical implementation. Hence 'content' is at most a convenient meta-level gloss, projected from the outside by human theorists, which itself can play no role in cognitive processing.
Natural Language Does Not Emerge 'Naturally' in Multi-Agent Dialog
A number of recent works have proposed techniques for end-to-end learning of
communication protocols among cooperative multi-agent populations, and have
simultaneously found the emergence of grounded human-interpretable language in
the protocols developed by the agents, all learned without any human
supervision!
In this paper, using a Task and Tell reference game between two agents as a
testbed, we present a sequence of 'negative' results culminating in a
'positive' one -- showing that while most agent-invented languages are
effective (i.e. achieve near-perfect task rewards), they are decidedly not
interpretable or compositional.
In essence, we find that natural language does not emerge 'naturally',
despite the semblance of ease of natural-language-emergence that one may gather
from recent literature. We discuss how it is possible to coax the invented
languages to become more and more human-like and compositional by increasing
restrictions on how two agents may communicate.
Comment: 9 pages, 7 figures, 2 tables, accepted at EMNLP 2017 as a short paper
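The abstract's central observation - that agent-invented codes can be perfectly effective without being compositional - can be seen in a toy version of a reference game. The sketch below is an illustrative assumption, not the paper's Task & Tell setup: the agents share an arbitrary lookup table in which no token fragment consistently means "red" or "square", yet the listener still recovers every target.

```python
import itertools

shapes = ["circle", "square"]
colors = ["red", "blue"]

# An arbitrary, non-compositional code: each (shape, color) object maps to an
# unrelated token, so the message carries no reusable sub-parts for attributes.
codebook = {obj: f"tok{i}"
            for i, obj in enumerate(itertools.product(shapes, colors))}
decoder = {msg: obj for obj, msg in codebook.items()}

def play_round(target):
    """Speaker encodes the target object; listener decodes the message."""
    message = codebook[target]
    guess = decoder[message]
    return 1.0 if guess == target else 0.0

# Perfect task reward despite zero compositional or interpretable structure.
reward = sum(play_round(obj) for obj in codebook) / len(codebook)
```

This illustrates why near-perfect task reward alone says nothing about whether the emergent language is human-interpretable.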