Challenging Neural Dialogue Models with Natural Data: Memory Networks Fail on Incremental Phenomena
Natural, spontaneous dialogue proceeds incrementally on a word-by-word basis;
and it contains many sorts of disfluency such as mid-utterance/sentence
hesitations, interruptions, and self-corrections. But training data for machine
learning approaches to dialogue processing is often either cleaned-up or wholly
synthetic in order to avoid such phenomena. The question then arises of how
well systems trained on such clean data generalise to real spontaneous
dialogue, or indeed whether they are trainable at all on naturally occurring
dialogue data. To answer this question, we created a new corpus called bAbI+ by
systematically adding natural spontaneous incremental dialogue phenomena such
as restarts and self-corrections to Facebook AI Research's bAbI dialogues
dataset. We then explore the performance of a state-of-the-art retrieval model,
MemN2N, on this more natural dataset. Results show that the semantic accuracy
of the MemN2N model drops drastically; and that although it is in principle
able to learn to process the constructions in bAbI+, it needs an impractical
amount of training data to do so. Finally, we go on to show that an
incremental semantic parser -- DyLan -- shows 100% semantic accuracy on both
bAbI and bAbI+, highlighting the generalisation properties of linguistically
informed dialogue models.

Comment: 9 pages, 3 figures, 2 tables. Accepted as a full paper for SemDial 201
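The corpus perturbation described in the abstract can be sketched as follows. The hesitation template, confusable-word list, and insertion probability below are illustrative assumptions for the sake of the example, not the actual bAbI+ construction rules:

```python
import random

# Self-correction template: wrong word, hesitation marker, then repair.
EDIT_TEMPLATE = "{wrong} uhm sorry I mean {right}"

# Assumed pairs of confusable slot values (illustrative, not from bAbI+).
CONFUSABLE = {"french": "italian", "italian": "french"}

def add_self_correction(utterance, p=0.3, rng=random):
    """Inject a disfluent self-correction before confusable words
    with probability p, leaving the rest of the utterance intact."""
    out = []
    for word in utterance.split():
        if word in CONFUSABLE and rng.random() < p:
            out.append(EDIT_TEMPLATE.format(wrong=CONFUSABLE[word], right=word))
        else:
            out.append(word)
    return " ".join(out)
```

For example, with p=1.0 the utterance "i want french food" becomes "i want italian uhm sorry I mean french food" — the semantic content is unchanged, but a model must now learn to ignore the reparandum.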
Logic, self-awareness and self-improvement: The metacognitive loop and the problem of brittleness
This essay describes a general approach to building perturbation-tolerant autonomous systems, based on the conviction that artificial agents should be able to notice when something is amiss, assess the anomaly, and guide a solution into place. We call this basic strategy of self-guided learning the metacognitive loop; it involves the system monitoring, reasoning about, and, when necessary, altering its own decision-making components. In this essay, we (a) argue that equipping agents with a metacognitive loop can help to overcome the brittleness problem, (b) detail the metacognitive loop and its relation to our ongoing work on time-sensitive commonsense reasoning, (c) describe specific, implemented systems whose perturbation tolerance was improved by adding a metacognitive loop, and (d) outline both short-term and long-term research agendas.
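The notice-assess-guide loop described above can be sketched minimally as follows. The class name, the expectation-as-callable interface, and the repair hook are assumptions made for illustration; the implemented systems the essay refers to are far more elaborate:

```python
class MetacognitiveLoop:
    """Minimal sketch: monitor observations against an expectation,
    record anomalies, and invoke a repair strategy when one occurs."""

    def __init__(self, expectation, repair):
        self.expectation = expectation  # callable: observation -> bool
        self.repair = repair            # callable invoked on each anomaly
        self.anomalies = []

    def observe(self, observation):
        # Notice: does the observation violate the expectation?
        if not self.expectation(observation):
            # Assess: keep the anomaly so the system can reason about it.
            self.anomalies.append(observation)
            # Guide: hand the anomaly to the repair strategy
            # (e.g. re-plan, retrain, or alter a decision component).
            self.repair(observation)
```

A usage sketch: `MetacognitiveLoop(expectation=lambda x: x < 10, repair=log.append)` silently passes observations that meet the expectation and routes the rest through the repair callback.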
A Formal Model of Metaphor in Frame Semantics
A formal model of metaphor is introduced. It models metaphor, first, as an interaction of "frames" according to frame semantics, and then as a wave function in Hilbert space. The practical way for a probability distribution and a corresponding wave function to be assigned to a given metaphor in a given language is considered. A series of formal definitions is deduced from this for "representation", "reality", "language", "ontology", etc. All are based on Hilbert space. A few statements about a quantum computer are implied: the so-defined reality is inherent and internal to it; it can report a result only "metaphorically"; it would demolish the result by transmitting it "literally", i.e. absolutely exactly. A new and different formal definition of metaphor is introduced as a few entangled wave functions corresponding to different "signs" in different languages, formally defined as above. The change of frames, i.e. the shift from the one formal definition of metaphor to the other, is interpreted as a formal definition of thought. Four areas of cognition are unified as different but isomorphic interpretations of the mathematical model based on Hilbert space: quantum mechanics, frame semantics, formal semantics by means of a quantum computer, and the theory of metaphor in linguistics.
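A hedged sketch of the assignment described above (the abstract gives no formula, so the symbols $f_i$, $p_i$, and $\varphi_i$ are assumptions): if a metaphor interacts with frames $f_1, \dots, f_n$ and induces a probability distribution $p_i$ over them, the corresponding wave function in Hilbert space could take the form

```latex
\lvert \psi \rangle = \sum_{i=1}^{n} \sqrt{p_i}\, e^{i\varphi_i} \lvert f_i \rangle ,
\qquad \sum_{i=1}^{n} p_i = 1 ,
```

where the $\lvert f_i \rangle$ form an orthonormal basis of frame states and the phases $\varphi_i$ are not determined by the probability distribution alone.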