Prediction of intent in robotics and multi-agent systems.
Moving beyond the stimulus contained in observable agent behaviour to understand the underlying intent of the observed agent is of immense interest in a variety of domains involving collaborative and competitive scenarios, for example assistive robotics, computer games, robot-human interaction, decision support and intelligent tutoring. This review paper examines approaches to action recognition and prediction of intent from a multi-disciplinary perspective, in both single-robot and multi-agent scenarios, and analyses the underlying challenges, focusing mainly on generative approaches.
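As a concrete illustration of the generative family of approaches discussed in the review (not an example taken from the paper), the sketch below performs Bayesian intent inference: each hypothetical intent defines a likelihood over observable actions, and incoming observations update a posterior over intents. The intent names and probabilities are invented for illustration.

```python
# Minimal sketch of generative intent inference (illustrative assumptions only).
import numpy as np

# Hypothetical action likelihoods P(action | intent) for two made-up intents.
LIKELIHOODS = {
    "approach_object": {"move_forward": 0.7, "turn": 0.2, "stop": 0.1},
    "avoid_object":    {"move_forward": 0.1, "turn": 0.6, "stop": 0.3},
}

def infer_intent(observed_actions, prior=None):
    """Return a posterior over intents given a sequence of observed actions."""
    intents = list(LIKELIHOODS)
    posterior = np.array([1.0 / len(intents)] * len(intents) if prior is None else prior)
    for action in observed_actions:
        likelihood = np.array([LIKELIHOODS[i].get(action, 1e-6) for i in intents])
        posterior = posterior * likelihood
        posterior /= posterior.sum()  # renormalise after each observation
    return dict(zip(intents, posterior))

print(infer_intent(["move_forward", "move_forward", "stop"]))
```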
A Comparative Analysis of Speech Recognition Platforms
Speech recognition (also known as automatic speech recognition) converts spoken words to text. It is a broad term, meaning it can recognize almost any speech, such as in a call centre system designed to recognize many voices. Speech recognition is commonplace in the field of telephony and is becoming widespread in computer gaming and simulation. People with disabilities are another part of the population that benefits from using speech recognition programs. It is becoming increasingly clear that interaction between humans and speech recognition engines is on the increase. In some circumstances the caller is directed through a series of options; this is called a directed-dialog interaction. In other situations the caller is not limited to pre-defined options but is instead given the opportunity to indicate their intent. This scenario is known as an open-dialog interaction, where the caller states their intent orally and the speech platform is expected to interpret it correctly. Such interpretations are prone to variation in recognition and classification, and even if the application software correctly classifies the caller's intent, it may not adequately capture the actual utterance. This paper proposes statistical techniques for measuring the performance of three speech recognition engines in a directed-dialog scenario.
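One standard statistic for comparing speech recognition engines is the word error rate (WER); the sketch below shows a straightforward edit-distance implementation. It is an illustrative example only and is not claimed to be among the specific techniques proposed in the paper.

```python
# Illustrative sketch: word error rate (WER) via word-level edit distance.
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (substitutions + insertions + deletions) / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic programme over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(word_error_rate("book a flight to boston", "book flight to bostin"))  # 0.4
```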
Spoken Language Intent Detection using Confusion2Vec
Decoding a speaker's intent is a crucial part of spoken language understanding (SLU). The presence of noise or errors in the text transcriptions in real-life scenarios makes the task more challenging. In this paper, we address spoken language intent detection under noisy conditions imposed by automatic speech recognition (ASR) systems. We propose to employ the confusion2vec word feature representation to compensate for the errors made by ASR and to increase the robustness of the SLU system. Confusion2vec, motivated by human speech production and perception, models acoustic relationships between words in addition to the semantic and syntactic relations of words in human language. We hypothesize that ASR often makes errors relating to acoustically similar words, and that confusion2vec, with its inherent model of acoustic relationships between words, is able to compensate for these errors. We demonstrate through experiments on the ATIS benchmark dataset the robustness of the proposed model, which achieves state-of-the-art results under noisy ASR conditions. Our system reduces classification error rate (CER) by 20.84% and improves robustness by 37.48% (lower CER degradation) relative to the previous state of the art when going from clean to noisy transcripts. Improvements are also demonstrated when training the intent detection models on noisy transcripts.
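To make the pipeline concrete, the sketch below shows a generic embedding-average intent classifier over ASR transcripts. It is not the authors' confusion2vec implementation: the embedding table is a hypothetical stand-in (confusion2vec would additionally encode acoustic confusability between words), and the toy utterances and intents merely echo the style of ATIS.

```python
# Minimal sketch: intent classification over (possibly noisy) transcripts
# using averaged word embeddings. EMBEDDINGS is a hypothetical lookup table;
# in practice confusion2vec vectors would be plugged in here.
import numpy as np
from sklearn.linear_model import LogisticRegression

EMBEDDING_DIM = 50
rng = np.random.default_rng(0)
VOCAB = ["show", "flights", "fares", "from", "boston", "to", "denver", "cheapest"]
EMBEDDINGS = {w: rng.normal(size=EMBEDDING_DIM) for w in VOCAB}

def embed(utterance: str) -> np.ndarray:
    """Average the word vectors of in-vocabulary tokens (zeros if none match)."""
    vecs = [EMBEDDINGS[w] for w in utterance.lower().split() if w in EMBEDDINGS]
    return np.mean(vecs, axis=0) if vecs else np.zeros(EMBEDDING_DIM)

# Tiny toy training set in the spirit of ATIS-style intents.
train_utts = ["show flights from boston to denver", "show fares to denver",
              "cheapest fares from boston", "flights from denver to boston"]
train_intents = ["flight", "airfare", "airfare", "flight"]

clf = LogisticRegression(max_iter=1000).fit([embed(u) for u in train_utts], train_intents)
print(clf.predict([embed("show flights to boston")]))  # classify a new transcript
```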
Semantic Sentence Similarity for Intent Recognition Task
An intent recognition module is a core component of any question-answering bot (e.g. Amazon Echo). This thesis implements a template-based intent recognition system that relies heavily on the performance of text embedding algorithms. The thesis therefore provides a comprehensive overview of state-of-the-art word and sentence embedding algorithms, and performs a unique comparison of the algorithms in terms of their training properties, performance, and hardware requirements. The work further implements two model compression techniques (vocabulary pruning and vector quantization) to make the models more suitable for mobile applications. The StarSpace embedding algorithm performed best in the experiments, and the compression methods proved very powerful, reducing model size 100-1000 times without any notable loss of performance. A compressed StarSpace model was therefore used to build the resulting intent recognition module, which outperformed the system currently used in the Alquist social bot (second place in the 2017 Alexa Prize contest) while being less complex.
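As a rough illustration of the vector quantization idea mentioned in the thesis, the sketch below compresses an embedding matrix with product quantization: each vector is split into sub-vectors and each sub-vector is replaced by the index of its nearest codebook centroid. The function names, sub-vector count, and codebook size are assumptions, not the thesis implementation.

```python
# Illustrative sketch: product quantization of an embedding matrix.
import numpy as np
from sklearn.cluster import KMeans

def quantize(embeddings: np.ndarray, n_subvectors: int = 4, n_centroids: int = 256):
    """Return per-subspace codebooks and uint8 codes for the embedding matrix."""
    n, dim = embeddings.shape
    sub_dim = dim // n_subvectors
    codebooks, codes = [], []
    for s in range(n_subvectors):
        block = embeddings[:, s * sub_dim:(s + 1) * sub_dim]
        km = KMeans(n_clusters=min(n_centroids, n), n_init=4, random_state=0).fit(block)
        codebooks.append(km.cluster_centers_)
        codes.append(km.labels_.astype(np.uint8))
    return codebooks, np.stack(codes, axis=1)

def reconstruct(codebooks, codes):
    """Approximate the original vectors from codebooks and codes."""
    return np.hstack([codebooks[s][codes[:, s]] for s in range(len(codebooks))])

emb = np.random.default_rng(0).normal(size=(1000, 64)).astype(np.float32)
codebooks, codes = quantize(emb)
approx = reconstruct(codebooks, codes)
print(codes.nbytes / emb.nbytes)  # fraction of the original storage used by the codes
```

Only the small codebooks and the uint8 codes need to be stored, which is where compression factors of the reported order come from on large vocabularies.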
- …
